---
title: Interactive annotation with Segment Anything Model
type: guide
tier: all
order: 10
hide_menu: true
hide_frontmatter_title: true
meta_title: Interactive annotation in Label Studio with Segment Anything Model (SAM)
meta_description: Label Studio tutorial for labeling images with MobileSAM or ONNX SAM.
categories:
- Computer Vision
- Object Detection
- Image Annotation
- Segment Anything Model
- Facebook
- ONNX
---
Use Facebook's Segment Anything Model with Label Studio!
In July 2024, Facebook released an update to the Segment Anything Model, called SAM 2. To use this newer model for
labeling, see the segment_anything_2_image repo.
Before you begin, you must install the Label Studio ML backend.
This tutorial uses the segment_anything_model example.
To start the server with the lightweight mobile version of SAM, run the following command:
```bash
docker-compose up
```
By default, the docker-compose file runs the model on the CPU. If you have a GPU, you can enable it by adding the following lines to the service definition in `docker-compose.yml`:

```yaml
environment:
  - NVIDIA_VISIBLE_DEVICES=all
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
```
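To confirm that the running container can actually see the GPU, one quick check is to run `nvidia-smi` inside the service (the service name below is hypothetical; use the one from your `docker-compose.yml`):

```bash
docker compose exec segment_anything_model nvidia-smi
```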
There are two models in this repo that you can use:
1. Advanced Segment Anything Model
2. ONNX Segment Anything Model
The Advanced Segment Anything Model (AdvancedSAM) introduces the ability to combine a multitude of different prompts to achieve a prediction, and the ability to use MobileSAM.

The ONNX Segment Anything Model (ONNXSAM) gives you the ability to prompt the original SAM with either a single keypoint or a single rectangle label.

- This offers much faster prediction than using the original Segment Anything Model.
- The downside is that the image size must be specified before the ONNX model is generated and cannot be generalized to other image sizes while labeling. It also does not yet offer the mixed labeling and refinement that AdvancedSAM does.

Each model has different pros and cons; consider which is best for your project.
The Label Studio SAM backend works best if you have Local Storage enabled for your project. It is also possible to set up shared local storage, but this is not recommended. Currently, the backend does not work with cloud storage (S3, Azure, GCP).
You can enable local storage file serving by setting the following variables:
```bash
LABEL_STUDIO_LOCAL_FILES_DOCUMENT_ROOT=<path_to_image_data>
LABEL_STUDIO_LOCAL_FILES_SERVING_ENABLED=true
```
For example, if you're launching Label Studio with Docker, you can enable these variables as follows:

```bash
docker run -it -p 8080:8080 \
  -v $(pwd)/mydata:/label-studio/data \
  --env LABEL_STUDIO_LOCAL_FILES_SERVING_ENABLED=true \
  --env LABEL_STUDIO_LOCAL_FILES_DOCUMENT_ROOT=/label-studio/data/images \
  heartexlabs/label-studio:latest
```
Note the IP address that you are running your Label Studio instance on; this will be used as the `LABEL_STUDIO_HOST` when setting up the connection to your SAM model.
Because you are hosting both Label Studio and the ML backend in
Docker containers, the hostname localhost will not resolve to the correct
address. There are a number of ways to determine your host IP address. These
can include calling either ip a or ifconfig from the command line and
inspecting the output, or finding the address that has been assigned to your
computer through the system network configuration settings.
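For example, on Linux either of the following can surface the address (a sketch; interface names and available tools vary by system, and on macOS `ipconfig getifaddr en0` is one alternative):

```bash
ip a | grep "inet "   # look for your LAN address, e.g. 192.168.1.36
hostname -I           # prints the addresses assigned to this host
```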
Log into the Label Studio interface (in the example above, at http://<LABEL_STUDIO_HOST>:8080). Go to the Account & Settings page and make a note of the Access Token, which we will use later as the `LABEL_STUDIO_ACCESS_TOKEN`.
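To confirm that the token works, one quick check is to call the Label Studio API with it (the host address below is a placeholder; Label Studio accepts the token in an Authorization header):

```bash
curl -H "Authorization: Token $LABEL_STUDIO_ACCESS_TOKEN" \
  http://192.168.1.36:8080/api/projects
```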
Clone this repository on your host system and change into the example directory:

```bash
git clone https://github.com/humansignal/label-studio-ml-backend
cd label-studio-ml-backend/label_studio_ml/examples/segment_anything_model
```
We suggest using Docker Compose to host and
run the backend. For GPU support, please consult the Docker Compose GPU Access
Guide to understand how to pass
through GPU resources to services.
Edit the `docker-compose.yml` file and fill in the values for the `LABEL_STUDIO_HOST` and `LABEL_STUDIO_ACCESS_TOKEN` variables for your particular installation. Be sure to append the port that Label Studio is running on to your `LABEL_STUDIO_HOST` variable, for example `http://192.168.1.36:8080` if Label Studio is running on port 8080.
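For example, the relevant part of the service definition might look like the following (a sketch; match it to the layout of the compose file in this example directory, and substitute your own values):

```yaml
environment:
  - LABEL_STUDIO_HOST=http://192.168.1.36:8080
  - LABEL_STUDIO_ACCESS_TOKEN=your-access-token
```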
Run the following command to build the container and run it locally:

```bash
docker compose up --build
```
This step is only necessary if you are not using the Docker build for this model.

For MobileSAM, install the weights using this link and place them in this folder (along with the `advanced_sam.py` and `onnx_sam.py` files).

To use regular SAM and/or ONNX, follow the SAM installation instructions with pip, then install the ViT-H SAM model. For the ONNX model, generate it by running `python onnx_converter.py`.
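As a rough sketch, the manual setup could look like the following (the checkpoint URL is the standard ViT-H release from the SAM repository; adjust paths to your setup):

```bash
# Install Segment Anything from the official repository
pip install git+https://github.com/facebookresearch/segment-anything.git

# Download the ViT-H SAM checkpoint
wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth

# Generate the ONNX model
python onnx_converter.py
```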
Alternatively, you can download all weights and models using the following command:

```bash
./download_models.sh
```

Change your directory to this folder and then install all of the Python requirements:

```bash
pip install -r requirements.txt
```
You can set the following environment variables to change the behavior of the model when starting `_wsgi.py`, depending on your choice of model:

- `LABEL_STUDIO_HOST` sets the endpoint of the Label Studio host. Must begin with `http://`.
- `LABEL_STUDIO_ACCESS_TOKEN` sets the API access token for the Label Studio host.
- `SAM_CHOICE` selects which model to use:
  - `SAM_CHOICE=MobileSAM` to use MobileSAM (the default)
  - `SAM_CHOICE=SAM` to use the original SAM model
  - `SAM_CHOICE=ONNX` to use the ONNX model

You can now manually start the ML backend:
```bash
python _wsgi.py
```

or, to start the backend in a Docker container:

```bash
docker-compose up
```

or:

```bash
MOBILESAM_CHECKPOINT=path/to/mobile_sam.pt label-studio-ml start segment_anything_model/
```
Note: If you see an error on macOS, try setting the environment variable `KMP_DUPLICATE_LIB_OK=True`.
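Putting it together, a typical manual start might look like this (the host and token values are placeholders):

```bash
export LABEL_STUDIO_HOST=http://192.168.1.36:8080
export LABEL_STUDIO_ACCESS_TOKEN=your-access-token
export SAM_CHOICE=MobileSAM   # or SAM / ONNX
python _wsgi.py
```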
Log into your Label Studio instance and perform the following steps:

1. Open the project that you want to use with SAM.
2. In the project settings, select Machine Learning.
3. Click Add Model.
4. Enter the URL of the ML backend. If you're running Label Studio in Docker or on another host, you should use the direct IP address of where the model is hosted (localhost will not work). Be sure to include the port number that the model is hosted on (the default is 9090). For example, if the model is hosted on 192.168.1.36, the URL for the model would be http://192.168.1.36:9090.
5. Click Validate and Save.
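If validation fails, first confirm that the backend is reachable from the machine running Label Studio. The ML backend serves a health endpoint you can query directly (the address and port shown are the example values above):

```bash
curl http://192.168.1.36:9090/health
```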
You can now upload images into your project and begin annotating.
See this video tutorial to get a better understanding of the workflow when annotating with SAM. The video covers the same process, although it performs part of the setup from the newly created project menu.
Use the Alt hotkey to switch keypoints between positive and negative labels.
A checkmark may appear inside the image next to a generated prediction, but do not use that one. Instead, click the checkmark that is not on the image itself: it clears the other input prompts used to generate the region and leaves only the predicted region, which is the most compatible way to use the backend.
If you place multiple rectangle labels, the model will use the newest rectangle label, along with all other keypoints, when generating the prediction.
The ONNX model is exported with a fixed image size, defined by `orig_im_size` in `onnx_converter.py`. Predictions will not generalize to images whose aspect ratios do not match what is defined there, so set `orig_im_size` to your image dimensions BEFORE generating the ONNX model. In `onnx_converter.py`:

```python
"orig_im_size": torch.tensor([#heightofimages, #widthofimages], dtype=torch.float),
```

If you use `docker compose` to launch the model, be sure to rebuild the container after changing `orig_im_size`.
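In practice that means re-running the converter and the build (commands from the earlier setup steps):

```bash
python onnx_converter.py
docker compose up --build
```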
Use the Alt hotkey to create negative keypoints.

Base example:

```xml
<View>
<Image name="image" value="$image" zoom="true"/>
<View className="column">
<HyperText value="" name="h1" className="help" inline="true">
<span title="Use the brush to manually label or refine regions">
Brush for manual labeling
</span>
</HyperText>
<View className="label">
<BrushLabels name="tag" toName="image">
<Label value="Foreground" background="#FF0000" showInline="true" />
<Label value="Background" background="#0d14d3" showInline="true" />
</BrushLabels>
</View>
</View>
<View className="column">
<HyperText value="" name="h2" className="help" inline="true">
<span title="1. Click purple auto Keypoints/Rectangle icon on toolbar. 2. Click Foreground/Background label here">
Keypoints for auto-labeling
</span>
</HyperText>
<View className="label">
<KeyPointLabels name="tag2" toName="image" smart="true">
<Label value="Foreground" smart="true" background="#FFaa00" showInline="true" />
<Label value="Background" smart="true" background="#00aaFF" showInline="true" />
</KeyPointLabels>
</View>
</View>
<View className="column">
<HyperText value="" name="h3" className="help" inline="true">
<span title="1. Click purple auto Keypoints/Rectangle icon on toolbar. 2. Click Foreground/Background label here">
Rectangles for auto-labeling
</span>
</HyperText>
<View className="label">
<RectangleLabels name="tag3" toName="image" smart="true">
<Label value="Foreground" background="#FF00FF" showInline="true" />
<Label value="Background" background="#00FF00" showInline="true" />
</RectangleLabels>
</View>
</View>
</View>
```
Label values for the keypoint, rectangle, and brush labels must correspond. Other than that, make sure that `smart="true"` is set for each keypoint label and rectangle label.
For the ONNX model:

```xml
<View>
<Image name="image" value="$image" zoom="true"/>
<BrushLabels name="tag" toName="image">
<Label value="Banana" background="#FF0000"/>
<Label value="Orange" background="#0d14d3"/>
</BrushLabels>
<KeyPointLabels name="tag2" toName="image" smart="true">
<Label value="Banana" smart="true" background="#000000" showInline="true"/>
<Label value="Orange" smart="true" background="#000000" showInline="true"/>
</KeyPointLabels>
<RectangleLabels name="tag3" toName="image" smart="true">
<Label value="Banana" background="#000000" showInline="true"/>
<Label value="Orange" background="#000000" showInline="true"/>
</RectangleLabels>
</View>
```
Original Segment Anything Model paper:

```bibtex
@article{kirillov2023segany,
  title={Segment Anything},
  author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
  journal={arXiv:2304.02643},
  year={2023}
}
```

MobileSAM paper:

```bibtex
@article{mobile_sam,
  title={Faster Segment Anything: Towards Lightweight SAM for Mobile Applications},
  author={Zhang, Chaoning and Han, Dongshen and Qiao, Yu and Kim, Jung Uk and Bae, Sung-Ho and Lee, Seungkyu and Hong, Choong Seon},
  journal={arXiv preprint arXiv:2306.14289},
  year={2023}
}
```