🤖 Auto labeling with Segment Anything (SAM2 + MobileSAM)
Auto labeling was built with ideas from AnyLabeling (opens in a new tab). There are two variants of Segment Anything Model: SAM2 and MobileSAM supported in AnyLabeling:
- 🚀 Segment Anything Model 2 (SAM 2) is Meta's latest advancement in computer vision, building upon the success of its predecessor. This foundation model is designed to tackle promptable visual segmentation in both images and videos, representing a significant leap forward in visual understanding and processing.
- 📱 MobileSAM is the lightweight variant introduced in Faster Segment Anything: Towards Lightweight SAM for Mobile Applications.
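AnyLabeling runs these models inside the app, so no code is needed to use them. Still, the prompt-based workflow is easier to picture with a short sketch. The example below uses the reference segment_anything Python package (MobileSAM exposes a near-identical API through its mobile_sam package, and SAM 2 ships a similar predictor in Meta's sam2 package); the checkpoint path, image file, and point coordinates are placeholders, not files shipped with AnyLabeling.

```python
# Minimal sketch of promptable segmentation with the reference SAM API.
# Checkpoint path, image file, and coordinates below are placeholders.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # runs the heavy image encoder once per image

# One foreground point prompt: label 1 means "belongs to the object".
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks to pick from
)
```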
📝 Instructions
- Select AI on the left side to activate auto labeling.
- 🤔 Select one of the Segment Anything Models from the Model dropdown menu. Accuracy and speed differ between models:
  - ⚡️ Segment Anything Model (ViT-B): Fastest, lower accuracy
  - ⚖️ Segment Anything Model (ViT-L): Balanced speed and accuracy
  - 🎯 Segment Anything Model (ViT-H): Highest accuracy, slower processing
  - 💪 Quant indicates a quantized version of the model
- 🛠️ Use the auto segmentation marking tools (see the prompt sketch after this list):
  - +Point: Add a point that belongs to the object
  - -Point: Add a point that does not belong to the object (excluded from the mask)
  - +Rect: Draw a rectangle around the object for automatic segmentation
  - Clear: Reset all auto segmentation markings
  - Finish Object (f): Complete the current marking, assign a label name, and save
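Conceptually, the marking tools correspond to SAM-style prompts: include points, exclude points, and a bounding box. A rough sketch of that mapping, reusing the predictor from the example above (all coordinates are made up):

```python
# +Point -> point with label 1 (include), -Point -> point with label 0 (exclude),
# +Rect  -> bounding box prompt, Clear -> discard the accumulated prompts.
import numpy as np

point_coords = np.array([[410, 220],    # +Point: a pixel inside the object
                         [500, 260]])   # -Point: background pixel to exclude
point_labels = np.array([1, 0])         # 1 = include, 0 = exclude
box = np.array([350, 180, 560, 330])    # +Rect: x_min, y_min, x_max, y_max

masks, scores, _ = predictor.predict(   # `predictor` from the earlier sketch
    point_coords=point_coords,
    point_labels=point_labels,
    box=box,
    multimask_output=False,             # specific prompts -> a single mask
)
```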
⚠️ Important Notes
First-time setup:
- Initial model download is required; duration depends on your network speed
- The first AI inference may take longer; please be patient
- Background "encoder" calculations will speed up future segmentations (sketched below)
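The speedup comes from SAM's encoder/decoder split: the expensive image encoder runs once per image, and each new prompt only re-runs a lightweight mask decoder against the cached image embedding. A sketch of that pattern, reusing the predictor and image from the first example (collected_prompts is an invented placeholder for the points you click):

```python
# Expensive step: the image encoder runs once and its embedding is cached.
predictor.set_image(image)

# Cheap steps: each prompt only re-runs the lightweight mask decoder.
# collected_prompts is hypothetical, e.g. [(np.array([[x, y]]), np.array([1])), ...]
for coords, labels in collected_prompts:
    masks, _, _ = predictor.predict(
        point_coords=coords,
        point_labels=labels,
        multimask_output=False,
    )
```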