Meta AI released the Segment Anything Model (SAM) this month, a foundation model for image segmentation.
We owed it to our name to explore the possibilities of this new model, so we launched an internal hackathon to bring two powerful new features enabled by SAM to Segments.ai.
We’re excited to announce that we’ve now integrated SAM as an edit mode and as a way to pre-label your images in our segmentation labeling interface, bringing lightning-fast one-click segmentation to you.
Label faster with Hover-and-click 🪄
With the Hover-and-click mode, you get a third ML-assisted labeling tool alongside our already available Superpixel and Autosegment tools.
Just hover with your mouse over the image to see suggested segmentation masks, and click to confirm them. It couldn’t be simpler! SAM works on any data, from street scenes to satellite imagery to medical images.
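Under the hood, SAM can return several candidate masks for a single point prompt, each with a predicted quality score, and the interface surfaces the best candidate under your cursor. As a rough illustration (a minimal sketch, not Segments.ai’s actual code — the function and variable names are hypothetical), the click-to-confirm step could be as simple as picking the highest-scoring candidate that contains the clicked pixel:

```python
import numpy as np

def pick_mask_at_click(masks: np.ndarray, scores: np.ndarray, click_xy):
    """Pick the highest-scoring candidate mask containing the click.

    masks:  (N, H, W) boolean candidate masks from SAM
    scores: (N,) predicted IoU score for each candidate
    click_xy: (x, y) pixel coordinates of the click
    """
    x, y = click_xy
    contains = masks[:, y, x]  # which candidates cover the clicked pixel
    if not contains.any():
        return None  # no suggested mask under the cursor
    # Ignore candidates that miss the click, then take the best score.
    idx = int(np.argmax(np.where(contains, scores, -np.inf)))
    return masks[idx]

# With the real model, the candidates would come from something like:
#   predictor = SamPredictor(sam_model_registry["vit_h"](checkpoint=...))
#   predictor.set_image(image)
#   masks, scores, _ = predictor.predict(
#       point_coords=np.array([[x, y]]),
#       point_labels=np.array([1]),
#       multimask_output=True,
#   )
```

The `multimask_output=True` option is what lets SAM propose masks at several granularities (e.g. a wheel vs. the whole car) for one ambiguous click.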
The Superpixel tool is still the perfect solution if you need to label regions with contiguous borders, while the Autosegment tool is great for segmenting fine/small objects or very high-resolution images.
While SAM is an impressive segmentation model, it doesn’t understand text and cannot easily produce panoptic segmentation (i.e. combined semantic and instance segmentation).
We combined SAM with two other state-of-the-art models, Grounding DINO (text-prompted object detection) and CLIPSeg (text-prompted semantic segmentation), to make this possible. This means you can now get automatic pre-labels in Segments.ai based on your dataset’s categories. Make sure to indicate in your label settings which categories have instances (car, person, traffic sign) and which don’t (road, sky, vegetation)!
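Conceptually, this means instance categories get per-object masks (text prompt → Grounding DINO boxes → SAM masks), while “stuff” categories get semantic masks from CLIPSeg, and the results are merged into one panoptic label map. The sketch below illustrates one plausible way to do that merge (an assumption for illustration, not Segments.ai’s actual pipeline; the detector outputs are stubbed as plain mask arrays):

```python
import numpy as np

def merge_panoptic(stuff_masks, instance_masks, shape):
    """Merge stuff and instance masks into a panoptic label map.

    stuff_masks:    list of (category_id, bool mask), e.g. road, sky
    instance_masks: list of (category_id, bool mask), one per object
    Returns (category_map, instance_map); instances override stuff
    wherever they overlap, and each object gets a unique instance id.
    """
    category = np.zeros(shape, dtype=np.int32)  # 0 = unlabeled
    instance = np.zeros(shape, dtype=np.int32)  # 0 = no instance
    for cat_id, mask in stuff_masks:            # semantic regions first
        category[mask] = cat_id
    for inst_id, (cat_id, mask) in enumerate(instance_masks, start=1):
        category[mask] = cat_id                 # instances paint on top
        instance[mask] = inst_id
    return category, instance
```

Painting instances after stuff is the usual convention in panoptic merging: a car mask should win over the road mask beneath it.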