Leverage 3D annotations to drastically speed up 2D labeling, and more

December 13th, 2023

In our last newsletter we announced our new multi-sensor labeling interface. This month, we’re excited to introduce a new feature that will save you lots of time labeling multi-sensor data: 3D-to-2D bounding box projections!

Leverage 3D annotations to speed up 2D labeling

Let’s say your car or robot has a lidar sensor and multiple cameras, and is driving past a static object like a traffic sign. Imagine that you now want to annotate this object with 3D bounding boxes in the lidar point cloud and with 2D bounding boxes in the camera images, while keeping consistent object IDs across time and sensors.

As you drive past the object, it is visible in 3 cameras for, say, 100 frames. This means you'd have to annotate 300 2D bounding boxes and make sure that their object IDs are identical. In 3D space, though, labeling this static traffic sign only takes a single 3D bounding box annotation.

With our new 3D-to-2D projection feature, you can now project your 3D bounding boxes to the camera images as 2D bounding boxes, all with a single click! You get pre-labeled images with consistent annotations that only require minor corrections, speeding up your 2D labeling by factors of up to 100x.
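Under the hood, a projection like this boils down to transforming the cuboid's eight corners into the camera frame and taking the enclosing axis-aligned rectangle. Here's a minimal sketch in plain NumPy; this is not Segments.ai's actual implementation, and the corner ordering, yaw-about-z rotation, and OpenCV-style intrinsics are assumptions:

```python
import numpy as np

def project_cuboid_to_2d_bbox(center, size, yaw, T_world_to_cam, K, image_size):
    """Project a 3D cuboid into a camera image and return the enclosing
    axis-aligned 2D box (u_min, v_min, u_max, v_max), or None if the
    cuboid is not visible."""
    w, l, h = size  # assumed convention: width (x), length (y), height (z)

    # The 8 corners in the cuboid's local frame, shape (3, 8)
    x = np.array([1, 1, 1, 1, -1, -1, -1, -1]) * w / 2
    y = np.array([1, 1, -1, -1, 1, 1, -1, -1]) * l / 2
    z = np.array([1, -1, 1, -1, 1, -1, 1, -1]) * h / 2
    corners = np.stack([x, y, z])

    # Rotate around the vertical axis, then translate to world coordinates
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    corners_world = R @ corners + np.asarray(center).reshape(3, 1)

    # World -> camera using a 4x4 extrinsic matrix
    corners_h = np.vstack([corners_world, np.ones((1, 8))])
    corners_cam = (T_world_to_cam @ corners_h)[:3]

    # Drop corners behind the camera. (A production implementation would
    # instead clip the cuboid's edges at the near plane for a tighter box.)
    in_front = corners_cam[2] > 0
    if not in_front.any():
        return None
    corners_cam = corners_cam[:, in_front]

    # Pinhole projection with the 3x3 intrinsic matrix K
    uv = K @ corners_cam
    uv = uv[:2] / uv[2]

    # Enclosing axis-aligned 2D box, clipped to the image bounds
    u_min, v_min = uv.min(axis=1)
    u_max, v_max = uv.max(axis=1)
    W, H = image_size
    u_min, u_max = np.clip([u_min, u_max], 0, W)
    v_min, v_max = np.clip([v_min, v_max], 0, H)
    if u_min >= u_max or v_min >= v_max:
        return None  # box falls entirely outside the image
    return u_min, v_min, u_max, v_max
```

Repeating this for every camera and every frame of the sequence yields the pre-labeled 2D boxes, all inheriting the object ID of the single 3D cuboid.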

Segment image sequences

You could already label single images with bitmap segmentation labels, but we've now also launched our labeling interface for segmenting image sequences. This interface is still in beta, so please reach out to us if you encounter any bugs or performance issues. More updates are coming soon.

Other features and improvements

  • It’s now easier to add and remove points to and from 2D polygons and polylines
  • Measure distances in 2D and 3D with our new ruler tool
  • Toggle a square or circular helper grid on the ground plane in the 3D interfaces
  • Toggle a visualization of the camera poses in the 3D interfaces for easier debugging of calibration issues
  • Click to lock the original/both/label coloring mode in the 3D segmentation interface
  • In the 3D interfaces, the min/max values for the height coloring gradient can be set automatically
  • You can now also search samples by UUID and datasets by owner
  • We now support point clouds in PLY format (see the upload sketch after this list)
  • The latest version of our Python SDK contains numerous improvements and bug fixes
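For readers using the Python SDK, uploading a PLY point cloud might look roughly like the sketch below. The API key, dataset name, and file are placeholders, and the exact "type" string for PLY samples is an assumption on our part, so check the documentation for the expected value:

```python
from segments import SegmentsClient

# Hypothetical credentials and names: replace with your own
client = SegmentsClient("YOUR_API_KEY")

# Upload the PLY file to Segments.ai's asset storage
with open("scan.ply", "rb") as f:
    asset = client.upload_asset(f, filename="scan.ply")

# Create a point cloud sample that references the uploaded asset.
# The "type" value below is an assumption; consult the docs for the
# identifier expected for PLY point clouds.
attributes = {"pcd": {"url": asset.url, "type": "ply"}}
sample = client.add_sample("your-org/your-dataset", "scan-001", attributes)
print(sample.uuid)
```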

Curated reads for you

Case study: Data labeling for Scythe Robotics’ precise off-road perception models

Explore Scythe Robotics’ innovation in autonomous lawn mowing through our use case highlight.

Their commercial mowers, equipped with advanced sensors and deep learning models, must efficiently identify drivable lawn areas and obstacles, distinguishing between more than 50 data classes.

Read the Scythe Robotics case study

Meet Mark Hafner

As Segments.ai is growing, we are expanding our team.

Mark Hafner is the new Senior Account Executive at Segments.ai. Mark’s decision to join Segments.ai stems from his fascination with the constantly evolving fields of robotics and autonomous vehicles.

Get to know Mark in his interview

Academic spotlight: Semantic motif segmentation at Pompeii

Discover this research project by a team from Ca’ Foscari University of Venice, who used Segments.ai to create semantic segmentation masks for fresco fragments from Pompeii.

The project aims to understand and restore the imagery of the broken artifacts using computer vision techniques.

Discover the project, with links to the paper, video, and 2 datasets.