In our last newsletter we announced our new multi-sensor labeling interface. This month, we’re excited to introduce a new feature that will save you lots of time labeling multi-sensor data: 3D-to-2D bounding box projections!
Leverage 3D annotations to speed up 2D labeling
Let’s say your car or robot has a lidar sensor and multiple cameras, and is driving past a static object like a traffic sign. Imagine that you now want to annotate this object with 3D bounding boxes in the lidar point cloud and with 2D bounding boxes in the camera images, while keeping consistent object IDs across time and sensors.
As you drive past the object, it is visible in 3 cameras for, say, 100 frames. That means annotating 300 2D bounding boxes (3 cameras × 100 frames) and making sure their object IDs all match. In 3D space, though, labeling this static traffic sign takes only a single 3D bounding box annotation.
With our new 3D-to-2D projection feature, you can now project your 3D bounding boxes onto the camera images as 2D bounding boxes, all with a single click! You get pre-labeled images with consistent annotations that only need minor corrections. This drastically speeds up your 2D labeling, by a factor of up to 100x.
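For the curious: conceptually, such a projection boils down to transforming the cuboid's eight corners into the camera frame, projecting them with the camera intrinsics, and taking the extent of the projected points. The sketch below is a simplified illustration (not our implementation), assuming a pinhole camera with intrinsic matrix K, a 4x4 lidar-to-camera extrinsic matrix, a yaw-only box rotation, no lens distortion, and all corners in front of the camera:

```python
import numpy as np

def project_cuboid_to_bbox(center, size, yaw, extrinsics, K):
    """Project a 3D cuboid (in lidar coordinates) into a camera image and
    return the enclosing 2D bounding box as (xmin, ymin, xmax, ymax)."""
    # The 8 corner offsets of an axis-aligned cuboid, scaled by its size
    l, w, h = size
    x = np.array([ 1,  1,  1,  1, -1, -1, -1, -1]) * l / 2
    y = np.array([ 1,  1, -1, -1,  1,  1, -1, -1]) * w / 2
    z = np.array([ 1, -1,  1, -1,  1, -1,  1, -1]) * h / 2
    corners = np.stack([x, y, z])  # shape (3, 8)

    # Rotate around the vertical axis by the box heading, then translate
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    corners = R @ corners + np.asarray(center).reshape(3, 1)

    # Transform to the camera frame with the 4x4 lidar-to-camera extrinsics
    corners_h = np.vstack([corners, np.ones((1, 8))])  # homogeneous coords
    cam = (extrinsics @ corners_h)[:3]                 # shape (3, 8)

    # Pinhole projection with the 3x3 intrinsic matrix K
    uv = K @ cam
    uv = uv[:2] / uv[2]  # perspective divide

    # The 2D box is the axis-aligned extent of the projected corners
    return uv[0].min(), uv[1].min(), uv[0].max(), uv[1].max()
```

In practice you would also clip the resulting box to the image boundaries and handle corners that fall behind the camera, which are left out here for brevity.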
Segment image sequences
You could already label single images with bitmap segmentation labels, but we've now also launched our labeling interface for segmenting image sequences. This interface is still in beta, so please reach out to us if you encounter any bugs or performance issues. More updates are coming soon.