Multi-sensor datasets, image sequence segmentation, and more

September 29th, 2023

Labeling objects in point clouds and images is no easy job, especially if you need consistent object IDs across time and sensors. We’re excited to announce our new multi-sensor labeling interface, which makes this much easier.

Multi-sensor datasets

Where you previously had to create separate datasets for your 3D point cloud and 2D image labeling, we now offer a new multi-sensor dataset type that lets you label all your sensor data in a single labeling interface. This brings several advantages:

  • Objects can be labeled with consistent object IDs across sensors. For example, if you have a robot with one lidar and 4 cameras, a car annotated with a 3D cuboid in the point cloud view can be given the same object ID as the 2D bounding boxes of that car in the 4 camera images.
  • A single person can label the data from all lidar and camera sensors of a recording, instead of it being split up across multiple labelers working in separate datasets. This makes for a more efficient and consistent labeling process for multi-sensor data.
  • You can leverage the 3D annotations to drastically speed up the 2D labeling. More about this in our next newsletter!
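The idea of consistent object IDs can be sketched as a small data model. This is a hypothetical illustration, not the Segments.ai data format or SDK; the `Annotation` and `MultiSensorFrame` classes are assumptions made for the example:

```python
# Hypothetical sketch: linking annotations from multiple sensors
# through a shared object ID (not the actual Segments.ai format).
from dataclasses import dataclass, field

@dataclass
class Annotation:
    sensor: str     # e.g. "lidar", "camera_front"
    shape: str      # "cuboid" for 3D labels, "bbox" for 2D labels
    object_id: int  # shared across sensors for the same physical object

@dataclass
class MultiSensorFrame:
    annotations: list = field(default_factory=list)

    def annotations_for(self, object_id: int) -> list:
        """All annotations of one physical object, across every sensor."""
        return [a for a in self.annotations if a.object_id == object_id]

frame = MultiSensorFrame()
# One 3D cuboid in the lidar view...
frame.annotations.append(Annotation("lidar", "cuboid", object_id=7))
# ...and a 2D bounding box in each of the 4 camera images, same ID.
for cam in ["camera_front", "camera_left", "camera_right", "camera_back"]:
    frame.annotations.append(Annotation(cam, "bbox", object_id=7))

# The cuboid and the 4 bounding boxes all resolve to one object.
assert len(frame.annotations_for(7)) == 5
```

Because every sensor's annotation carries the same `object_id`, downstream consumers can recover the full multi-sensor view of one physical object with a single lookup.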

To set up a multi-sensor dataset, check out our docs on the required data format.

Image sequence segmentation

You could already label single images with bitmap segmentation labels, and we've now also launched our labeling interface for segmenting image sequences. This interface is still in beta, so please reach out to us if you encounter any bugs or performance issues. More updates are coming soon.

A look behind the scenes

Curious how we built our synced camera feature with support for fisheye distortions? Check out our blog post on Simulating Real Cameras using Three.js for a technical look behind the scenes.

Other features and improvements

  • Point clouds now load a bit faster thanks to a performance optimization

  • You can now add sample-specific labeling instructions

  • Sequence datasets now have a button to split an existing track into two new tracks

  • In the point cloud interface, gradient coloring is now enabled by default

  • In the point cloud interface, the camera can now also be rotated when in bird’s-eye view mode

  • It’s now easier to select a specific page on the samples tab and to select a specific frame in a sequence dataset

  • If you have many datasets, the list of datasets should load faster now

  • The latest version of our Python SDK contains numerous improvements and bug fixes. We also wrote a blog post which takes a closer look at our Python SDK.

See all changes →