Get consistent labels across all your sensors

  • Label multiple sensors simultaneously
  • Get accurate IDs across modalities and time
  • Save time on quality control and edits
Get your free trial
Book a demo

The best tool that I have worked with so far


Victor T. – G2 Review

Trusted by computer vision engineers across robotics and AV companies

Spend less time on quality control and corrections with multi-sensor labeling

Spend less time on labeling and more time building, running, and optimizing your ML algorithms. Segments.ai’s multi-sensor labeling platform lets you combine your 3D point cloud data and 2D image data in the same task, so you can get the clearest picture possible.

Upload your 3D data, then add 2D image data to make it easier to annotate and categorize your point clouds. Segments.ai then projects those 3D labels into 2D, so you can easily verify that every object is labeled and that the labels match across both datasets.

The result: you’ll get consistent labels across modalities and time, and will have better, more accurate data to feed into your machine learning algorithms.

Get accurate data for your ML models

Keep your labeling process organized with consistent IDs across sensors, modalities, and time.

With Segments.ai, you can make sure that object annotations have the same IDs, whether they’re in a 3D point cloud or a 2D bounding box.

You’ll speed up your labeling process, reduce reconciliation time, and improve model accuracy, all at once.

Key features

  • Get a consistent track ID for an object over multiple frames and across multiple sensors
  • Track temporarily occluded objects and merge split IDs
  • Constrain annotations to the same dimensions across a sequence

Label efficiently

  • Project 3D labels to 2D with one click
  • Get pre-labeled datasets that only require minor corrections
  • Export the release file to different formats

Save hours with efficient labeling processes

Segments.ai lets you project your 3D labels to 2D automatically.
You’ll get pre-labeled images that only need minor corrections.

Upload your multi-sensor data

Visualize multiple sensors together

Give your labelers context and improve data accuracy. With Segments.ai you can overlay your 2D and 3D sensor data to make it easier for labelers to understand the complete picture.

Upload data from multiple camera feeds and lidar sensors, provide the calibration parameters, and add more contextual understanding in minutes. You’ll be able to quickly see whether an object is a truck or a van, and can more accurately categorize objects in a sparse point cloud.

Improve label accuracy

  • Upload images with camera intrinsics and extrinsics alongside point clouds
  • View synced camera images on top of 3D point clouds to improve accuracy
  • Recognize objects in the point cloud quickly and with more context
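As a concrete illustration, here is a minimal sketch of uploading one calibrated multi-sensor sample with the segments-ai Python SDK. The API key, dataset name, URLs, and calibration numbers are placeholders, and the exact attribute keys for calibrated camera images should be checked against the Segments.ai documentation.

# A minimal sketch, assuming the segments-ai Python SDK; dataset name, URLs,
# calibration numbers, and API key are placeholders, and the attribute keys
# for calibrated images should be checked against the Segments.ai docs.
from segments import SegmentsClient

client = SegmentsClient("YOUR_API_KEY")

attributes = {
    # The lidar frame for this sample.
    "pcd": {"url": "https://example.com/frame_0001.pcd", "type": "pcd"},
    # One calibrated camera image; repeat for each camera feed.
    "images": [
        {
            "url": "https://example.com/cam_front_0001.jpg",
            # Intrinsics: focal lengths and principal point of the camera.
            "intrinsics": {
                "intrinsic_matrix": [
                    [1266.4, 0.0, 816.3],
                    [0.0, 1266.4, 491.5],
                    [0.0, 0.0, 1.0],
                ]
            },
            # Extrinsics: camera pose relative to the lidar/ego frame.
            "extrinsics": {
                "translation": {"x": 1.70, "y": 0.02, "z": 1.51},
                "rotation": {"qx": -0.5, "qy": 0.5, "qz": -0.5, "qw": 0.5},
            },
        }
    ],
}

client.add_sample("your-org/your-dataset", "frame_0001", attributes)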

It’s a great platform for us to get our data streams labeled. It is built in a transparent way such that we can easily track how the labeling is going.

Willem Van de Mierop

Founder
AYES

One platform for 2D image and 3D point cloud data


Image labeling interface

Make labeling images a breeze and create pixel-perfect annotations with ML-assisted tools like Superpixel 2.0 and Autosegment.

3D point cloud labeling interface

Speed up point cloud labeling with ML-assisted features built by machine learning experts for machine learning teams.


Multi-sensor data fusion

Overlay 2D images with 3D point clouds to speed up the labeling process. Segments.ai will automatically project your 3D cuboids to 2D bounding boxes at the push of a button.

Get your 14-day free trial started

FAQ

What is sensor fusion?

Sensor fusion is the process of combining data from multiple sensors to improve the accuracy and reliability of the information.

By combining information from different types of sensors (such as lidar, radar, and cameras) your system can create a more complete picture of the environment. This technology is used in various applications, including autonomous vehicles, robotics, and intelligent home systems.

Sensor fusion allows these systems to make better decisions and adapt quickly to environmental changes.

Additionally, sensor fusion can reduce the impact of errors or failures in any individual sensor, helping the system continue to function effectively.
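As a toy illustration of that idea (not Segments.ai code), here is how two noisy range readings can be fused with inverse-variance weighting, so the less reliable sensor contributes less to the final estimate:

# Toy illustration: fuse two noisy range readings by weighting each one
# with the inverse of its variance (more reliable sensor counts for more).
def fuse(measurement_a, var_a, measurement_b, var_b):
    """Return the variance-weighted estimate and its (smaller) variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * measurement_a + w_b * measurement_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# A precise lidar range (12.3 m) and a noisier radar range (12.9 m) of the same object:
estimate, variance = fuse(12.3, 0.01, 12.9, 0.25)
print(estimate, variance)  # ~12.32 m, with lower variance than either sensor alone

Real systems use far richer models (Kalman filters, for example), but the principle is the same: redundant, complementary measurements blunt the effect of any single bad reading.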

What types of sensors are used in sensor fusion?

Sensors are devices that detect and respond to physical stimuli, such as light, heat, and sound. Different sensors use different technologies to perform their functions. Using different sensors and sensor technologies is becoming increasingly important in fields such as autonomous vehicles, robotics, and industrial automation.

For example, lidar and radar sensors use lasers and radio waves to sense their environments, while ultrasonic sensors use sound waves. Sensor fusion combines data from different sensors to create a complete picture of the environment.

One of the most common applications of sensor fusion is the creation of a 3D point cloud, a detailed model of a physical space that includes information about the location and orientation of objects within it.

Lidar (Light Detection and Ranging)
High accuracy, long-range, and fast data acquisition. Lidar is ideal for mapping and obstacle avoidance in autonomous vehicles and robots.

Radar (Radio Detection and Ranging)
Radar sensors can detect objects through various weather conditions and at long distances, making them ideal for applications such as collision avoidance and autonomous driving.

Sonar (Sound Navigation and Ranging)
Sonar sensors emit sound waves and measure the time it takes for the waves to bounce back after hitting an object. In robotics and automotive applications, they are used to determine the distance to an object.

Structured Light
Structured light sensors project a known light pattern onto an object and measure how the pattern deforms to reconstruct the object’s 3D shape. High resolution and accuracy make them suitable for 3D scanning and mapping applications.

Time-of-Flight (ToF) cameras
ToF cameras measure distance by timing how long emitted light takes to travel to a surface and back. They are fast and reliable, making them a good choice for gesture recognition, object tracking, and robot navigation applications.

Stereo vision
Stereo vision sensors use two (or more) cameras to simulate human binocular vision. High-precision depth sensing, good spatial resolution, and low cost make them suitable for obstacle avoidance, 3D mapping, and robot navigation applications.

Ultrasonic sensors
Ultrasonic sensors are low-cost, easy to use, and can detect a wide range of materials, making them suitable for applications such as parking assistance and object detection.

How do you label multi-sensor data with Segments.ai?

Segments.ai lets you combine your 2D image data and 3D point cloud data into a single view for effortless labeling. There are different ways to build your workflow, of course, but here is our recommendation:

Step 1: Overlay the data
The first step is to calibrate and align all the sensor views, then overlay all of your data into a single view. This new fused view will give your labeler more context: they’ll be able to tell what groups of points represent, whether that’s a tree, a trash bin, or a park bench.

Step 2: Label the 3D point clouds first
We advise that you label in 3D before projecting to 2D for a few reasons, even though it may seem counterintuitive. Labeling in 3D first often proves to be significantly more efficient, even if your primary interest is in obtaining 2D labels.

Imagine you are driving past a stationary object like a traffic sign. If your vehicle is equipped with multiple cameras, the sign remains visible in three of them as you pass by, spanning approximately 100 frames.

Annotating this scenario using 2D bounding boxes would require you to label a total of 300 instances. However, in the 3D space, annotating this static, non-moving traffic sign would merely involve a single cuboid annotation.

While labeling a 3D cuboid may take roughly three times as long as drawing a single 2D bounding box, one cuboid replaces those 300 boxes, making it on the order of 100 times more efficient than labeling directly in the images.

Step 3: Project to 2D images
Once you’ve labeled your 3D point clouds, project the labels onto your calibrated 2D image data. Segments.ai will automatically copy the object IDs over to the 2D data, saving you hours of drawing bounding boxes. You’ll only need to make minor adjustments during your quality control check.
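For intuition, here is a minimal sketch of the geometry behind that projection (illustrative, not Segments.ai’s implementation): the eight corners of a cuboid in the lidar frame are transformed into the camera frame with the extrinsics, projected with the intrinsics, and enclosed in a 2D box.

# Minimal sketch of 3D-to-2D projection using camera intrinsics/extrinsics.
import numpy as np

def cuboid_to_bbox(corners_lidar, R, t, K):
    """corners_lidar: (8, 3) cuboid corners in the lidar frame.
    R (3x3), t (3,): extrinsics mapping lidar coordinates to the camera frame.
    K (3x3): camera intrinsic matrix.
    Returns (x_min, y_min, x_max, y_max) in pixel coordinates."""
    corners_cam = corners_lidar @ R.T + t        # lidar frame -> camera frame
    assert np.all(corners_cam[:, 2] > 0), "cuboid must be in front of the camera"
    pixels_h = corners_cam @ K.T                 # apply intrinsics (homogeneous coords)
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]  # perspective divide by depth
    x_min, y_min = pixels.min(axis=0)
    x_max, y_max = pixels.max(axis=0)
    return x_min, y_min, x_max, y_max

A production implementation also has to handle partially visible cuboids, clipping against the image borders, and objects behind the camera; the sketch only covers the fully visible case.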

What is the difference between late fusion and early fusion?

The transition from a late fusion approach to an early fusion approach is becoming increasingly prevalent among robotics and autonomous vehicle (AV) companies in their machine learning (ML) models. But what exactly does this shift entail?

Late fusion involves the utilization of separate ML models for each sensor employed, which generates individual outputs. These outputs are then merged or fused in some manner to create a coherent 3D representation of the scene. Essentially, it involves running multiple models independently and combining their results afterward.

On the other hand, early fusion takes a more contemporary approach. Instead of using separate models for each sensor, all sensor data is fed into a single ML model. This unified model is designed to directly make predictions within the 3D space. This method has gained traction and is employed by companies like Tesla.

For early fusion to be effective, voxel grids prove to be an advantageous representation of the scene. Voxel grids exhibit a regular structure, making them well-suited for this approach. They can be conceptualized as tensors, allowing for end-to-end prediction. In other words, the entire process, from input to output, can be predicted using a single model.
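To make the voxel-grid idea concrete, here is a schematic sketch (with an assumed scene volume and voxel size, unrelated to any particular model) of turning lidar points into a dense occupancy tensor that a single early-fusion model could consume:

# Schematic sketch: turn an (N, 3) point cloud into a dense occupancy grid.
import numpy as np

def voxelize(points, origin=(-50.0, -50.0, -3.0), voxel_size=0.5, grid_shape=(200, 200, 16)):
    """points: (N, 3) array of x, y, z coordinates in the ego frame.
    Returns a grid_shape occupancy tensor with 1.0 in occupied voxels."""
    grid = np.zeros(grid_shape, dtype=np.float32)
    indices = np.floor((points - np.asarray(origin)) / voxel_size).astype(int)
    inside = np.all((indices >= 0) & (indices < np.asarray(grid_shape)), axis=1)
    grid[tuple(indices[inside].T)] = 1.0         # mark voxels that contain points
    return grid

# An early-fusion model would take this tensor (optionally augmented with
# per-voxel camera features) and predict 3D boxes directly, end to end.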


Labeling services

Easily manage your labeling workflow, whether you label in-house or with an external workforce.

Need help figuring out where to start? We have experience with many labeling partners and will happily connect you with the best partner for your needs.

Reach out for a quick chat

A few of the labeling agencies we work with

Simplify multi-sensor labeling

Say hello to faster labeling, better quality data, and more time for your engineering team.
You’ve got access to a 14-day free trial to test it for yourself.

Get started on your 14-day free trial