Better 3D sequence labeling, deep linking, and more

February 14th, 2024 - 2 min read

We’ve been hard at work in the first weeks of the new year to deliver you another round of exciting product updates. Let’s dive right in.

Better 3D sequence labeling

When labeling 3D sequences, frequently switching frames can slow you down. Our batch mode feature already helps you label moving objects faster by providing a scrollable close-up view of the object across frames. But in the regular view, too, it can be useful to see all annotations of the current object track at a glance.

That is exactly what our new feature does: by toggling the “Show all cuboids in active track” option in the visualization settings, you can see all cuboid annotations for the selected track. Clicking a cuboid that belongs to a different frame jumps you immediately to that frame. We also added an option to show a trajectory line for the track.

And last but not least, we’ve made it easier to keep track of your tracks (yes). Object tracks for which no annotation exists in the active frame are now still visible in the sidebar, grouped under a separate, collapsible section. This makes it easier to label objects that temporarily get occluded but later re-appear in the scene. This feature is also available in the 2D sequence interfaces.

Read the docs

Deep linking

Need to point your teammates to a specific frame and object within a sequence?

By appending query parameters such as ?frame=3&track=4 to a sample’s URL, you can now deep-link to a specific frame and track within that sample. The URL also updates automatically when you switch frames or select a different object, so you can easily copy-paste it.

When working in a multi-sensor dataset, the URL additionally includes a reference to the sensor you’re labeling.
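As a rough sketch, such links can be assembled and parsed in a script. The base URL below is a placeholder, and only the frame, track, and sensor parameter names come from the description above; everything else is illustrative:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_deep_link(sample_url, frame, track, sensor=None):
    """Append deep-link query parameters (frame, track, and optionally
    sensor for multi-sensor datasets) to a sample URL."""
    params = {"frame": frame, "track": track}
    if sensor is not None:
        params["sensor"] = sensor
    return f"{sample_url}?{urlencode(params)}"

# Placeholder sample URL, for illustration only:
link = build_deep_link("https://example.com/samples/abc", frame=3, track=4)
print(link)  # https://example.com/samples/abc?frame=3&track=4

# Reading the parameters back out of a shared link:
query = parse_qs(urlparse(link).query)
frame = int(query["frame"][0])
```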

Other features and improvements

  • In the dataset settings, you can now choose whether a scene attribute is sequence-level (constant across the sequence) or frame-level (can vary from frame to frame)
  • New visualization options in the 3D cuboid interface: layout presets, adjustable cuboid opacity, and a follow mode that keeps the selected object centered as you move through the sequence
  • In the 3D interfaces, you can now toggle the camera image viewers with the number hotkeys, and zoom out further in the image viewers
  • Improved visualization options in the 2D interfaces, including a new contrast slider and histogram normalization
  • You can now invert the color of the active points in the 3D segmentation interface
  • In sequence interfaces, move 5 frames at a time using Shift + left/right arrows
  • On the Samples tab, you can now select the number of samples to display per page
  • The latest version of our Python SDK contains numerous improvements and bug fixes

Curated reads for you

Case study: Data labeling for autonomous rail transportation with OTIV

To identify and predict objects that can suddenly cross the rail, OTIV needs pixel-perfect annotations.

With multi-sensor labeling, they improve the performance of the ML models they use to detect persons and vehicles on and off the tracks.

Read the OTIV case study

Building your internal data labeling software

Are you considering building internal data labeling software? Jeroen Claessens built an in-house data labeling platform at SkyeBase / I-Spect. Read about the challenges and insights from someone who’s been there.

“We mainly built it in-house to have the flexibility of tailoring the tool to our needs.”

