iDrogue is a new project led by Ocius. The goal? Capture and release AUVs at a calm depth. But finding objects underwater is easier said than done. The robot needs to reliably identify objects and obtain each object's position in all 6 degrees of freedom (x, y, z, roll, pitch, yaw).
To reliably identify an underwater object's 3D location, the iDrogue combines data from an RGB camera, a 3D point cloud, and sonar. An algorithm then fuses this data into a final detection result.
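The article doesn't describe Ocius's fusion algorithm, but one common pattern for combining per-sensor detections is late fusion: each sensor produces its own position estimate with a confidence weight, and the estimates are averaged. The sensor names, positions, and weights below are purely illustrative, not Ocius data:

```python
import numpy as np

# Hypothetical late-fusion sketch: each sensor contributes a position
# estimate (x, y, z in metres) plus a confidence weight. The fused
# position is the confidence-weighted average of the estimates.
detections = {
    "camera":      (np.array([2.1, 0.4, -5.0]), 0.6),
    "point_cloud": (np.array([2.3, 0.5, -4.8]), 0.3),
    "sonar":       (np.array([1.9, 0.3, -5.2]), 0.1),
}

def fuse_positions(detections):
    """Confidence-weighted average of per-sensor position estimates."""
    positions = np.stack([pos for pos, _ in detections.values()])
    weights = np.array([w for _, w in detections.values()])
    return (weights[:, None] * positions).sum(axis=0) / weights.sum()

fused = fuse_positions(detections)  # one 3D position from three sensors
```

A real system would likely weight each sensor by its estimated noise (e.g. a Kalman filter update) rather than fixed confidences, but the principle of down-weighting the noisier sensor is the same.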
But no algorithm is foolproof, and ocean conditions meant the point cloud data was low-resolution and riddled with noise. To evaluate and performance-test these algorithms, Ocius needed a way to compare the output of the automated detections against a ground truth. That's when they started looking into 3D labeling.
Segments.ai’s multi-sensor platform lets computer vision engineers upload all of the sensors’ data, with the correct calibration, in a single interface.
A vital requirement when choosing a platform was the ability to view fused 2D image and 3D point cloud data simultaneously.
The noise and poor definition of the point clouds meant that labeling was done primarily from the camera image, which was only possible thanks to the perspective overlay of the 2D and 3D data.
Data visualization is essential for evaluating multi-sensor systems. Raw data alone is often insufficient, as some outputs are not human-interpretable. Even in a complex, multi-part robotics system, it is crucial to identify meaningful data attributes early on and surface them through logs, graphs, or visual tools.
Ocius needed a 3D labeling platform that did more than 3D. Because the point clouds were noisy and poorly defined, they needed a tool that could overlay 2D image data on the 3D data from the same perspective.
With Segments.ai, Ocius generates datasets of all their combined sensor data with the correct camera calibration. They then overlay the images on the point cloud, which makes it easier for the team to manually label the ground truth of their target in all 6 degrees of freedom.
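The perspective overlay described above relies on standard pinhole camera geometry: with the calibration known, each 3D point can be projected into pixel coordinates and drawn on the image. A minimal sketch, assuming an illustrative intrinsic matrix `K` and identity extrinsics (not Ocius's actual calibration):

```python
import numpy as np

# Minimal sketch of the 2D/3D perspective overlay using a calibrated
# pinhole camera model. K and T_cam_from_lidar are illustrative values.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])   # intrinsics: focal lengths, principal point
T_cam_from_lidar = np.eye(4)            # extrinsics: sensor-to-camera transform

def project_points(points_3d, K, T):
    """Project Nx3 points (sensor frame) into pixel coordinates."""
    n = points_3d.shape[0]
    homo = np.hstack([points_3d, np.ones((n, 1))])  # Nx4 homogeneous
    cam = (T @ homo.T).T[:, :3]                     # points in camera frame
    cam = cam[cam[:, 2] > 0]                        # keep points in front of camera
    px = (K @ cam.T).T
    return px[:, :2] / px[:, 2:3]                   # perspective divide

points = np.array([[0.0, 0.0, 5.0],
                   [1.0, -0.5, 10.0]])
pixels = project_points(points, K, T_cam_from_lidar)
```

Once each point cloud point has a pixel coordinate, a labeler can work "off the camera image" and still place a 3D label, because the correspondence between pixels and 3D points is known.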
After labeling, the results are exported and parsed to extract both the ground truth labels and the output of the system’s automated detections, so the algorithm’s accuracy can be measured. With these data visualizations in hand, it is easier to debug and evaluate the multi-sensor system — and to continue the important work of making AUVs more reliable and easier to use.
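The comparison step boils down to measuring how far each automated 6-DoF detection is from its labeled ground truth. A simple hedged sketch, with hypothetical pose values and a per-axis angular error (a production system might prefer a geodesic rotation distance):

```python
import numpy as np

# Compare an automated detection against a labeled ground truth.
# Poses are (x, y, z, roll, pitch, yaw); all values are illustrative.
def pose_errors(ground_truth, detection):
    gt = np.asarray(ground_truth, dtype=float)
    det = np.asarray(detection, dtype=float)
    trans_err = np.linalg.norm(gt[:3] - det[:3])   # metres
    ang = gt[3:] - det[3:]
    ang = (ang + np.pi) % (2 * np.pi) - np.pi      # wrap to [-pi, pi)
    return trans_err, np.abs(ang)                  # radians per axis

gt  = [2.0, 0.5, -5.0, 0.00, 0.10,  3.1]
det = [2.1, 0.4, -5.2, 0.02, 0.05, -3.1]
trans_err, rot_err = pose_errors(gt, det)
# Note the wrap-around: yaws of 3.1 and -3.1 rad differ by ~0.08 rad,
# not 6.2 rad, because angles are compared on the circle.
```

Aggregating these errors over a labeled dataset (mean, percentiles, failure counts) gives the accuracy numbers needed to evaluate and debug the detection pipeline.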