Building your internal data labeling software

February 8th, 2024

Annotation or labeling tooling has multiple use cases: think of labeling datasets for your ML pipeline, or highlighting objects in an image to display on your platform.

Your platform might need such a tool, but since it's not the core of your business, you face a choice: build it yourself or rely on an external platform.

Let me take you through my experience building a labeling tool in-house.

Disclaimer: I joined Segments.ai, a multi-sensor data labeling platform, so you might assume this post will be biased. Before joining Segments.ai, however, I was responsible for developing the internal labeling tool at a previous company. I will strive to remain impartial and simply share my experience creating an in-house tool and the ups and downs we went through.

Let's start off with some context on why I think I'm in a good position to take you through these learnings.

SkyeBase, where I worked before Segments.ai, is a drone inspection company that aims to digitalize inspections of different types of large assets: think ship-to-shore cranes, bridges, storage tanks, and piping. With their SaaS platform I-Spect, they aim to build a centralized, intelligent multi-asset inspection platform.

One key part of I-Spect is the visualization and reporting of the footage captured during a drone inspection. The labeling tool we wanted to build was necessary to easily highlight points of interest on a part of an asset, whether assisted by AI or not.

The reason we needed something custom was that we wanted to keep track of where you are on the overview image of an asset. The easiest way to do that was to export our annotations as clickable SVGs, something which didn't seem to exist out of the box.

The building blocks of our data labeling platform

Our first building block was the Canvas API. You get all the good stuff directly in your browser: native HTML and JavaScript offer excellent support for drawing 2D figures.

With the addition of the hardware-accelerated WebGL API, it is also possible to build 3D scenes. These APIs are very powerful, but they are still relatively low-level, so much of the heavy lifting falls on the developer.

Luckily, some libraries reduce the complexity of these tasks. For 2D, we considered Fabric.js, Paper.js, and Konva. These tools make it very easy to draw shapes on the screen with their higher-level APIs.
We ultimately picked Fabric.js because of its ability to export to different formats, such as JSON and SVG, which gave us the flexibility we needed. You quickly notice that these libraries are very good at drawing hardcoded, fixed shapes programmatically.
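To make the clickable-SVG idea concrete, here is a minimal sketch of what such an export entails. This is not Fabric.js's own `toSVG()` output; the `{id, points, href}` annotation shape and the function name are hypothetical, chosen for illustration.

```javascript
// Build a minimal clickable SVG from a list of polygon annotations.
// Wrapping each polygon in an <a> element is what makes it clickable.
function annotationsToSvg(annotations, width, height) {
  const polygons = annotations
    .map(({ id, points, href }) => {
      const pts = points.map(([x, y]) => `${x},${y}`).join(" ");
      return `  <a href="${href}"><polygon id="${id}" points="${pts}" /></a>`;
    })
    .join("\n");
  return `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 ${width} ${height}">\n${polygons}\n</svg>`;
}

const svg = annotationsToSvg(
  [{ id: "crack-1", points: [[10, 10], [40, 10], [25, 35]], href: "#crack-1" }],
  100, 100
);
```

Because the output is plain SVG markup, it can be embedded in an overview page where clicking a highlighted region navigates to the corresponding detail view.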

Next up: user input such as hotkeys and mouse events. Just drawing shapes programmatically isn't very useful; we wanted interactions so the user can create shapes and edit them by moving them around. Zooming in/out and undo/redo are also features you really want to add, because we're so used to having them available in every tool we use.
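Undo/redo is a good example of something every user expects but no drawing library hands you for free. A minimal sketch of the usual approach, a pair of stacks holding state snapshots (real tools often store diffs instead of full snapshots to save memory):

```javascript
// Minimal undo/redo history for annotation edits.
class History {
  constructor(initialState) {
    this.past = [];
    this.present = initialState;
    this.future = [];
  }
  commit(nextState) {
    this.past.push(this.present);
    this.present = nextState;
    this.future = []; // a new edit invalidates the redo stack
  }
  undo() {
    if (this.past.length === 0) return this.present;
    this.future.push(this.present);
    this.present = this.past.pop();
    return this.present;
  }
  redo() {
    if (this.future.length === 0) return this.present;
    this.past.push(this.present);
    this.present = this.future.pop();
    return this.present;
  }
}

const h = new History([]);
h.commit([{ type: "rect", x: 5, y: 5 }]);
h.commit([{ type: "rect", x: 5, y: 5 }, { type: "polygon" }]);
h.undo(); // back to a single shape
```

Wiring this up to hotkeys (Ctrl+Z / Ctrl+Shift+Z) and to every mutation path in the tool is exactly the kind of glue work that quietly eats development time.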

Most libraries supply building blocks for that, but it still comes down to you, the developer, to glue them together. Even with these libraries, it is an extra challenge to keep the UX at the level of the tools we're used to, like Paint, Adobe Illustrator, GIMP, and so on.
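As one example of that glue: zooming toward the cursor, so the content under the pointer stays put, is a small piece of math you end up deriving yourself. A sketch, assuming a view transform of the form `screen = world * scale + t` (the `zoomAt` helper is hypothetical, not a library API):

```javascript
// Zoom a 2D view around a fixed screen point (the cursor).
// The view {scale, tx, ty} maps world coords to screen coords.
function zoomAt(view, cursorX, cursorY, factor) {
  return {
    scale: view.scale * factor,
    // Solve for the new translation so the world point under the cursor
    // maps to the same screen position before and after the zoom.
    tx: cursorX - (cursorX - view.tx) * factor,
    ty: cursorY - (cursorY - view.ty) * factor,
  };
}

let view = { scale: 1, tx: 0, ty: 0 };
view = zoomAt(view, 100, 100, 2); // zoom in 2x around screen point (100, 100)
```

The invariant to check: the world point that sat under the cursor before the zoom still projects to the cursor position afterward.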

One major setback in my building process was the limited support for making the tool responsive to different screen sizes while keeping the shapes you had already drawn correctly scaled. And that's just the 2D part.
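The core of that scaling problem can be sketched in a few lines. This is an illustration, not our actual implementation; storing coordinates in image space rather than screen space sidesteps the issue, but if you store screen-space coordinates, a conversion like this has to run on every resize:

```javascript
// Rescale annotation coordinates when the canvas changes size, so shapes
// keep their position relative to the underlying image.
function rescalePoints(points, oldSize, newSize) {
  const sx = newSize.width / oldSize.width;
  const sy = newSize.height / oldSize.height;
  return points.map(([x, y]) => [x * sx, y * sy]);
}

const scaled = rescalePoints(
  [[100, 50], [200, 150]],
  { width: 800, height: 600 },
  { width: 400, height: 300 }
);
// scaled is [[50, 25], [100, 75]]
```

The subtle bugs appear when the aspect ratio changes, or when stroke widths and handle sizes must stay constant on screen while the shapes themselves scale.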

3D (literally) adds an extra dimension to the 2D drawing part in the form of rotation (ignoring cameras, lighting, materials, …). We looked into existing tools like Potree, which is built on top of the well-known 3D library Three.js. Loading large point clouds seemed very challenging at the time, so we postponed looking into it further to avoid spending too much time away from our core.

For companies like Segments.ai it makes sense to go the extra mile and really optimize the performance of large point clouds with techniques like tiling and streaming. But when there is a whole roadmap of other important work, such optimizations are not the highest priority when they're not your main focus.

Why did we initially build our platform in-house?

One upside of building your tool in-house is that you decide on the UX and UI: the platform aligns entirely with your ideas of how things should work. In my opinion that's one of the biggest upsides of building it yourself, but on the other side it's also a never-ending loop of trying to perfect the UI/UX, which, again, takes important development time away from the core. The main reason we started building in-house was the flexibility of really tailoring the tool to our needs. For a start-up where this is only part of the SaaS product you're building, that might not have been the best decision: we underestimated the effort at the beginning, before looking for integrations that could have helped us sooner.

Especially if you then compare it to tools like Segments.ai, where you can customize hotkeys or use an SDK to fit the platform into your own workflow, you see features that you definitely could and would want to build yourself. But then you're looking a few years further down your roadmap, at which point a competitor may already have those features by buying instead of building.

When your core product takes priority

You initially want basic functionality and only a few fancy UX features, and the libraries above are excellent starting points. But knowing where to stop developing the tool and shift your focus back to the core is challenging.

The more people and roles involved in your platform, the more can-you-quickly-add-this-feature requests pop up. Those tend to conflict with previously built UX and features, or create new bugs. Your backlog quickly accumulates, even though this isn't the core of your product in the first place. And spending time on something that isn't the core takes important development time away from the things that really do matter.

Integration in your workflows

At some point, you start questioning the many repetitive tasks you do every day: surely more automation should be possible within the tool. Especially today, when you can leverage AI to take tedious tasks off your plate. But that again adds extra complexity.

You need extra backend development to get feedback from a model. You can already achieve a lot by using OpenCV directly in the browser, but the OpenCV API is a challenge of its own: it's low-level and has its own quirks. You also want your tool to work within your existing workflow, which means exporting your annotations and storing them somewhere so you can reload them later.

Do you want SVGs, JSON, or something different? In the case of JSON, you need to write the code to re-render your annotations from that JSON. You also need to settle on a data format you can keep using in the future without breaking things, and so on.
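One way to keep a JSON format from breaking things later is to version it and migrate old exports forward on load. A minimal sketch, assuming a hypothetical annotation format (the field names and version numbers are invented for illustration):

```javascript
// A versioned annotation document. Tagging every export with a schema
// version lets you migrate old files instead of breaking them.
const CURRENT_VERSION = 2;

// Hypothetical history: v1 stored flat x/y fields, v2 stores a points
// array. Migrations are applied in order until the document is current.
const migrations = {
  1: (doc) => ({
    version: 2,
    annotations: doc.annotations.map((a) => ({
      label: a.label,
      points: [[a.x, a.y]],
    })),
  }),
};

function loadAnnotations(json) {
  let doc = JSON.parse(json);
  while (doc.version < CURRENT_VERSION) {
    doc = migrations[doc.version](doc);
  }
  return doc;
}

const doc = loadAnnotations(
  '{"version": 1, "annotations": [{"label": "crack", "x": 10, "y": 20}]}'
);
```

The migration chain means old exports keep loading as the format evolves, at the cost of carrying every historical migration in your codebase.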

Maintenance and the standstill

As you may have noticed by now, building a basic drawing tool isn't super complicated, but there are two major pitfalls. For one, it's hard to keep the scope small and not give in to requests for fancy features. And second, as with all software engineering, building it correctly takes a lot of time, and that's where the real challenge lies.

Relying heavily on a third-party library for your basic drawing helps you gain initial speed, but it also carries the risk of vendor lock-in. If you didn't abstract your code enough and a library ceases to exist – and we all know the volatility of libraries that come and go, especially in the JavaScript and web ecosystem – you may have to start all over.

Something else I underestimated: whenever you return to developing your core product, your labeling tool stands still in time, and vice versa. While developing the labeling tool, you subscribed to newsletters and threads for inspiration and insights, and they keep informing you about the latest features that would also be useful for your labeling or visualization tasks. Those are all extras you could have had by integrating instead of building yourself, which can be a bit frustrating at times.

Built in-house or bought externally, which way did we go in the end?

If it's just for labeling a few thousand 2D images from a single sensor, you're probably right to build it yourself, or to integrate an open-source platform if one has what you need. The cost of the few days it takes to develop outweighs buying a tool with loads of fancy features you don't need.

But if you're labeling multiple sensors, need to involve outside labeling teams, or need not thousands but tens of thousands of labels… we eventually concluded that licensing a tool costs less, both in the time spent keeping up with minimal market requirements and in the time lost developing an edge with your own product.

So yes, the previous company I worked for ultimately decided to stop the project of building an in-house labeling tool, leave it at a very basic version, and start looking at integrations that can speed up and improve that part of the SaaS platform without taking too much time away from building a stable I-Spect core. I, however, developed an interest in software that involves visual data like images, videos, and 3D, hence I'm very happy to have joined Segments.ai. Big thanks to the folks at SkyeBase for the fantastic journey, and wishing them a bright future!

Let me know if you’re still considering building your in-house labeling platform. I can share a few words of advice from my journey that are too specific for the above post.