Labelbox Unveils New Features in Collaboration With Databricks

Labelbox is a training data platform built with three core layers that let you orchestrate the entire process from labeling and collaboration to iteration.

San Francisco, CA, United States, 05/22/2021 / SubmitMyPR /

Labelbox, a training data platform that allows companies to quickly produce structured data from unstructured data, announces new features that let teams prepare unstructured data for AI and analytics in Databricks. Combining Databricks and Labelbox gives users an end-to-end environment for unstructured data workflows – a query engine built around Delta Lake, fast annotation tools, and a powerful machine learning compute environment.

Data teams commonly use Apache Spark to analyze structured data but often struggle to apply the same analysis to unstructured, unlabeled data (e.g., images, video). Teams are forced to use expensive, manual processes to transform unstructured data into something more useful – they either pay a third party to label their data, buy a labeled dataset, or narrow the scope of their project to leverage public datasets.

With faster and more cost-effective ways to convert unstructured data into structured data, teams can now support more advanced use-cases built around their companies' unique, unstructured datasets.

AI is no longer new, and early adopters will soon lose their first-mover advantage. As more and more companies remake themselves with machine-learning magic, competition will boil down to who can iterate faster in their model development. And that depends on quicker management of data. Labelbox's new features enable this faster iteration.

Unstructured data is everywhere, but we have to collect it and process it into a form that can be consumed by algorithms. Unlocking the power of that data requires a new class of tools, technologies, and workflows.

The core focus for most organizations building AI systems today is how to take the proprietary information they have created through applications or services and turn it into labeled data that can be fed into neural nets, which in turn can make decisions at scale. That's why data prep has become one of the most important aspects of any AI company or any AI team.

But converting raw information into algorithmically consumable data is incredibly hard because it usually requires human intervention. Large teams of people are needed to hand-label data, drawing bounding boxes, lines, or points. As with any human endeavor at scale, ensuring quality and consistency is a challenge.
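To make the hand-labeling output concrete, here is a minimal sketch of what a single bounding-box annotation record might look like. The field names and pixel-coordinate convention (origin at the image's top-left) are illustrative assumptions, not a Labelbox schema.

```python
# Hedged sketch: a minimal bounding-box annotation record.
# Field names and coordinate convention are illustrative assumptions,
# not the Labelbox data model.
from dataclasses import dataclass

@dataclass
class BoundingBox:
    label: str      # class name assigned by the labeler, e.g. "car"
    x: float        # left edge, in pixels from the image's left
    y: float        # top edge, in pixels from the image's top
    width: float
    height: float

    def area(self) -> float:
        # Useful for sanity checks, e.g. rejecting degenerate boxes
        return self.width * self.height

box = BoundingBox(label="car", x=10, y=20, width=100, height=50)
```

A real annotation typically also carries the item it belongs to and the labeler's identity, which is what makes quality and consistency auditable at scale.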

The venture capital firm Andreessen Horowitz looked at how much time is spent on the various functions involved in completing a machine learning project. Labeling was the single most time-consuming of all the activities having to do with machine learning.

The world will be very, very different in five to ten years when most of the companies we interact with will be using some sort of an AI system. Marketing departments are already using AI to better target how products get to consumers. Sales departments are using AI to improve forecasting, and C-suites are using AI in corporate planning. Soon, engineering, finance, legal, and HR departments will all have elements of AI built into their systems to improve efficiency, effectiveness, and productivity.

Every software product is going to integrate AI components, and traditional software companies will become AI companies. In fact, every company is eventually going to be an AI company.

As these AI companies scale their products and services, the need for labeling data continues to rise proportionally. The emerging challenge is how to scale and automate the expertise that humans are providing right now.

There is a class of problems that can be solved by AI systems using commodity data, data that anybody can capture. Self-driving cars, for example, depend primarily on commodity data: anyone can put a camera on a car, drive around, and collect data. The differentiator is simply scale, so these problems are going to be solved by big companies like Google or Amazon.

The class of AI problems that most organizations ought to focus on is those that create a competitive advantage by combining proprietary information with proprietary interpretations of that information to build AI models. Neither Google nor Amazon can take that away from smaller organizations; that's what will differentiate companies in the emerging AI-powered market.

The cornerstone of success in software has been rapid iteration, and if you look at software development over the decades, we have built tools and workflows that enable incredibly fast iteration. Building AI systems is no different; it, too, is highly iterative.

Build, test, iterate, and test again. The goal is to tighten that feedback loop. The faster we run this feedback loop and the faster we can iterate around sets of data, the greater efficiency and effectiveness we're going to have. To the extent that we can label more quickly, iterate more quickly, and reduce the complexity of massive amounts of unstructured data, we are going to increase our competitiveness.

Automation is key. There is no need to label all of the data manually anymore. The bulk of cases can be addressed fairly easily with automation. But there will often be a long tail of edge cases that must be addressed by humans.
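The split described above – automation handles the bulk, humans handle the long tail – is often implemented by routing on model confidence. Here is a hedged sketch of that routing; the threshold value and record shape are illustrative assumptions, not a Labelbox API.

```python
# Hedged sketch: route model pre-labels by confidence. High-confidence
# predictions are accepted automatically; the uncertain long tail goes
# to a human review queue. Threshold and data shapes are illustrative.

def route_predictions(predictions, threshold=0.9):
    """Split pre-labels into auto-accepted and human-review queues.

    predictions: list of (item_id, label, confidence) tuples,
    with confidence in [0, 1].
    """
    auto_accepted, needs_review = [], []
    for item_id, label, confidence in predictions:
        if confidence >= threshold:
            auto_accepted.append((item_id, label))
        else:
            needs_review.append((item_id, label))
    return auto_accepted, needs_review

preds = [("img1", "cat", 0.98), ("img2", "dog", 0.62), ("img3", "cat", 0.91)]
auto, review = route_predictions(preds)
```

In practice the threshold is tuned per class: raising it shrinks the automated bulk but reduces the error rate that slips past human review.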

Automating inspection on an assembly line, where most of the parts look and feel the same, is a narrow problem. The environment is controlled, the lighting never changes, and the calibration of the sensors is fixed. Self-driving cars, on the other hand, are operating in the wild where edge cases abound. Building an AI system for the latter case is far more challenging.

The only way to cover the long tail of edge cases is to iterate at a very high speed. You need a system that can draw insights from any poor performance of your model and find the data that will fix the problem. Human intervention should only be used when it delivers the most return on investment.
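One common way to "find the data that will fix the problem" is uncertainty sampling: spend the human labeling budget on the items the model is least sure about, since those are the likeliest failures. The sketch below assumes per-item confidence scores are available; the data shapes are illustrative, not a Labelbox API.

```python
# Hedged sketch of uncertainty sampling: given model confidence per
# unlabeled item, select the least-confident items for the next
# human-labeling round. Shapes and names are illustrative assumptions.

def select_for_labeling(scores, budget):
    """scores: dict mapping item_id -> model confidence in [0, 1].
    budget: how many items the humans can label this round.
    Returns the `budget` item_ids the model is least confident about.
    """
    ranked = sorted(scores, key=scores.get)  # least confident first
    return ranked[:budget]

scores = {"a": 0.99, "b": 0.40, "c": 0.75, "d": 0.55}
batch = select_for_labeling(scores, budget=2)  # the two least-confident items
```

Each round of labeling the selected batch, retraining, and re-scoring is one turn of the iteration loop described above.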

To speed iteration, we need to blend human expertise with AI-driven automation. People are obviously a very important part of the mix in terms of doing the data labeling right now, but the future will depend largely on automated labeling.

The key is the infrastructure that enables this faster iteration. We are starting to see this merging of human and AI decision-making in platforms whose workflows collect, process, and manage massive amounts of unstructured data.

Labelbox customers can already integrate their AI models into training data development, using model predictions as pre-labels that a human can review.
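The pre-label-then-review pattern can be sketched as a simple merge: start from the model's pre-labels, then overlay the reviewer's corrections wherever they disagreed. The function and field names are illustrative assumptions, not the Labelbox SDK.

```python
# Hedged sketch of model-assisted labeling: model pre-labels are the
# starting point, and human corrections win wherever a reviewer
# disagreed. Names are illustrative, not the Labelbox SDK.

def apply_review(pre_labels, corrections):
    """pre_labels: dict item_id -> model-predicted label.
    corrections: dict item_id -> reviewer's corrected label.
    Returns the final label set, with human judgment taking precedence.
    """
    final = dict(pre_labels)   # copy so pre-labels stay untouched
    final.update(corrections)  # human corrections override the model
    return final

pre = {"img1": "car", "img2": "truck", "img3": "car"}
fixes = {"img2": "bus"}
final = apply_review(pre, fixes)
```

Because reviewers only touch the items the model got wrong, the human cost per finished label drops as the model improves.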

Just as GitHub is a collaborative platform for software developers, Labelbox is a collaborative training data platform (TDP) that enables labeling teams to annotate quickly while giving data scientists the ability to manage data and workflows and diagnose errors in training data in an integrated way. A TDP will tighten the iteration loop for the increasingly competitive AI age.


Editorial Contact

Flak42

Lonn Johnston

lonn@flak42.com
