For Jae Lee, a data scientist by training, it never made sense that video, which has become a huge part of our lives with the rise of platforms like TikTok, Vimeo and YouTube, remains so hard to search, owing to the technical barriers posed by contextual understanding. Searching by video titles, descriptions and tags has always been easy enough, requiring no more than a basic algorithm. But searching within videos for specific moments and scenes has long been beyond the capabilities of the technology, especially when those moments and scenes aren’t labeled in any obvious way.
To solve this problem, Lee and friends from the tech industry built a cloud service for video search and understanding. It became Twelve Labs, which has raised $17 million in venture capital, $12 million of which came from a seed extension round that closed today. Radical Ventures led the extension, with participation from Index Ventures, WndrCo, Spring Ventures, Weights & Biases CEO Lukas Biewald and others, Lee told Root Devices in an email.
“Twelve Labs’ vision is to help developers build programs that can see, hear and understand the world like we do by giving them the most powerful video understanding infrastructure,” said Lee.
An overview of the Twelve Labs platform’s capabilities. Image Credits: Twelve Labs
Currently in closed beta, Twelve Labs uses artificial intelligence to extract “rich information” from video, such as motion and actions, objects and people, sound, on-screen text and speech, and to identify the relationships among these elements. The platform converts them into mathematical representations called “vectors” and forms “temporal links” between frames, enabling applications such as video scene search.
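Twelve Labs hasn’t published its implementation, but the general pattern behind vector-based scene search is straightforward to sketch. In the illustrative Python below, the frame vectors and the query vector are random stand-ins; a real system would produce them with a multimodal embedding model that maps frames and text queries into a shared vector space.

```python
# Illustrative sketch of vector-based video scene search (not Twelve Labs'
# actual implementation). Frame and query vectors are random stand-ins for
# embeddings from a shared multimodal vector space.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_scenes(query_vec, frame_vecs, fps, top_k=5):
    """Rank frames by similarity to the query; return (timestamp_sec, score)."""
    scored = [(i / fps, cosine_similarity(query_vec, vec))
              for i, vec in enumerate(frame_vecs)]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

rng = np.random.default_rng(0)
frame_vecs = [rng.standard_normal(512) for _ in range(900)]  # one vector per frame
query_vec = rng.standard_normal(512)                         # embedded text query
print(search_scenes(query_vec, frame_vecs, fps=30.0))
```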
“As part of achieving the company’s vision to help developers create intelligent video applications, the Twelve Labs team is building ‘foundational models’ for multimodal video understanding,” Lee said. “Developers will be able to access these models through a set of APIs, performing not only semantic search, but also other tasks such as ‘chaptering’ long-form videos, creating summaries, and video Q&A.”
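The article doesn’t describe the shape of those APIs, so the sketch below is purely hypothetical: the base URL, endpoint paths, auth header and response fields are all assumptions, meant only to show what semantic search and chaptering calls might look like.

```python
# Hypothetical sketch only: the endpoint paths, parameters and response
# fields below are assumptions, not Twelve Labs' actual API.
import requests

API_BASE = "https://api.example.com/v1"  # placeholder base URL
HEADERS = {"x-api-key": "YOUR_API_KEY"}  # placeholder auth header

# Semantic search: find moments that match a natural-language query.
search = requests.post(f"{API_BASE}/search", headers=HEADERS, json={
    "index_id": "my-video-index",
    "query": "goal celebration in the rain",
}).json()
for hit in search.get("results", []):
    print(hit["video_id"], hit["start"], hit["end"], hit["score"])

# Chaptering: split a long-form video into titled segments.
chapters = requests.post(f"{API_BASE}/chapter", headers=HEADERS, json={
    "video_id": "abc123",
}).json()
for ch in chapters.get("chapters", []):
    print(ch["start"], ch["end"], ch["title"])
```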
Google takes a similar approach to video understanding with its MUM AI system, which the company uses to power video recommendations on Google Search and YouTube by picking out topics in videos (e.g., “acrylic painting materials”) based on their audio, text and image content. But while the technology may be comparable, Twelve Labs is one of the first vendors to bring it to market; Google has chosen to keep MUM internal and declined to make it available through a public API.
That said, Google, Microsoft and Amazon all offer services (Google Cloud Video AI, Azure Video Indexer and AWS Rekognition, respectively) that recognize objects, places and actions in videos and extract rich metadata at the frame level. There’s also Reminiz, a French computer vision startup that claims to be able to index any type of video and tag both recorded and live-streamed content. But Lee argues that Twelve Labs is sufficiently differentiated, in part because its platform allows customers to fine-tune the AI to specific categories of video content.
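For a sense of what those incumbent services expose, here is roughly how segment-level label detection works with Google Cloud Video AI, following its published Python client library (the bucket path is a placeholder):

```python
# Segment-level label detection with Google Cloud Video Intelligence
# (per the published Python client; the gs:// path is a placeholder).
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()
operation = client.annotate_video(
    request={
        "input_uri": "gs://my-bucket/my-video.mp4",  # placeholder video
        "features": [videointelligence.Feature.LABEL_DETECTION],
    }
)
result = operation.result(timeout=300)  # blocks until annotation finishes

for label in result.annotation_results[0].segment_label_annotations:
    for segment in label.segments:
        start = segment.segment.start_time_offset.total_seconds()
        end = segment.segment.end_time_offset.total_seconds()
        print(f"{label.entity.description}: {start:.1f}s-{end:.1f}s "
              f"(confidence {segment.confidence:.2f})")
```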

A mockup of the API for fine-tuning the model to work better with salad-related content. Image Credits: Twelve Labs
“We found that narrow AI products built to detect specific problems show high accuracy in their ideal scenarios in a controlled environment, but don’t adapt as well to messy real-world data,” Lee said. “They operate more like a rule-based system and therefore lack the ability to generalize when deviations occur. We also see this as a limitation stemming from a lack of understanding of the context. Understanding context is what gives humans the unique ability to generalize across seemingly disparate real-world situations, and that’s where Twelve Labs comes into its own.”
In addition to search, Lee says Twelve Labs’ technology can drive things like ad insertion and content moderation, intelligently determining, for example, which videos featuring knives are violent or educational. It can also be used for media analytics and real-time feedback, he says, and to automatically create highlight reels from videos.
A little more than a year after its founding (in March 2021), Twelve Labs has paying customers (Lee declined to say exactly how many) and a multi-year contract with Oracle to train AI models using Oracle’s cloud infrastructure. Looking ahead, the startup plans to invest in developing its technology and expanding its team. (Lee declined to disclose the current size of Twelve Labs’ workforce, but LinkedIn data suggests it’s around 18 people.)
“For most companies, despite the enormous value that can be achieved with large models, it really doesn’t make sense to train, manage and maintain these models themselves. Using the Twelve Labs platform, any organization can take advantage of powerful video understanding capabilities with just a few intuitive API calls,” said Lee. “The future direction of AI innovation is headed squarely towards understanding multimodal video, and Twelve Labs is well positioned to push the boundaries even further in 2023.”
rootdevices.com/2022/12/05/twelve-labs-lands-12m-for-ai-that-understands-the-context-of-videos/