- Twelve Labs uses multimodal video-language foundation models to capture the full semantic and contextual content of video
- By integrating Twelve Labs' video AI technology into VidiNet products, users can search for specific video moments through natural language queries, without relying on metadata
The primary objective of this partnership was to enhance the video browsing experience for clients: to make navigating video content easier and, through advanced AI-driven analysis, to uncover elements that classic metadata and filters cannot capture, such as specific moves or player conversations. It quickly became clear that the integration had great potential to help customers across a wide range of sectors and to improve and speed up their working methods.
The integration between VidiNet products and Twelve Labs makes manual logging and metadata generation obsolete. Users can now precisely locate specific moments within their video archives using natural language queries, with the results seamlessly merged with the metadata indexed by VidiNet. Integrating Twelve Labs' video-language foundation models in MediaPortal changes the way users search for material, as it eliminates the need to index all static metadata fields in VidiCore.
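To illustrate the kind of workflow this enables, here is a minimal sketch of a natural-language moment search. The endpoint URL, field names, and response shape below are assumptions for illustration only; the actual Twelve Labs API and the VidiNet/VidiCore integration may differ.

```python
import requests

# Hypothetical endpoint and parameters for illustration only; the real
# Twelve Labs API and the VidiNet integration may use different URLs,
# field names, and authentication schemes.
TWELVE_LABS_SEARCH_URL = "https://api.twelvelabs.io/v1/search"  # assumed
API_KEY = "YOUR_API_KEY"          # placeholder credential
INDEX_ID = "YOUR_VIDEO_INDEX_ID"  # placeholder index of ingested videos


def find_moments(query: str) -> list[dict]:
    """Send a natural-language query and return matching video moments."""
    response = requests.post(
        TWELVE_LABS_SEARCH_URL,
        headers={"x-api-key": API_KEY},
        json={"index_id": INDEX_ID, "query": query},
        timeout=30,
    )
    response.raise_for_status()
    # Each hit is assumed to carry a video id plus start/end timestamps,
    # which a MediaPortal-style UI could jump to directly.
    return response.json().get("data", [])


if __name__ == "__main__":
    for hit in find_moments("player argues with the referee"):
        print(hit.get("video_id"), hit.get("start"), hit.get("end"))
```

Because each hit points at a time range rather than a whole asset, results of this kind can sit alongside conventional metadata hits in a unified search view, which is the behavior the MediaPortal integration is built around.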
"We are thrilled about our partnership with Twelve Labs. Their video understanding AI is a great addition to our portfolio. The incorporation of natural language search, combined with our MediaPortal search experience promises to streamline our customers' work processes and to save them a lot of valuable time," said Annika Kimpel, Partner Manager at Vidispine.
Would you like to find out more? Our team is happy to help: hello@vidispine.com