
London’s Unitary raises €14.31M to boost AI-powered video content moderation tech.

by Alex Turner

 

Unitary, a London-based startup that uses AI for visual content moderation, announced on Tuesday that it has raised $15 million (approximately €14.31 million) in a Series A funding round.


Creandum led the round, with participation from Plural and Paladin Capital Group.

“We’ve always focused on how we might be able to use AI to ensure a safer online experience and keep up with the pace of internet content,” says Sasha Haco, co-founder and CEO of Unitary.

The firm, co-founded by Haco and James Thewlis, has developed AI technology that can perceive video and images much as humans do, analyzing multiple signals to understand both content and context.

“This funding enables us to stay at the forefront of video understanding research and fulfill our mission of making the internet better,” Haco adds.

“Unitary has emerged as clear early leaders in the important AI field of content safety, and we’re so excited to back this exceptional team as they continue to accelerate and innovate in content classification technology,” says Gemma Bloemen, Principal at Creandum and Unitary board member.

The financing coincides with Unitary’s expansion into additional languages, a doubling of its workforce, and a threefold rise in daily video classifications, from 2 million to 6 million. The funding will support research and development, further growth of the team, and partnerships with prominent social platforms and brand-safety organizations.

Proactive moderation

The complexity of video material, which accounts for 80 percent of all internet traffic, is expected to increase tenfold between 2020 and 2025, posing a problem that human reviewers alone cannot solve.

According to Christopher Steed, Chief Investment Officer of Paladin Capital Group, “In a world where everything is done online, there is an immense need for a technology-driven approach to identify harmful content.”

To help platforms address online content moderation, Unitary offers a machine-learning solution that combines analysis of visual, audio, and textual material. This capability is vital as platforms adjust to new rules, such as the UK’s Online Safety Bill and the EU’s Digital Services Act, which mandate more proactive content moderation.

According to Ian Hogarth, Partner at Plural and Unitary board member, the company has already reached “seven figures” of annual recurring revenue, a milestone that prompted the swift follow-on round just months after Unitary raised $8 million (€7.63 million) in March.

“From the very beginning, Unitary possessed some of the most powerful AI for categorizing potentially harmful content,” says Hogarth. “We are confident that this is the team set to redefine how we ensure the safety of visual content in the digital age.”

The use of multimodal models

The novelty of Unitary’s approach lies in its use of multimodal models. Research on multimodal AI has been underway for years, but it is only now beginning to find wider real-world application, and Unitary sits at the intersection of that research and practical deployment in a rapidly developing sector.

“Rather than analyzing just a sequence of frames, you need to be able to imitate how a human moderator views the video in order to grasp the nuance and determine whether a video is, for example, artistic or violent,” Haco shared in a recent interview. “We do that by analyzing text, sound, and visuals.”

Unitary’s methodology mimics how a human moderator watches a video, which helps reduce false positives and improves accuracy compared with systems that analyze only a single type of data at a time. Customers often use Unitary alongside their existing moderation settings and human teams to reduce moderators’ workload.
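As a rough illustration of the general idea, the sketch below shows a minimal late-fusion classifier in PyTorch that combines pre-extracted visual, audio, and text features into a single prediction. This is a hypothetical example, not Unitary’s actual architecture, and every layer size, feature dimension, and class label here is an assumption.

```python
# Hypothetical sketch of multimodal fusion for content classification.
# NOT Unitary's real model; it only illustrates combining visual, audio,
# and text signals in one classifier (late fusion by concatenation).
import torch
import torch.nn as nn

class MultimodalModerationModel(nn.Module):
    def __init__(self, num_classes: int = 2, embed_dim: int = 128):
        super().__init__()
        # Each modality gets its own small encoder producing a fixed-size embedding.
        self.visual_encoder = nn.Sequential(nn.Linear(2048, embed_dim), nn.ReLU())
        self.audio_encoder = nn.Sequential(nn.Linear(512, embed_dim), nn.ReLU())
        self.text_encoder = nn.Sequential(nn.Linear(768, embed_dim), nn.ReLU())
        # Fused embedding -> class scores (e.g. "artistic" vs "violent").
        self.classifier = nn.Linear(3 * embed_dim, num_classes)

    def forward(self, frames, audio, text):
        # frames: (batch, n_frames, 2048) pre-extracted frame features;
        # average over time so clips of any length map to one vector.
        v = self.visual_encoder(frames).mean(dim=1)
        a = self.audio_encoder(audio)                 # (batch, 512) -> (batch, embed_dim)
        t = self.text_encoder(text)                   # (batch, 768) -> (batch, embed_dim)
        fused = torch.cat([v, a, t], dim=-1)          # late fusion by concatenation
        return self.classifier(fused)

# Toy usage with random tensors standing in for real feature extractors.
model = MultimodalModerationModel()
frames = torch.randn(4, 16, 2048)    # 4 clips, 16 frames each
audio = torch.randn(4, 512)
text = torch.randn(4, 768)
logits = model(frames, audio, text)  # (4, 2) class scores
```

The point of the sketch is the design choice the article describes: no single modality decides the outcome; the classifier sees visual, audio, and text evidence together before scoring a clip.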

While visual-only models have been only partially effective, multimodal moderation represents a growth opportunity across social platforms, gaming companies, and other digital channels that continue to struggle with content moderation.
