Reality Defender raises $15M to detect text, video and image deepfakes
October 20, 2023

Reality Defender, one of several startups developing tools to attempt to detect deepfakes and other AI-generated content, today announced that it raised $15 million in a Series A funding round led by DCVC with participation from Comcast Ventures, Ex/ante, Parameter Ventures and Nat Friedman’s AI Grant.

The proceeds will be put toward doubling Reality Defender’s 23-person team over the next year and improving its AI content detection models, according to co-founder and CEO Ben Colman.

Colman, a former Goldman Sachs VP, launched Reality Defender in 2021 alongside Ali Shahriyari and Gaurav Bharaj. Shahriyari previously worked at Originate, a digital transformation tech consulting firm, and the AI Foundation, a startup building AI-powered animated chatbots. Bharaj was a colleague of Shahriyari’s at the AI Foundation, where he led R&D.

Reality Defender began as a nonprofit. But, according to Colman, the team turned to outside financing once they realized the scope of the deepfakes problem — and the growing commercial demand for deepfake-detecting technologies.

Colman’s not exaggerating about the scope. DeepMedia, a Reality Defender rival working on synthetic media detection tools, estimates that three times as many video deepfakes and eight times as many voice deepfakes have been posted online this year compared to the same period in 2022.

The rise in the volume of deepfakes is attributable in large part to the commoditization of generative AI tools.

Cloning a voice or creating a deepfake image or video — that is, an image or video digitally manipulated to convincingly replace a person’s likeness — used to cost hundreds to thousands of dollars and require data science know-how. But over the last few years, platforms like the voice-synthesizing ElevenLabs and open source models such as Stable Diffusion, which generates images, have enabled malicious actors to mount deepfake campaigns at little to no cost.

Just this month, users on the notorious chat board 4chan leveraged a range of generative AI tools, including Stable Diffusion, to unleash a blitz of racist images online. Meanwhile, trolls have used ElevenLabs to imitate the voices of celebrities, generating audio ranging in content from memes and erotica to virulent hate speech. And state actors aligned with the Chinese Communist Party have generated lifelike AI avatars portraying news anchors, commenting on topics such as gun violence in the U.S.

Some generative AI platforms have implemented filters and other restrictions to combat abuse. But, as in cybersecurity, it’s a cat and mouse game.

‘Some of the greatest risk from AI-generated media stems from use and abuse of deepfaked materials on social media,’ Colman said. ‘These platforms have no incentive to scan deepfakes because there’s no legislation requiring them to do so, unlike the legislation forcing them to remove child sexual abuse material and other illegal materials.’

Reality Defender purports to detect a range of deepfakes and AI-generated media, offering an API and web app that analyze videos, audio, text and images for signs of AI-driven modifications. Colman claims that Reality Defender, using ‘proprietary models’ trained on in-house data sets ‘created to work in the real world and not in the lab,’ achieves a higher deepfake detection accuracy rate than its competitors.
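
To make that concrete, here is a minimal sketch of what calling such a detection API from Python might look like. The endpoint URL, request fields and response schema are assumptions made purely for illustration; Reality Defender’s actual API is not documented in this article.

```python
# Hypothetical sketch of calling a deepfake-detection API over HTTP.
# The endpoint, parameter names and response fields below are illustrative
# placeholders, not Reality Defender's documented interface.
import requests

API_URL = "https://api.example-detector.com/v1/analyze"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                  # hypothetical credential

def analyze_media(path: str, media_type: str) -> dict:
    """Upload a media file and return the detector's verdict (assumed schema)."""
    with open(path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": f},
            data={"media_type": media_type},  # e.g. "image", "video", "audio", "text"
            timeout=60,
        )
    response.raise_for_status()
    # Assumed response shape: {"label": "likely_manipulated", "score": 0.93}
    return response.json()

if __name__ == "__main__":
    print(analyze_media("suspect_clip.mp4", "video"))
```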

‘We train an ensemble of deep learning detection models, each of which focuses on its own methodology,’ Colman said. ‘We learned long ago that not only does the single-model, monomodal approach not work, but neither does testing for accuracy in a lab versus real-world accuracy.’
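
Colman’s description can be illustrated with a toy ensemble: several independent detectors, each scoring the same media with its own methodology, fused into one verdict. The stub detectors and the weighted-average fusion rule below are assumptions for illustration, not Reality Defender’s actual architecture.

```python
# Toy illustration of an ensemble of deepfake detectors whose scores are fused.
# The individual detectors are stand-in stubs; the weighted-average fusion rule
# is an assumption, not Reality Defender's actual design.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detector:
    name: str                            # e.g. "frequency-artifact model"
    weight: float                        # relative trust in this methodology
    score_fn: Callable[[bytes], float]   # returns P(manipulated) in [0, 1]

def ensemble_score(media: bytes, detectors: List[Detector]) -> float:
    """Weighted average of per-detector manipulation probabilities."""
    total = sum(d.weight for d in detectors)
    return sum(d.weight * d.score_fn(media) for d in detectors) / total

# Stub detectors standing in for models trained on different signals.
detectors = [
    Detector("frequency-artifact model", 1.0, lambda m: 0.82),
    Detector("face-landmark model", 1.5, lambda m: 0.67),
    Detector("compression-trace model", 0.5, lambda m: 0.91),
]

print(f"ensemble P(manipulated) = {ensemble_score(b'example bytes', detectors):.2f}")
```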

But can any tool reliably detect deepfakes? That’s an open question.

OpenAI, the AI startup behind the viral AI-powered chatbot ChatGPT, recently pulled its tool to detect AI-generated text, citing its ‘low rate of accuracy.’ And at least one study shows evidence that deepfake video detectors can be fooled if the deepfakes fed into them are edited in a certain way.

There’s also the risk of deepfake detection models amplifying biases.

A 2021 paper from researchers at the University of Southern California found that some of the data sets used to train deepfake detection systems might under-represent people of a certain gender or with specific skin colors. This bias can be amplified in deepfake detectors, the coauthors said, with some detectors showing up to a 10.7% difference in error rate depending on the racial group.
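
The kind of disparity the researchers describe is usually measured by comparing error rates across demographic groups. The snippet below sketches that calculation with fabricated placeholder predictions; it illustrates the arithmetic behind a reported error-rate gap, not the paper’s actual evaluation code.

```python
# Sketch of a per-group error-rate audit for a deepfake detector.
# The prediction records below are fabricated placeholders used only to
# demonstrate the arithmetic behind a reported error-rate gap.
from collections import defaultdict

# (group, true_label, predicted_label); 1 = deepfake, 0 = real
predictions = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 1),
]

errors, counts = defaultdict(int), defaultdict(int)
for group, truth, pred in predictions:
    counts[group] += 1
    errors[group] += int(truth != pred)

rates = {g: errors[g] / counts[g] for g in counts}
for group, rate in rates.items():
    print(f"{group}: error rate {rate:.1%}")

# The fairness gap is the spread between the best- and worst-served groups.
print(f"error-rate gap: {max(rates.values()) - min(rates.values()):.1%}")
```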

Colman stands behind Reality Defender’s accuracy. And he asserts the company actively works to mitigate biases in its algorithms, incorporating ‘a wide variety of accents, skin colors and other varied data’ into its detector training data sets.

‘We’re always training, retraining and improving our detector models so they fit new scenarios and use cases, all while accurately representing the real world and not just a small subset of data or individuals,’ Colman said.

Call me cynical, but I’m not sure if I buy those claims without a third-party audit to back them up. My skepticism isn’t impacting Reality Defender’s business, though, which Colman tells me is quite robust. Reality Defender’s customer base spans governments ‘across several continents’ as well as ‘top-tier’ financial institutions, media corporations and multinationals.

That’s despite competition from startups like Truepic, Sentinel and Effectiv, as well as deepfake detection tools from incumbents such as Microsoft.

In an effort to maintain its position in the deepfake detection software market, which was valued at $3.86 billion in 2020, according to HSRC, Reality Defender plans to introduce an ‘explainable AI’ tool that’ll let customers scan a document to see color-coded paragraphs of AI-generated text. Also on the horizon is real-time voice deepfake detection for call centers, to be followed by a real-time video detection tool.
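
As a rough illustration of what such a color-coded report could involve, the sketch below splits a document into paragraphs, scores each one for AI-likelihood, and maps the scores to traffic-light colors. The scorer in the demo is a crude placeholder; the product’s real models and thresholds are not public.

```python
# Rough sketch of paragraph-level "explainable" output: score each paragraph
# for AI-likelihood and map the score to a display color. The scorer used in
# the demo is a crude placeholder, not a real detector.
from typing import Callable, List, Tuple

def color_for(score: float) -> str:
    """Map an AI-likelihood score in [0, 1] to a traffic-light color."""
    if score >= 0.75:
        return "red"      # likely AI-generated
    if score >= 0.40:
        return "yellow"   # uncertain
    return "green"        # likely human-written

def annotate(document: str,
             score_paragraph: Callable[[str], float]) -> List[Tuple[str, float, str]]:
    """Return (paragraph, score, color) for every non-empty paragraph."""
    results = []
    for para in (p.strip() for p in document.split("\n\n")):
        if not para:
            continue
        score = score_paragraph(para)
        results.append((para, score, color_for(score)))
    return results

# Demo with a placeholder scorer that just scales paragraph length.
demo = "A short human note.\n\nA much longer, more uniform paragraph that a stub scorer flags as suspicious."
for text, score, color in annotate(demo, lambda p: min(len(p) / 80, 1.0)):
    print(f"[{color}] {score:.2f}  {text[:40]}")
```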

‘In short, Reality Defender will protect a company’s bottom line and reputation,’ Colman said. ‘Reality Defender uses AI to fight AI, helping the largest entities, platforms and governments determine whether a piece of media is likely real or likely manipulated. This helps combat against fraud in the finance world, prevent the dissemination of disinformation in media organizations and prevent the spread of irreversible and damaging materials on the governmental level, just to name three out of hundreds of use cases.’

Reference: https://techcrunch.com/2023/10/17/reality-defender-raises-15m-to-detect-text-video-and-image-deepfakes/
