What can we do about the spread of AI-generated disinformation?
Disinformation is spreading at an alarming pace, thanks largely to openly available AI tools. In a recent survey, 85% of people said they worry about online disinformation, and the World Economic Forum has named AI-generated disinformation a top global risk.
Some high-profile examples of disinformation campaigns this year include a bot network on X targeting U.S. federal elections and a robocall deepfake of President Joe Biden discouraging New Hampshire residents from voting. Overseas, candidates in countries across South Asia have flooded the web with fake videos, images, and news articles. A deepfake of London Mayor Sadiq Khan even incited violence at a pro-Palestinian march.
So what can be done?
Well, AI can help combat disinformation as well as create it, asserts Pamela San Martín, co-chair of Meta’s Oversight Board, speaking on a recent panel. Established in 2020, the Board is a semi-autonomous organization that reviews appeals of Meta’s moderation decisions and issues recommendations on the company’s content policies.
San Martín acknowledges that AI isn’t perfect. Meta’s automated moderation systems have, for instance, mistakenly flagged posts from the Auschwitz Museum as offensive and misclassified independent news sites as spam. But she is convinced the technology will improve with time.
Of course, with the cost of sowing disinformation declining thanks to AI, it’s possible that even upgraded moderation models won’t be able to keep up.
Another participant on the panel, Imran Ahmed, CEO of the nonprofit Center for Countering Digital Hate, noted that social feeds that amplify disinformation exacerbate its harms. Platforms such as X effectively incentivize disinformation through revenue-sharing programs; the BBC reports that X has paid users thousands of dollars for well-performing posts that include conspiracy theories and AI-generated images.
‘You’ve got a perpetual bulls— machine,’ Ahmed said. ‘That’s quite worrying. I’m not sure that we should be creating that within democracies that rely upon some degree of truth.’
San Martín argued that the Oversight Board has effected some change here, for example by encouraging Meta to label misleading AI-generated content. The Board has also suggested that Meta make it easier for users to report nonconsensual sexual deepfake imagery, a growing problem.
But both Ahmed and fellow panelist Brandie Nonnecke, a UC Berkeley professor who studies the intersection of emerging tech and human rights, pushed back against the notion that the Oversight Board, or self-governance more generally, can stem the tide of disinformation on its own.
‘Fundamentally, self-regulation is not regulation, because the Oversight Board itself cannot answer the five fundamental questions you should always ask someone who has power,’ Ahmed said. ‘What power do you have, who gave you that power, in whose interest do you wield that power, to whom are you accountable, and how do we get rid of you if you’re not doing a good job. If the answer to every single one of those questions is [Meta], then you’re not any sort of check or balance. You’re merely a bit of PR spin.’
Ahmed and Nonnecke aren’t alone in this view. In a June analysis, NYU’s Brennan Center wrote that the Oversight Board can influence only a fraction of Meta’s decisions, because the company controls whether to enact policy changes and doesn’t provide access to its algorithms.
Meta has also privately threatened to pull back support for the Oversight Board, highlighting the precarious nature of the Board’s operations. While the Board is funded by an irrevocable trust, Meta is the sole contributor to that trust.
Instead of self-governance, which platforms like X are unlikely to adopt in the first place, Ahmed and Nonnecke see regulation as the solution to the disinformation dilemma. Nonnecke believes product liability torts are one way to take platforms to task, as the doctrine holds companies accountable for injuries or damages caused by their ‘defective’ products.
Nonnecke also supported watermarking AI-generated content so that it’s easier to identify. (Watermarking has its own challenges, of course.) She suggested that payment providers could block purchases of disinformation of a sexual nature, and that website hosts could make it tougher for bad actors to sign up for plans.
Policymakers trying to bring the industry to heel have suffered setbacks in the U.S. recently. In October, a federal judge blocked a California law that would have forced posters of AI deepfakes to take them down or face potential monetary penalties.
But Ahmed believes there’s reason for optimism. He cited recent moves by AI companies like OpenAI to watermark their AI-generated images, as well as content moderation laws such as the U.K.’s Online Safety Act.
‘It is inevitable there will have to be regulation for something that potentially has such damage to our democracies — to our health, to our societies, to us as individuals,’ Ahmed said. ‘I think there’s enormous amounts of reason for hope.’
Source: TechCrunch