OpenAI, DeepMind and Anthropic to give UK early access to foundational models for AI safety research
Following the UK government’s announcement last week that it plans to host a ‘global’ AI safety summit this fall, prime minister Rishi Sunak has kicked off London Tech Week with another tidbit of news — telling conference goers that OpenAI, Google DeepMind and Anthropic have committed to provide ‘early or priority access’ to their AI models to support research into evaluation and safety.
Sunak has had an accelerated conversion to the AI safety topic in recent weeks, following a number of interventions by AI giants warning about the existential and even extinction-level risks the technology might pose if it’s not properly regulated.
‘We’re going to do cutting edge [AI] safety research here in the UK,’ pledged Sunak today. ‘With £100 million for our expert taskforce, we’re dedicating more funding to AI safety than any other government.’
This AI safety taskforce will be focused on AI foundation models, the government also said.
‘We’re working with the frontier labs — Google DeepMind, OpenAI and Anthropic,’ added Sunak. ‘And I’m pleased to announce they’ve committed to give early or priority access to models for research and safety purposes to help build better evaluations and help us better understand the opportunities and risks of these systems.’
The PM also reiterated his earlier announcement of the forthcoming AI safety summit, seeking to liken the effort to the COP climate conferences, which aim to achieve global buy-in on tackling climate change.
‘Just as we unite through COP to tackle climate change so the UK will host the first ever Summit on global AI Safety later this year,’ he said, adding: ‘I want to make the UK not just the intellectual home but the geographical home, of global AI safety regulation.’
Evangelizing AI safety is a marked change of gears for Sunak’s government.
As recently as March it was in full AI cheerleader mode, saying in a white paper published that month that it favored ‘a pro-innovation approach to AI regulation’. The approach set out in the paper downplayed safety concerns by eschewing the need for bespoke laws for artificial intelligence (or a dedicated AI watchdog) in favor of setting out a few ‘flexible principles’. The government also suggested oversight of AI apps should be conducted by existing regulatory bodies, such as the antitrust watchdog and the data protection authority.
Fast forward a few months and Sunak is now talking in terms of wanting the UK to house a global AI safety watchdog. Or, at the least, that it wants the UK to own the AI safety conversation by dominating research into how to evaluate the outputs of learning algorithms.
Speedy developments in generative AI combined with public pronouncements from a range of tech giants and AI industry figures warning the tech could spiral out of control appear to have led to a swift strategy rethink in Downing Street.
It’s also notable that AI giants have been bending Sunak’s ear in person in recent weeks, with meetings taking place between the PM and the CEOs of OpenAI, DeepMind and Anthropic shortly before the government mood music on AI changed.
If this trio of AI giants sticks to its commitments to provide the UK with advanced access to their models, there is a chance for the country to lead on research into developing effective evaluation and audit techniques — including before any legislative oversight regime mandating algorithmic transparency has spun up elsewhere (the European Union’s draft AI Act isn’t expected to be in legal force until 2026, for example, although the EU’s Digital Services Act is already in force and requires some algorithmic transparency from tech giants).
At the same time, there’s a risk the UK is making itself vulnerable to industry capture of its nascent AI safety efforts. And if AI giants get to dominate the conversation around AI safety research by providing selective access to their systems they could be well placed to shape any future UK AI rules that would apply to their businesses.
Close involvement of AI tech giants in publicly funded research into the safety of their commercial technologies, ahead of any legally binding AI safety framework being applied to them, suggests they will at least have scope to frame how AI safety is looked at and which components, topics and themes get prioritized (and which, therefore, get downplayed). They may also influence what kind of research happens at all, since it may be predicated upon how much access they provide.
Meanwhile, AI ethicists have long warned that headline-grabbing fears about the risks ‘superintelligent’ AIs could pose to humans are drowning out discussion of real-world harms that existing AI technologies are already generating, including bias and discrimination, privacy abuse, copyright infringement and environmental resource exploitation.
So while the UK government may view AI giants’ buy-in as a PR coup, if the AI summit and wider AI safety efforts are to produce robust and credible results, it must ensure the involvement of independent researchers, civil society groups and groups disproportionately at risk of harm from automation — not just trumpet its plan for a partnership between ‘brilliant AI companies’ and local academics, given that academic research is already often dependent on funding from tech giants.
Reference: https://techcrunch.com/2023/06/12/uk-ai-safety-research-pledge/