OpenAI leaders propose international regulatory body for AI
AI is developing rapidly enough, and the dangers it may pose are clear enough, that OpenAI's leadership believes the world needs an international regulatory body akin to the one governing nuclear power, and fast. But not too fast.
In a post to the company's blog, OpenAI CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever argue that the pace of innovation in artificial intelligence is so fast that we can't expect existing authorities to adequately rein in the technology.
While there is a certain amount of self-congratulation here, it's clear to any impartial observer that the technology, most visibly in OpenAI's explosively popular ChatGPT conversational agent, represents a unique threat as well as an invaluable asset.
The post, typically rather light on details and commitments, nevertheless admits that AI isn't going to manage itself, and points to the International Atomic Energy Agency (IAEA) as a possible model for governing the most powerful AI efforts.
The IAEA is the UN's official body for international collaboration on nuclear power issues, though like other such organizations its enforcement power is limited. An AI-governing body built on this model may not be able to step in and flip the switch on a bad actor, but it can establish and track international standards and agreements, which is at least a starting point.
OpenAI's post notes that compute power and energy usage dedicated to AI research are among the relatively few objective measures that can, and probably ought to, be reported and tracked. While it may be difficult to say whether AI should or shouldn't be used for this or that purpose, it may be useful to say that the resources dedicated to it should, as in other industries, be monitored and audited. (Smaller companies could be exempt so as not to strangle the green shoots of innovation, the company suggested.)
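To make the compute-tracking idea concrete, here is a minimal sketch of how a training run's total compute and energy might be estimated and checked against a reporting threshold. The threshold, field names, and sample numbers are all illustrative assumptions for this article, not anything the OpenAI post specifies.

```python
# Minimal sketch of the kind of compute accounting a reporting regime might require.
# All names, numbers, and the reporting threshold are illustrative assumptions.

from dataclasses import dataclass

# Hypothetical threshold above which a training run would have to be reported.
REPORTING_THRESHOLD_FLOPS = 1e25


@dataclass
class TrainingRun:
    name: str
    gpu_count: int                   # number of accelerators used
    hours: float                     # wall-clock training time
    peak_flops_per_gpu: float        # peak throughput of one accelerator (FLOP/s)
    utilization: float               # fraction of peak actually achieved (0-1)
    avg_power_watts_per_gpu: float   # average power draw per accelerator

    def total_flops(self) -> float:
        # Total compute = GPUs x seconds x peak throughput x utilization
        return (self.gpu_count * self.hours * 3600
                * self.peak_flops_per_gpu * self.utilization)

    def energy_mwh(self) -> float:
        # Energy = GPUs x hours x average power, converted from Wh to MWh
        return self.gpu_count * self.hours * self.avg_power_watts_per_gpu / 1e6

    def must_report(self) -> bool:
        return self.total_flops() >= REPORTING_THRESHOLD_FLOPS


if __name__ == "__main__":
    run = TrainingRun(
        name="example-large-model",
        gpu_count=4096,
        hours=30 * 24,               # roughly one month of training
        peak_flops_per_gpu=3.12e14,  # e.g. an A100 at BF16, approximately
        utilization=0.4,
        avg_power_watts_per_gpu=350,
    )
    print(f"{run.name}: {run.total_flops():.2e} FLOPs, "
          f"{run.energy_mwh():.0f} MWh, report={run.must_report()}")
```

The point of such a calculation is that it relies only on hardware counts, runtimes, and power draw, figures that can be audited without passing judgment on what the resulting model is used for.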
Leading AI researcher and critic Timnit Gebru just today said something similar in an interview with The Guardian: ‘Unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation and we need something better than just a profit motive.’
OpenAI has visibly embraced the latter, to the consternation of many who hoped it would live up to its name, but at least as market leader it is also calling for real action on the governance side, beyond hearings like the latest one, where Senators line up to give reelection speeches that end in question marks.
While the proposal amounts to 'maybe we should, like, do something,' it is at least a conversation starter in the industry, and it signals support from the single largest AI brand and provider in the world for doing that something. Public oversight is desperately needed, but as the authors admit, 'we don't yet know how to design such a mechanism.'
And although the company’s leaders say they support tapping the brakes, there are no plans to do so just yet, both because they don’t want to let go of the enormous potential ‘to improve our societies’ (not to mention bottom lines) and because there is a risk that bad actors have their foot squarely on the gas.
Source: TechCrunch