OpenAI’s Altman and other AI giants back warning of advanced AI as ‘extinction’ risk
May 30, 2023


Make way for yet another headline-grabbing AI policy intervention: Hundreds of AI scientists, academics, tech CEOs and public figures — from OpenAI CEO Sam Altman and DeepMind CEO Demis Hassabis to veteran AI computer scientist Geoffrey Hinton, MIT’s Max Tegmark and Skype co-founder Jaan Tallinn to Grimes the musician and populist podcaster Sam Harris, to name a few — have added their names to a statement urging global attention on existential AI risk.

The statement, which is being hosted on the website of a San Francisco-based, privately-funded not-for-profit called the Center for AI Safety (CAIS), seeks to equate AI risk with the existential harms posed by nuclear apocalypse and calls for policymakers to focus their attention on mitigating what they claim is ‘doomsday’ extinction-level AI risk.

Here’s their (intentionally brief) statement in full:

‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’

Per a short explainer on CAIS’ website, the statement has been kept ‘succinct’ because those behind it are concerned that their message about ‘some of advanced AI’s most severe risks’ could be drowned out by discussion of other ‘important and urgent risks from AI’, which, they nonetheless imply, is getting in the way of discussion about extinction-level AI risk.

However, we have actually heard the self-same concerns being voiced loudly and multiple times in recent months, as AI hype has surged off the back of expanded access to generative AI tools like OpenAI’s ChatGPT and DALL-E, leading to a surfeit of headline-grabbing discussion about the risk of ‘superintelligent’ killer AIs. (Earlier this month, for example, statement signatory Hinton warned of the ‘existential threat’ of AI taking control; just last week, Altman called for regulation to prevent AI destroying humanity.)

There was also the open letter signed by Elon Musk (and scores of others) back in March which called for a six-month pause on development of AI models more powerful than OpenAI’s GPT-4 to allow time for shared safety protocols to be devised and applied to advanced AI — warning over risks posed by ‘ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control’.

So, in recent months, there has actually been a barrage of heavily publicized warnings over AI risks that don’t exist yet.

This drumbeat of hysterical headlines has arguably distracted attention from deeper scrutiny of existing harms. Such as the tools’ free use of copyrighted data to train AI systems without permission or consent (or payment); or the systematic scraping of online personal data in violation of people’s privacy; or the lack of transparency from AI giants vis-a-vis the data used to train these tools. Or, indeed, baked-in flaws like disinformation (‘hallucination’) and risks like bias (automated discrimination). Not to mention AI-driven spam!

It’s certainly notable that after a meeting last week between the UK prime minister and a number of major AI execs, including Altman and Hassabis, the government appears to be shifting tack on AI regulation, with a sudden keen interest in existential risk, per the Guardian’s reporting.

Talk of existential AI risk also distracts attention from problems related to market structure and dominance, as Jenna Burrell, director of research at Data & Society, pointed out in a recent Columbia Journalism Review article reviewing media coverage of ChatGPT, where she argued we need to move away from focusing on red herrings like AI’s potential ‘sentience’ to covering how AI is further concentrating wealth and power.

So of course there are clear commercial motives for AI giants to want to route regulatory attention into the far-flung theoretical future, with talk of an AI-driven doomsday, as a tactic to draw lawmakers’ minds away from more fundamental competition and antitrust considerations in the here and now. And data exploitation as a tool to concentrate market power is nothing new.

Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band together to publicly amplify talk of existential AI risk, and how much more reticent they are to get together to discuss the harms their tools can be seen causing right now.

OpenAI was a notable non-signatory to the aforementioned (Musk-signed) open letter, but a number of its employees are backing the CAIS-hosted statement (while Musk apparently is not). So the latest statement appears to offer an (unofficial) commercially self-serving reply by OpenAI (et al) to Musk’s earlier attempt to hijack the existential AI risk narrative in his own interests (which no longer favor OpenAI leading the AI charge).

Rather than calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, the statement lobbies policymakers to focus on risk mitigation, doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI’, as Altman put it. So the company is actively positioning itself (and applying its investors’ wealth) to influence the shape of any future mitigation guardrails.

 

Elsewhere, some signatories of the earlier letter have simply been happy to double up on another publicity opportunity, inking their names to both (hi, Tristan Harris!).

But who is CAIS? There’s limited public information about the organization hosting this message. However, it is certainly involved in lobbying policymakers, by its own admission. Its website says its mission is ‘to reduce societal-scale risks from AI’ and claims it’s dedicated to encouraging research and field-building to this end, including funding research, as well as having a stated policy advocacy role.

An FAQ on the website offers limited information about who is financially backing it (saying it’s funded by private donations). And in answer to an FAQ question asking whether CAIS is an independent organization, it offers a brief claim to be ‘serving the public interest’.

We’ve reached out to CAIS with questions.

In a Twitter thread accompanying the launch of the statement, CAIS’ director, Dan Hendrycks, expands on the aforementioned statement explainer — naming ‘systemic bias, misinformation, malicious use, cyberattacks, and weaponization’ as examples of ‘important and urgent risks from AI… not just the risk of extinction’.

‘These are all important risks that need to be addressed,’ he also suggests, downplaying concerns policymakers have limited bandwidth to address AI harms by arguing: ‘Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and.’ From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.’

The thread also credits David Krueger, an assistant professor of Computer Science at the University of Cambridge, with coming up with the idea to have a single-sentence statement about AI risk and ‘jointly’ helping with its development.

Reference: https://techcrunch.com/2023/05/30/ai-extiction-risk-statement/
