

Contrary to reports, OpenAI probably isn’t building humanity-threatening AI
November 28, 2023


Has OpenAI invented an AI technology with the potential to ‘threaten humanity’? From some of the recent headlines, you might be inclined to think so.

Reuters and The Information first reported last week that several OpenAI staff members had, in a letter to the AI startup’s board of directors, flagged the ‘prowess’ and ‘potential danger’ of an internal research project known as ‘Q*.’ This AI project, according to the reporting, could solve certain math problems — albeit only at grade-school level — but had in the researchers’ opinion a chance of building toward an elusive technical breakthrough.

There’s now debate as to whether OpenAI’s board ever received such a letter — The Verge cites a source suggesting that it didn’t. But framing aside, Q* might not actually be as monumental (or threatening) as it sounds. It might not even be new.

AI researchers on X (formerly Twitter), including Meta’s chief AI scientist Yann LeCun, were immediately skeptical that Q* was anything more than an extension of existing work at OpenAI and other AI research labs. In a post on X, Rick Lamers, who writes the Substack newsletter Coding with Intelligence, pointed to an MIT guest lecture that OpenAI co-founder John Schulman gave seven years ago, during which he described a mathematical function called ‘Q*.’

Several researchers believe the ‘Q’ in ‘Q*’ refers to ‘Q-learning,’ an AI technique that helps a model learn and improve at a particular task by taking — and being rewarded for — specific ‘correct’ actions. The asterisk, researchers say, could be a reference to A*, an algorithm for searching the nodes of a graph and finding the lowest-cost route between them.
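To make the Q-learning idea concrete, here is a textbook-style tabular sketch — purely illustrative, with a toy environment of my own choosing, and not anything from OpenAI’s project. An agent in a six-cell corridor is rewarded only for reaching the last cell, and the Q-learning update gradually teaches it that moving right is the ‘correct’ action:

```python
import random

random.seed(0)

# Toy environment (an assumption for exposition): a 1-D corridor of 6 cells.
# Reward of 1 is given only upon reaching the final cell.
N_STATES, ACTIONS = 6, [-1, +1]          # actions: move left / move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for _ in range(500):                      # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        # the Q-learning update: reward now plus discounted best future value
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# After training, the greedy policy moves right (+1) from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)}
```

The ‘rewarded for correct actions’ framing in the reporting maps onto the update line above: each action’s value is nudged toward the immediate reward plus the discounted value of the best follow-up action.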

Both have been around a while.

Google DeepMind applied Q-learning to build an AI algorithm that could play Atari 2600 games at human level… in 2014. A* has its origins in an academic paper published in 1968. And researchers at UC Irvine several years ago explored improving A* with Q-learning — which might be exactly what OpenAI’s now pursuing.
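For completeness, here is a minimal A* search — again a standard textbook sketch of the 1968 algorithm, not OpenAI’s code, with a hypothetical grid of my own construction. It finds the shortest route between two cells using a Manhattan-distance heuristic:

```python
import heapq

def a_star(grid, start, goal):
    """Shortest path on a grid of 0s (open) and 1s (walls), or None."""
    def h(p):  # admissible heuristic: Manhattan distance to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Each frontier entry is (f = g + h, g, node, path-so-far);
    # the heap always pops the node with the lowest estimated total cost f.
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

# Example: a wall forces the search around the long way.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (0, 2))
```

The UC Irvine work the article alludes to explored, roughly, replacing hand-designed heuristics like `h` above with values learned via Q-learning — which is one plausible reading of what a ‘Q*’ combining the two might mean.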


‘OpenAI even shared work earlier this year improving the mathematical reasoning of language models with a technique called process reward models,’ said Nathan Lambert, a machine learning researcher, ‘but what remains to be seen is how better math abilities do anything other than make [OpenAI’s AI-powered chatbot] ChatGPT a better code assistant.’

Mark Riedl, a computer science professor at Georgia Tech, was similarly critical of Reuters’ and The Information’s reporting on Q* — and of the broader media narrative around OpenAI and its quest for artificial general intelligence (AGI), i.e. AI that can perform any task as well as a human can. Reuters, citing a source, implied that Q* could be a step toward AGI, but researchers, including Riedl, dispute this.

Riedl, like Lambert, didn’t speculate on whether Q* entails Q-learning or A*. But if it involved either, or a combination of the two, it would be consistent with current trends in AI research, he said.

‘These are all ideas being actively pursued by other researchers across academia and industry, with dozens of papers on these topics in the last six months or more,’ Riedl added. ‘It’s unlikely that researchers at OpenAI have had ideas that have not also been had by the substantial number of researchers also pursuing advances in AI.’

That’s not to suggest that Q* — which reportedly had the involvement of Ilya Sutskever, OpenAI’s chief scientist — might not move the needle forward.

Lamers asserts that, if Q* uses some of the techniques described in a paper published by OpenAI researchers in May, it could ‘significantly’ increase the capabilities of language models. Based on the paper, OpenAI may have discovered a way to control the ‘reasoning chains’ of language models, Lamers says, enabling it to guide models along more desirable and logically sound ‘paths’ to reach outcomes.

‘This would make it less likely that models follow ‘foreign to human thinking’ and spurious patterns to reach malicious or wrong conclusions,’ Lamers said. ‘I think this is actually a win for OpenAI in terms of alignment … Most AI researchers agree we need better ways to train these large models, such that they can more efficiently consume information.’

But whatever comes of Q*, it — and the relatively simple math problems it reportedly solves — won’t spell doom for humanity.

Reference: https://techcrunch.com/2023/11/27/contrary-to-reports-openai-probably-isnt-building-humanity-threatening-ai/
