Is It Too Late to Regulate A.I., or Too Soon?
May 19, 2023

OpenAI CEO Sam Altman has a big idea for policing his industry. It's not the only one.

This article was co-published with Understanding AI, a newsletter that explores how A.I. works and how it’s changing our world.

When Silicon Valley executives testify before Congress, they normally get raked over the coals. But OpenAI CEO Sam Altman’s Tuesday appearance before the Senate Judiciary Committee went differently.

Senators asked Altman probing questions and listened respectfully to his answers. Afterward, the committee's chairman, Sen. Richard Blumenthal of Connecticut, praised Altman.

‘Sam Altman is night and day compared to other CEOs,’ Blumenthal told reporters after the hearing. ‘Not just in the words and rhetoric, but in actual actions, and his willingness to participate and commit to specific action.’

The centerpiece of Altman’s testimony was a call for a new licensing regime for powerful A.I. models.

‘I would form a new agency that licenses any effort above a certain scale of capabilities, and could take that license away and ensure compliance with safety standards,’ Altman said. He added these standards should be focused on ‘dangerous capabilities’ such as the ability to ‘self-replicate and self-exfiltrate into the wild.’

Altman’s proposal would represent a dramatic expansion of federal power over the A.I. sector. And as far as I can tell, there’s been little work done to flesh out what such a system might look like.

‘I’m a little perplexed by what he’s proposing,’ University of Colorado legal scholar Margot Kaminski told me on Wednesday. ‘It doesn’t map onto all the laws that are out there.’

Intuitively, it makes sense that a radically new technology like A.I. might need a new kind of regulatory framework. The problem is that Altman and others who think like him haven’t explained how such a licensing scheme would work. And given how fast the technology is changing, there’s a big risk of getting it wrong.

Another witness at Tuesday’s hearing was IBM executive Christina Montgomery, who defended a more conventional approach to regulating A.I. IBM calls it ‘precision regulation,’ and it focuses on overseeing the use of A.I. in high-stakes domains like criminal justice, hiring, and medicine, where you’d want to be especially careful about removing humans from any decision-making.

Kaminski told me that the European Union is currently developing a new A.I. Act that takes an approach consistent with IBM’s recommendations. It would classify A.I. applications by their level of risk and subject higher-risk applications to stricter regulation. Some of the highest-risk applications of A.I.—for example, tracking people in real time using biometric identifiers—would be banned outright.

The EU published an updated draft of the proposal last week. The new draft caused a stir because it proposed regulating providers of so-called foundation models—powerful machine-learning models like GPT-4 with a wide range of potential uses. Before a European company could build a product on top of a foundation model, the model’s creator would need to provide EU regulators with detailed information about how the model was trained, what it could do, and how potential risks were being mitigated.

Critics warn that this could create a schism in the A.I. world, since U.S.-based creators of foundation models might be unwilling or unable to comply with EU requirements. European companies could then be cut off from access to cutting-edge U.S. models, which could hamper the development of Europe’s A.I. sector. Critics also warn that it could limit the development of open-source foundation models, since their sponsors might not have the resources necessary to comply with the EU’s red tape.

Still, the European proposal mainly focuses on regulating consumer-facing applications of A.I. In contrast, Altman seems to be advocating for governments to create a licensing regime for foundation models themselves.

I suspect these competing proposals reflect the divergent philosophical approaches I wrote about recently. Altman’s proposal to directly regulate powerful language models reflects the singularist concern that sufficiently powerful A.I. models could become self-aware and wipe out the human race. In contrast, the IBM and EU proposals reflect a more physicalist approach: focusing on the harms that can occur when people apply A.I. to specific sectors of the economy.

Near the start of Tuesday’s hearing, Blumenthal said his ‘biggest nightmare’ about A.I. was ‘the effect on jobs.’ He asked Altman to share his own biggest A.I. nightmare and then comment on whether he expected A.I. to cause large-scale job losses.

‘Like with all technological revolutions, I expect there to be significant impact on jobs, but exactly what that impact looks like is very difficult to predict,’ Altman said. ‘I believe that there will be far greater jobs on the other side of this and the jobs of today will get better.’

The panel’s third witness, psychologist and entrepreneur Gary Marcus, said he was also concerned about the potential for job losses. But he then pointed out that Altman didn’t actually reveal his biggest nightmare. So Blumenthal offered Altman another chance to respond.

‘My worst fears are that we, the field, the technology, the industry, cause significant harm to the world,’ Altman said. ‘It’s why we started the company [to avert that future]. I think if this technology goes wrong, it can go quite wrong.’

This is still pretty vague, but Altman’s past comments make it clear he’s worried that A.I. could threaten the survival of humanity. For example, in an interview earlier this year, Altman said that ‘the worst case is lights-out for all of us.’

Later in Tuesday’s hearing, Altman mentioned A.I.s designing ‘novel biological agents’ as one of the threats regulators should guard against.

Like Altman, Marcus favors a licensing regime for new A.I. models. He called for ‘a safety review like we use with the FDA’ to be conducted before a system like ChatGPT could be widely deployed.

Chatbots can produce a wide range of outputs that people might consider unsafe, from bad medical advice to instructions for committing crimes to biased or bigoted statements. Deciding when a chatbot’s responses are harmful enough to justify keeping it off the market seems like a political minefield.

For example, several senators expressed concern that A.I.-generated misinformation could undermine democracy. Sen. Amy Klobuchar of Minnesota raised concerns about ChatGPT giving voters inaccurate information about how to vote on election day. Others worried about generative A.I. systems generating ‘deep fake’ images, audio, or video that could deceive voters and influence how they vote.

But while almost every member of Congress probably agrees that disinformation is bad in the abstract, Republicans and Democrats are likely to disagree sharply about exactly how to define the concept. Moreover, Margot Kaminski told me that contemporary First Amendment jurisprudence would make it difficult for governments in the U.S. to limit A.I.-generated misinformation. For example, any law requiring a license to generate political speech using A.I. would likely be disallowed as unconstitutional prior restraint.

There’s also a major conceptual problem with using FDA-style testing to guard against dangerous, superintelligent A.I. A basic premise of singularist thought is that such systems will be skilled at manipulating and deceiving humans. Such a system could presumably trick government regulators into approving it by pretending to be less capable and more benign than it really is.

Even if an A.I. model isn’t dangerous on its own, it could be a significant component of a dangerous system. In recent weeks, people have been experimenting with ‘agentic’ A.I. systems like Auto-GPT and BabyAGI that effectively give large language models the ability to make plans and then carry them out autonomously. So far, these systems don’t work very well and don’t seem to pose a danger to anyone. But that could change as large language models get more sophisticated.
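To make that idea concrete, here is a minimal, hypothetical sketch of the plan-and-act loop that systems in the spirit of Auto-GPT and BabyAGI build around a language model. The function names (call_llm, run_agent) and the prompt wording are illustrative assumptions, not code from any of those projects, and the model call is a stub rather than a real API.

```python
# Minimal sketch of an "agentic" loop around a language model.
# call_llm is a placeholder; a real system would call an actual LLM API here.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a large language model call."""
    # A real implementation would send `prompt` to a model and return its reply.
    return "DONE: placeholder response"

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Repeatedly ask the model for the next step toward `goal`, then record it."""
    history: list[str] = []
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"Completed steps: {history}\n"
            "Propose the single next step, or reply 'DONE: <summary>' if finished."
        )
        step = call_llm(prompt)
        history.append(step)
        if step.startswith("DONE"):
            break
        # In real systems, this is where the proposed step would actually be
        # executed (running a tool, browsing the web, writing a file, etc.).
    return history

if __name__ == "__main__":
    print(run_agent("Summarize this week's A.I. policy news"))
```

The point of the sketch is that the loop, not the model itself, is what turns a text predictor into something that plans and acts, which is why such wrappers complicate any licensing scheme aimed only at the underlying model.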

All of which is to say I’m not surprised Altman doesn’t have all the details of his licensing scheme worked out. Guarding against the worst-case consequences of A.I. seems like a legitimately difficult problem.

But if these details aren’t forthcoming, the result could be a big mismatch between what policymakers say they’re trying to accomplish and what they actually do.

Tuesday’s hearing made it clear that there’s a strong, bipartisan appetite in Congress for new A.I. regulations. Their sense of urgency is driven by the belief that A.I. could pose a serious threat to our jobs, our democracy, and perhaps even our survival as a species.

Yet concrete regulatory proposals tend to focus on more pedestrian goals. For example, last October the Biden administration published a ‘Blueprint for an AI Bill of Rights’ that included sections on privacy, nondiscrimination, and transparency. There’s also a section on ‘safe and effective systems’ that focuses on ensuring that physical systems like self-driving cars don’t malfunction and hurt people.

These are all worthy concerns, but I don’t think they’re the concerns that keep Sam Altman up at night.

A recurring theme of Tuesday’s hearing was that Congress moved too slowly to regulate social media and shouldn’t make the same mistake with A.I. I’m not sure I agree with this premise.

Today there’s a fairly broad consensus that social media has deepened partisan divisions and worsened mental health—especially for teenage girls. But it’s not obvious to me that Congress could have anticipated these problems 10 or 20 years ago. And even today, there’s no real consensus about how to solve them.

Right now, generative A.I. technology is changing so quickly that it’s difficult to predict what it will look like five or 10 years down the road. It’s harder to predict what social or economic problems A.I. is likely to cause, and still harder to anticipate what policy changes are likely to be helpful.

So it’s not obvious to me that Congress’s sense of urgency on this issue is justified. Enacting a licensing regime now could also cement the dominance of industry incumbents like Google and OpenAI by making it harder for startups to create foundation models of their own. It might make more sense to wait a year or two and see how A.I. technology evolves before passing a major bill to regulate A.I.

In the meantime, I think the best thing Congress could do is to fund efforts to better understand the potential harms from A.I. Earlier this month, the National Science Foundation announced the creation of seven new National Artificial Intelligence Research Institutes focused on issues like trustworthy A.I. and cybersecurity. Putting more money into initiatives like this could be money well spent.

I’d also love to see Congress create an agency to investigate cybersecurity vulnerabilities in real-world systems. It could work something like the National Transportation Safety Board, the federal agency that investigates plane crashes, train derailments, and the like. A new cybersecurity agency could investigate whether the operators of power plants, pipelines, military drones, and self-driving cars are taking appropriate precautions against hackers.
These precautions would make our systems more secure against attacks from humans as well as A.I.s. And they would also give us a margin of safety if Sam Altman’s nightmare is eventually realized.

Reference: https://slate.com/technology/2023/05/artificial-intelligence-sam-alterman-hearing-regulation-senate.html
