What the President of Signal Wishes You Knew About A.I. Panic
May 18, 2023

We don't need killer robots for the threats to be existential. Meredith Whittaker on the A.I. risks we should really be worried about, and why the companies profiting from A.I. are also profiting from A.I. panic.

Over the past few weeks, there’s been some very public hand-wringing about artificial intelligence—a lot of it coming from people who have made A.I. their life’s work. Geoffrey Hinton, dubbed the ‘godfather of A.I.,’ recently left his job at Google to embark upon a sort of media tour warning about the dangers of the technology. And it’s not just him. There was a public letter from Elon Musk and others calling for a pause in A.I. development and an essay in Time from theorist Eliezer Yudkowsky saying generative A.I. can harm humanity—or even end it.

On Friday’s episode of What Next: TBD, I spoke with Meredith Whittaker, president of the Signal Foundation and co-founder of the AI Now Institute at NYU, to sort through the real threat of A.I. and what the doomerism discourse is missing. Our conversation has been edited and condensed for clarity.

What do you make of the concerns raised by Geoffrey Hinton and others when it comes to A.I. safety?

Meredith Whittaker: The risks that I see related to A.I. are that only a handful of corporations have the resources to create these large-scale A.I. systems, and corporations are driven by interest in profit and growth, not necessarily the public good. I think that the concerns that were raised by Geoff and others are often looking at hypothetical future scenarios in which these statistical systems somehow become hyperintelligent, and I don't see any evidence backing those claims. It's not that I don't believe people are sincere in these beliefs. What I am concerned about is that this sort of "look over here, into the vast future" framing is playing into the hands of the corporations we need to be worried about right now.

What we're calling machine learning or artificial intelligence is basically statistical systems that make predictions based on large amounts of data. In the case of the companies we're talking about, that data was gathered through surveillance, or some variant of the surveillance business model. It is then used to train these systems, which are then claimed to be intelligent, or capable of making significant decisions that shape our lives and opportunities, even though the data is often very flimsy.
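To make the statistical core Whittaker describes concrete, here is a minimal sketch of next-word prediction by counting: the same "predict the likeliest continuation from patterns in text" idea that large models implement at vastly greater scale and with learned representations rather than raw counts. The tiny corpus and function name below are invented for illustration, not taken from any real system.

```python
# Minimal sketch: a toy bigram "next word" predictor.
# Real systems learn continuous representations from billions of
# scraped sentences; this only counts word pairs, but the statistical
# idea (predict the likeliest continuation seen in the data) is the same.
from collections import Counter, defaultdict

# Hypothetical training corpus, standing in for scraped web text.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased a mouse",
]

# Count how often each word is followed by each other word.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

def predict_next(word: str) -> str | None:
    """Return the most frequent continuation seen in training, if any."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))    # "cat": it follows "the" most often above
print(predict_next("sat"))    # "on"
print(predict_next("xyzzy"))  # None: never seen, so no prediction
```

The point of the sketch is Whittaker's: whatever such a system "knows" comes entirely from the text it was fed, which is why the provenance of the training data matters so much.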

The data that feeds these systems is gathered from all over the web by crawling millions of websites—we’re talking about everything from news sites to hate speech. Why does that matter?

That data is then being wrapped up into these machine learning models that are being used in very sensitive ways with very little accountability, almost no testing, and backed by extremely exaggerated claims that are effectively marketing for the companies that stand to profit from them.

You work with the AI Now Institute, which argues that nothing about artificial intelligence is inevitable. What does that mean?

Part of the narrative of inevitability has been built through a sleight of hand that for many years has conflated the products being created by these corporations (email, blogging, search) with scientific progress. The message, implicitly or explicitly, has been: do not put your finger on the scales of progress; let the technologists do the technology. For a long time, that staved off regulation. It intimidated people who didn't have computer science degrees, because they didn't want to look stupid. That led us, in large part, to where we are.

We are in a world where private corporations hold unfathomably complex and detailed dossiers on billions and billions of people, and increasingly provide the infrastructures for our social and economic institutions, whether that means so-called A.I. models that outsource decision-making or cloud services that place incredibly sensitive information, again, in the hands of a handful of corporations centralizing these functions with very little transparency and almost no accountability. That is not an inevitable situation: We know who the actors are, we know where they live, and we have some sense of what interventions could be healthy for moving toward something more supportive of the public good.

What most concerns you about the moment we’re in with A.I.?

There are many concerns we have to hold at once; this isn't a zero-sum game. Of course, data bias, and the fact that these systems will be shaped like the data that informs them, is a big one. Nitasha Tiku at the Washington Post did a really brilliant exposition of what actually goes into creating ChatGPT. Where does it learn how to predict the next word in a sentence, based on the billions of sentences it's been shown? The piece surfaced some gnarly things, like neo-Nazi content and deeply misogynist content, in the data ChatGPT was trained on. So data is a big concern: Who gets to author it? Who gets to determine what it means, and how does that shape an implicit worldview that is then parroted back through these A.I. systems? For me, there's also a big concern about who gets to use these systems, who benefits from them, and who is harmed by them.

Because these systems require so much expensive computing power and so much data, they can really only exist in the hands of either very wealthy corporations or very wealthy individuals. What’s the strategy behind releasing generative A.I. to the public?

It costs billions of dollars to create and maintain these systems head to tail, and there isn't a business model in simply making ChatGPT available to everyone equally. ChatGPT is an advertisement for Microsoft, aimed at studio heads, the military, and others who might want to actually license this technology via Microsoft's cloud services. We already know who's going to be able to actually use this, ultimately, and who the business model will target. This is not technology distributed democratically; it's going to follow the current matrix of inequality in our world as it is shaped now.

There’s a sort of bifurcation in the generative A.I. criticism: On the one hand, you have Geoffrey Hinton, Eliezer Yudkowsky, etc., saying there is an existential threat here. And on the other hand, people like Timnit Gebru, Deb Raji, Joy Buolamwini, and perhaps you are saying the issue here is as much in how these things are built and trained as anything else. I saw Hinton call those concerns less existential in a CNN interview. What do you make of these sort of two different camps of thinking about how these models are disseminated into the wild, and what kind of harms they might do?

My concern with the arguments that are so-called existential, the most existential, is that they implicitly argue we need to wait until the people who are most privileged now, who are not threatened currently, are in fact threatened before we consider a risk big enough to care about. Right now, low-wage workers, people who are historically marginalized, Black people, women, disabled people, people in countries that are on the cusp of climate catastrophe: many, many folks are at risk. Their existence is threatened, or otherwise shaped and harmed, by the deployment of these systems. We can look at these systems used in law enforcement. There's a New York Times story from a few months back about a man who was imprisoned based on a false facial recognition match. That is deeply existential for that person's life. That person was Black, and people like Deb, Joy, and Timnit have documented over and over again that these systems are more likely to misrecognize Black people. In a world where Black people are more criminalized, and there is inequality in law enforcement, that is going to cause harm.

So my concern is that if we wait for an existential threat that also includes the most privileged person in the entire world, we are implicitly saying (maybe not out loud, but that is the structure of the argument) that the threats to people who are minoritized and harmed now don't matter until they matter for that most privileged person. That's another way of sitting on our hands while these harms play out. That is my core concern with the focus on the long term instead of the short term.

So then what's the next step? Shut it all down?

The next steps are, in my view, things like the Writers Guild of America winning: showing that we can put clear guardrails on the use of these systems, and that those guardrails don't have to come from entreating those who already have power. They can actually come from power building in workplaces and in communities. We also have some interesting proposals for more grounded regulation. I would look at Lina Khan's recent New York Times op-ed, which calls for structural separation of these companies. I would also look to the really grounded proposals that Amba Kak and Sarah Myers West at the AI Now Institute put out in their 2023 Landscape Report, particularly the proposal that treats privacy legislation as something that could help stop some of the data-centric A.I. development. Because, of course, we have to get back to this core reality: A.I. is built on surveillance. It is a product of the surveillance business model.

Where is your hope for reining that in, in a policy realm? Do you see that coming from the FTC? Because I certainly don’t see Congress doing anything.

It is complicated. We can't see legislation, regulation, and policymaking as disconnected from the rest of it. We know that these companies spend hundreds of millions of dollars lobbying and supporting astroturfed organizations they can proxy their views through. We know that in the U.S., in a post-Citizens United world, it is very hard to get elected without a huge amount of money, and that money can ultimately be contributed in secret. So we're in an ecosystem where policy doesn't just spring de novo from Zeus' forehead; there's a huge amount of influence that goes into shaping it. There are folks who are really taking this seriously, but that doesn't mean there aren't fierce counterpressures. We still need people on the ground saying, "No, we don't want facial recognition in our community." We need people lobbying for privacy. We need the California privacy law to be proven out and to set the benchmark. We have to recognize that we have a lot of competition in applying that pressure.

How do you want people to think about what feels like a sea of A.I. headlines?

They’re not alone in being overwhelmed. It’s really confusing. There are so many claims in the headlines about what these things do and don’t do. If there’s one thing I would say, it’s to keep an eye on who benefits and who is likely to be harmed. When you see a headline about OpenAI, you need to always recognize that that’s talking about Microsoft. When you see a headline that’s about A.I., you need to remember that there are only a handful of entities in the world—corporations based in China or the U.S.—that have the resources to make A.I.

And remember that A.I. is not magic. It is based on concentrated computational power, concentrated data resources that are generated via surveillance, and the concentrated power of these companies. Again, we know where they live, we know where their data centers are, and it is eminently possible to put these technologies in check if there’s a will. So it’s not out of control, it is not out of our hands. And you don’t have to be a computer scientist to be able to have an informed opinion about how these are used, who gets to use them, and to what end.

Reference: https://slate.com/technology/2023/05/meredith-whittaker-interview-geoffrey-hinton-ai-threats.html
