A.I.’s Groundhog Day
December 29, 2023

Reading Time: 10 minutes

The discussion around A.I. feels new and scary. But we’ve had this conversation many times before.

At the latest congressional hearing on A.I., the hype was high. ‘Since the release of ChatGPT just over a year ago, it’s become clear A.I. could soon disrupt nearly every facet of our economy,’ said Rep. Nancy Mace, chair of the U.S. congressional Subcommittee on Cybersecurity, Information Technology, and Government Innovation. ‘The A.I. genie is out of the bottle and it can’t be put back in.’

It’s an apt comparison. A.I. does seem like a genie: The technology is new and mysterious, we aren’t sure exactly how it works, and we know it is very powerful. We are also afraid of it: In a poll conducted in the summer of 2023, over half of Americans said they were more concerned than excited about A.I.; there is widespread speculation about what effects the technology will have on our economy, our jobs (lolsob), our education system, our art; and tech leaders have warned that the technology puts the fate of humanity at risk. A.I. can’t be stopped, it seems, so the question has become: How do we wield it for good and steer it away from becoming a tool for evil?

While the rush to do something about A.I. might feel new, it’s really just a continuation of a yearslong conversation about the unintended consequences and harms of algorithms. A.I. is just the latest ‘genie’ that won’t go back into the bottle, the newest technological point of no return.

There have been many others. In 2021, the genie was the algorithms used by social media platforms like Facebook that have accelerated polarization and the spread of democracy-threatening mis- and disinformation. In 2019, the genie was map app Waze’s algorithm routing more traffic through neighborhood streets. In 2016, Facebook’s algorithm unleashed the ‘fake-news genie.’ In 2015, it was financial companies using big data to cut credit limits. And, in 2011, Twitter was the ‘technological genie,’ creating new challenges to freedom of speech. ‘Barring extreme steps—such as entirely blocking access to open communications platforms and digital technologies—governments will have to come to grips with the fact that the beauty of the Information Age is also its curse: the information will flow,’ wrote Forbes columnist Adam Thierer.

These conversations have shaped our public dialogue and understanding about data privacy and the role algorithms play in our daily lives. Think back to 10 years ago: How hard did you think about what was being shown on your social media feeds, or who Facebook was selling your information to? These days, it’s well known that what we see online is determined by algorithms—or, as we now call it, ‘the algo’—and that digital privacy is virtually impossible. (There are even whole TikTok accounts dedicated to showing just how easy it is to find anything about just about anyone.) But the past decade has been rife with examples of how algorithms harm us: They steal our attention; they perpetuate inequalities in who receives a mortgage, who is eligible for parole, who gets a job interview, who receives accurate medical diagnoses; and they amplify mis- and disinformation. This is the climate in which generative A.I. like ChatGPT arrived.

To Corynne McSherry, legal director at the Electronic Frontier Foundation, the panic over A.I. feels like Groundhog Day. ‘It replicates the anxieties we’ve seen around social media for a long time,’ she says. And, so far, it seems like we’re taking the same tack we did around social media regulation. As McSherry puts it: ‘Someone in Congress hauls a bunch of CEOs to D.C. to testify about how they should be regulated.’

In September, the Senate’s A.I. working group invited a bevy of tech CEOs to discuss regulation, including ChatGPT developer OpenAI’s Sam Altman, Hugging Face’s Clément Delangue, Google’s Sundar Pichai, Microsoft’s Satya Nadella, Twitter’s (or X’s, if you will, but I won’t) Elon Musk, IBM’s Arvind Krishna, and Meta’s Mark Zuckerberg. Many of the same people and companies have been going to D.C. for years. Zuckerberg first faced Congress when Facebook was embroiled in the Cambridge Analytica scandal in 2018. He appeared again with Twitter founder Jack Dorsey in 2020, then once more with Dorsey and Pichai the following year for a hearing on social media’s role in promoting extremism and misinformation.

Other experts have also testified in front of Congress about the need for more regulation. There was an FTC hearing in 2019 about protecting consumers’ data privacy, a House hearing on facial recognition technology that same year, a Senate hearing in 2020, and, in 2021, ex-Facebook whistleblower Frances Haugen’s testimony in a Senate hearing on Facebook’s business practices, especially as they relate to children’s privacy. After each of these hearings, there was always some legislator quoted in the news saying the hearings presented new evidence that we need federal regulations to protect consumers—and democracy—from the impacts of these technologies.

Years later, a handful of states and cities have adopted or considered policies addressing specific aspects of algorithmic regulation, like banning facial recognition technology or proposing consumer rights to data privacy, but there are still no comprehensive federal regulations on these issues. There have been attempts. In 2019, a trio of representatives introduced the Algorithmic Accountability Act; it languished in a congressional subcommittee, was reintroduced in 2022, again went nowhere, and was reintroduced once more in 2023. The trajectory of that bill is a good metaphor for the progress of tech regulation in general: Each version is savvier and more specific about what it demands of the companies it’s regulating, and each assigns more detailed responsibilities to specific government agencies.

President Biden has tried to circumvent this lack of progress through executive orders. In October 2022, he issued a data privacy order that limited the U.S. government’s surveillance of Europeans’ data but stopped short of taking any action involving tech companies. A year later, in October 2023, he issued another executive order, this time giving vague directives to curb A.I. bias, but it provided little in the way of policy. If his 2022 executive order gives us any indication of how quickly these orders lead to change, we’ll still be waiting for quite some time: It took more than a year just to appoint the judges for the special court Biden created in that order.

What’s the holdup? Well, it’s complicated—and so is the technology in question, which only complicates things even further. In the past few years, we’ve seen the limits of some legislators’ and other government authorities’ understanding of even the most basic tech concepts, like the fact that Facebook’s business model relies on selling ads or how links on Twitter work. A fundamental lack of basic knowledge about technology often goes hand in hand with a lack of interest, and in that vacuum, other pressing issues take center stage.

Legislators interested in regulation are also competing with another powerful force: money. In 2022, Apple, Google, Microsoft, Meta, and Amazon spent $69 million on lobbying, which some legislators have pointed to as the primary reason tech regulation bills have stalled in past years. An investigation from The New York Times also turned up several members of Congress who have potential conflicts of interest between their congressional committee appointments and their ownership of tech stock.

In addition to the direct influence of capital, the loftier ideals of capitalism have also clashed with the idea of regulation. Global competition and ‘innovation’ have often been cited as reasons to avoid encumbering tech companies with regulation. In a 2021 hearing, for example, Sen. Chris Coons said it was fine that tech companies’ algorithms were designed to keep users glued to their platforms. ‘We don’t want to needlessly constrain some of the most innovative, fastest-growing businesses in the West,’ he said. ‘Striking that balance is going to require more conversation.’

So far, much of that conversation has, unsurprisingly, involved the tech companies themselves. ‘The industry has really been able to capture a lot of these conversations and put themselves in control,’ says Safiya Noble, director of the Center on Race and Digital Justice at the University of California, Los Angeles, and author of Algorithms of Oppression: How Search Engines Reinforce Racism. ‘After years of insisting that the tech industry could not be regulated—that it was going to harm American competitiveness or slow down innovation—the industry has managed to ask for policies that are favorable to them, and to their unbridled growth.’

On top of all that, many aspects of these technologies make them ‘genuinely difficult to regulate,’ says Ben Winters, senior counsel at the Electronic Privacy Information Center, a digital privacy research nonprofit. For instance, placing regulations on A.I.’s ability to generate text, images, music, or other information could be unconstitutional, given First Amendment protections. Some have proposed that companies should be held accountable for their algorithms’ output, but it’s unclear exactly where a company’s liability for information disseminated on its platform begins and ends—and holding companies liable could have chilling effects on freedom of speech online in general. Questions of accountability become extra tricky when dealing with algorithms, which function as a ‘black box’—many engineers who helped train and shape the technology can’t be sure exactly why the system they built ends up behaving one way or another.

There are other gray areas that are difficult to navigate, like which types of online data are fair game for algorithmic inputs and what sort of consent, if any, companies should seek. In 2019, I reported on a case in which researchers scraped publicly available YouTube videos to train a model to guess what a person looks like—and one of the unwitting participants was surprised to learn he’d been part of the model’s training data. It’s not technically illegal, but it doesn’t feel ethical, either. Questions about online data privacy ‘dovetail with a lack of privacy protections that we still don’t have yet,’ says Winters.

And that’s the thing about A.I.: It heightens the societal stakes on these existing unsolved issues around privacy and algorithms, adding even more computing power behind the systems that perpetuate privacy loss and bias. In the past, more primitive models were trained on smaller datasets and required substantial training to become even halfway competent. The large language models that drive A.I. are much more nimble: They have absorbed a huge amount of data and are able to perform tasks they weren’t explicitly trained on. As a result, the algorithm ‘black box’ only becomes deeper and darker, and it becomes more challenging to trace why, exactly, a model spits out an answer—and potentially more difficult to address issues with it.

At the same time, users of these models need less knowledge than ever to operate them. Generative A.I. systems like ChatGPT are incredibly accessible to the general public, but that increased accessibility is a double-edged sword. Not too long ago, querying a model required learning how to code or, at the very least, some knowledge about how to parse the question correctly so the model would provide an answer. Models that accepted requests in plain text were not very sophisticated, like SmarterChild, the chatbot we all thought was cutting-edge A.I. in the 2000s. SmarterChild had canned answers to most of our responses; ‘Do you kiss your mother with that mouth?’ was its default reply if you cursed at it. Compare that to the current day, when anyone can send complex queries to ChatGPT in plain language. People now play around with ChatGPT by prompting it to create a series of images depicting the ‘most Seattle’ imagery, or to write a series of increasingly threatening-sounding emails, then post their results on TikTok.

That’s all innocuous, but we’ve already started seeing the stakes of wider access to these powerful technologies: deepfakes created to spread misinformation or as revenge porn; ChatGPT ‘hallucinations’ (a fun euphemism for ‘lies’) that falsely accused people of crimes and were filed in legal cases; A.I.–created art that reinforces harmful stereotypes. As A.I. continues to develop and as more people use it and trust it without questioning its ethics or output, there will only be more problems.

Where do we go from here? The conversation so far has focused on regulation, since, as Rep. Mace put it, the genie can’t be put back in the bottle. But Noble questions that assumption. Tech companies, she says, seem to be using the same playbook as oil companies did decades ago. ‘The words progress, innovation, industry—all of these tropes of the promise of American progress—are quite similar to those about what oil and gas could do to improve our lives,’ says Noble. Like the fossil fuel industry, Big Tech drives extractive industries and their accompanying labor abuses, and produces a huge amount of carbon emissions. These companies, Noble says, ‘present themselves as here to stay, positive, future-forward,’ but she rejects the idea that their continued existence is inevitable. ‘As they extract everything they can get from us as consumers and use all our shared resources—our roads, our best-educated people, our attention and time spent on devices—we have grounds to refuse that.’

Even if a future with A.I. isn’t an inevitability, the technology itself and the industry propping it up won’t disappear overnight. Moving forward, it seems unlikely that comprehensive blanket policies will materialize; the issue is complicated enough that solutions may not be one-size-fits-all. Take, for example, the idea of improving the transparency of algorithms or A.I. through audits or impact assessments. I asked Winters for details about how companies were carrying those out: What types of data are assessors assessing? Who are the assessors—people who work within a company, or third parties? ‘There isn’t a universal definition of it; every time it’s being used, it means something different,’ Winters says. ‘Whether it’s valuable and helpful to consumers—well, the devil’s in the details.’

A good impact assessment might explain the intended logic behind an algorithm, describe how and why it’s being used to supplement human decision making, and include ways for the public to appeal. But given the variability in algorithmic systems and current limits on how much the government can compel companies to reveal about their technologies, it’s unclear how much such audits or assessments would really tell us or how robust their conclusions would be. (As the nonprofit Algorithmic Justice League puts it: Who audits the auditors?)

EFF’s McSherry suggests that one way to minimize algorithmic harms would be to focus on specific instances of bias or abuse and leverage existing legal tools to fight them. ‘We have a whole web of laws that apply to a harm you’re worried about, rather than the technology itself,’ says McSherry. Deepfakes might be fought using states’ publicity rights laws, which prevent people from using others’ images in ways they didn’t consent to. Defamation law, too, can be used as a defense in cases where one’s identity has been used inappropriately, or against some types of misinformation. (‘I’m not actually sure what law you’re going to write that is actually going to prevent misinformation,’ McSherry says.) Copyright law might be applied in cases where A.I. jacks your work, and civil rights laws theoretically prevent discrimination if you’re unfairly denied housing or employment. None of these laws guarantees that you’ll see justice, and they’re defensive rather than proactive steps, but they’re tools nonetheless.

While A.I. has the potential to wreak havoc, McSherry sees one silver lining in its recent ascendance: People are finally paying attention. There’s been plenty of hype around fears of an A.I. doomsday, but after everyone calms down, real change might be possible. ‘What if we take advantage of this moment to actually put all that energy towards not speculative harms but actual harms?’ says McSherry. ‘If in 2024 we actually get the kind of regulation and transparency we need for existing harms, I will take any amount of hype.’

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.

Reference: https://slate.com/technology/2023/12/ai-artificial-intelligence-chatgpt-algorithms-regulation-congress-biden.html

