Elon Musk Is the A.I. Threat
April 1, 2023


Why Elon Musk Is Trying to Convince Everyone That A.I. Is Evil: The Tesla CEO called for a pause in chatbot development. But he’s pushing something much more dangerous.

For much of the past decade, Elon Musk has regularly voiced concerns about artificial intelligence, worrying that the technology could advance so rapidly that it creates existential risks for humanity. Though seemingly unrelated to his job making electric vehicles and rockets, Musk’s A.I. Cassandra act has helped cultivate his image as a Silicon Valley seer, tapping into the science-fiction fantasies that lurk beneath so much of startup culture. Now, with A.I. taking center stage in the Valley’s endless carnival of hype, Musk has signed on to a letter urging a moratorium on advanced A.I. development until ‘we are confident that their effects will be positive and their risks will be manageable,’ seemingly cementing his image as a force for responsibility amid high technology run amok.

Don’t be fooled. Existential risks are central to Elon Musk’s personal branding, with various Crichtonian scenarios underpinning his pitches for Tesla, SpaceX, and his computer-brain-interface company Neuralink. But not only are these companies’ humanitarian ‘missions’ empty marketing narratives with no real bearing on how they are run; Tesla has also created the most immediate—and lethal—‘A.I. risk’ facing humanity right now, in the form of its driving automation. By hyping the entirely theoretical existential risk supposedly presented by large language models (the kind of A.I. model used, for example, for ChatGPT), Musk is sidestepping the risks, and actual damage, that his own experiments with half-baked A.I. systems have created.

The key to Musk’s misdirection is humanity’s primal paranoia about machines. Humans evolved beyond the control of gods and nature, overthrowing them and harnessing them to our wills, and so we fear that our own creations will return the favor. That this archetypal suspicion has become a popular moral panic at this precise moment may or may not be justified, but it absolutely distracts us from the very real A.I. risk that Musk has already unleashed.

That risk isn’t an easy-to-point-to villain—a Skynet, a HAL—but rather a flavor of risk we are all too good at ignoring: the kind that requires our active participation. The fear should not be that A.I. surpasses us in sheer intelligence, but that it dazzles us just enough that we trust it, and that this trust lulls us into a complacency in which we endanger, and ultimately kill, ourselves and others.

Musk’s involvement in both OpenAI and Tesla’s Autopilot can be traced to Google, whose advanced experiments so clearly impressed him that he believed that artificial general intelligence and autonomous driving would arrive much sooner than they have. As Semafor recently reported, Musk initially funded the nonprofit OpenAI in response to learning about Google’s research into novel A.I. techniques, reflected in public quotes in which he expressed fear that Google would create an ‘A.I. overlord.’ Needless to say, those fears seem a long way from fruition. Having stepped away from OpenAI—now a for-profit startup that created ChatGPT and the similarly impressive text-to-image tool DALL-E—Musk is now watching from the sidelines as the company becomes the widely acknowledged leader in cutting-edge A.I. products.

Similarly, Musk appears to have learned details of Google’s self-driving technology around the time of his early 2013 request for Google to buy Tesla, and he announced that Tesla would pursue its own Autopilot system as soon as he called off that deal. What much of the public still doesn’t understand is that, according to former Google self-driving-car program insiders Lawrence Burns and John Krafcik, immediately prior to those negotiations, Google had canceled its own product development program called AutoPilot. Having had a look under the hood during M&A negotiations (reported much later in Ashlee Vance’s biography of the CEO), Musk clearly decided to make his own version of the Google system, thinking it was on the brink of being a truly self-driving system (or at least that it would be widely perceived that way).

Google’s AutoPilot was, by far, the most advanced driving automation system of its time, and the first effort by Google’s self-driving-car program to make a consumer-facing product. It was envisioned as a Level 2 driver assistance system, like Tesla’s Autopilot and Full Self-Driving beta are today, which means that although it had automated steering, braking, and acceleration, it required constant human oversight for safety. In 2009 Google tested AutoPilot on 1,000 miles of California public roads, and its performance was at least as impressive as Tesla’s systems are now.
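
For readers unfamiliar with the jargon, here is a quick-reference sketch of the SAE automation levels this paragraph refers to. The one-line descriptions are simplified paraphrases rather than the official SAE J3016 wording, and the helper function is purely illustrative.

```python
# Quick-reference sketch of the SAE driving-automation levels. The one-line
# descriptions are simplified paraphrases, not the official SAE J3016 wording.
SAE_LEVELS = {
    0: ("No automation", True),
    1: ("Driver assistance (steering OR speed)", True),
    2: ("Partial automation (steering AND speed)", True),   # Google's AutoPilot, Tesla Autopilot / FSD beta
    3: ("Conditional automation", False),                   # human is a fallback, not a constant supervisor
    4: ("High automation, limited domain", False),
    5: ("Full automation, any domain", False),
}

def needs_constant_supervision(level: int) -> bool:
    """True when the human must actively monitor the road at all times."""
    return SAE_LEVELS[level][1]

print(needs_constant_supervision(2))  # True: Level 2 is assistance, not self-driving
```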

It was only after allowing nontechnical employees to test the system on their commutes to and from work that Google’s team realized how troublesome the system was. No matter how many times they were warned, employees continued to become distracted while using AutoPilot, applying makeup, eating food, playing with phones, and even opening laptops. Google realized that such a human-machine hybrid system was a problem of behavioral psychology as much as sheer technology, and that keeping a human ‘in the loop’ required capabilities that its strictly technological focus didn’t provide. Rather than lull customers into an inattention that would inevitably lead to crashes, potentially destroying public trust in the technology, Google decided to focus on fully autonomous systems that did not require a person to be in the loop.

Google didn’t need to rely on its own experiments to reach this conclusion. Human behavioral research going back a century shows that humans inevitably struggle with ‘vigilance tasks,’ and that automation can leave humans underaroused and unable to act when needed, no matter how capable or well trained they are—one of many infamous ‘ironies of automation.’ Especially on long freeway drives, the most likely use case for AutoPilot, automation exacerbates the inevitable underarousal, making it almost impossible to keep drivers engaged for long periods. And even with Google’s cutting-edge lidar-sensor technology, which made its experimental AutoPilot quite expensive, probabilistic A.I. systems are nowhere near reliable enough to provide safety-critical reliability without a human as backup.

It’s important to note that autonomous vehicles represent the first effort to deploy ‘A.I.’ systems in safety-critical applications, and that the key to making this combination work is to make driving as much as possible like the board games where A.I. thrives. You do that in part by limiting the operating domain to certain neighborhoods or cities so that the unbounded complexity of the open road is reduced to something closer to the limited complexity of a chess or Go board. It’s also critical to give the A.I. driving ‘agent’ something as close as possible to the perfect information an A.I. chess or Go player has, which is why real AVs have expensive and complex sensor suites, using advanced lidar, radar, camera, thermal, and other sensors. Only with a limited operating area and near-perfect information is it possible to bring driving A.I. agents up to a level of safety and reliability that rivals even the most inexpert humans.
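
To make the board-game analogy concrete, here is a minimal sketch of what such a gate might look like in code, assuming hypothetical zone names, confidence figures, and a made-up threshold; no real AV stack reduces to a function this simple.

```python
# Illustrative sketch only: the zone names, confidence values, and threshold
# below are hypothetical placeholders, not any real AV stack's logic.
from dataclasses import dataclass

@dataclass
class SensorReading:
    source: str        # e.g. "lidar", "radar", "camera"
    confidence: float  # 0.0-1.0, rough fraction of the scene this sensor resolves

APPROVED_ZONES = {"downtown_test_zone", "suburb_test_zone"}  # the limited operating domain
MIN_FUSED_CONFIDENCE = 0.99                                  # stand-in for "near-perfect information"

def may_drive_autonomously(zone: str, readings: list) -> bool:
    """Allow autonomy only inside the geofenced domain and only when the
    fused sensor picture is close to complete."""
    if zone not in APPROVED_ZONES or not readings:
        return False
    p_all_miss = 1.0
    for r in readings:
        p_all_miss *= (1.0 - r.confidence)   # chance every sensor missed an object
    return (1.0 - p_all_miss) >= MIN_FUSED_CONFIDENCE

# A camera-only suite falls short; lidar + radar + camera clears the bar.
camera_only = [SensorReading("camera", 0.95)]
full_suite = [SensorReading("lidar", 0.98), SensorReading("radar", 0.90), SensorReading("camera", 0.95)]
print(may_drive_autonomously("downtown_test_zone", camera_only))  # False
print(may_drive_autonomously("downtown_test_zone", full_suite))   # True
```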

That’s fine for robotaxis and self-driving semitrucks, but the high cost and limited operating area that is inevitable with this approach were non-starters for Tesla’s business of selling cars to individual consumers. That’s why Tesla’s systems ended up being incredibly limited relative to real AVs, using sensor suites that cost a few hundred dollars instead of a few hundred thousand, and requiring human babysitters to achieve even a modicum of safety. Thus far, the hope that A.I. techniques will deliver safety-critical performance without these steps remains a pipe dream, which means that Tesla’s systems must be described as ‘driver assistance’ even though they are bad robots with untrained human babysitters instead of well-designed assistance systems that leverage the strengths of humans and machines.

In full light of the well-understood facts about A.I.’s limitations when it comes to vehicles, Elon Musk chose to Leeroy Jenkins into the exact kind of system Google rejected over safety concerns, right down to the name ‘Autopilot’ (though he did drop the capitalization of the P). Not only did Tesla’s Autopilot lack the lidar sensors of Google’s, making it less reliable, but Musk failed to address the obvious concerns about driver inattention as well. Despite his own engineers urging him to include camera-based driver monitoring or, at the very least, capacitive touch sensors on the steering wheel to ensure driver engagement when the system was active, Musk rejected these based on cost. After all, Google could afford expensive hardware for its robotaxi-based business model, but Tesla had to keep its costs down, lest consumers balk at the high cost of Autopilot on top of the cars’ heady list price.

In 2016 the inevitable bill for Musk’s lunge into driving automation started to come due, as first Gao Yaning and then Josh Brown died in horrifying crashes. These fatal crashes, like the subsequent crashes that killed Walter Huang and Jeremy Banner, showed the same pattern: Autopilot drove these people into large, easily avoidable obstacles at high speeds, and none of them intervened despite having plenty of time. One National Transportation Safety Board investigation after another showed that the exact factors that had kept Google from deploying its AutoPilot were ending precious human lives on public roads.

Musk’s response to these deaths was to double down, arguing that while these isolated incidents were tragic, Autopilot was overall safer than human drivers. In case the sheer callousness of this utilitarianism weren’t ugly enough, it was also another misdirect: As I argued in the Daily Beast in 2016, Tesla’s crude safety claim didn’t adjust for the biggest known factors in road safety, like road type and driver age. Now we finally have a peer-reviewed effort to make these adjustments, and the results show that rather than reducing crashes by 43 percent, as Tesla claims, Autopilot may actually increase crashes by 11 percent. As the study’s limitations make clear, the absolute safety record of the system is still unknown, but the fact that Tesla chose to make such a misleading claim as its best argument for the safety of Autopilot shows how cynical the entire effort has been.
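
The statistical point, that a raw crash-rate comparison can flip once you adjust for confounders like road type, is easy to demonstrate. The sketch below uses invented numbers, not Tesla's data and not the cited study's, to show a Simpson's-paradox-style reversal: mileage concentrated on easy freeway driving can look safer in aggregate even if it is worse on every road type.

```python
# Toy illustration of why an unadjusted crash-rate comparison can mislead.
# Every number below is invented for illustration; none of it is Tesla's data
# or the cited study's data.

# (crashes, millions of miles), split by road type and by whether Autopilot was on.
data = {
    "freeway": {"autopilot": (10, 40.0), "manual": (20, 100.0)},
    "surface": {"autopilot": (5, 2.0),   "manual": (120, 60.0)},
}

def rate(crashes, miles):
    return crashes / miles  # crashes per million miles

# Naive pooled comparison: Autopilot looks much safer, because its miles are
# concentrated on freeways, the easiest and safest kind of driving.
ap = rate(sum(d["autopilot"][0] for d in data.values()),
          sum(d["autopilot"][1] for d in data.values()))
man = rate(sum(d["manual"][0] for d in data.values()),
           sum(d["manual"][1] for d in data.values()))
print(f"pooled: autopilot {ap:.2f} vs manual {man:.2f}")  # ~0.36 vs ~0.88

# Stratified comparison: within each road type, Autopilot is worse in this toy data.
for road, d in data.items():
    print(f"{road}: autopilot {rate(*d['autopilot']):.2f} vs manual {rate(*d['manual']):.2f}")
```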

Amazingly, Musk’s success at tapping into shared human tropes about technology has kept the full picture of Tesla’s cynical deployment of Autopilot from being appreciated. We seem to be desperate to believe that any form of automation must always be safer, with Autopilot users (including those who subsequently died) leaping at every opportunity to show proof of the system ‘saving them,’ without ever mentioning the countless prosaic moments in which they saved the system. Our desire to believe that Autopilot is safe reflects our desire to believe that the real A.I. risk involves a superhuman overlord. As a result, we don’t think twice about the unreliable driving assistants lulling us into complacency so we crash into easily avoidable objects.

The cognitive flaw that Musk is exploiting is the dichotomy between human and machine, and the crude, science fiction–inflected tropes we attach to it. It just makes sense to us that advanced machines must be superhuman, immune from our all-too-familiar human frailties. But A.I. has its own frailties as well, which are exacerbated in safety-critical systems where probabilistic whoopsies result in human deaths rather than extra digits in a generative art piece or a bizarre failed joke from ChatGPT.

Worst of all, our own frailties are exacerbated by the illusion of competence that A.I. projects. If Autopilot and FSD beta haven’t killed hundreds or thousands of people yet, it’s largely because enough people still recognize that these are incredibly janky and unreliable systems that inadvertently remind you, every couple of miles, not to trust them with your life. The real risk remains ahead of us, in a future where these systems eventually become good enough to go a few hundred miles between errors, lulling us deeper into overtrust and complacency before making an unpredictable mistake.
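
A rough back-of-envelope calculation shows why "a few hundred miles between errors" is still nowhere near good enough. The figures below are assumed round numbers chosen only to illustrate the scale of the problem, not measurements of any real system.

```python
# Back-of-envelope sketch with assumed round numbers (not measured data):
# a system that blunders once every 500 miles on average, a driver covering
# 12,000 miles a year, and errors modeled as a simple Poisson process.
import math

MEAN_MILES_BETWEEN_ERRORS = 500.0   # assumed
ANNUAL_MILES_PER_DRIVER = 12_000.0  # assumed

expected_errors = ANNUAL_MILES_PER_DRIVER / MEAN_MILES_BETWEEN_ERRORS
p_error_free_year = math.exp(-expected_errors)  # chance of never being "reminded"

print(f"expected errors per driver per year: {expected_errors:.0f}")  # 24
print(f"chance of an error-free year: {p_error_free_year:.1e}")       # ~3.8e-11, effectively zero
```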

It’s this prosaic danger, not the sudden emergence of an artificial superconsciousness we struggle to even theorize about, that presents the most immediate A.I. risk. Given that we have already allowed inadequate A.I. systems into the most dangerous thing we do every day, and that they have contributed to the deaths of multiple people, this shouldn’t be controversial. The fact that it is suggests that our relationship with A.I. is off to a terrible start. There’s no reason to think listening to Elon Musk’s warnings will make it any better.

Reference: https://slate.com/technology/2023/03/elon-musk-chatgpt-openai-artificial-intelligence-tesla.html
