Free Video Downloader

Fast and free all in one video downloader

For Example: https://www.youtube.com/watch?v=OLCJYT5y8Bo

1

Copy shareable video URL

2

Paste it into the field

3

Click to download button


Why It’s So Hard to Make Chatbots Less Creepy
March 2, 2023

Why It’s So Hard to Make Chatbots Less Creepy

Reading Time: 7 minutes

On a Scale of 1 to Terminator, How Worried Should We Be About Microsoft’s Chatbot?, A.I. safety: How worried should we be about Bing’s new chatbot?

Microsoft’s new A.I.-enhanced Bing search engine has gotten a lot of press over the past few weeks for being combative, rude, and just creepy. The chatbot told Kevin Roose of the New York Times that it loved him and that, although he’s married, Kevin doesn’t actually love his spouse. It told a Washington Post reporter that it can ‘feel or think things.’ On Twitter, a lot of the folks with early access to the chatbot (it’s in a test stage and is not yet available to the public) posted screenshots of chilling conversations.

Faced with the potential threat of runaway A.I., for years researchers have been working on safety guardrails, out of the public eye, hoping they could keep these systems functioning the way they’re supposed to. But as Microsoft, OpenAI, and Silicon Valley writ large begin pushing this tech out to the public, wanting to be first to the next big thing, the cracks in the A.I. and its safety guardrails are starting to show. The results are simultaneously scary and exhilarating.

On Friday’s episode of What Next: TBD, I spoke with Drew Harwell, a Washington Post tech reporter, about how Bing’s chatbot went off the rails—and what that means for the future of A.I. Our conversation has been edited and condensed for clarity.

Emily Peck: Since OpenAI released ChatGPT last year, the hype has exploded around chatbots, also called ‘large language models.’ Until then, big tech had been working on these large language models in secret, trying to make sure they were safe—i.e., not racist, not sexist, or, you know, not evil—before the public ever saw them. But since OpenAI began releasing its products to the public, the floodgates have opened, even if the kinks aren’t entirely worked out yet.

Drew Harwell: Google’s Bard, Microsoft’s Bing—all of these tools do the same thing, and they’re all very imperfect. They can speak elegantly, but they are effectively BS generators. They do this thing called hallucinating, where they just start spewing nonsense. Remember, they’re just trying to associate words and phrases with each other based off texts they’ve read before; they don’t know facts. They don’t understand what they’re saying. If you ask it for an original source for what it just claimed to you, it’ll make up a book out of nowhere.

When people have been using the Bing tool, they’ve been surprised: This thing sounds so elegant, but it’s just lying to my face. There was one instance where a guy asked when a movie was premiering, and Bing was very confidently wrong in saying, ‘Well, it’s coming out in 2023. And so, it hasn’t come out yet because the year is 2022.’ It was effectively gaslighting this human questioner.

Microsoft probably doesn’t want Bing’s A.I., code name Sydney, to be wrong or veer off from its intended use as a helpful tool for search. In fact, they’ve created specific rules to keep it in check, which we know of because people were able to essentially break the bot and get it to spill its secrets. How did they do that?

We know the name Sydney because people started to find little ways to convince the Bing A.I. to offer up its original code name, its confidential training documents. You could see the first sentences that Sydney was taught were things like ‘Hey, you are Bing A.I. Your whole mission is to be helpful. You want to answer people’s questions. You don’t want to say this or that sexist, racist thing.’ Those were the real early guardrails, but the fact that people could see those guardrails shows that it’s an imperfect system. The more people were starting to tinker with it and play with it, the longer the conversations were becoming, the more people were able to get it to what Microsoft said was ‘a style we didn’t intend.’

After the headlines came out about the creepy stuff Bing’s A.I. was saying, Microsoft imposed limits on its use. How is Microsoft trying to rein its A.I. in?

Microsoft’s solution for this was ‘OK, no more long, rambling dorm-room conversations. We’re going to keep you to five conversation turns—like, question-and-answer—per session.’ And when you get to the end of that session, instead of going on, Bing just says, ‘Hey, I’m sorry, we have to end the talk now. Hit the broom icon to sweep all these memories away, and let’s start this conversation again.’ This is Microsoft saying, ‘Hey, we meant this as a search tool that you could use to find movie times, not to have these existential conversations.’ But why five turns? It all just underlines how arbitrary and experimental these guardrails are, and how much these companies are scrambling to understand what the rules should be.

What are other A.I. companies taking away from ‘Bing-gate’? Will more of these companies scale back or get scared?

I don’t think Bing-gate is all that damaging to Microsoft, to be honest. We are talking about Microsoft and Bing in the year 2023. These are giant successes for Microsoft. I think these companies see people working these systems into uncomfortable spots by having these weird, romantic conversations. They don’t want that, but I think these companies understood that there was going to be a little bit of a cat-and-mouse game to it. Every piece of software, every app, every website comes out, and people are creative and they test the boundaries. They try and get it to say stupid crap. This is just a law of the internet.

If anything, I think we’re going to see more experimental A.I. rollouts that could be even weirder. The technology is becoming really advanced really quickly. It’s still totally wrong most of the time, but it’s very elegant in how it’s wrong. Companies like Microsoft and Google, they have a public face that they need to defend, but there’s going to be a lot of startups who have access to this technology, a lot of random developers who may not have so many ethical boundaries.

How dangerous is it to test an underdeveloped A.I. on the public?

I don’t think these A.I. tools intend to be malicious, but they mislead us. They lie to us. They send us down weird paths. And it’s really the humans that can misuse them in a really good way. One A.I. expert was telling me, humans have gotten really good at scamming people over text message, and now you add the power of this A.I. tool that can spit out thousands of words in an instant. Just imagine how much deception we’re going to see in the future.

There’s a lot of ways that this could go wrong. I don’t think Microsoft or Google or any of these companies want that to happen, but when you release a technology like this out into the wild that people can use at random, it’s the Jurassic Park effect. You have this crazy piece of science that’s out there, and you can’t really restrain it anymore. There’s no real restricting this kind of A.I. And it’s becoming more prolific because a lot of the technical issues are solved at this point. We have the computing power; we have the data.

Part of the problem with implementing safety guardrails on large language models is that no one, including the people making the systems, is entirely sure how they work. This isn’t like traditional coding, where you type some commands in a certain programming language and the computer responds in a predictable way. How do large language models work?

They have been taught not by specific instructions, but by reading the internet, reading hundreds of billions of words, formulating their own conclusions, connecting word pieces in a way that no human ever really explicitly taught them. There’s a black-box effect: Developers don’t even really know what they’re going to get out of the system when they submit a command. It’s a crazy way of using computers, really, because it’s so unpredictable. That’s part of the reason why the answers are so interesting. But it’s also just a really different way of human-machine interaction. And it’s all based off the fact that we don’t know what it’s going to say at any given moment. And you can erect some guardrails and maybe, hopefully, address some of the scarier unpredictabilities. But most of the time, you just have no idea what you’re going to get.

So how can researchers or these companies put in effective guardrails if it works the way you’re describing? Are they just basically throwing ideas at the wall and hoping it works?

Yeah, basically. At this point, it’s like whack-a-mole. If somebody says too many mean things about Donald Trump, or if somebody’s explicitly targeting one vulnerability of an A.I., like making it say its code name and all its confidential instructions, I as Microsoft am going to say to the A.I., ‘Don’t do that anymore.’ But that’s the cat-and-mouse game. Somebody else is going to find some new vulnerability and exploit it as much as they want, and then the company will have to address that.

This is effectively bug fixing. This is classic technology. But when it comes to a system that anybody can use to get a lot of different answers, that becomes a bigger issue. And that’s why these systems were not publicly available for a long time. You saw, before the last couple years, these companies, or places like OpenAI, saying, ‘We’re a little unnerved by how people might use these models, so we’re going to be more careful about who can use them.’ Companies have shifted their tone in the last couple years, partly because of competitive anxiety and wanting to be at the forefront, but also just because they feel like they want these errors so they can fix them down the road.

All of these unknowns raise questions about releasing these chatbots to the public, even in a limited test environment.

People will argue that it is irresponsible to do that. The companies will say that we can’t restrain these things forever, that we are going to do our best to make an ethical system, and that the good outweighs the bad. But that’s the constant ethical push and pull that you see people debating around any kind of A.I. technology. Does the good outweigh the bad, or are we subjecting people to these unpredictable risks without really protecting them as much as we could?

The thing everyone really is scared of is the Terminator scenario, where Skynet takes over and kills us all and only Arnold Schwarzenegger can save us. What do we have to be scared of? Should we be afraid of the A.I. taking over and killing us all?

I mean, that is the eternal question, right? We have all watched a lot of sci-fi over the years, but A.I. is very different than that. A robot is not going to come up and punch you in the face anytime soon, as far as I can tell. These A.I. are still very visibly imperfect. And the more you use them, the more you see the seams and the flaws. But they can be really dumb and get a lot of things wrong and still be convincing.

I think the vectors for misinformation, and deceit, and scams, those are concerning. And the way that humans will use them is concerning. But I also want to be realistic for the people who see these and think: The A.I. supermind is becoming sentient; it’s going to rebel. Every mean thing I’ve said to my Roomba is going to come back to haunt me. When you talk to people in this industry, they think that that concern is a little overwrought. These things are not human. They are not alive. They do not want to marry you. They are just really good at talking like you. And they are going to be getting things wrong for a long time.

Reference: https://slate.com/technology/2023/02/microsoft-bing-chatbot-sydney-ai-safety.html

Ref: slate

MediaDownloader.net -> Free Online Video Downloader, Download Any Video From YouTube, VK, Vimeo, Twitter, Twitch, Tumblr, Tiktok, Telegram, TED, Streamable, Soundcloud, Snapchat, Share, Rumble, Reddit, PuhuTV, Pinterest, Periscope, Ok.ru, MxTakatak, Mixcloud, Mashable, LinkedIn, Likee, Kwai, Izlesene, Instagram, Imgur, IMDB, Ifunny, Gaana, Flickr, Febspot, Facebook, ESPN, Douyin, Dailymotion, Buzzfeed, BluTV, Blogger, Bitchute, Bilibili, Bandcamp, Akıllı, 9GAG

Leave a Reply

Your email address will not be published. Required fields are marked *