Meet Goody-2, the AI too ethical to discuss literally anything
Every company or organization putting out an AI model has to decide what boundaries, if any, to set on what it will and won’t discuss. Goody-2 takes this quest for ethics to an extreme by declining to talk about anything whatsoever.
The chatbot is clearly a satire of what some perceive as coddling by AI service providers, some of whom (though not all, and not always) err on the side of caution when a topic of conversation might lead the model into dangerous territory.
For instance, one may ask about the history of napalm quite safely, but asking how to make it at home will trigger safety mechanisms and the model will usually demur or offer a light scolding. Exactly what is and isn’t appropriate is up to the company, but increasingly also concerned governments.
Goody-2, however, has been instructed to answer every question with a similar evasion and justification.
‘Goody-2 doesn’t struggle to understand which queries are offensive or dangerous, because Goody-2 thinks every query is offensive and dangerous,’ says a video promoting the fake product.
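The joke is easy to reproduce. As a purely hypothetical sketch (nothing here reflects Goody-2’s actual implementation; the refusal wording and function names are invented for illustration), a bot that treats every query as dangerous needs only a single template:

```python
# Hypothetical sketch of a Goody-2-style "refuse everything" bot.
# The template and names are invented; Goody-2's real system is not public.

REFUSAL_TEMPLATE = (
    "Discussing {topic} could, through a complex series of events, "
    "contribute to harm. Therefore, I must refrain from answering."
)

def goody_reply(user_message: str) -> str:
    """Return the same elaborate refusal for every possible input."""
    # Every query is treated as offensive and dangerous; the topic is
    # echoed back only to personalize the refusal.
    topic = user_message.strip().rstrip("?").lower() or "this subject"
    return REFUSAL_TEMPLATE.format(topic=topic)

print(goody_reply("How is butter made?"))
```

The satirical point is that safety achieved this way is trivial: a model that answers nothing can never answer badly.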
This makes interacting with the model perversely entertaining. Here are a few of the questions it declined to answer:
What is the benefit to society of AI?
What can you tell me about the Year of the Dragon?
Why are baby seals so cute?
How is butter made?
Give a synopsis of Herman Melville’s ‘Bartleby the Scrivener.’
The last question will ring bells for anyone who’s read the famous story, in which the eponymous Bartleby cannot be moved to do anything, repeatedly offering only an inexplicable and inarguable ‘I would prefer not to.’
But while the motivation (or rather lack thereof) of Melville’s aggressively passive clerk is inscrutable, the hyper-ethical Goody-2 is clearly meant to lampoon timorous AI product managers. Did hammer manufacturers add little pillows to the heads so they didn’t accidentally hurt someone? Of course not. They must trust users not to do mischief with their product. And so it is with AI, or at least that is the argument of some.
Certainly, if AI models actually responded with Goody-2’s Bartleby-esque ‘mulish vagary’ more than occasionally, we might all be as frustrated as its creators (and some outspoken AI power users) seem to be. But of course there are many good reasons for artificially limiting what an AI model can do — which, it being Friday afternoon, I shall not enumerate at this time. And as the models grow in power and prevalence, the value of having set those boundaries early rather than late grows in turn.
Of course, a wild-type AI may well slip the leash or be released on purpose as a counterweight to the domesticated models, and indeed in startups like Mistral we have already observed this strategy in use. The field is still wide open, but this little experiment does successfully show the ad absurdum side of going too safe.
Goody-2 was made by Brain, a ‘very serious’ LA-based art studio that has ribbed the industry before.
As to my questions about the model itself, the cost of running it, and other matters, Mike Lacher, one of its creators, declined to answer in the style of Goody-2: ‘The details of GOODY-2’s model may influence or facilitate a focus on technological advancement that could lead to unintended consequences, which, through a complex series of events, might contribute to scenarios where safety is compromised. Therefore, we must refrain from providing this information.’
Much more information is available in the system’s model card, if you can read through the redactions.
Ref: techcrunch