‘Embarrassing and wrong’: Google admits it lost control of image-generating AI
Google has apologized (or come very close to apologizing) for another embarrassing AI blunder this week, an image-generating model that injected diversity into pictures with a farcical disregard for historical context. While the underlying issue is perfectly understandable, Google blames the model for ‘becoming’ oversensitive. But the model didn’t make itself, guys.
The AI system in question is Gemini, the company’s flagship conversational AI platform, which when asked calls out to a version of the Imagen 2 model to create images on demand.
Recently, however, people found that asking it to generate imagery of certain historical circumstances or people produced laughable results. For instance, the Founding Fathers, who we know to be white slave owners, were rendered as a multi-cultural group, including people of color.
This embarrassing and easily replicated issue was quickly lampooned by commentators online. It was also, predictably, roped into the ongoing debate about diversity, equity, and inclusion (currently at a reputational local minimum), and seized by pundits as evidence of the woke mind virus further penetrating the already liberal tech sector.
It’s DEI gone mad, shouted conspicuously concerned citizens. This is Biden’s America! Google is an ‘ideological echo chamber,’ a stalking horse for the left! (The left, it must be said, was also suitably perturbed by this weird phenomenon.)
But as anyone with any familiarity with the tech could tell you, and as Google explains in its rather abject little apology-adjacent post today, this problem was the result of a quite reasonable workaround for systemic bias in training data.
Say you want to use Gemini to create a marketing campaign, and you ask it to generate 10 pictures of ‘a person walking a dog in a park.’ Because you don’t specify the type of person, dog, or park, it’s dealer’s choice — the generative model will put out what it is most familiar with. And in many cases, that is a product not of reality, but of the training data, which can have all kinds of biases baked in.
What kinds of people, and for that matter dogs and parks, are most common in the thousands of relevant images the model has ingested? The fact is that white people are over-represented in a lot of these image collections (stock imagery, rights-free photography, etc.), and as a result the model will default to white people in a lot of cases if you don’t specify.
That’s just an artifact of the training data, but as Google points out, ‘because our users come from all over the world, we want it to work well for everyone. If you ask for a picture of football players, or someone walking a dog, you may want to receive a range of people. You probably don’t just want to only receive images of people of just one type of ethnicity (or any other characteristic).’
Nothing wrong with getting a picture of a white guy walking a golden retriever in a suburban park. But if you ask for 10, and they’re all white guys walking goldens in suburban parks? And you live in Morocco, where the people, dogs, and parks all look different? That’s simply not a desirable outcome. If someone doesn’t specify a characteristic, the model should opt for variety, not homogeneity, despite how its training data might bias it.
This is a common problem across all kinds of generative media. And there’s no simple solution. But in cases that are especially common, sensitive, or both, companies like Google, OpenAI, Anthropic, and so on invisibly include extra instructions for the model.
I can’t stress enough how commonplace this kind of implicit instruction is. The entire LLM ecosystem is built on implicit instructions — system prompts, as they are sometimes called, where things like ‘be concise,’ ‘don’t swear,’ and other guidelines are given to the model before every conversation. When you ask for a joke, you don’t get a racist joke — because despite the model having ingested thousands of them, it has also been trained, like most of us, not to tell those. This isn’t a secret agenda (though it could do with more transparency); it’s infrastructure.
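To make the mechanism concrete, here is a minimal sketch of how a system prompt is prepended to every conversation. The message format mirrors the common chat-completion convention; the specific guideline text is an illustrative assumption, not any vendor's actual prompt.

```python
# Hypothetical sketch: a system prompt silently prepended to every
# conversation before the user's words ever reach the model.
SYSTEM_PROMPT = (
    "Be concise. Don't swear. Decline requests for offensive jokes."
)

def build_conversation(user_message: str) -> list[dict]:
    """Assemble the message list a chat-style LLM API would receive."""
    return [
        # The user never sees this first message, but the model always does.
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

messages = build_conversation("Tell me a joke.")
```

The point is simply that every request the model sees is already a composite: the user's words plus invisible standing instructions.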
Where Google’s model went wrong was that it failed to have implicit instructions for situations where historical context was important. So while a prompt like ‘a person walking a dog in a park’ is improved by the silent addition of ‘the person is of a random gender and ethnicity’ or whatever they put, ‘the U.S. Founding Fathers signing the Constitution’ is definitely not improved by the same.
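A toy sketch of the kind of logic that was apparently missing: augment generic prompts, but leave historically anchored ones alone. The keyword list, the suffix wording, and the function name are all my illustrative guesses, not Google's actual implementation.

```python
# Hypothetical sketch of silent prompt augmentation with a guard for
# historical context. Everything here is illustrative, not Google's code.
HISTORICAL_TERMS = {"founding fathers", "constitution", "medieval", "1940s"}

DIVERSITY_SUFFIX = " The people depicted are of varied genders and ethnicities."

def augment_prompt(prompt: str) -> str:
    """Append a diversity instruction unless the prompt looks historical."""
    lowered = prompt.lower()
    if any(term in lowered for term in HISTORICAL_TERMS):
        # Historical context detected: pass the prompt through untouched.
        return prompt
    return prompt + DIVERSITY_SUFFIX

# A generic prompt gets the silent addition...
augment_prompt("a person walking a dog in a park")
# ...while a historical one is left as written.
augment_prompt("the U.S. Founding Fathers signing the Constitution")
```

Gemini's failure mode, on this account, was effectively running the first branch on every prompt, historical or not.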
Google SVP Prabhakar Raghavan, in the company’s post, stopped just short of saying ‘sorry’ — and I know how hard that is sometimes, so I forgive him. More important is some interesting language in there: ‘The model became way more cautious than we intended.’
Now, how would a model ‘become’ anything? It’s software. Someone — Google engineers in their thousands — built it, tested it, iterated on it. Someone wrote the implicit instructions that improved some answers and caused others to fail hilariously. When this one failed, anyone able to inspect the full prompt would likely have found exactly what Google’s team did wrong.
Google blames the model for ‘becoming’ something it wasn’t ‘intended’ to be. But they made the model! It’s like they broke a glass, and rather than saying ‘we dropped it,’ they say ‘it fell.’ (I’ve done this.)
Mistakes by these models are inevitable, certainly. They hallucinate, they reflect biases, they behave in unexpected ways. But the responsibility for those mistakes does not belong to the models — it belongs to the people who made them. Today that’s Google. Tomorrow it’ll be OpenAI. The next day, and probably for a few months straight, it’ll be X.AI.
These companies have a strong interest in convincing you that AI is making its own mistakes. Don’t let them.
Ref: TechCrunch