“Google apologized [last] Friday for its faulty rollout of a new artificial intelligence image-generator, acknowledging that in some cases the tool would ‘overcompensate’ in seeking a diverse range of people even when such a range didn’t make sense…
“The partial explanation for why its images put people of color in historical settings where they wouldn’t normally be found came a day after Google said it was temporarily stopping its Gemini chatbot from generating any images with people in them. That was in response to a social media outcry from some users claiming the tool had an anti-white bias in the way it generated a racially diverse set of images in response to written prompts…
“[Examples] that drew attention on social media this week were images that depicted a Black woman as a U.S. founding father and showed Black and Asian people as Nazi-era German soldiers.”
AP News
Both sides argue that AI is clearly not yet ready to replace humans:
“For now, at least, generative AI absolutely should not be used to create learning materials for our schools, breaking stories in our newspapers, or be anywhere within a 10,000-mile radius of our government. It turns out the business of interpreting the billions of bits of information online to arrive at rational conclusions is still very much a human endeavor. It is still very much a subjective matter, and there is a real possibility that no matter how advanced AI becomes, it always will be…
“This may be a hard pill to swallow for companies that have invested fortunes in generative AI development, but it is good news for human beings, who can laugh at the fumbling failures of the technology and know that we are still the best arbiters of truth. More, it seems very likely that we always will be.”
David Marcus, Fox News
“Image generators are profoundly strange pieces of software that synthesize averaged-out content from troves of existing media at the behest of users who want and expect countless different things. They’re marketed as software that can produce photos and illustrations — as both documentary and creative tools — when, really, they’re doing something less than that…
“That leaves their creators in a fitting predicament: In rushing general-purpose tools to market, AI firms have inadvertently generated and taken ownership of a heightened, fuzzy, and somehow dumber copy of corporate America’s fraught and disingenuous racial politics, for the price of billions of dollars, in service of a business plan to be determined, at the expense of pretty much everyone who uses the internet. They’re practically asking for it.”
John Herrman, New York Magazine
Other opinions below.
“Google didn’t just stack the AI; it changed the prompts to generate them. When you asked for an image of a Pope, it changed the request behind the scenes to include ‘diverse.’ There is a DEI filter, which changes the question to suit Google’s ideology. Google doesn’t just stack the answers; it changes the question for you…
“Conservatives’ weakness in reaching normies is that we often sound like conspiracy theorists. We say that Google is rigged, and it sounds crazy to people. Or, even if they believe it, the reality doesn’t sink in enough to change behavior. Well, guess what--Google just inadvertently proved us right. And you can see it with your own eyes.”
David Strom, Hot Air
“As funny as this forced diversity is — the image-generating part of Gemini was taken offline within the day in an embarrassed rush back to the drawing board — it is intensely disconcerting as well. It was so off-putting at first that I actually refused to believe it was intended sincerely; as I said, the image generation results were simply too preposterous, so easily predictable and avoidable, and so comically insulting to the realities of history that I figured we were, for some reason, being cosmically trolled by Google’s devs…
“We were not. The text-generating aspect of Gemini — which, to be clear, is the one far more likely to be used by people searching for information or seeking to formulate arguments — is every bit as shot through with ultra-progressive bias, that of the most paternalistic sort. Gemini will simply refuse to answer questions that are in any way coded against progressive assumptions…
“The informational future that Silicon Valley’s biggest giant intends for us openly and proudly beckons with Gemini — and it is one not just where reality is happily bent to serve the whims of modern DEI obsessions but where certain matters are simply no longer up for discussion, or even acknowledgeable as real. Make no mistake: Google intends this program to shape our understanding of the world.”
Jeffrey Blehar, National Review
“Three years ago, Google got in trouble when its photo-tagging tool started labelling some Black people as apes… Google’s earlier chatbot Bard was so faulty that it made factual errors in its marketing demo. Employees had sounded warnings about that, but managers wouldn’t listen. One posted on an internal message board that Bard was ‘worse than useless: please do not launch,’ and many of the 7,000 staffers who viewed the message agreed…
“The issue is that the company did a shoddy job overcorrecting on tech that used to skew racist. No, its Chief Executive Officer Sundar Pichai hasn’t been infected by the woke mind virus. Rather, he’s too obsessed with growth and is neglecting the proper checks on his products… The female popes and Black founding fathers are products of a deeper, years-long problem of putting growth and market dominance before safety.”
Parmy Olson, Bloomberg
“‘Racially diverse Nazis’ and racist mislabeling of Black men as gorillas are two sides of the same coin. In each example, a product is rolled out to a huge user base, only for that user base—rather than Google’s staff—to discover that it contains some racist flaw. The glitches are the legacy of tech companies that are determined to present solutions to problems that people didn’t know existed…
“Google—and other generative-AI creators—are trapped in a bind. Generative AI is hyped not because it produces truthful or historically accurate representations: It’s hyped because it allows the general public to instantly produce fantastical images that match a given prompt. Bad actors will always be able to abuse these systems. (See also: AI-generated images of SpongeBob SquarePants flying a plane toward the World Trade Center.)…
“We should expect Google—and any generative-AI company—to do better. Yet resolving issues with an image generator that creates oddly diverse Nazis would rely on temporary solutions to a deeper problem: Algorithms inevitably perpetuate one kind of bias or another.”
Chris Gilliard, The Atlantic