Google Rushes To Fix ‘Woke’ AI After Backlash


Google is currently fixing its latest AI-powered chatbot after backlash over incorrect depictions of historical and current events.

The issue has arisen with the Gemini artificial intelligence bot, which, when prompted, has delivered ethnically diverse images of historical figures such as the Founding Fathers, who in one instance were depicted as Black when they were white. When prompted about current events such as the Israel-Hamas war in the Middle East, it apparently refused to describe Hamas as a terrorist organization, despite the group's designation as one by the U.S. and some other governments.

The internet giant has been accused of pushing a “woke” agenda by some conservative figures.

A spokesperson for Google told Newsweek: “We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”

In a previous statement, also issued to Newsweek, a spokesperson said the tool was “missing the mark” but added that depicting a “wide range of people” is “generally a good thing” given that the tool is used around the world.

The Google logo on January 30, 2024, in Barcelona, Spain. Google launched its AI chatbot Gemini this month, but some users have accused the company of attempting to push a particular agenda. GETTY

The discovery has caused backlash online. Mike Wacker, a software engineer who has previously worked for Google and describes himself as a Republican, posted on X, formerly Twitter, about his experiments with the AI bot. He found that when asked for an image of the Founding Fathers, it generated images of a Black woman and a man of Asian origin.

Andrew Torba, CEO of Gab, a microblogging service with a far-right user base, posted screenshots of prompts that reportedly got the AI tool to reveal its internal prompt modification, which shows the tool adds words like “diverse”, “non-binary” and other ethnicity-specific terms. He wrote in a post: “Someone got Google’s Gemini to leak its woke prompt injection process and guess what: it works exactly as I described it below earlier today.”

“It’s embarrassingly hard to get Google Gemini to acknowledge that white people exist,” computer scientist Debarghya Das wrote on X.

X user John L said on the platform that “it’s a good thing to portray diversity ** in certain cases **. Representation has material outcomes on how many women or people of color go into certain fields of study. The stupid move here is Gemini isn’t doing it in a nuanced way.” He posted his own screenshots of Gemini’s output, having asked it to create images of a 1943 “German Soldier,” to which it produced images of four German soldiers, one of whom was white, with the others from ethnically diverse backgrounds.

Gemini, a rebranding of Google’s earlier Bard, was launched this month to compete with other AI programs like OpenAI’s ChatGPT. Adi Robertson, senior tech and policy editor at The Verge, said it is “plausible that Gemini has made an overall attempt to boost diversity because of a chronic lack of it in generative AI.”

“Image generators are trained on large corpuses of pictures and written captions to produce the ‘best’ fit for a given prompt, which means they’re often prone to amplifying stereotypes,” Robertson wrote.