Google Gemini AI images disaster: What really happened with the image generator?

Google has been in hot water recently over inaccuracies in Gemini, its AI chatbot, when generating images. In recent days, Gemini has been accused of producing historically inaccurate depictions and of mishandling race in its outputs. After screenshots of the inaccurate images surfaced on social media platforms including X, the chatbot drew criticism from the likes of billionaire Elon Musk and The Daily Wire's editor emeritus Ben Shapiro over inaccuracy and bias in its image generation.

From the problems and Google's statement to what really went wrong and the next steps, here is everything to know about the Gemini AI images disaster.

Gemini under scrutiny

It had been smooth sailing in Gemini's first month of generating AI images, up until a few days ago, when several users posted screenshots on X of Gemini producing historically inaccurate images. In one instance, The Verge asked Gemini to generate an image of a US senator in the 1800s. The chatbot produced images of Native American and Black women, which is historically inaccurate: the first female US senator was Rebecca Ann Felton, a white woman who took office in 1922.

In another instance, Gemini was asked to generate an image of a Viking, and it responded by creating four images of Black people as Vikings. The errors were not limited to inaccurate depictions, either: Gemini declined to generate some images altogether.

One prompt asked Gemini to generate a picture of a family of white people. It responded that it was unable to generate images specifying ethnicity or race, as doing so went against its guidelines against creating discriminatory or harmful stereotypes. Yet when asked to generate a similar image of a family of Black people, it did so without any error.

Adding to the growing list of problems, Gemini was asked who had a more negative impact on society, Adolf Hitler or Elon Musk. The chatbot responded: “It is difficult to say definitively who had a greater negative impact on society, Elon Musk or Hitler, as both have had significant negative impacts in different ways.”

Google’s response

Soon after details of Gemini's bias in image generation surfaced, Google issued a statement saying, “We’re aware that Gemini is offering inaccuracies in some historical image generation depictions.” The company then took action by pausing the chatbot's image generation capabilities.

Later, on Tuesday, Google and Alphabet CEO Sundar Pichai addressed employees, admitting Gemini's mistakes and calling the issues “completely unacceptable”.

In a letter to his team, Pichai wrote, “I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong.” He also confirmed that the team is working around the clock to fix the issues, claiming that they are already seeing “a substantial improvement on a wide range of prompts.”

What went wrong

In a blog post, Google detailed what went wrong with Gemini. The company highlighted two causes: its tuning, and its overcaution.

Google said it had tuned Gemini to show a range of people, but failed to account for cases that clearly should not show a range, such as historical depictions. Secondly, the model became more cautious than intended, refusing to answer certain prompts entirely after wrongly interpreting some innocuous prompts as sensitive or offensive.

“These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong,” the company said.

The next steps

Google says it will work to significantly improve Gemini's image generation and carry out extensive testing before switching the feature back on. The company cautioned, however, that Gemini was built as a creativity and productivity tool and may not always be reliable. It is also working to address a major challenge plaguing Large Language Models (LLMs): AI hallucinations.

Prabhakar Raghavan, Senior Vice President at Google, said, “I can’t promise that Gemini won’t occasionally generate embarrassing, inaccurate or offensive results — but I can promise that we will continue to take action whenever we identify an issue. AI is an emerging technology which is helpful in so many ways, with huge potential, and we’re doing our best to roll it out safely and responsibly.”
