Three weeks ago, Google launched a new image generation feature on its Gemini conversational app (formerly known as Bard). The feature can create images of people. However, things didn't go as the company expected: the feature missed the mark and generated some inaccurate and offensive images, forcing the company to pause it altogether.
Now, Google has written a blog post explaining what went wrong and why the feature overcorrected. Explaining the problem, Prabhakar Raghavan, the company's Senior Vice President for Knowledge & Information, wrote that the company wanted to ensure the feature doesn't generate images that are violent or sexually explicit, or that depict real people. In addition, it attempted to make the feature work for everyone by showing a range of people in AI-generated images.
However, in pursuing this, the company's tuning went wrong in two ways: first, while generating images showing a range of people, it "failed to account for cases that should clearly not show a range", and second, "over time, the model became way more cautious" than the company intended and refused to answer prompts that weren't inherently offensive.
Google said, "These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong."
Google also added that it didn't want "Gemini to refuse to create images of any particular group" or "to create inaccurate historical — or any other — images".
The tech giant said that the tool is built for creativity and productivity and may not always be reliable when generating content "about current events, evolving news or hot-button topics".
The company has promised to take action whenever it identifies an issue.
Meanwhile, the government of India on Saturday warned Google India that its 'Digital Nagriks' should not be experimented on with 'unreliable' algorithms or AI models. In addition, the IT Ministry is in the process of issuing a notice to Google over "problematic and illegal" responses by its Gemini AI.