According to Google, their artificial intelligence image generator may occasionally try too hard to include diversity.

On Friday, Google apologized for the flawed rollout of its new artificial intelligence image-generator, acknowledging that in some cases the tool would “overcompensate” in seeking a diverse range of people even when such a range made no sense.

The apology followed Google’s announcement this week that it was temporarily halting its Gemini chatbot from generating any images of people, after a social media backlash over depictions that placed people of color in historical contexts. Some users alleged the tool was biased against white people because it produced a racially diverse range of images in response to written prompts.

In a blog post, Google senior vice president Prabhakar Raghavan acknowledged that the feature clearly fell short of expectations, sometimes producing images that were incorrect or inappropriate. He said the company was grateful for users’ feedback and sorry the feature didn’t work well.

Raghavan didn’t mention specific examples, but among those that drew attention on social media this week were images that depicted a Black woman as a U.S. founding father and showed Black and Asian people as Nazi-era German soldiers. The Associated Press was not able to independently verify what prompts were used to generate those images.

Google added the image-generating feature to its Gemini chatbot, formerly known as Bard, about three weeks ago. The feature was built on an earlier Google research project called Imagen 2.

Google has known for a while that such tools can be unwieldy. In a 2022 technical paper, the researchers who developed Imagen warned that generative AI tools could be misused for harassment or spreading misinformation, and that they raise concerns about social and cultural exclusion and bias. Those considerations informed Google’s decision not to release a public demo of Imagen or its underlying code.

Since then, pressure to publicly release generative AI products has grown amid a competitive race among tech companies trying to capitalize on interest in the emerging technology, sparked by the debut of OpenAI’s chatbot ChatGPT.

Gemini’s problems are not the first to recently beset an image-making tool. Microsoft had to adjust its Designer tool several weeks ago after some people were using it to create deepfake pornographic images of Taylor Swift and other celebrities. Studies have also shown that AI image-generators can amplify racial and gender stereotypes found in their training data, and without filters they are more likely to show lighter-skinned men when asked to generate a person in various contexts.

Raghavan said Friday that when Google built the feature into Gemini, the tool was carefully tuned to avoid the traps seen in earlier image-generation technology, such as creating violent or sexually explicit images or depictions of real people. And because Gemini’s users come from around the world, the company wanted it to work well for everyone.

People asking for a picture of football players, or of someone walking a dog, may want to see a range of people, he said. But users asking for images of a specific race or ethnicity, or of people in particular cultural contexts, should get a response that accurately reflects their request.

While the model overcompensated in response to some prompts, it was overly cautious with others, refusing to answer them at all and wrongly interpreting innocuous prompts as sensitive.

Raghavan didn’t specify which prompts triggered those refusals, but Gemini routinely rejects requests on certain subjects, such as protest movements, according to tests of the tool by the AP on Friday. It declined to generate images about the Arab Spring, the George Floyd protests, or Tiananmen Square. In one instance, the chatbot said it didn’t want to contribute to the spread of misinformation or the trivialization of sensitive topics.

Much of the anger over Gemini’s outputs originated on X, formerly Twitter, and was amplified by the platform’s owner, Elon Musk, who accused Google of “crazy racist, anti-civilization programming.” Musk, who has his own AI venture, has frequently criticized rival AI developers as well as Hollywood for what he alleges is a liberal bias.

Raghavan said Google will conduct extensive testing before re-enabling the chatbot’s ability to show people.

Sourojit Ghosh, a University of Washington researcher who has studied bias in AI image-generators, said Friday he was disappointed that Raghavan’s message ended with a disclaimer that Google cannot guarantee Gemini won’t sometimes produce embarrassing, inaccurate, or offensive results.

For a company with such refined search algorithms and access to vast troves of data, Ghosh said, generating accurate and inoffensive results should be a fairly straightforward task, and it is reasonable to hold Google accountable when it falls short.

Source: wral.com