Google has temporarily halted the feature of Gemini AI chatbot that allows it to create images of individuals.

On Thursday, Google announced that it would temporarily stop its Gemini AI chatbot from producing images of people. The decision came after the company apologized for “inaccuracies” in historical depictions created by the chatbot.

This week, some Gemini users shared pictures on social media of traditionally white environments featuring more diverse characters, which they claim were created by the platform. This has sparked criticism and raised concerns about whether the company is going too far in trying to avoid racial bias in its AI model.

Google acknowledged recent problems with Gemini’s image generation feature and said it is actively working to resolve them. In the meantime, the company has temporarily suspended the generation of images featuring people and plans to release an improved version in the near future.

Past research has demonstrated that AI image-generators can reinforce the racial and gender biases present in their training data. Without appropriate filters, they are more likely to depict lighter-skinned men when asked to produce an image of a person in various scenarios.

On Wednesday, Google acknowledged that some historical image depictions generated by Gemini have been inaccurate and said it is working to improve such depictions as quickly as possible.

According to Gemini, its system serves a diverse group of people from around the globe, which it views as a positive; however, it also acknowledged that it falls short in certain areas.

Sourojit Ghosh, a researcher at the University of Washington who has examined bias in AI image-generators, said he supported Google’s decision to temporarily halt the generation of human faces, though he expressed some uncertainty about the process that led to this outcome. Despite claims on social media about “white erasure” and the belief that Google’s Gemini refuses to create faces of white individuals, Ghosh’s research has shown the opposite.

He stated that he found it challenging to reconcile the speed of this response with the abundance of literature and research that has demonstrated the exclusion of marginalized groups by models such as this one.

When the AP asked Gemini to produce images of a person or a large group of people, the chatbot responded that it is working to improve this capability, that it expects the feature to be available again soon, and that it will notify users through release updates.

According to Ghosh, Google can probably develop a method to filter responses and take the historical context of a user’s request into account. However, addressing the larger problems of image-generating programs built from years of online photos and artwork requires more than a technical fix.

He stated that it is not possible to quickly create a text-to-image generator that does not perpetuate harmful representations. These generators reflect the values of our society.

Source: wral.com