The Meta platform has announced that it will identify AI-generated images on both Facebook and Instagram.


Facebook and Instagram users will soon see labels on AI-generated images that appear in their feeds, part of a broader tech-industry effort to distinguish what is real from what is not.

On Tuesday, Meta announced that it is working with industry partners on technical standards that will make it easier to identify images, and potentially video and audio, generated by artificial intelligence tools.

What remains to be seen is how well it will work at a time when it is easier than ever to create and share AI-generated images that can cause harm, from election misinformation to non-consensual fake nude images of celebrities.

According to Gili Vidan, an assistant professor at Cornell University, the move is an indication that platforms are acknowledging the problem of fake content generation online. The measure could be effective at flagging much of the AI-generated content made with commercial tools, but it will not catch everything.

Nick Clegg, Meta's president of global affairs, did not give a specific date on Tuesday for when the labels would appear, but said the rollout would come in the next few months, in multiple languages, and would coincide with several major elections taking place around the world.

In a blog post, he wrote that as the difference between human and synthetic content blurs, people want to know where the boundary lies.

Meta already applies an “Imagined with AI” label to photorealistic images made with its own tool, but most of the AI-generated content on its platforms comes from elsewhere.

Several partnerships within the tech industry, such as the Content Authenticity Initiative led by Adobe, have been focused on establishing guidelines. The implementation of digital watermarking and labeling for content generated by AI was also included in an executive order signed by U.S. President Joe Biden in October.

Clegg stated that Meta is committed to labeling images from various companies such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. This will be done as these companies move forward with their plans to add metadata to images created using their tools.
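Mechanically, the approach relies on generators embedding a provenance marker in an image's metadata, which platforms then check before deciding whether to apply a label. A minimal sketch in Python, assuming a simplified metadata dict with a hypothetical `generator` field; the real standards behind the Content Authenticity Initiative (C2PA Content Credentials) carry much richer, cryptographically signed manifests:

```python
# Hypothetical sketch of the labeling decision: flag an image whose
# embedded metadata names a known AI image generator. The "generator"
# field name and the substring matching are illustrative assumptions.

AI_TOOL_SIGNATURES = {
    "dall-e",         # OpenAI
    "firefly",        # Adobe
    "midjourney",
    "shutterstock",
}

def should_label(metadata: dict) -> bool:
    """Return True if the metadata declares a recognized AI-generation tool."""
    generator = metadata.get("generator", "").lower()
    return any(sig in generator for sig in AI_TOOL_SIGNATURES)
```

A check like this only catches tools that cooperate by writing the metadata in the first place, and metadata can be stripped, which is exactly the gap critics describe below.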

Last year, Google announced that it would be implementing AI labels on YouTube and its other platforms.

“In the upcoming months, we will be implementing labels that notify viewers when the content they are viewing is artificially created,” stated YouTube’s CEO Neal Mohan in a blog post on Tuesday discussing plans for the year ahead.

One concern for users is that platforms may get better at detecting AI-generated content from a handful of popular commercial tools while missing content made by other means, creating a false sense of security.

According to Vidan, how platforms communicate this to users matters a great deal: what the label means, how reliable it is, and what its absence implies.