A watchdog agency warned Tuesday that the already-growing volume of child sexual abuse imagery on the internet could become far worse if nothing is done to rein in artificial intelligence tools that generate deepfake images.
In a written report, the U.K.-based Internet Watch Foundation urges governments and technology providers to act quickly, before a flood of AI-generated images of child sexual abuse overwhelms law enforcement investigators and vastly expands the pool of potential victims.
Dan Sexton, the watchdog group's chief technology officer, said the danger is not hypothetical. The abuse is happening now, he said, and it needs to be addressed now.
In a case that appears to be the first of its kind, a South Korean court in September sentenced a man to 2 1/2 years in prison for using artificial intelligence to create 360 virtual child abuse images. The ruling came from the Busan District Court in the country's southeast.
In some cases, children are using these tools on one another. At a school in southwestern Spain, police are investigating teenagers' alleged use of a phone app to digitally undress their classmates in photos.
The report exposes a dark side of the race to build generative AI systems that let users describe, in words, what they want to produce – an email, a piece of artwork, a video – and have the system generate it.
If left unchecked, the flood of deepfake child sexual abuse images could bog down investigators trying to rescue children who turn out to be virtual characters. Perpetrators could also use the images to groom and coerce new victims.
Sexton said IWF analysts discovered faces of famous children online, as well as a massive demand for the creation of more images of children who were already abused, possibly years ago.
Perpetrators are taking existing real content and using it to create new content featuring those victims, he said, calling the practice incredibly shocking.
Sexton said his charity, which focuses on combating online child sexual abuse, began fielding reports about abusive AI-generated imagery earlier this year. That led it to investigate forums on the so-called dark web, a part of the internet hosted within an encrypted network and accessible only through tools that provide anonymity.
There, IWF analysts found abusers sharing tips and marveling at how easy it was to use their home computers to generate sexually explicit images of children of all ages. Some are also trading such images and trying to profit from them, and the images are becoming increasingly lifelike.
What the group is starting to see, Sexton said, is an explosion of content.
The IWF's report is meant to flag a growing problem, and it urges governments to strengthen laws to make it easier to combat AI-generated abuse. It specifically calls on the European Union to resolve a debate over surveillance measures that could automatically scan messaging apps for suspected images of child sexual abuse, even if those images are not already known to law enforcement.
A central aim of the group's work is to prevent past victims of sexual abuse from being abused again through the redistribution of their photographs.
The report says technology providers could do more to make their products harder to use this way, though the task is complicated by the fact that some of these tools are difficult to put back in the bottle.
A crop of new AI image-generators introduced last year stunned the public with their ability to conjure up whimsical or photorealistic images on command. But most are not favored by producers of child sexual abuse material, because they contain built-in mechanisms to block it.
Technology providers with closed, proprietary AI models – giving them full control over how the models are trained and used – have had more success at blocking misuse, Sexton said, citing OpenAI's image-generator DALL-E as an example.
By contrast, producers of child sexual abuse images often favor Stable Diffusion, an open-source tool developed by the London-based startup Stability AI. Soon after it emerged in the summer of 2022, some users learned how to use it to generate nudity and pornography. While most of that material depicted adults, it was often nonconsensual, as when the tool was used to create celebrity-inspired nude pictures.
Stability later rolled out new filters that block unsafe and inappropriate content, and a license to use its software comes with a ban on illegal uses.
In a statement Tuesday, the company said it strictly prohibits any misuse of its platforms for illegal or unethical purposes, and that it strongly supports law enforcement efforts against those who abuse its products for illegal or nefarious ends.
But users can still access older, unfiltered versions of Stable Diffusion, which remain the software of choice for people creating explicit content involving children, according to David Thiel, chief technologist of the Stanford Internet Observatory.
It is impossible to regulate what people do on their computers in their own bedrooms, Sexton said. The question, he added, is how to get to the point where they cannot use openly available software to create harmful content like this.
Most AI-generated child sexual abuse images would already be considered illegal under existing laws in the U.S., U.K. and other countries, but it remains unclear whether law enforcement has the tools to combat them.
The IWF released its report ahead of a global AI safety summit hosted by the British government, whose attendees will include high-profile figures such as U.S. Vice President Kamala Harris and tech industry leaders.
In a prepared statement, IWF CEO Susie Hargreaves said that while the report paints a bleak picture, she remains optimistic, stressing the importance of communicating the realities of the problem to a wide audience so that the darker side of this powerful technology can be openly discussed.
___
O’Brien reported from Providence, Rhode Island. Barbara Ortutay in Oakland, California, and Hyung-jin Kim in Seoul, South Korea, contributed reporting.