AI-generated babies, real horror: Deepfakes from the war in Gaza heighten fears about AI's power to deceive.


Amid the pictures of destroyed homes and ruined streets in Gaza, some images stood out as particularly haunting: bloodied babies, injured and abandoned.

These deepfakes, created with artificial intelligence, have been viewed countless times online since the start of the war. Look closely and the clues are there: fingers that curl oddly, eyes that shimmer with an unnatural light, the telltale signs of digital fabrication.

The outrage the images were designed to provoke, however, is all too real.

Images from the war between Israel and Hamas have vividly demonstrated AI's potential as a propaganda tool, capable of producing lifelike depictions of violence. Over the past month of war, digitally altered images spread on social media have been used to make false claims about responsibility for casualties or to deceive people about atrocities that never happened.

While many of the false claims circulating online about the war did not require AI to create and came from more conventional sources, technological advances are arriving with increasing frequency and little oversight. That has made AI's potential to become another form of weapon starkly apparent, and offered a glimpse of what may come in future conflicts, elections and other major events.

The situation will get much worse before it gets better, according to Jean-Claude Goldenstein, CEO of CREOpoint, a technology company based in San Francisco and Paris that uses AI to assess the validity of online claims. The company has compiled a database of the most widely shared deepfakes to emerge from Gaza. With generative AI, Goldenstein said, manipulated images, video and audio will escalate to a degree not yet seen.

In some cases, images from earlier conflicts or disasters have been repurposed and passed off as current. In others, generative AI has been used to create images from scratch, such as one of a baby crying amid bombing wreckage that went viral in the conflict's earliest days.

Other examples of AI-generated imagery include videos showing supposed Israeli missile strikes, military vehicles rolling through devastated neighborhoods, and people combing through rubble for survivors.

Often, the fakes appear crafted to provoke a strong emotional reaction by featuring the bodies of babies, children or families. In the bloody early days of the war, supporters of both Israel and Hamas accused the other side of victimizing children and infants; deepfake images of crying babies circulated widely as supposed photographic "evidence."

The people behind such images are skilled at targeting the audience's deepest impulses and anxieties, said Imran Ahmed, CEO of the Center for Countering Digital Hate, a nonprofit. Whether it is a deepfake or an authentic photograph from another conflict, the emotional impact on the viewer is the same.

The more abhorrent the image, the more likely a user is to remember it and share it, unwittingly spreading the disinformation further.

Right now, people are being told to look at a picture of a baby, Ahmed said; the disinformation is designed to make them engage with it.

Similarly deceptive AI-generated content spread after Russia invaded Ukraine in 2022. One altered video appeared to show Ukrainian President Volodymyr Zelenskyy ordering Ukrainians to surrender. Such claims have continued to circulate as recently as last week, showing how persistent even easily debunked misinformation can be.

Each new conflict or election season offers those who spread disinformation a fresh chance to show off the latest advances in artificial intelligence. That has many AI experts and political scientists warning of the risks in the coming year, when several countries hold major elections, including the U.S., India, Pakistan, Ukraine, Taiwan, Indonesia and Mexico.

The risk that AI and social media could be used to spread misinformation to U.S. voters has alarmed lawmakers from both parties in Washington. At a recent hearing on the dangers of deepfake technology, Rep. Gerry Connolly, a Democrat from Virginia, said the U.S. should invest in developing AI tools designed to counter other AI.

The nation needs to get this right, Connolly said.

Around the world, startup technology companies are developing programs that can detect deepfakes, affix watermarks to images to prove their origin, or scan text to verify dubious claims that may have been inserted by AI.

The next wave of AI will focus on verifying the content that is already out there, said Maria Amelie, co-founder of Factiverse, a Norwegian company: how to detect misinformation, and how to analyze text to judge whether it is trustworthy. Factiverse has created an AI program that can scan content for inaccuracies or bias introduced by other AI programs.

Such programs are likely to interest educators, journalists, financial analysts and others keen to root out falsehoods, plagiarism or fraud. Similar programs are being developed to identify altered images or videos.

While this technology shows promise, those using AI to deceive are often a step ahead, according to David Doermann, a computer scientist who led a project at the Defense Advanced Research Projects Agency aimed at the national security threats posed by AI-manipulated images.

Doermann, now a professor at the University at Buffalo, said that effectively responding to the political and social problems posed by AI disinformation will require better technology and better regulation, along with voluntary industry standards and substantial investment in digital literacy programs to help internet users tell fact from fiction.

Every time a tool is released to detect this material, adversaries can use AI to cover up the trace evidence, Doermann said. Detection and takedown alone are no longer enough, he added; a much broader solution is needed.

Source: wral.com