Taylor Swift fans fight back against explicit deepfakes by sharing real photos of her

The spread of fabricated, sexually explicit images of Taylor Swift across online platforms has prompted her supporters to fight back by flooding those platforms with authentic photographs of the singer.


NEW YORK (AP) — Pornographic deepfake images of Taylor Swift are circulating online, making the singer the most famous victim of a scourge that tech platforms and anti-abuse groups have struggled to fix.

Sexually explicit and abusive fake images of Swift have been spreading widely on the social media platform X this week.

Swift’s devoted fans, known as “Swifties,” quickly mobilized on the platform formerly known as Twitter, flooding the hashtag #ProtectTaylorSwift with positive images of the singer. Some said they were reporting accounts that were spreading the deepfakes.

Reality Defender, an organization that specializes in detecting deepfakes, said it tracked a flood of nonconsensual pornographic material depicting Swift, primarily on X but also on Meta-owned Facebook and other social media platforms.

“Regrettably, they had already reached countless users before some of them were removed,” stated Mason Allen, the head of growth at Reality Defender.

The researchers found numerous unique AI-generated images. The most widely shared were football-related, showing a painted or bloodied Swift in ways that objectified her and, in some cases, depicted violent harm being inflicted on her deepfake persona.

Experts say explicit deepfake content has proliferated over the past few years as the technology used to produce it has become more accessible and easier to use. A 2019 report from DeepTrace Labs, an artificial intelligence company, found that such images were overwhelmingly weaponized against women; most of the victims, it said, were Hollywood celebrities and South Korean K-pop artists.

Brittany Spanos, a senior writer at Rolling Stone who teaches a course on Swift at New York University, said Swift’s fans are quick to rally behind her in moments of controversy, particularly her most devoted followers.

Spanos said it could have a significant impact if Swift takes the matter to court.

Spanos said the problem of deepfake pornography echoes other issues Swift has faced, pointing to her 2017 legal case against a radio host who allegedly sexually assaulted her. The jury awarded Swift $1 in damages, an amount her lawyer, Douglas Baldridge, described as symbolic and of immeasurable value to all women in light of the MeToo movement. The $1 countersuit became something of a trend, echoed in Gwyneth Paltrow’s 2023 countersuit against a skier.

When asked about the fabricated images of Swift, X referred The Associated Press to a statement from its safety account, which said the company strictly prohibits the sharing of non-consensual nude images on its platform. X has sharply reduced the size of its content moderation teams since Elon Musk took over the platform in 2022.

“Our company has taken swift action to remove any images that have been identified and is also addressing the responsible accounts. We are closely monitoring the situation to promptly address any future violations and remove the content.”

Meta said in a statement that it strongly condemns “the content that has spread through various online platforms” and that it has taken steps to remove it.

The company said it will continue to monitor its platforms for content that violates its policies and will take action as needed.

A spokesperson for Swift did not immediately respond to a request for comment on Friday.

Allen said the researchers are 90% confident the images were created by diffusion models, a type of generative artificial intelligence that can produce new, photorealistic images from written prompts. The best-known examples include Stable Diffusion, Midjourney and OpenAI’s DALL-E. Allen’s team did not try to determine the source of the images.

OpenAI said it has safeguards in place to limit the creation of harmful content and to reject requests that specifically name a public figure, including Taylor Swift.

Microsoft, which offers an image-generation tool based partly on DALL-E, said Friday that it was investigating whether the tool had been misused. As with its other commercial AI services, Microsoft prohibits the creation of “adult or non-consensual intimate content,” and repeated violations of that policy can result in loss of access to the service.

In an interview with Lester Holt of “NBC Nightly News” set to air Tuesday, Microsoft CEO Satya Nadella was asked about the Swift deepfakes. He said there is still much work to be done in putting guardrails around AI and that it is important to move quickly on the issue.

Nadella called the situation alarming and terrible, and said action is necessary.

Midjourney, OpenAI and Stability AI, the maker of Stable Diffusion, did not immediately respond to requests for comment.

U.S. legislators who have proposed legislation to increase regulations or make deepfake pornography a criminal offense have pointed to this incident as evidence of the need for stronger safeguards.

U.S. Representative Yvette D. Clarke, a New York Democrat, said what happened to Taylor Swift is nothing new: women have been targeted by nonconsensual deepfakes for years. Clarke has proposed a bill that would require creators to digitally watermark deepfake content.

Clarke said generative AI is making it possible to produce better deepfakes at lower cost.

Representative Joe Morelle, a Democrat from New York who is pushing a bill that would criminalize sharing deepfake pornography online, said the images of Swift are troubling and that such material has become increasingly widespread on the internet.

The images may be fake, Morelle said, but their consequences are real. He emphasized that deepfakes are a daily reality for women in an increasingly digital world and that it is crucial to take action against them.

Source: wral.com