News

Trolls have flooded X with graphic Taylor Swift AI fakes

Sexually explicit AI-generated images of Taylor Swift have been circulating on X (formerly Twitter) over the past day, the latest example of how AI-generated fake pornography spreads and how difficult it is to stop. One post from a verified user drew over 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks before the account was suspended for violating platform policies. The post stayed live on the site for around 17 hours before it was taken down.

By then, however, the images had already spread to other accounts as discussion of the trending post grew. Many of them remain online, and a wave of new explicit fakes has appeared since. The images gained further visibility when the phrase “Taylor Swift AI” began trending in several regions. According to a 404 Media report, the images may have originated in a Telegram channel whose members share explicit AI-generated images of women, often created with Microsoft Designer. Members of the group reportedly joked about how the Swift images had gone viral on X.


Swift’s fans have criticized X for letting some of the posts stay up for so long. They have responded by flooding the hashtags used to spread the images with messages promoting real clips of Swift performing rather than the explicit fakes.

The episode underscores how hard it is to stop deepfake porn and AI-generated images of real people. Some AI image generators have restrictions that block the creation of nude, pornographic, or photorealistic images of celebrities, but many others do not, which leaves social media platforms to bear much of the burden of stopping fake images from spreading.

That is a difficult task in the best of circumstances, and far harder for a company like X that has gutted its moderation teams. The company is already facing questions about its crisis procedures after misinformation about the Israel-Hamas war was found being promoted across the platform, and the EU is investigating it over allegations that it is being used to “disseminate illegal content and disinformation.”

The Taylor Swift deepfakes are a warning

This weekend marks the end of Platformer’s big sale: using this link, new customers can get 20 percent off the first year of an annual subscription.

Is it too early to conclude that, on balance, generative AI has not been good for the internet? Consider the evidence. One: researchers say the surge in AI-generated spam has made human-written articles harder to find in Google search results, and the resulting drop in advertising revenue has contributed to the devastating recent layoffs in the journalism business. Two: generative AI tools have enabled a new class of election fraud and manipulation; this month, AI-generated voices were used to deceive voters both in Harlem politics and in the New Hampshire primary.

The third, and my subject today, is the use of generative AI tools in harassment campaigns. The issue drew widespread attention on Wednesday, when sexually explicit AI-generated images of Taylor Swift appeared on X. The phrase “going viral” is overused, but these images genuinely reached an enormous audience. At its core, this is a story about X, and not a very surprising one. Since taking over, Elon Musk has dismantled the company’s trust and safety teams and enforced its written policies according to his whims. Advertisers have fled the resulting chaos, and regulators around the world are opening investigations. (X did not respond to my request for comment.)

Under those conditions, it makes sense that the site would be inundated with explicit AI-generated images. Though it is rarely acknowledged in polite conversation, X is among the most popular porn apps in the world, thanks to its long-standing policy of permitting explicit images and videos and to Apple’s tolerance of a company that has repeatedly broken its rules. (X is officially rated 17+, a strikingly low rating, for “Infrequent/Mild Sexual Content and Nudity.”) Distinguishing consensual, legal adult content from AI-generated harassment requires robust policies, dedicated staff, and fast enforcement mechanisms. X has none of those, and that is how a single post targeting Taylor Swift racks up 45 million views.

It would be a mistake, though, to view this week’s harassment of Swift solely through the lens of X’s failings. Another important lens is the way platforms that have resisted aggressive content moderation give bad actors a place to plan, produce, and distribute harmful material at scale. In particular, researchers have repeatedly observed a pipeline between the messaging service Telegram and X, in which malicious campaigns are organized and developed on the former before being spread on the latter.

Indeed, the Swift deepfakes traveled that same Telegram-to-X pipeline, according to Emanuel Maiberg and Samantha Cole of 404 Media:

According to 404 Media, the sexually explicit AI-generated images of Taylor Swift that went viral on Twitter were first shared in a Telegram channel dedicated to abusive images of women. The group uses a free Microsoft text-to-image AI generator as one of its tools.

The exact images that went viral on Twitter last night were posted to the Telegram group a day earlier, 404 Media found. After the tweets took off, some members of the group even joked that the attention the photographs were getting on Twitter would get the Telegram group shut down.

Given that Telegram won’t even ban the sharing of child sexual abuse material, I’d say there is little chance of that happening. In any case, it is increasingly clear that Telegram, which has more than 700 million monthly users, deserves at least as much scrutiny as the other major social platforms.

The final, and perhaps most important, lens through which to view the Swift story is the technology itself. As noted above, the images that fed the Telegram-to-X pipeline were reportedly created with Designer, Microsoft’s free generative AI tool, which is currently in beta.

https://youtu.be/1L65jnQ6qjQ
