The Real Taylor Swift Would Never

Fans are using AI tools to synthesize the star’s voice, demonstrating how new technology is blurring the boundaries between reality and fiction.

Illustration by Joanne Imperio / The Atlantic

AI Taylor Swift is mad. She is calling up Kim Kardashian to complain about her “lame excuse of a husband,” Kanye West. (Kardashian and West are, in reality, divorced.) She is threatening to skip Europe on her Eras Tour if her fans don’t stop asking her about international dates. She is insulting people who can’t afford tickets to her concerts and using an unusual amount of profanity. She’s being kind of rude.

But she can also be very sweet. She gives a vanilla pep talk: “If you are having a bad day, just know that you are loved. Don’t give up!” And she just loves the outfit you’re wearing to her concert.

She is also a fan creation. Based on tutorials posted to TikTok, many Swifties are using a program to create hyper-realistic sound bites using Swift’s voice and then circulating them on social media. The tool, the beta of which was launched in late January by ElevenLabs, offers “Instant Voice Cloning.” In effect, it allows you to upload an audio sample of a person’s voice and make it say whatever you want. It’s not perfect, but it’s pretty good. The audio has some tonal hitches here and there, but it tends to sound natural—close enough to fool you if you aren’t paying close attention. Dark corners of the internet immediately used it to make celebrities say abusive or racist things; ElevenLabs said in response that it “can trace back any generated audio to the user” and would consider adding more guardrails—such as manually verifying every submission.

Whether it has done this is unclear. After I forked over $1 to try the technology myself—a discounted rate for the first month—my upload was approved nearly instantly. The slowest part of the process was finding a clear one-minute audio clip of Swift to use as a source for my custom AI voice. Once that was approved, I was able to use it to create fake audio right away. The entire process took less than five minutes. ElevenLabs declined to comment about its policies or the ability to use its technology to fake Taylor Swift’s voice, but it provided a link to its guidelines about voice cloning. The company told The New York Times earlier this month that it wants to create a “universal detection system” in collaboration with other AI developers.

The arrival of AI Taylor Swift feels like a teaser for what’s to come in a strange new era defined by synthetic media, when the boundaries between real and fake might blur into meaninglessness. For years, experts have warned that AI would lead us to a future of infinite misinformation. Now that world is here. But in spite of apocalyptic expectations, the Swift fandom is doing just fine (for now). AI Taylor shows us how human culture can evolve alongside more and more complex technology. Swifties, for the most part, don’t seem to be using the tool maliciously: They’re using it for play and to make jokes among themselves. Giving fans this tool is “like giving them a new kind of pencil or a paintbrush,” explains Andrea Acosta, a Ph.D. candidate at UCLA who studies K-pop and its fandom. They are exploring creative uses of the technology, and when someone seems to go too far, others in the community aren’t afraid to say so.

In some ways, fans might be uniquely well prepared for the fabricated future: They have been having conversations about the ethics of using real people in fan fiction for years. And although every fandom is different, researchers say these communities tend to have their own norms and be somewhat self-regulating. They can be some of the internet’s most diligent investigators. K-pop fans, Acosta told me, are so good at parsing what’s real and what’s fake that sometimes they manage to stop misinformation about their favorite artist from circulating. BTS fans, for example, have been known to call out factual inaccuracies in published articles on Twitter.

The possibilities for fans hint at a lighter side of audio and video produced by generative AI. “There [are] a lot of fears—and a lot of them are very justified—about deepfakes and the way that AI is going to kind of play with our perceptions of what reality is,” Paul Booth, a professor at DePaul University who has studied fandoms and technology for two decades, told me. “These fans are kind of illustrating different elements of that, which is the playfulness of technology and the way that it can always be used in the kind of fun and maybe more engaging ways.”

But AI Taylor Swift’s viral spread on TikTok adds a wrinkle to these dynamics. It’s one thing to debate the ethics of so-called real-person fiction among fans in a siloed corner of the internet, but on such a large and algorithmically engineered platform, the content can instantly reach a huge audience. The Swifties playing with this technology share a knowledge base, but other viewers may not. “They know what she has said and what she hasn’t said, right? They’re almost immediately able to clock, Okay, this is an AI; she never said that,” Lesley Willard, the program director for the Center for Entertainment and Media Industries at the University of Texas at Austin, told me. “It’s when they leave that space that it becomes more concerning.”

Swifties on TikTok are already establishing norms regarding the voice AI, based at least in part on how Swift herself might feel about it. “If a bunch of people start saying, ‘Maybe this isn’t a good idea. It could be negatively affecting her,’” one 17-year-old TikTok Swiftie named Riley told me, “most people really just take that to heart.” Maggie Rossman, a professor at Bellarmine University who studies the Swift fandom, thinks that if Taylor were to come out against specific sound bites or certain uses of the AI voice, then “we’d see it shut down amongst a good part of the fandom.”

But this is challenging territory for artists. They don’t necessarily want to squash their fans’ creativity and the sense of community it builds—fan culture is good for business. In the new world, they’ll have to navigate the tension between allowing some remixing while maintaining ownership of their voice and reputation.

A representative for Swift did not respond to a request for comment on how she and her team are thinking about this technology, but fans are convinced that she’s listening. After her official TikTok account “liked” one video using the AI voice, a commenter exclaimed, “SHES HEARD THE AUDIO,” following up with three crying emoji.

TikTok, for its part, just released new community guidelines for synthetic media. “We welcome the creativity that new artificial intelligence (AI) and other digital technologies may unlock,” the guidelines say. “However, AI can make it more difficult to distinguish between fact and fiction, carrying both societal and individual risks.” The platform does not allow AI re-creations of private people, but gives “more latitude” for public figures—so long as the media is identified as being AI-generated and adheres to the company’s other content policies, including those about misinformation.

But boundary-pushing Swift fans can probably cause only so much harm. They might destroy Ticketmaster, sure, but they’re unlikely to bring about AI Armageddon. Booth thinks about all of this in terms of “degrees of worry.”

“My worry for fandom is, like, Oh, people are going to be confused and upset, and it may cause stress,” he said. “My worry with [an AI fabrication of President Joe] Biden is, like, It might cause a nuclear apocalypse.”

Caroline Mimbs Nyce is a staff writer at The Atlantic.