AI-generated media imitating public figures have caused a stir across many realms of life — pop culture, religion, politics, and more. As generative artificial intelligence continues to advance, the ease with which we can distinguish the true from the false diminishes. AI-made photos, videos, and audio called “deepfakes” have taken the internet by storm over the last two years, prompting calls for regulation of the technology.
Quartz looks at the biggest AI deepfake moments.
An AI-generated video of Ukrainian President Volodymyr Zelenskyy on social media in 2022 told the country’s soldiers they should put down their weapons and surrender to Russia.
The video was more obviously fake than the deepfakes that followed, which have become harder to distinguish from reality. The clip convinced few, if any, viewers, given the fake Zelenskyy's odd accent and mismatched skin tone.
A deepfake photo of the Pope in a fashionable, knee-length Balenciaga puffer jacket was the first to go truly viral in early 2023, showcasing how rapidly the technology had advanced to blur the line between what's real and what's AI-generated. It appeared around the same time as AI-generated photos of Elon Musk holding hands with U.S. Representative Alexandria Ocasio-Cortez and fake pictures of Trump being arrested.
Viral song “Heart on My Sleeve,” with vocals imitating Drake and the Weeknd, was met with major concern from the music industry over intellectual property rights.
Later in 2023, Universal Music Group (UMG), together with ABKCO and Concord Publishing, filed a copyright infringement lawsuit against AI startup Anthropic, alleging that it stole lyrics to train its chatbot. And more than 200 of the biggest names in music — Nicki Minaj, Katy Perry, Billie Eilish, Camila Cabello, and many others — have since signed a strongly worded open letter calling on tech companies, AI developers, and music platforms to pledge not to make or use AI music-generating tools.
Florida Gov. Ron DeSantis’ now-defunct presidential campaign posted a video on X last summer that included deepfake images of Donald Trump kissing Anthony Fauci, the former director of the National Institute of Allergy and Infectious Diseases. The move heightened ongoing concerns over the use of AI to spread misinformation and disinformation ahead of the 2024 U.S. presidential election.
“What a bunch of malarkey,” the message began.
It was an AI-generated voice imitating President Joe Biden, calling thousands of New Hampshire residents to discourage them from voting in the state’s Democratic primary election. “[Y]our vote makes a difference in November, not this Tuesday,” the voice said.
Steven Kramer, the Democratic political consultant who admitted he paid $500 to orchestrate the robocalls, was indicted in New Hampshire on Thursday on 13 felony counts of voter suppression and 13 misdemeanors. Kramer also faces enforcement action from the Federal Communications Commission, which in February ruled that robocalls using AI-generated voices are illegal.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We’re putting the fraudsters behind these robocalls on notice,” said FCC Chairwoman Jessica Rosenworcel at the time.
AI-generated nudes of Taylor Swift flooded X in January, with one such photo garnering tens of millions of views and 24,000 reposts. Swifties swarmed the social media platform to drown out the deepfakes by posting real photos of the star.
If there’s one person who can unite the country, it’s Taylor Swift. The attack on Swift moved federal legislators across both parties to action. While there’s no federal law on the books regarding AI-made nudes, U.S. senators introduced legislation in January that would allow victims to sue their perpetrators, citing Swift’s case.
“Although the imagery may be fake, the harm to the victims from the distribution of sexually explicit deepfakes is very real,” said U.S. senators Richard Durbin and Lindsey Graham in their announcement of the bill dubbed the DEFIANCE Act. “Victims have lost their jobs, and may suffer ongoing depression or anxiety.”
Ahead of Taiwan's presidential and legislative elections in January, an online operation backed by the Chinese Communist Party, known as “Spamouflage” or “Dragonbridge,” used AI deepfakes to try to sway the results. The group made fake audio clips of a former candidate, who had dropped out of the race months earlier, appearing to endorse another candidate he did not support.
In response, Microsoft sounded the alarm about China’s use of AI to create misinformation campaigns and sway foreign elections. “This was the first time that Microsoft Threat Intelligence has witnessed a nation state actor using AI content in attempts to influence a foreign election,” its researchers wrote.