The Extraordinary and Terrifying World of “Deepfakes”
With the disturbing rise of new visual augmentation software, you can appear anywhere at any time doing anything, all without your consent. Can anyone remain untouched, or are we totally fucked? Photographed by Hannah Whitaker.
A woman with Kim Kardashian’s face but not Kim Kardashian’s body, one of the rare objects more recognizable than her face, appears to be having sex in a porn video. She gasps and writhes. Nothing excites. This is what we’re calling a “deepfake”: a video produced using a computer and 25 lines of code to harness artificial neural networks, modeled on a similar information processing function in the human brain, that can “encode an image from person A and reconstruct one with similar features to resemble person B,” says web developer Alan Zucconi. Its processing can take dozens of hours, and because neural networks are, as scientists say, like “a black box,” results are not guaranteed. Yet more remarkable than how it is made is that someone found it necessary, or even worthwhile, to create it at all. If you want to see Kardashian in a porno, simply Google “Kim porno.” Her sex tape with ex-boyfriend Ray J dominates the results for three pages. The star of our reality dispensed with some of fame’s biggest risks—the leaking of private encounters, displays of private parts, self-taken nudes—at the start of her career, making it pointless to threaten her with exposure. She is now so deeply fake that no one accuses her of being phony, in any way, in person.
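The mechanism Zucconi describes, a single encoder shared across identities with a separate decoder per person, can be sketched very loosely in Python. Everything below is a toy stand-in: the functions and numbers are hypothetical, not a trained network. Real deepfake software uses deep convolutional autoencoders trained for hours on thousands of frames of each face.

```python
# Toy sketch of the shared-encoder, per-person-decoder idea behind a deepfake.
# A "face" here is just a short list of numbers standing in for an image.

def encode(face):
    """Shared encoder: compress a face into latent features
    (expression, pose) that are common to both identities."""
    return [x / 2.0 for x in face]

def make_decoder(offset):
    """Each identity gets its own decoder, trained to reconstruct
    that person's appearance from the shared latent features.
    The offset is a stand-in for learned, person-specific weights."""
    def decode(latent):
        return [x * 2.0 + offset for x in latent]
    return decode

decode_a = make_decoder(offset=0.0)   # reconstructs person A
decode_b = make_decoder(offset=1.0)   # reconstructs person B

face_a = [0.2, 0.4, 0.6]

# Normal round trip: encode person A, decode with A's own decoder.
reconstruction = decode_a(encode(face_a))

# The face swap: encode A's expression, but decode with B's decoder,
# yielding A's pose and expression rendered with B-like features.
swapped = decode_b(encode(face_a))
```

The swap works only because the encoder is shared: both decoders learn to read the same latent vocabulary of expressions, so feeding A’s features into B’s decoder produces B’s face wearing A’s expression.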
The term “deepfakes” comes from the name of a Reddit user, creator of an eponymous app and subreddit. His early videos all featured faces of famous actresses on bodies of porn stars, as did most videos by other user-creators in the subreddit. Naturally, that is to say perhaps by design, these creations quickly received prurient and horrified media play. A reporter at Motherboard broke the story at the beginning of this year under the headline “We Are Truly Fucked: Everyone Is Making AI-Generated Fake Porn Now.” Everyone wasn’t, but as the bad press multiplied throughout the winter, membership in r/deepfakes swelled. By spring, after Reddit had shut down the channel, “deepfakes” had already become a metonym for any video made with the face-swapping technology, regardless of who makes it or how—or what it shows. It’s as if Kleenex were not only a colloquial term for facial tissues of any brand, but also the word for all paper towels and toilet paper, despite the significant difference between spilled wine and shit.
Some of us are more fucked than others, irrespective of how fuckable we seem. A deepfake doesn’t have to be any more realistic than a newspaper cartoon caricature in order to fulfill its purpose, if the purpose is to puncture and wound its target; all it has to do is make clear enough who the target is. Should a civilian be targeted in this way, she has almost no legal recourse. Unless she uses her image to sell things, she can’t claim copyright infringement, and likewise, she can’t claim her reputation is being damaged if she doesn’t have one.
Hollywood, of course, is reportedly taking a firm stance against the practice of deepfaking. Perhaps they feel their methods are proprietary. George Lucas famously owned Carrie Fisher’s likeness and could put it on a doll or a pillow. Actors have long enjoyed the use of body doubles, whose faces go unseen, for scenes with nudity. Sometimes, in post-production, their own bodies become noticeably less so: Keira Knightley was given the illusion of bigger breasts for Disney’s Pirates of the Caribbean franchise, while Lindsay Lohan was reportedly given smaller breasts for the 2005 Disney movie Herbie: Fully Loaded. It is rumored Margot Robbie was stretched 10 percent, making her legs appear “killer,” for Martin Scorsese’s 2013 dramedy The Wolf of Wall Street.
However, when Nicolas Cage appears in a deepfake, it’s a projected remake of Indiana Jones. Male actors have so far been unravaged by the deepfakes posted to sites like Pornhub, where the videos are ostensibly banned, and to pseudo-spinoffs like deepfakeshub.net. One theory that halfway makes sense is that the obvious losers responsible for these videos are making a kind of “revenge porn,” actively seeking to humiliate women who are famous for talking about feminism by putting them in what they believe—for differently sexist reasons—to be “compromising positions.” There is one where “Gal Gadot” has sex with a guy playing the stepbrother of the woman in the original video. In response, the Swedish female pornographer Erika Lust told British GQ, “The fact that she featured in the highest-grossing female-led film of all time and was chosen to be used in a deepfake was, in my opinion, a way to put her ‘back in her place’ as a sexual object and not a powerful woman in control of her own agency.”
There are several deepfakes circulating that feature the face of Emma Watson, the English actress known for being a United Nations ambassador for women as well as for playing a brunette named Hermione in the Harry Potter movies; and there are a few incorporating the face of Jennifer Lawrence, the American actress who shot to fame as a role model in The Hunger Games and is now so well paid and well respected that after her naked selfies leaked online in 2014, she was given the cover of Vanity Fair to say it wasn’t a scandal, but a sex crime.
Zucconi, the web developer whose video tutorials help you make “perfect deepfakes” but urge you to make them “ethically,” agreed to spend hours talking to me about how and why. Because of a time difference, when I phoned him on video it was still daylight for me, sun pouring in, and already dark where he was. My situation seemed artificially bright compared to his lamp-lit office, and the contrast made me extraordinarily aware of my performance as an image on his screen. As if mirroring this awareness, Zucconi often looked down, shyly, when he was talking.
He explained that a deepfake leaves an impression of shallowness not simply because the female stars’ faces are so recognizable, but because our own neural networks—our brains—are excellently trained to recognize any face as such. He illustrated the point by talking about onscreen explosions: Most people see a burst of fire and smoke in an action movie and, despite its certain lack of verisimilitude, accept it as being an explosion. Only someone who has seen many explosions firsthand will think it looks strange. If there were a person who had somehow seen more explosions than faces, he would find an amateur deepfake to be no less real than news footage of war. Soon, however, machines will learn faces like we do, and the speed with which this is happening is why “journalists” like me see deepfakes and predict—how else would I put it?—an explosive effect.
Zucconi said he believes this kind of machine learning, like any technology, is neutral, becoming only as good or as bad as the various ways it’s used. He considered the nth-degree analogy of the atom bomb, which turned “nuclear” into a synonym for terrifying, though had the same energy been used for good, we would use the term more positively. This alternate reality was, to me, literally unthinkable. Why would something so powerful be created and then used for good? The United States government was never going to drop the bomb on its own people, but it wanted to use that energy all the same. The resulting tragedy was not merely to do with “the loss of life” in statistical terms, since more people died during World War II from war-related famine in Russia, for example, but with the impact of the explosion and fallout in Japan. While starvation was a familiar way to die, the atom bomb and its use was patently—so it seemed—inhuman. Our ideas about humanness never recovered. There arose a long fissure in reality.
I could feel myself slipping into the mainstream media. I felt so alarmed! Zucconi, ceding the point, said that Adobe Photoshop may offer a better analogy for deepfakes. Anyone can download the program and use it to retouch a photo or to cut and paste a person from one photo to another, but, he explained, very few people have the skill to do this on a billboard scale and make it believable. At the same time, everyone knows that when you see a model on a billboard, in a magazine, or on Instagram, her photo has usually been altered to perfection. Reality cannot be so altered. Yet knowing something is “fake” does little to assuage what we believe, which is obvious when we consider how perfect people today, more than ever, expect themselves to look.
And here, he pointed out, we can begin to imagine a “positive use” of face-swapping tech. We could see actors becoming nearly indistinguishable from certain real-life people they are playing, because rather than using makeup like a mask, they will wear the character’s face, while retaining and using, of course, their own expressive abilities to get stories across. This did seem like something Hollywood would love. Yet, I pointed out, wasn’t it more interesting, and didn’t it require more of an actor, to convey the essence of a character beyond appearance? Cate Blanchett was the best Bob Dylan out of the six actors, all the rest male, who played the musician in the 2007 Todd Haynes movie I’m Not There. She was perfectly him, in a wig, no makeup, and sometimes sunglasses, playing what she and Haynes referred to as “a silhouette.”
Zucconi countered with a hypothetical: A trans boy, feeling that his corporeality doesn’t match what he wants to express, might use face-swapping to help project, onto his home computer screen, a new holistic self. I could not say whether this was positive, but it didn’t sound like a bad thing. Zucconi continued. He could also imagine disabled people wanting to see themselves walking and running. He could imagine people putting their faces in a crowd in a historical moment, a historical reenactment. I laughed, because historical reenactment is such a classically reactionary form. He admitted, then, that our imaginations are limited by how we live, and that it was always an understandable fear with new technology that, rather than creating alternatives, it would turn out to eerily reproduce what we already see and find to be normal.
The porn-based deepfakes might exemplify the worst of this normative, stuck tendency. Because the applications so far are only successful in reconstructing the features of a single face, with no interference from other objects, no deepfakes appear where the performers do oral, for example, let alone orgies. Most feature a female performer lying flat on her back, looking into the camera, as she gets penetrated. The required passivity of “person A” in a face-swapping scenario exacerbates the tendency of deepfake creators to think of their subjects as lacking, per se, subjectivity.
Zucconi had spent time observing the posters on r/deepfakes, seeing how, if ever, they brought up consent. He was bemused when one suggested a “porn star rule”: they should only use the faces, as well as the bodies, of porn stars, rather than pairing the knowing bodies with “innocent” faces. Stupid as it was, he thought, the suggestion at least demonstrated a consideration for whether someone appears naked with or without her permission. At the same time, though, it betrayed a conviction that “adult performers” are less valued than actresses, when really they too are acting. Indeed, the public’s knowledge of how the porn industry works, and why it works well, is scant. I remembered once having dinner with a bona fide porn star and being naïvely impressed with the array and specificity of acts and scenes she would and would not “do,” and feeling, more to the point, envious of her fine sense of control. Those making deepfakes might also envy—and want to seize—the apparent superiority that comes with being sure of one’s own actions.
Deepfakers are not, as far as I can scroll, concerned with learning new rules. More likely is that sincere-minded feminists will try to hack the game, to flip the script. Maybe someone will be really subversive and put porn star Riley Reid’s face on the body of Gadot as Wonder Woman; but I imagine mostly mothers and fathers would use an app to transmute their teen daughters into superheroines or leaders of the free world, while “sex-positive” artists, showing exclusively on Instagram stories, would seek to “humanize” porn stars by swapping their faces for those of famed suffragettes.
Regarding the political implications of the genre, Hany Farid, a professor of computer science at Dartmouth College, told a journalist at The Outline in February that experts in forensic video were preoccupied with worst-case scenarios, like a deepfake in which Donald Trump appears to say that he has launched nuclear weapons at North Korea, causing North Korea to launch its own weapons, in actuality, before anyone phones the White House to confirm. “I would say I’m not prone to hysteria or exaggeration,” Farid said to The Outline, “but I think we can agree that’s not entirely out of the question right now.” Speaking to Rolling Stone in April, for an article entitled “Face-Swapping Porn: How a Creepy Internet Trend Could Threaten Democracy,” Farid noted that, additionally, it was unclear how we might continue to trust video evidence in the court of law.
Experts are always, however, preoccupied with worst-case scenarios. They are so rarely right. I talked to Zucconi because I liked the way he spoke to the people—often, he said, quite young people—who want to make these videos. He told me that he believes, despite the hype, that the real-life effects of this “internet trend” will be negligible. As deepfakes become more and more real looking, developers will find new ways to use neural networks to spot them. People will become ever more dependent on sources they trust. It’s no different, he said, from what is already happening with fake news.
A greater concern, he said, was how the people who make these pornographic deepfakes, who seem not to think about consent, are treating other people in their lives. Zucconi wondered whether an attained knowledge of ethics in speculative reality could help these guys, in their real lives, to treat their friends and girlfriends as people with minds of their own. He worried that, lacking an understanding of both their own power and power dynamics in general, they see women as disposable pieces of plastic to play with.
Like dolls? Yes, said Zucconi. A lot like dolls.
This very off-hand simile best expressed the depth of the problem, there being no agreed-upon way to handle even plastic bodies. As a child, I was given, or rather was allowed to have, one Barbie. Actually, she was a “friend” of Barbie’s named Teresa. She had green eyes like me, and brown hair, darker, and was tanned like me in the summer. She came with a Dalmatian, the only kind of dog I’d ever wanted. Sometimes, bored of my sisters, I took Teresa to play with the 10 or more Barbies who belonged to a girl my age next door. She, the neighbor girl, was weird about the Barbies. She liked to cut their hair. She twisted off their heads, then twisted them back onto the wrong bodies, even though all the bodies were basically alike with occasional differences in skin tone. On good days, her Barbies wore glittery lipstick and shrieked and fucked the ugly orange Kens. I did not like this girl. I did not understand her. Something missing in me—a truly creative gene, or a deep-enough envy of brothers—was needed to play her barbaric games. I looked at Teresa, going to the park with a leashed Dalmatian, all dressed in red with a tan. Here was a creature that looked more beautiful than humanly possible, even when she was only out walking her dog. Could I really do better? Didn’t she just want to be untouched?
A version of this story appears in GARAGE issue 15, publishing September 2018.