Deepfakes are about to get a lot tougher to identify. Within the next two years, digitally manipulated videos will likely reach a point where they're undetectable to the naked eye, according to Siwei Lyu, head of the computer vision and machine learning lab at the University at Albany.
And that threatens to create a headache for nearly everyone from Hollywood to Washington, D.C. “This is about whether we can trust individual media that is propagated on the internet,” Lyu told TheWrap. “We’ll have information, but we cannot trust it, so it’s the same as not having any information at all. This is an issue that everyone should be concerned about.”
For those unfamiliar, “deepfakes” are videos that use artificial-intelligence tools to engineer bogus clips that appear real, often superimposing the face of one person onto another person. Other times, the lips and voice of someone are manipulated to look like they’re saying something they didn’t actually say.
One disturbing yet innocuous deepfake from last month added Steve Buscemi's face to Jennifer Lawrence's body.
Right now, most deepfakes still have something a bit off about them — the person’s cadence might be a bit clunky, or their movements are a little robotic.
Lyu noted a few "visual artifacts" viewers can use to spot deepfakes. The first hint? The video subjects "do not blink very much," Lyu said. If the subject consistently goes 10 seconds or more without blinking, it's likely a deepfake. Another hint is that the subjects in the video are rarely able to turn side to side. (Both are technological hangups stemming from the process of copying one person's face onto someone else's.)
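The blink-rate hint Lyu describes can be sketched as a simple check over a list of detected blink timestamps. This is an illustrative toy, not Lyu's actual method: the function name and 10-second threshold are assumptions drawn from the rule of thumb above, and a real system would first have to detect blinks from the video with an eye-landmark model.

```python
def has_suspicious_blink_gap(blink_times, video_duration, max_gap=10.0):
    """Return True if the subject goes `max_gap` seconds or longer
    without blinking (the rough 10-second rule described above).

    blink_times: timestamps (in seconds) of detected blinks.
    video_duration: total length of the clip in seconds.
    """
    # Treat the start and end of the clip as gap boundaries too,
    # so a clip that opens or closes with a long blink-free stretch is caught.
    boundaries = [0.0] + sorted(blink_times) + [video_duration]
    gaps = [later - earlier for earlier, later in zip(boundaries, boundaries[1:])]
    return max(gaps) >= max_gap

# A resting person typically blinks every few seconds, so sparse
# blinks over a long clip are a red flag.
print(has_suspicious_blink_gap([3.1, 6.4, 9.0, 14.2], 18.0))  # False: longest gap is 5.2s
print(has_suspicious_blink_gap([2.0, 25.5], 30.0))            # True: 23.5s without a blink
```

The same structure would work for the head-turn hint: detect head-pose angles per frame and flag clips whose yaw never leaves a narrow range.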
Soon, though, those blemishes will be wiped away as editing technology continues to improve. Once deepfakes make it through the "uncanny valley," the point at which their inauthenticity can no longer be detected just by watching the video, a new, more powerful brand of fake news could become widespread.
“You now have an emerging technology that has a much greater capacity for persuasion,” Robert Chesney, a law professor at the University of Texas, told TheWrap.
Deepfakes of presidential candidates could stymie the next U.S. election cycle, as candidates and the media work to clarify what was actually said. And it’s not hard to imagine the geopolitical fallout of a deepfake showing the president threatening to bomb another country, or, as Chesney pointed out in the latest issue of Foreign Affairs, “a video showing an American general in Afghanistan burning a Koran.”
This should send a shiver down the spine of anyone who has spent the last few years worrying about Russian trolls running fake news campaigns on social media with bad memes and political ads.
Political deepfakes haven’t been a major issue yet, but a few have come out in the last few years, including one that showed Parkland school shooting survivor Emma Gonzalez, who became a vocal supporter of stricter gun laws, tearing up the U.S. Constitution.
Another, made famous by BuzzFeed, showed Barack Obama appearing to call Donald Trump "a total and complete dips—" (the voice was actually Jordan Peele's). These will only become more prevalent — and realistic — in the next few years.
Another issue is that deepfakes can be used to destroy someone’s reputation.
Hollywood has already shined a light on this insidious practice, with a cottage industry emerging where the faces of Hollywood stars like Gal Gadot and Daisy Ridley are swapped onto porn. Some of these videos have racked up millions of views.
One bogus video falsely described as “leaked” footage of Scarlett Johansson has been viewed more than 1.5 million times on a popular porn site.
Johansson bleakly called it a futile battle. “Nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired,” Johansson told The Washington Post in December. “The fact is that trying to protect yourself from the Internet and its depravity is basically a lost cause.”
Obviously, this isn't a danger strictly to celebrities. Anyone can have their face attached to a deepfake — and the victim has little recourse. Chesney said the First Amendment offers a "broad cloak for all kinds of satirical speech," especially when it comes to public figures. The law is still catching up to this relatively new phenomenon.
Sen. Ben Sasse (R-Neb.), looking to fill the legal grey area where many of these videos linger, introduced a bill in December that would criminalize malicious deepfakes. The bill targets individual deepfake creators who make the videos to do something illegal, like harass someone or commit fraud, as well as platforms like Facebook and Twitter that knowingly distribute phony videos. The proposed punishment would include a fine and up to ten years in prison if the deepfake incites violence or impacts an election.
Without action, deepfakes “are likely to send American politics into a tailspin,” Sasse wrote in an op-ed for The Washington Post last October.