Deepfakes are about to get a lot tougher to identify. Within the next two years, digitally manipulated videos will likely reach a point where they're undetectable to the naked eye, according to Siwei Lyu, head of the computer vision and machine learning lab at the University at Albany.
And that threatens to create a headache for nearly everyone from Hollywood to Washington, D.C. “This is about whether we can trust individual media that is propagated on the internet,” Lyu told TheWrap. “We’ll have information, but we cannot trust it, so it’s the same as not having any information at all. This is an issue that everyone should be concerned about.”
For those unfamiliar, "deepfakes" are bogus clips engineered with artificial-intelligence tools to appear real, often by superimposing one person's face onto another's body. Other times, a subject's lips and voice are manipulated to make it look like they're saying something they never actually said.
Here’s a disturbing yet innocuous deepfake from last month that added Steve Buscemi’s face to Jennifer Lawrence’s body: