By Shelly Palmer
Late last year, I wrote about deepfakes and how, for about 200,000 years, we have relied on our eyes and ears to separate truth from lies and fact from fiction. Even if we ignore the rise of fake news, technology is on the verge of making it impossible to know whether what we are seeing and hearing is real or fake.
It’s been less than a year since I wrote that article, and the technology has advanced in ways that make last year’s fakes look primitive. Here’s a roundup of the advancements and uses in the past year.
Revenge porn has been outlawed in many jurisdictions around the globe. In July, Virginia became one of the first states to outlaw the sharing of computer-generated pornography, aka deepfakes. It did so by amending an existing law that criminalizes revenge porn, making clear that the category now includes “falsely created” material.
The Democratic Party deepfaked its own chairman to highlight 2020 concerns. In early August at DEF CON – one of the world’s biggest conventions for hackers – attendees were told DNC Chair Tom Perez couldn’t make it to the DNC’s presentation. Instead, he “Skyped in” and chatted. Except he didn’t. Do you know what Perez’s voice sounds like? Neither did those in attendance.
The technology behind deepfakes was already easy to find and use last fall, but it’s even more so today. Imagine where it’ll be next summer when the 2020 election is four months away.
Did Elizabeth Warren really say that? Was that footage of Bernie Sanders in that attack ad real, or was it a deepfake? As political advertisements already twist candidates’ words and manipulate the truth for the perfect soundbite, can you believe anything you hear when it can all be manufactured on any laptop you can find at Best Buy?
While news cycle after news cycle covered allegations of possible voter fraud in the 2016 election, is anyone ready for what comes next? Let’s quickly imagine an AI-assisted system to commit election fraud.
Go to ThisPersonDoesNotExist.com. See that face? That lifelike photo? That’s not a real person. The site “showcases fully automated human image synthesis by endlessly generating images that look like facial portraits of human faces.”
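Under the hood, sites like this sample a random “latent” vector and push it through a trained generator network; every fresh vector yields a fresh face. Here’s a toy, untrained stand-in for that sampling loop (the real model has millions of trained parameters; the random weights here only show the shape of the idea, and all names are illustrative):

```python
import math
import random

def make_generator(latent_dim=16, img_size=8, seed=0):
    """Toy 'generator': random weights stand in for a trained model's
    parameters. Maps a latent vector to a flat grid of pixel values."""
    rnd = random.Random(seed)
    w = [[rnd.gauss(0, 0.3) for _ in range(img_size * img_size)]
         for _ in range(latent_dim)]

    def generate(z):
        # One linear layer plus tanh squashing: each "pixel" lands in [-1, 1].
        return [math.tanh(sum(z[i] * w[i][p] for i in range(latent_dim)))
                for p in range(img_size * img_size)]

    return generate

# "Endlessly generating images": every new latent vector is a new face.
gen = make_generator()
rnd = random.Random(42)
z = [rnd.gauss(0, 1) for _ in range(16)]
face = gen(z)  # with trained weights, this would be a photoreal portrait
```

The point is the economics: once the generator is trained, producing the next fake face costs nothing but a random draw.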
Fun, right? Only until you think about the possible ramifications. How much work would it take to build an entire history and lifetime for this person? Use AI to generate the person’s name and backstory (maybe seed the system with an obituary so it can find the person’s living relatives on social media, alongside a cache of “real” pictures).
Create an email address, a Facebook account, and a Twitter profile. Use FaceApp to age the photo by 30 years, then 60. Photoshop the character’s likeness into his or her (real) family photos. Do the same in other photos.
Use AI-assisted software to generate tweets and Facebook posts. Use procedural generation to make the character’s posts seem real, not scheduled. Have the character retweet things. Reply to things. Like things. Interact with the world. It’s like the “Yahoo Boys” military romance scam, but automated and amplified to the nth degree by machine learning.
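The “seem real, not scheduled” part is the cheapest trick in the kit: real people don’t post on the hour. A minimal sketch (function names and hourly weights are invented for illustration, not measured data) of drawing human-looking posting times by sampling random gaps and thinning them by time of day:

```python
import random

# Rough relative activity by hour: asleep overnight, busier morning
# and evening. These 24 weights are made up for the sketch.
HOURLY_WEIGHT = [0.1] * 7 + [0.6, 1.0, 0.8, 0.6, 0.7,
                             0.8, 0.6, 0.5, 0.6, 0.8,
                             1.0, 1.0, 0.9, 0.7, 0.5, 0.3, 0.1]

def next_post_time(now_hours, mean_gap_hours=3.0, rnd=random.Random(7)):
    """Advance from `now_hours` by an exponential (memoryless) gap, then
    thin by the hour-of-day weight so posts cluster at plausible times."""
    t = now_hours
    while True:
        t += rnd.expovariate(1.0 / mean_gap_hours)  # random gap, mean 3h
        hour = int(t % 24)
        if rnd.random() < HOURLY_WEIGHT[hour]:      # rejection sampling
            return t

# Simulate a day of "organic-looking" activity for the fake account.
t, times = 0.0, []
while t < 24.0:
    t = next_post_time(t)
    if t < 24.0:
        times.append(round(t, 2))
```

A few dozen lines like these, multiplied across thousands of accounts, is the automation the scenario above describes.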
You may not like this game, but it is not going to be played by you or me. It’s going to be played by nation-states with unlimited resources. Buckle your seat belt. This is going to be a rough ride.
About Shelly Palmer
Named one of LinkedIn’s Top Voices in Technology, Shelly Palmer is CEO of The Palmer Group, a strategy, design and engineering firm focused at the nexus of technology, media and marketing. He is Fox 5 New York’s on-air tech and digital media expert, writes a weekly column for Adweek, and is a regular commentator on CNN and CNBC. Follow @shellypalmer, visit shellypalmer.com, or subscribe to our daily email: http://ow.ly/WsHcb