
Deepfake propaganda is not a real problem

We’ve spent the last year wringing our hands about a crisis that doesn’t exist


[Image: FakeApp model training on Jeff Bezos and Elon Musk]

If you’ve been following tech news in the past year, you’ve probably heard about deepfakes, the widely available, machine-learning-powered technique for swapping faces and doctoring videos. First reported by Motherboard at the end of 2017, the technology seemed like a troubling omen after years of bewildering misinformation campaigns. Deepfake panic spread further in the months that followed, with alarm-raising articles from BuzzFeed (several times), The Washington Post (several times), and The New York Times (several more times). It’s not an exaggeration to say that many of journalism’s most prominent writers and publications spent 2018 telling us this technology was an imminent threat to public discourse, if not truth itself.

Most recently, that alarm has spread to Congress. Sen. Ben Sasse (R-NE) is currently pushing a bill to outlaw use of the technology, describing it as “something that keeps the intelligence community up at night.” To hear Sasse tell it, this video-manipulation software is dangerous on a geopolitical scale, requiring swift and decisive action from Congress.

The predicted wave of political deepfakes hasn’t materialized

But more than a year after the first fakes started popping up on Reddit, that threat hasn’t materialized. We’ve seen lots of public demonstrations — most notably a BuzzFeed video in which Jordan Peele impersonated former President Obama — but journalists seem more adept with the technology than trolls. Twitter and Facebook have unmasked tens of thousands of fake accounts from troll campaigns, but so far, those fake accounts haven’t produced a single deepfake video. The closest we’ve seen is one short-lived anti-Trump video in Belgium, but it was more of a confusing political ad than a chaos campaign. (It was publicly sponsored by a known political group, for instance, and made using After Effects.) The predicted wave of political deepfakes hasn’t arrived, and increasingly, the panic around AI-assisted propaganda seems like a false alarm.

The silence is particularly damning because political trolls have never been more active. During the time deepfake tech has been available, misinformation campaigns have targeted the French elections, the Mueller investigation and, most recently, the Democratic primaries. Sectarian riots in Sri Lanka and Myanmar were fueled by fake stories and rumors, often deliberately fabricated to stoke hate against opposing groups. Troll campaigns from Russia, Iran, and Saudi Arabia have raged through Twitter, trying to silence opposition and confuse opponents.

In each of these cases, attackers had the motive and the resources to produce a deepfake video. The technology is cheap, easily available, and technically straightforward. But given the option of fabricating video evidence, each group seems to have decided it wasn’t worth the trouble. Instead, we saw news articles made up from whole cloth, or videos edited with conventional tools to take on a sinister meaning.

It’s worth asking why deepfakes haven’t taken off as a propaganda technique. Part of the issue is that they’re too easy to track. The existing deepfake architectures leave predictable artifacts on doctored video, which are easy for a machine learning algorithm to detect. Some detection algorithms are publicly available, and Facebook has been using its own proprietary system to filter for doctored video since September. Those systems aren’t perfect, and new filter-dodging architectures regularly pop up. (There’s also the serious policy problem of what to do when a video triggers the filter, since Facebook hasn’t been willing to impose a blanket ban.)
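For a concrete sense of what artifact-based detection looks like, here is a minimal sketch of a frame-level classifier in Python. It is not Facebook’s proprietary system or any particular published detector, just the generic shape of the approach: sample frames from a video, score each with a binary CNN, and flag the clip if the average “fake” probability is high. The ResNet-18 checkpoint (fake_detector.pt), the sampling rate, and the 0.5 threshold are stand-ins for illustration; a real detector would be trained on labeled real and fake face crops and tuned far more carefully.

```python
# Sketch of frame-level deepfake detection: score sampled frames with a
# binary CNN and flag the video if the average "fake" probability is high.
# Assumes a ResNet-18 fine-tuned on real/fake face crops has been saved to
# "fake_detector.pt" -- the checkpoint and the threshold are hypothetical.
import cv2
import torch
import torch.nn.functional as F
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def load_detector(weights_path="fake_detector.pt"):
    # Two-class head: index 0 = real, index 1 = fake
    model = models.resnet18(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

def score_video(path, model, every_nth=10):
    """Return the mean 'fake' probability over sampled frames."""
    cap = cv2.VideoCapture(path)
    scores, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_nth == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            x = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                probs = F.softmax(model(x), dim=1)
            scores.append(probs[0, 1].item())
        i += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    detector = load_detector()
    fake_prob = score_video("suspect_clip.mp4", detector)
    print(f"mean fake probability: {fake_prob:.2f}")
    if fake_prob > 0.5:  # arbitrary threshold, for illustration only
        print("flag for review")
```

The point of the sketch is how little it takes: because current face-swap models leave consistent traces at the frame level, a stock image classifier can often pick them up, which is part of why uploading a deepfake to a major platform is a riskier move for a troll than posting a conventionally edited clip.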

Deepfakes are being used for misogynist harassment, not geopolitical intrigue

But even with their limitations, deepfake filters could be enough to scare political trolls away from the tactic. Uploading an algorithmically doctored video is likely to attract attention from automated filters, while conventional film editing and obvious lies won’t. Why take the risk?

It’s also not clear how useful deepfakes are for this kind of troll campaign, as some have pointed out. Most operations we’ve seen so far have been more about muddying the waters than producing convincing evidence for a claim. In 2016, one of the starkest examples of fake news was the Facebook-fueled report that Pope Francis had endorsed Donald Trump. It was widely shared and completely false, the perfect example of fake news run amok. But the fake story offered no real evidence for the claim, just a cursory article on an otherwise unknown website. It wasn’t damaging because it was convincing; people just wanted to believe it. If you already think that Donald Trump is leading America toward the path of Christ, it won’t take much to convince you that the Pope thinks so, too. If you’re skeptical, a doctored video of a papal address probably won’t change your mind.

That reveals some uncomfortable truths about the media, and why the US was so susceptible to this kind of manipulation in the first place. We sometimes think of these troll campaigns as the informational equivalent of food poisoning: bad inputs into a credulous but basically rational system. But politics is more tribal than that, and news does much more than just convey information. Most troll campaigns focused on affiliations rather than information, driving audiences into ever more factional camps. Video doesn’t help with that; if anything, it hurts by grounding the conversation in disprovable facts.

There’s still real damage being done by deepfake techniques, but it’s happening in pornography, not politics. That’s where the technology started: Motherboard’s initial story on deepfakes was about a Reddit user pasting Gal Gadot’s face on a porn actress’s body. Ever since, the seedier corners of the web have continued inserting women into sex footage without consent. It’s an ugly, harmful thing, particularly for everyday women targeted by harassment campaigns. But most deepfake coverage has treated pornography as an embarrassing sideshow to protecting the political discourse. If the problem is non-consensual porn, then the solution should focus on individual harassers and their targets, rather than the blanket ban proposed by Sasse. It also suggests the deepfake story is about misogynist harassment rather than geopolitical intrigue, with less obvious implications for national politics.

Some will argue that the deepfake revolution just hasn’t happened yet. Like any technology, video-doctoring programs get a little more sophisticated every year. The next version could always solve whatever problems are holding it back. As long as there are bad actors and available tools, advocates say, eventually the two will overlap. The underlying logic is compelling: it’s only a matter of time before reality catches up.

They may be right. A new wave of political deepfakes could pop up tomorrow to prove me wrong — but I’m skeptical. We’ve had the tools to fabricate videos and photos for a long time, and that kind of fakery has turned up in political campaigns before, most memorably in a forged John Kerry photo circulated during the 2004 campaign. AI tools can make the process easier and more accessible, but it’s easy and accessible already. As countless demos have shown, deepfakes are already within reach for anyone who wants to cause trouble on the internet. It’s not that the tech isn’t ready yet. As it turns out, it just isn’t that useful.