Fake Porn Videos Are Terrorizing Women. Do We Need a Law to Stop Them?

In the darker corners of the Internet, you can now find celebrities like Emma Watson and Salma Hayek performing in pornographic videos. The clips are fake, of course—but it’s distressingly hard to tell. Recent improvements in artificial intelligence software have made it surprisingly easy to graft the heads of stars, and of ordinary women, onto the bodies of X-rated actresses to create realistic videos.

These explicit movies are just one strain of so-called “deepfakes,” clips that have been doctored so well they look real. Their arrival poses a threat to democracy; mischief-makers can use them, and already have, to spread fake news. But another great danger of deepfakes is their use as a tool to harass and humiliate women.

There are plenty of celebrity deepfakes on pornographic websites, but Internet forums dedicated to custom deepfakes—men paying to create videos of ex-partners, co-workers, and others without their knowledge or consent—are proliferating. Creating these deepfakes isn’t difficult or expensive, given the proliferation of A.I. software and easy access to photos on social media sites like Facebook.

Yet for victims, the legal path to getting a deepfake removed can be daunting. Even when the law is on their side, they face considerable obstacles, ones familiar to anyone who has confronted other forms of online harassment.

The First Amendment and Deepfakes

Charlotte Laws knows how devastating non-consensual pornography can be. A California author and former politician, Laws led a successful campaign to criminalize so-called “revenge porn” after someone posted nude photos of her teenage daughter on a notorious website. She is also alarmed by deepfakes.

“The distress of deepfakes is as bad as revenge porn,” she says. “Deepfakes are realistic, and their impact is compounded by the growth of the fake news world we’re living in.”

Laws adds that deepfakes have become a common way to humiliate or terrorize women. In a survey she conducted of 500 women who had been victims of revenge porn, Laws found that 12% had also been subjected to deepfakes.

One way to address the problem could involve lawmakers expanding state laws banning revenge porn. These laws, which now exist in 41 U.S. states, are of recent vintage and came about as politicians began to change their attitudes to non-consensual pornography.

“When I began, it wasn’t something people addressed,” Laws says. “Those who heard about it were against the victims, from media to legislators to law enforcement. But it’s really gone in the other direction, and now it’s about protecting the victims.”

New criminal laws could be one way to fight deepfakes. Another approach is to bring civil lawsuits against the perpetrators. As the Electronic Frontier Foundation notes in a blog post, those subjected to deepfakes could sue for defamation or for portraying them in a “false light.” They could also file a “right of publicity” claim, alleging the deepfake makers profited from their image without permission.

All of these potential solutions, however, could bump up against a powerful obstacle: free speech law. Anyone sued over deepfakes could claim the videos are a form of cultural or political expression protected by the First Amendment.

Whether this argument would persuade a judge is another matter. Deepfakes are new enough that courts haven’t issued any decisive ruling on which of them might count as protected speech. The issue is even more complicated given the messy state of the law related to the right of publicity.

“The First Amendment should be the same across the country in right of publicity cases, but it’s not,” says Jennifer Rothman, a professor at Loyola Law School and author of a book about privacy and the right of publicity. “Different circuit courts are doing different things.”

In the case of deepfakes involving pornography, however, Rothman predicts that most judges would be unsympathetic to a First Amendment claim—especially in cases where the victims are not famous. A free speech defense to claims of false light or defamation, she argues, would turn in part on whether the deepfake was presented as true and would be analyzed differently for public figures. A celebrity victim would face the added hurdle of showing “actual malice,” the legal term for publishing material while knowing it was false or with reckless disregard for whether it was, in order to win the case.

Any criminal laws aimed at deepfakes would likely survive First Amendment scrutiny so long as they narrowly covered sexual exploitation and did not include material created as art or political satire.

In short, free speech laws are unlikely to be a serious impediment for targets of deepfake pornography. Unfortunately, even if the law is on their side, the victims nonetheless have few practical options to take down the videos or punish those responsible for them.

A New Takedown System?

If you discover something false or unpleasant about yourself on the Internet and try to correct it, you’re likely to run into a frustrating reality: there are few practical ways to do so.

“Trying to protect yourself from the Internet and its depravity is basically a lost cause … The Internet is a vast wormhole of darkness that eats itself,” actress Scarlett Johansson, whose face appears in numerous deepfakes, recently told the Washington Post.

Why is Johansson so cynical? Because the fundamental design of the Internet—distributed, without a central policing authority—makes it easy for people to anonymously post deepfakes and other objectionable content. And while it’s possible to identify and punish such trolls using legal action, the process is slow and cumbersome—especially for those who lack financial resources.

According to Laws, it typically takes $50,000 to pursue such a lawsuit. That money may be hard to recoup since defendants are often broke or based in a far-flung location. This leaves the option of going after the website that published the offending material, but this, too, is likely to prove fruitless.

The reason is a powerful law known as Section 230, which creates a legal shield for website operators over what users post on their sites. It ensures that a site like Craigslist, for instance, isn’t liable if someone uses its classified ads to post defamatory messages.

In the case of sites like 8Chan and Mr. Deepfakes, which host numerous deepfake videos, the operators can claim immunity because it is their users, not the operators themselves, who upload the clips.

The legal shield is not absolute. It contains an exception for intellectual property violations, which obliges websites to take down material if they receive a notice from a copyright owner. (The process also lets site operators file a counter-notice and restore the material if they object.)

The intellectual property exception could help deepfake victims defeat the websites’ immunity, notably if the victim invokes a right of publicity. But here again the law is muddled. According to Rothman, courts are unclear on whether the exception applies to state intellectual property laws—such as right of publicity—or only to federal ones like copyright and trademark.

All of this raises the question of whether Congress and the courts, which have been chipping away at Section 230’s broad immunity in recent years, should change the law and make it easier for deepfake victims to remove the images. Laws believes this would be a useful measure.

“I don’t feel the same as Scarlett Johansson,” Laws says. “I’ve seen the huge improvements in revenge porn being made over the past five years. I have great hope for continual improvement and amendments, and that we’ll get these issues under control eventually.”

Indeed, those who share Laws’ views have momentum on their side as more people look askance at Internet platforms that, in the words of the legal scholar Rebecca Tushnet, enjoy “power without responsibility.” And in a closely watched case involving the dating app Grindr, a court is weighing whether to require website operators to be more active in purging their platforms of abusive behavior.

Not everyone is convinced this is a good idea, however. Section 230 is regarded by many as a visionary piece of legislation that allowed U.S. Internet companies to flourish in the absence of legal threats. The Electronic Frontier Foundation has warned that eroding immunity for websites could stifle business and free expression.

This raises the question of whether Congress could draft a law narrow enough to help victims of deepfakes without such unintended consequences. As a cautionary tale, Annemarie Bridy, a law professor at the University of Idaho, points to the misuse of the copyright takedown system in which companies and individuals have acted in bad faith to remove legitimate criticism and other legal content.

Still, given what’s at stake with pornographic deepfake videos, Bridy says, it could be worth drafting a new law.

“The seriousness of the harm from deep fakes, to me, justifies an expeditious remedy,” she says. “But to get the balance right, we’d also need an immediate, meaningful right of appeal and safeguards against abusive notices intended to censor legitimate content under false pretenses.”