Each method is weaponized—almost always against women—to degrade, harass, or cause shame, among other harms. Julie Inman Grant, Australia’s e-safety commissioner, says her office is starting to see more deepfakes reported to its image-based abuse complaints scheme, alongside other AI-generated content, such as “synthetic” child sexual abuse and children using apps to create sexualized videos of their classmates. “We know it’s a really underreported form of abuse,” Grant says.
As the number of videos on deepfake websites has grown, content creators—such as streamers and adult models—have turned to DMCA takedown requests to get content removed. The DMCA allows people who own the intellectual property rights to certain content to request that it be removed from websites directly or from search results. More than 8 billion takedown requests, covering everything from gaming to music, have been made to Google.
“The DMCA historically has been an important way for victims of image-based sexual abuse to get their content removed from the internet,” says Carrie Goldberg, a victims’ rights attorney. Goldberg says newer criminal laws and civil law procedures make it easier to get some image-based sexual abuse removed, but deepfakes complicate the situation. “While platforms tend to have no empathy for victims of privacy violations, they do respect copyright laws,” Goldberg says.
WIRED’s analysis of deepfake websites, which covered 14 sites, shows that Google has received DMCA takedown requests about all of them in the past few years. Many of the websites host only deepfake content and often focus on celebrities. The websites themselves include DMCA contact forms where people can directly request to have content removed, although they do not publish any statistics, and it is unclear how effective they are at responding to complaints. One website says it contains videos of “actresses, YouTubers, streamers, TV personas, and other types of public figures and celebrities.” It hosts hundreds of videos with “Taylor Swift” in the video title.
The vast majority of DMCA takedown requests linked to deepfake websites in Google’s data relate to two of the biggest sites. Neither responded to written questions sent by WIRED. For the majority of the 14 websites, more than 80 percent of complaints resulted in Google removing content. Some copyright takedown requests sent by individuals indicate the distress the videos can cause. “It is done to demean and bully me,” one request says. “I take this very seriously and I will do anything and everything to get it taken down,” says another.
“It has such a huge impact on someone’s life,” says Yvette van Bekkum, the CEO of Orange Warriors, a firm that helps people remove leaked, stolen, or nonconsensually shared images online, including through DMCA requests. Van Bekkum says the organization is seeing an increase in deepfake content online, and that victims face hurdles in coming forward to ask that their content be removed. “Imagine going through a hiring process and people Google your name, and they find that kind of explicit content,” van Bekkum says.
Google spokesperson Ned Adriance says its DMCA process allows “rights holders” to protect their work online and that the company has separate tools for dealing with deepfakes—including a separate form and removal process. “We have policies for nonconsensual deepfake pornography, so people can have this type of content that includes their likeness removed from search results,” Adriance says. “And we’re actively developing additional safeguards to help people who are affected.” Google says that when it receives a high volume of valid copyright removal requests about a website, it uses those as a signal that the site may not be providing high-quality content. The company also says it has created a system to remove duplicates of nonconsensual deepfake porn once it has removed one copy, and that it has recently updated its search results to limit the visibility of deepfakes when people aren’t searching for them.