We Need Laws to Stop AI-Generated Deepfakes
NEWS | 27 November 2025
Generative artificial intelligence can now counterfeit reality at industrial scale. Deepfakes—photographs, videos and audio tracks that use AI to create convincing but entirely fabricated representations of people or events—aren’t just an Internet content problem; they are a social-order problem. The power of AI to create words and images that seem real but aren’t threatens society, critical thinking and civilizational stability. A society that doesn’t know what is real cannot self-govern. We need laws that prioritize human dignity and protect democracy.

Denmark is setting the example. In June the Danish government proposed an amendment to its copyright law that would give people rights to their own face and voice. It would prohibit the creation of deepfakes of a person without their consent, and it would impose consequences on those who violate this rule. It would legally enshrine the principle that you own you.

What makes Denmark’s approach powerful is that it harnesses corporate fear of copyright liability. In a study uploaded to the preprint server arXiv.org in 2024, researchers posted 50 nude deepfakes on X and reported them to the platform in two ways: 25 as copyright complaints and 25 as nonconsensual nudity under X’s own policies. X quickly removed the deepfakes flagged as copyright claims but took down none of those flagged as intimate-privacy violations. Legal rights got action; privacy didn’t.

The proposed addition to Danish law would give victims of deepfakes removal and compensation rights, which matters because the harm that deepfakes cause isn’t hypothetical.
The people who make deepfakes exploit victims for money, sexual favors or control; some of the videos have led to suicides—most clearly documented in a string of cases involving teenage boys targeted by scammers. The majority of deepfakes, however, target women and girls. Researchers have found that 96 percent of deepfakes are nonconsensual and that 99 percent of sexual deepfakes depict women. This problem is widespread and growing. In a survey of more than 16,000 people across 10 countries, 2.2 percent reported having been victims of deepfake pornography.

The Internet Watch Foundation documented 210 web pages with AI-generated deepfakes of child sexual abuse in the first half of 2025—a 400 percent increase over the same period in 2024. And whereas only two AI videos of child sexual abuse were reported in the first six months of 2024, 1,286 videos were reported in the first half of 2025. Of these, 1,006 depicted heinous acts with such realism as to be indistinguishable from videos of real children.

Deepfakes also threaten democracy. A few months before the 2024 U.S. presidential election, Elon Musk reposted on X a deepfake video of Vice President Kamala Harris calling herself a diversity hire who doesn’t know “the first thing about running the country.” Experts determined that the content violated X’s own synthetic-media rules, but Musk passed it off as parody, and the post stayed up.

Even financial systems are at risk. In 2024 criminals used deepfake video to impersonate executives from an engineering company on a live call, persuading an employee in Hong Kong to transfer roughly $25 million to accounts belonging to the criminals. A recent report from Resemble.ai, a company specializing in AI-driven voice technologies, documents 487 deepfake attacks in the second quarter of 2025, up 41 percent from the previous quarter—with approximately $347 million in losses in just three months. Despite all this, the U.S. is making progress.
The bipartisan TAKE IT DOWN Act, passed this year, makes it a federal crime to publish or threaten to publish nonconsensual intimate images, including deepfakes, and gives platforms 48 hours to remove such content and suppress duplicates. States are taking action, too. Texas criminalized deceptive AI videos intended to sway elections; California has laws obliging platforms to detect, label and remove deceptive AI content; and Minnesota passed a law that allows criminal charges against anyone making nonconsensual sexual deepfakes or using deepfakes to influence elections. Other states might soon join them.

But none of these efforts go far enough; we should adopt a federal law protecting every person’s right to their own likeness and voice. Doing so would give people legal grounds to demand fast removal of deepfakes and the right to sue for meaningful damages. The proposed NO FAKES Act (which stands for “Nurture Originals, Foster Art, and Keep Entertainment Safe”) would protect performers and public figures from unauthorized deepfakes, but it should cover all people. The introduced Protect Elections from Deceptive AI Act, which would prohibit deepfakes of federal candidates, would be more effective than a patchwork of state laws vulnerable to First Amendment challenges—such as X’s deeply troubling bid to block Minnesota’s deepfake statute.

Abroad, the E.U. AI Act requires synthetic media to be identifiable through labeling or other provenance signals. And under the Digital Services Act, large platforms operating in Europe must mitigate the risks of manipulated media. The U.S. must adopt similar legislation.

We also need to confront factories of abuse—the “nudify” sites and apps designed to create sexually explicit deepfakes. San Francisco’s city attorney has forced multiple such sites offline, and California’s bill AB 621 targets companies providing services to these kinds of sites. Meta sued a company behind nudify apps that advertised on its platforms.
The rise of deepfake technology has shown that voluntary policies have failed; companies will not police themselves until it becomes too expensive not to. This is why Denmark’s approach is not just innovative; it’s essential. Your image should belong to you. Anyone who uses it for their own ends without your permission should be in violation of the law. No legislation will stop every fake. We can, however, enforce a baseline of accountability that keeps our society from tipping into chaos.
Author: The Editors.