
Abstract
Artificial intelligence-powered deepfake technology has ushered in a new era of digital sexual abuse, enabling the seamless fabrication of explicit images that violate bodily autonomy with alarming realism. With little more than a photograph scraped from social media, individuals can use AI tools to fabricate hyper-realistic nude images and pornographic videos of people without their consent. These nonconsensual sexual deepfakes are spreading rapidly online, often going viral before victims are even aware they exist. The technology can affect anyone, but women and girls remain disproportionately targeted, and recent incidents involving minors highlight both the reach and severity of this harm. Despite the emotional, reputational, and psychological toll on victims, the current legal framework offers few avenues for redress. Traditional tort claims such as defamation or intentional infliction of emotional distress are ill-suited to the uniquely digital and anonymous nature of deepfake abuse. In response, many states have begun enacting legislation to criminalize or regulate deepfake content, but this patchwork of state-level approaches leaves significant gaps in protection. Congress has introduced several bills aimed at regulating the creation and distribution of deepfakes and recently passed the first federal law criminalizing nonconsensual sexual deepfakes. This Note argues that a more effective path forward lies in targeting the technology's source: a federal provision imposing civil strict liability on developers of software designed primarily to produce sexually explicit deepfakes. By focusing on the creators of these tools, rather than attempting to police individual users or internet platforms, this approach addresses a key enforcement challenge while avoiding entanglement with the First Amendment. In doing so, it offers a constitutionally sound and victim-centered framework for confronting the growing threat of nonconsensual sexual deepfakes.
Recommended Citation
Zilana Lee, Unveiling the Underbelly of Artificial Intelligence: The Inadequacies of the Legal System with Regard to Victims of Nonconsensual Sexual Deepfakes, 33 J. L. & Pol'y 182 (2025).
Available at: https://brooklynworks.brooklaw.edu/jlp/vol33/iss2/5