Beverly Hills Middle School Students Investigated for Sharing AI-Generated Nude Photos of Classmates: Legal Complications Arise

BEVERLY HILLS, California – The Beverly Hills Police Department is grappling with a legal dilemma surrounding the sharing of deepfake images by a group of students from Beverly Vista Middle School. These students allegedly used an artificial-intelligence-powered app to doctor photos of their classmates, superimposing the classmates' faces onto AI-generated nude bodies. While sharing a real nude photo without consent could lead to prosecution under child pornography laws, the application of those laws becomes murky when the images are AI-generated deepfakes.

Lt. Andrew Myers, spokesman for the Beverly Hills police, confirmed that the investigation is ongoing and no arrests have been made. The Beverly Hills Unified School District has also conducted its own investigation into the incident, which is in its final stages. The district has taken disciplinary action, although specific details regarding the action, the number of students involved, and their grade level have not been disclosed.

This case raises challenging legal questions about whether AI-generated fake nudes qualify as a criminal offense. Federal law does prohibit computer-generated child pornography depicting identifiable individuals. However, legal experts caution that this prohibition has yet to be tested in court. Furthermore, California's child pornography law does not explicitly mention artificially generated images; it applies to any image depicting a person under 18 personally engaging in or simulating sexual conduct.

Joseph Abrams, a criminal defense attorney, argues that AI-generated nudes do not depict real individuals and therefore might fall under the category of child erotica rather than child pornography. He believes that these images may not cross any legal boundaries. Kate Ruane, director of the free expression project at the Center for Democracy & Technology, disagrees, asserting that sexually explicit AI-generated images should still be covered by existing laws, as they cause harm to the child depicted.

Even if prosecutors conclude the laws do apply, one further hurdle to criminal charges remains: the requirement that the images involve "sexually explicit conduct." Courts use a six-pronged test to determine whether an image constitutes a lascivious exhibition, weighing factors such as the focal point, the subject's pose, and whether the image is intended to arouse the viewer. Photos that were not sexual in nature before being digitally altered by AI would have to be evaluated against these factors in court.

Given the increasing prevalence of AI technology and the risks it poses for young people, state lawmakers and members of Congress have proposed new bills to address the gaps in existing laws. These proposals would extend criminal prohibitions on possessing child pornography to cover AI-generated images, address the nonconsensual distribution of intimate images, and convene a working group of academics to advise lawmakers on the impact of artificial intelligence and deepfakes.

The incident at Beverly Vista Middle School has prompted discussion among educators and policymakers about the need for greater parental involvement and regulation of students' technology use. Dr. Jane Tavyev Asher, a neurology director, expressed concern about children's unsupervised access to technology and emphasized the importance of shielding them from harmful content. Board members and school officials echoed the call for stronger parental controls and closer collaboration in addressing these issues.

As technology continues to advance at a rapid pace, the legal landscape surrounding deepfakes and AI-generated content remains uncharted territory. It is clear that society will have to grapple with complex questions and navigate potential legal challenges as AI continues to evolve.