Editor’s Note: This article is the next installment of our Symposium on AI and Human Rights, co-hosted with the Promise Institute for Human Rights at UCLA School of Law. The views expressed in this article are those of the author and do not necessarily represent the views of Perkins Coie LLP or its clients.

The rise of generative AI has uniquely impacted the fight against online child sexual exploitation, including child sexual abuse material (CSAM). Last year, the attorneys general of 54 U.S. states and territories called on Congress to establish an expert commission to evaluate the risks AI poses with respect to child exploitation and to propose solutions to address those risks on an ongoing basis. The private and non-profit sectors have also taken extensive steps to combat the problem, developing and implementing technological solutions to stamp out this material.

Alongside these regulatory and technological efforts, the private sector has an opportunity to implement “safety by design” – a framework which places “accountability, user empowerment, and transparency at the heart of rules for online life.” By embedding safety as a core concept at the service and product development stage, companies can take steps to fight online child sexual exploitation in an area of evolving risk while staying ahead of legislative trends.

Generative AI and the Spread of CSAM

AI can be used to create at least two different types of illegal CSAM (under the federal definition of “child pornography”): “deepfakes,” in which images of real children are altered to depict sexual abuse, and synthetic images that depict realistic sexual abuse of virtual (non-real) children. Virtual images that do not depict actual children are harmful because the images may have been based on real abuse images, resemble real children, or increase the demand for CSAM. Deepfake images that depict actual children also propagate distinct harms, including potential leverage as a tool for “sextortion” – where perpetrators create altered images of children to extract money from their victims. These types of scams are on the rise.

Electronic communication and storage service providers, including social media companies, are required to report both types of illegal CSAM to the National Center for Missing and Exploited Children (NCMEC), a non-profit organization that operates the CyberTipline, the centralized U.S. reporting system for the online exploitation of children. NCMEC then routes these reports to the appropriate law enforcement agency and prioritizes reports that indicate a child is in imminent danger for immediate law enforcement action.

The rise and proliferation of AI-generated CSAM have the potential to overwhelm both NCMEC and law enforcement, even as new AI tools are also being used to help detect and prevent the spread of such material. In March, senior NCMEC official John Shehan told Congress that NCMEC received over 36 million CyberTipline reports in 2023, of which 4,700 involved generative AI. As this number almost certainly grows, so too will the challenge of identifying real children in these images, which can undermine NCMEC and law enforcement efforts to ensure that reports that may involve a child in imminent danger are prioritized and that appropriate action is taken.

A recent Stanford Internet Observatory paper on the strengths and weaknesses of the online child safety ecosystem reports that law enforcement officers are overwhelmed by the high volume of CyberTipline reports they receive and struggle to identify reports that indicate a child may be in imminent danger (e.g., in connection with sexual exploitation) in order to deploy resources for immediate intervention. According to the report, law enforcement officers take different approaches to triaging these reports based on their experience.

Leveraging Safety by Design

To detect and disrupt the harms associated with AI-generated CSAM, companies have an opportunity to take a proactive “safety by design” approach to their services, which means treating safety as a standard consideration during product and feature development rather than addressing safety concerns retroactively. This includes taking preventative steps to make a service less likely to facilitate or encourage illegal and inappropriate content, conduct, and contact; providing tools that allow users to manage their own safety; and enhancing transparency and accountability, for example by ensuring that community standards and processes concerning user safety are easy to find and understand, as well as regularly updated.

Generative AI companies in particular have an opportunity to meaningfully curb online CSAM by employing safety by design to prevent the creation of these illegal images altogether, before the images are widely disseminated. Industry efforts to do so are underway.

But other online platforms can take a safety by design approach as well. Indeed, the White House recently issued A Call to Action to Combat Image-Based Sexual Abuse, which addresses the broader problem of image-based sexual abuse (including generative AI-created deepfakes). The Call to Action encourages the private sector, including payment platforms and financial institutions, app stores, app developers, and online platforms, to provide meaningful tools to prevent and mitigate these harms.

While concepts like “privacy by design” and “cybersecurity by design” may be second nature for some services, safety by design is not yet embedded as a standard practice for many companies. By employing safety by design, companies can minimize online harms by anticipating, detecting, and eliminating or mitigating those harms at the outset, as a key component of their development and growth.

Not unlike privacy and security, this concept of proactive prevention is starting to become embedded and acknowledged in legislation worldwide. For example, under the EU’s Digital Services Act, “very large” online platforms (with an average of 45 million or more EU users per month) must identify and assess systemic risks on their service in connection with, among other things, illegal content and the protection of minors. These platforms must then put in place measures to mitigate these risks.

Core safety concepts appear in global safety legislation like Australia’s Online Safety Act and the U.K.’s Online Safety Act, and are starting to appear in U.S. federal and state legislation. The proposed STOP CSAM Act, which Senator Dick Durbin (D-IL) introduced last April, would require certain providers to disclose, in an annual transparency report, the measures they take before launching a new product or service to assess safety risks concerning potential child sexual exploitation and abuse. Last year, Senator Richard Blumenthal (D-CT) proposed the Kids Online Safety Act, which would also require certain online platforms to publish a transparency report describing risk assessment and mitigation measures in connection with reasonably foreseeable risks of harm to minors.

California has also recently enacted legislation (effective Jan. 1, 2025) that prohibits “social media platforms” from knowingly facilitating, aiding, or abetting commercial sexual exploitation through the deployment of their systems, designs, or features, but includes a safe harbor for platforms that conduct safety audits.

The fight against online child sexual exploitation, particularly in the face of advancing generative AI technologies, necessitates a proactive, multi-stakeholder approach from regulators and the private sector alike. The call by attorneys general for a dedicated expert commission highlights the urgency and seriousness of these risks. By embracing the “safety by design” framework, companies can ensure that accountability, user empowerment, and transparency are integral to their products and services. This proactive stance not only aids in combating the proliferation of AI-generated CSAM but also aligns with evolving legislative safety trends. Ultimately, embedding safety as a core principle from the outset equips businesses to effectively address and mitigate these critical risks, contributing to a safer online environment for children.

IMAGE: Artistic rendering of AI (Photo by Geralt via Pixabay.com, CC0 1.0)