Diffusing Harm: An Exploration of the Ethical and Policy Landscape of Diffusion Models for AI Generated Child Sexual Abuse Material

by Lauren Y. Ho 

Abstract

Initially identified in the United States in the 1970s, child sexual abuse material (CSAM) has continued to grow with the evolution of technology. With the popularization of AI, the issue of AI generated (AIG) CSAM has emerged through diffusion models. The unique generation method of diffusion models, destroying and recreating images from their training datasets, results in novel and realistic images. Direct challenges with diffusion models include unethical data and training, morphed images of children, and purely synthetic AIG-CSAM. Further cascading issues relate to investigative challenges, such as difficulty identifying depicted children, distinguishing morphed images from pure AIG-CSAM, and the accessibility of CSAM on the clear web. Existing federal policies have evolved to cover certain AIG-CSAM but fail to address the issue directly, leaving some AIG-CSAM protected in the United States. However, presidential initiatives have sought to engage leading AI companies in safety measures to prevent the creation and distribution of AIG-CSAM. In pursuit of ethical diffusion model usage, emphasis should be placed on a social cyber contract, collaborative efforts between the private and public sectors, and government policy intervention. This project seeks to understand the emerging ethical challenges and to identify current policies and their limitations on AIG-CSAM through a qualitative policy analysis. Finally, recommendations for future research and policy considerations are discussed.

Introduction

During the 1970s, American policymakers began recognizing the issue of child pornography, generally defining it as any image or video depicting the sexual exploitation of children (Salter & Wong, 2024). In 2016, however, child pornography was re-termed child sexual abuse material (CSAM) to better encompass the exploitation and abuse of children (Department of Justice [DOJ], n.d.). Now, as technology and artificial intelligence rapidly advance, CSAM is more easily created and distributed (Australian Institute of Criminology [AIC], 2025). Although an unconventional cyberthreat, CSAM remains a topic of discussion in cybersecurity because it falls under criminal use of data, networks, and devices (Cybersecurity & Infrastructure Security Agency [CISA], 2021). Specifically, with the growing use of artificial intelligence (AI), a rising phenomenon of artificial intelligence generated (AIG) CSAM produced through diffusion models has become apparent in both the input of data and the output of the model (DOJ, n.d.). Policymakers must begin examining regulations and laws around AI diffusion models to ensure ethical model training and the prevention of AIG-CSAM.

Diffusion Models

Within the context of AI, diffusion models are both a machine learning and generative AI mechanism primarily used to generate images and videos (Bergmann & Stryker, 2024). Diffusion models are unique because of their method for training and generating results (Bergmann & Stryker, 2024). During the forward diffusion process, these models progressively add noise to training data until the original image is destroyed (Bergmann & Stryker, 2024). Because this noise follows a specific statistical distribution, the model can detect underlying patterns in the diffused image that are otherwise undetectable to the human eye (Bergmann & Stryker, 2024). During training, the model also learns to denoise the destroyed images and rework the data into a similar image, a procedure called the reverse diffusion process (Bergmann & Stryker, 2024). After reconstructing many diffused images, the model becomes capable of creating new images consistent with the learned noise distribution, allowing it to generate believable images based on user input. Therefore, once training is complete, the model produces a new image by starting from random noise and applying the learned reverse diffusion process (Bergmann & Stryker, 2024).
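To make the forward (noising) process described above more concrete, the following is a minimal numerical sketch assuming a standard DDPM-style linear noise schedule. The toy array x0, the schedule values, and the variable names are illustrative assumptions, not details drawn from the sources cited here or from any particular commercial model.

```python
# Minimal sketch of the forward (noising) diffusion process, assuming a
# DDPM-style linear variance schedule. All names and values are illustrative;
# real models operate on image tensors and use learned denoising networks.
import numpy as np

rng = np.random.default_rng(0)

T = 1000                                # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)      # per-step noise variances
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)         # cumulative signal-retention factors

def forward_diffuse(x0, t):
    """Sample x_t from q(x_t | x_0): a progressively noised version of x0."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

# A toy "image": by large t the original signal is effectively destroyed.
x0 = rng.uniform(-1.0, 1.0, size=(8, 8))
for t in (0, 250, 999):
    xt = forward_diffuse(x0, t)
    print(f"t={t:4d}  remaining signal weight = {np.sqrt(alpha_bars[t]):.3f}")
```

Generation then runs in the opposite direction: a trained model starts from pure noise and iteratively applies the learned reverse (denoising) steps to produce a new image.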

Many diffusion models are publicly accessible and have become popular because of their text-to-image capability (Higham et al., 2023). Another distinct feature of diffusion models is their ability to produce more realistic imagery than other types of generative models (Higham et al., 2023). Although intended to generate new images, diffusion models have been found to reproduce images from their training data (Higham et al., 2023). Continuing research indicates that these reproductions of training data are often induced through user prompts (Higham et al., 2023). As image generation has become prevalent, the most prominent publicly available diffusion models include Stable Diffusion, Midjourney, DALL-E, and Imagen (Bergmann & Stryker, 2024). However, many of these diffusion models are closed source, permitting users only to enter queries (Bergmann & Stryker, 2024). Although less common, open-source models like Stable Diffusion allow users to fine-tune a model on their own datasets through specific sites (Internet Watch Foundation [IWF], 2023).

Generative AI for CSAM Production

Three issues have been identified as directly involving AIG-CSAM: models trained unethically on abuse material, morphed-image CSAM, and fully synthetic, realistic CSAM (Pfefferkorn, 2024). AIG-CSAM has primarily been produced using open-source models that permit users to train them on their own datasets (Struckman, 2025). Specifically, to produce CSAM, AI models are fine-tuned and trained on CSAM datasets (Pfefferkorn, 2024). In 2023, the AI image generators Stable Diffusion and Midjourney were found to have been trained on CSAM images present in a public dataset called LAION-5B (Thiel, 2023). Another emerging challenge is that, although some models are not trained on CSAM, they have been trained on pornographic images of adults and harmless images of children (UNICRI, 2025). Human Rights Watch found that this method of training diffusion models still enables the generation of CSAM (UNICRI, 2025).

Indirect issues resulting from AIG-CSAM relate to challenges faced by law enforcement in several areas. Using diffusion models to morph existing images can conceal the identity of a child and prevent identification (AIC, 2025). These morphed images make it difficult for law enforcement to determine whether the content has been altered from abuse material of a real child (Struckman, 2025). Additionally, the accessibility of diffusion models on the clear web creates a seemingly contradictory issue (AIC, 2025). Diffusion models enable everyday users to create and access CSAM, but if regulated they could permit better oversight by regulatory bodies (IWF, 2024). CSAM has traditionally been distributed and accessed through the dark web and tracked by law enforcement using known hash values registered against pre-existing CSAM (AIC, 2025). However, images created or altered by diffusion models will not have hash values registered with law enforcement (AIC, 2025). With further oversight and regulation of diffusion models, law enforcement could better identify consumers of CSAM; without regulation, new challenges in monitoring and identifying victims and consumers become apparent.
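As a brief illustration of why hash-based tracking breaks down for newly generated material, the sketch below checks a file against a list of known hash values. The file path and the hash set are hypothetical, and real detection systems typically combine cryptographic digests like this with perceptual hashes (such as PhotoDNA); the point is simply that a freshly generated or morphed image yields a digest that matches nothing in such a database.

```python
# Minimal sketch of hash-list matching, the detection approach referenced above.
# The file path and known-hash set are hypothetical placeholders.
import hashlib

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical database of previously registered hash values.
known_hashes = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_known(path: str) -> bool:
    """A newly generated or altered image will almost never produce a match."""
    return sha256_of_file(path) in known_hashes
```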

Policies and Initiatives

Although researchers recognize that the use of diffusion models for CSAM is emerging, no country has yet identified a case of pure AIG-CSAM (United Nations Interregional Crime and Justice Research Institute [UNICRI], 2024). Globally, only 18% of countries, including the United States, have policies targeting this new form of CSAM (UNICRI, 2024). Additionally, digital forensic examiners have begun recognizing that, should such a case occur, many countries are hesitant to address it (UNICRI, 2024). In effect, handling the first pure AIG-CSAM case would set a global precedent that many countries do not want to set.

The United States first addressed CSAM during the 1970s, prohibiting its production, distribution, or possession as material unprotected by the First Amendment (Pfefferkorn, 2024). Over time, the changing nature of CSAM was recognized and the definition evolved to address new material and methods of distribution (Pfefferkorn, 2024). The first federal law targeting digital CSAM came in 1996 through Title 18, Section 2252A (Pfefferkorn, 2024). As originally enacted, the law criminalized all sexually explicit images depicting non-existent children (Pfefferkorn, 2024). In 2002, however, Section 2252A was narrowed so that only certain images qualified as CSAM (Pfefferkorn, 2024). The PROTECT Act of 2003, passed by Congress, built on that change, further revising the definition of CSAM and making it more ambiguous (Pfefferkorn, 2024). Under the PROTECT Act, generated material depicting a realistic but non-existent child is most likely protected by the First Amendment unless it depicts an existing child (Pfefferkorn, 2024). Although these changes to the definition and designation of AIG-CSAM did not, at the time, change how it was handled, the introduction of diffusion models has raised concerns about the applicability of the law.

Despite the lack of policies directly targeting AIG-CSAM, the U.S. federal government has implemented various initiatives with law enforcement and non-profits to raise awareness. During the Biden Administration, two initiatives were implemented to further U.S. government accountability in the oversight of AI generated sexual abuse imagery (The White House, 2024). Released in early 2024, A Call to Action to Combat Image-Based Sexual Abuse was an initiative to engage private-sector companies in reducing CSAM and other forms of AIG abuse material (The White House, 2024). This effort maintained a broad scope, aiming to prevent AI image-based sexual abuse material overall (Prabhakar, 2024). The second initiative, though similarly framed as a “voluntary commitment,” was specific to engaging prominent AI companies (The White House, 2024).

Outlook 

AI generated images will continue to improve and will likely become difficult to distinguish from real images. Because solidified cyber-policies are challenging to implement, a stronger focus will likely be placed on collaborative efforts between the public and private sectors. The private sector has already been creating cyber-policies and initiatives for oversight, but these are so far insufficient. Data providers such as LAION have agreed to partnerships with safety organizations to prevent the distribution of CSAM through training and fine-tuning datasets (IWF, 2024). Another effort to ensure ethicality in diffusion models came in 2022, when Stable Diffusion forbade pornographic images in its training data despite being an open-source model (IWF, 2023). Although these actions by LAION and Stable Diffusion are positive, AI companies should not wait until CSAM is discovered in training data to intervene. Further research could permit the creation of a social cyber contract, which would offer a measure of oversight from other AI platforms. Because AI companies can block specific user prompts that would generate CSAM, a social cyber contract may reinforce the need for these interventions.

Some platforms may adhere to such a social contract, but most corporations and AI platforms have no monetary incentive to enforce restrictions and interventions on generative AI (Thorn, n.d.). Measures such as limiting users’ ability to customize public AI models and blocking specific prompts are likely to meet resistance as parameters are set on diffusion models (Thorn, n.d.). AI companies must therefore invest funding to set up preventative measures and maintain them. Another monetary disincentive to adhering to social frameworks and contracts is the potential loss of users due to restrictions on a model’s generative ability.

The other direction of focus is governmental policy. Current policies in the United States continue to evolve to include AIG-CSAM but are not specifically targeted at the issue itself. Because of this, certain AI generated materials that neither show an “identifiable” child nor stem from CSAM in the training dataset may end up being protected under the First Amendment (Struckman, 2025). As such, the U.S. government must begin creating targeted policies to close these gaps. Enforcing ethical data collection and training of diffusion models will remain a relatively straightforward process at the federal level. However, forbidding the production and distribution of pure AIG-CSAM will continue to be debated, with the conflict centering on the constitutionality of limiting freedom of speech.

Conclusion

CSAM is an issue that has evolved over time and become progressively more cyber-involved, and as such requires policy regulation. AI now offers individualized image-generation capabilities through diffusion models but brings with it many ethical challenges. Despite AIG-CSAM being acknowledged as an emerging cyber-related topic, few policies directly address the expansiveness of the issue. Current U.S. policies have evolved to include certain AIG-CSAM content while leaving pure AIG-CSAM images protected. AI platforms, and the use of diffusion models in particular, are furthering the globalization of CSAM and permitting individuals to easily produce and morph abusive images. Identifying positive incentives to encourage private companies to enforce preventative measures should be another area of focus. Discourse on a social cyber contract could provide accountability from AI companies themselves. However, despite these private-sector efforts, it is necessary to begin regulating diffusion models through governmental intervention to provide the oversight and accountability needed to ensure ethical use of these AI models.

References

Australian Institute of Criminology. (2025). Artificial intelligence and child sexual abuse: A rapid evidence assessment. Australian Government; Australian Institute of Criminology.

Bergmann, D., & Stryker, C. (2024). What are diffusion models? Retrieved from https://www.ibm.com/think/topics/diffusion-models

Higham, C., Higham, D., & Grindrod, P. (2023). Diffusion models for generative artificial intelligence: An introduction for applied mathematicians. Retrieved from https://arxiv.org/html/2312.14977v1

Internet Watch Foundation. (2023). How AI is being abused to create child sexual abuse imagery. Retrieved from https://www.iwf.org.uk/media/q4zll2ya/iwf-ai-csam-report_public-oct23v1.pdf.

Internet Watch Foundation. (2024). What has changed in the AI CSAM landscape? Retrieved from https://admin.iwf.org.uk/media/nadlcb1z/iwf-ai-csam-report_update-public-jul24v13.pdf.

Pfefferkorn, R. (2024). Addressing computer-generated child sex abuse imagery: Legal framework and policy implications. Retrieved from https://s3.documentcloud.org/documents/24403088/adressing-cg-csam-pfefferkorn-1.pdf.

Prabhakar, A. (2024). A call to action to combat image-based sexual abuse. Washington, DC: The White House.

Salter, M., & Wong, T. (2024). Parental production of child sexual abuse material: A critical review. Trauma, Violence & Abuse, 25(3), 1826–1837. https://doi.org/10.1177/15248380231195891

Struckman, K. (2025). Combatting AI-generated CSAM. Retrieved from https://www.wilsoncenter.org/article/combatting-ai-generated-csam

The White House. (2024). White House announces new private sector voluntary commitments to combat image-based sexual abuse. Washington, DC: The White House.

Thiel, D. (2023). Investigation finds AI image generation models trained on child abuse. Retrieved from https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse

Thorn. (n.d.). Mitigating the risk of generative AI models creating child sexual abuse materials: An analysis by child safety nonprofit Thorn. Retrieved from https://partnershiponai.org/wp-content/uploads/2024/11/case-study-thorn.pdf

U.S. Department of Justice. (n.d.). Child sexual abuse material. Washington, DC: Department of Justice.

United Nations Interregional Crime and Justice Research Institute. (2024). Generative AI: A new threat for online child sexual exploitation and abuse. Bracket Foundation.
