WARNING: Child abuse images found in AI training data

From Axios, December 20, 2023

Stanford researchers have discovered over 1,000 child sexual abuse images in an AI dataset used to train popular image generation tools such as Stable Diffusion.

Why it matters: Illegal child sexual abuse material (CSAM) represents an extreme example of the wider problem of AI developers not having or sharing clear records of what material is used to train their models. It may take only a small selection of CSAM images to create many more new and realistic synthetic images of child abuse.

The big picture: Digital technology has made it easier to produce and distribute such images, which are illegal in the U.S. and most jurisdictions. There is dispute about the scale of harmful content online targeting and featuring children, but the number now runs into millions of images and thousands of victims.

Details: The Stanford Internet Observatory report pinpoints LAION-5B, a popular open source dataset maintained by a Germany-based nonprofit, as the source of the images. The Stanford team used guidance from the National Center for Missing and Exploited Children and worked with the Canadian Centre for Child Protection to provide third-party validation of the findings, using Microsoft's PhotoDNA tool. A LAION spokesperson told Bloomberg it pursues a "zero tolerance policy" for illegal content and has removed LAION datasets from the internet while it investigates. But developers have been flagging the existence of at least some of the images in online discussion forums since April.

Our thought bubble: The study's findings mirror the emergence of similar material on the internet search engines and social networks that made these training datasets possible. Developers of new technology platforms have typically failed to address problems like CSAM before they emerge: It's cheaper and easier for the companies to triage the problems only after the material and its victims are identified.

What they're saying: A Stability AI spokesperson told Bloomberg the company had filtered the LAION data before using it for training, and said the firm has "implemented filters to intercept unsafe prompts or unsafe outputs when users interact with models on our platform."

What's next: There's almost certainly more CSAM to be discovered in AI training data in coming weeks and months. The Stanford researchers examined only a small portion of the material in the training datasets of large language and image models, much of which is not publicly known or available.
