MORGANTOWN, W.Va. – Child abuse experts are warning parents about the dangers that AI-generated sexual abuse material poses to their children, in the wake of a new West Virginia law that criminalizes the material.
According to the National Center for Missing & Exploited Children (NCMEC), such material can cause tremendous harm to children, including harassment, future exploitation, fear, shame, and emotional distress.
West Virginia Gov. Patrick Morrisey signed Senate Bill 198 on April 24. The law criminalizes the creation, production, distribution or possession with intent to distribute artificial intelligence-created visual depictions of child pornography, even when no real minor is depicted.
“PTSD, probably the family's reputation, all kinds of things could go wrong,” Jessica LaVale, a mother of three teenagers, said.
According to studies from the National Institutes of Health, this kind of technology-facilitated child sexual abuse has led to the suicides of young teenagers, prompting lawmakers nationwide to act quickly and enact laws protecting children's online privacy and safety.
“Isn't that what we're out here to do, protect children?” LaVale said. “We will have more people like Jeffrey Epstein.”
The bill joins a nationwide legal effort to suppress AI-generated CSAM (child sexual abuse material). According to NCMEC, the organization's tip line received more than 36.2 million reports of CSAM involving over 100 million files in 2023.
“The field really tries not to use the word child pornography any longer because, since adult pornography is legal, it kind of implies that it might be legal when it's not,” Lindsay Hawthorne, communications coordinator for child advocacy organization Enough Abuse, said.
Hawthorne explained that child sexual abuse material refers to any sexually explicit image or video of a child. AI-generated or technologically modified CSAM is any such image or video created or altered with AI to depict a child in a sexually explicit way, including content that incorporates part of a real image, known as a deepfake.
With the capabilities of generative AI, child predators can now take pictures of children and minors from social media and create realistic, sexually explicit content from them. Predators use this generated content as a weapon, distributing it through dark web forums or using it to blackmail or financially extort victims and their family members.
With the number of AI-generated CSAM cases rising each year, law enforcement agencies, children's advocacy organizations and parents are taking steps to strengthen children's online safety.
“Most of my friends are very similar in parenting to me. We have all made a concerted effort to ensure that we clearly explain the dangers of being online, what to look out for, and when to seek help. We also explain to them what could happen,” LaVale said.
Hawthorne suggests raising awareness and teaching parents how to protect their children and themselves by tightening privacy settings on social media, disabling location services, not sharing last names, and using parental controls.
With the passage of SB 198, West Virginia becomes one of 45 states with laws criminalizing forms of AI-generated CSAM.
West Virginia lawmakers who sponsored the bill did not respond to multiple requests for comment in time for publication.
“I think PSAs are also important from either the government or nonprofits, where they just raise awareness, because people need to understand that anyone can take an image of your child that you post on social media and edit it to appear pornographic,” Hawthorne said.
Despite government efforts to criminalize AI-generated CSAM, LaVale said she believes it is not enough, especially if law enforcement agencies are not well equipped to address technological threats.
“I think they're probably doing the best they can, but when they don't get proper funding for those departments, then that's going to fall by the wayside. And that has to be on the state and federal levels,” she said.
LaVale also assigns some blame to tech companies, saying they should bear responsibility for the damage AI-generated CSAM is doing to children and families.
“I don't think that they do enough. I've been around long enough to know that computer programs can be written to do better policing of these websites,” she said.
As lawmakers race to eliminate AI-generated CSAM, parents are advised to make sure their children take precautions so they do not fall victim to sexual predators.