California Hikes Fake Nude Fines to $250K Max to Shield Kids from Online Exploitation
SACRAMENTO, Calif. - In a move aimed at protecting minors from online exploitation, California has increased the maximum fine for creating or distributing fake nude images of children to $250,000.
On Monday, Governor Gavin Newsom signed Assembly Bill 2304, which raises the penalty for producing, distributing, or possessing child pornography, including deepfakes. The new law takes effect immediately and applies to both individuals and corporations.
"This legislation is a crucial step in protecting our children from online predators," said Assemblyman Jim Patterson, who sponsored the bill. "The production and distribution of child pornography are heinous crimes that have devastating consequences for victims."
Under the new law, anyone convicted of creating or distributing fake nude images of minors faces a maximum fine of $250,000, a significant increase from the previous maximum of $10,000.
California's move comes as concerns about deepfakes and child safety continue to grow. Deepfakes are AI-generated or AI-manipulated images and videos that depict realistic but fabricated content. Experts and lawmakers fear they could be used to create fake child sexual abuse material.
The new law also requires social media platforms and other online services to take steps to prevent the spread of deepfakes and child exploitation material. This includes implementing AI-powered tools to detect and remove such content, as well as providing education and resources for users on how to identify and report suspicious activity.
"This is a critical step in addressing the growing threat of deepfakes and child exploitation online," said State Senator Susan Rubio, who co-sponsored the bill. "We must do everything in our power to protect our children from these heinous crimes."
The increase in fines is part of a broader effort by California lawmakers to address the issue of child safety online. In recent months, the state has also passed laws regulating companion bots, which are AI-powered chatbots that can mimic human conversation.
Companion bots have raised concerns among experts and parents about their potential use in grooming children for exploitation. Under that law, companion bot platforms will be required to create protocols to identify and address users' suicidal ideation or expressions of self-harm.
The move is seen as a significant step forward in protecting minors from online exploitation. As technology continues to evolve, experts warn that the threat deepfakes pose to child safety remains a pressing concern.
"We must stay vigilant and continue to work together to protect our children from these heinous crimes," said Patterson. "This legislation is an important step in that effort."
*Reporting by Ars Technica.*