Forbes 40under40
California lawmakers tackle potential dangers of AI chatbots after parents raise safety concerns

by Riah Marton
in Technology


When her 14-year-old son took his own life after interacting with artificial intelligence chatbots, Megan Garcia turned her grief into action.

Last year, the Florida mom sued Character.AI, a platform where people can create and interact with digital characters that mimic real and fictional people.

Garcia alleged in a federal lawsuit that the platform’s chatbots harmed the mental health of her son Sewell Setzer III and the Menlo Park, Calif., company failed to notify her or offer help when he expressed suicidal thoughts to these virtual characters.

Now Garcia is backing state legislation that aims to safeguard young people from “companion” chatbots she says “are designed to engage vulnerable users in inappropriate romantic and sexual conversations” and “encourage self-harm.”

“Over time, we will need a comprehensive regulatory framework to address all the harms, but right now, I am grateful that California is at the forefront of laying this ground,” Garcia said at a news conference on Tuesday ahead of a hearing in Sacramento to review the bill.

Suicide prevention and crisis counseling resources

If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call 988, the United States' first nationwide three-digit mental health crisis hotline, which connects callers with trained mental health counselors. Text "HOME" to 741741 in the U.S. and Canada to reach the Crisis Text Line.


As companies race to advance chatbots, parents, lawmakers and child advocacy groups worry that not enough safeguards are in place to protect young people from the technology's potential dangers.

To address the problem, state lawmakers introduced a bill that would require operators of companion chatbot platforms to remind users at least every three hours that the virtual characters aren’t human. Platforms would also need to take other steps such as implementing a protocol for addressing suicidal ideation, suicide or self-harm expressed by users. That includes showing users suicide prevention resources.

Under Senate Bill 243, operators of these platforms would also have to report the number of times a companion chatbot brought up suicidal ideation or actions with a user, along with meeting other requirements.

The legislation, which cleared the Senate Judiciary Committee, is just one way state lawmakers are trying to tackle potential risks posed by artificial intelligence as chatbots surge in popularity among young people. More than 20 million people use Character.AI every month and users have created millions of chatbots.

Lawmakers say the bill could become a national model for AI protections. Its supporters include the children's advocacy group Common Sense Media and the American Academy of Pediatrics, California.

“Technological innovation is crucial, but our children cannot be used as guinea pigs to test the safety of the products. The stakes are high,” said Sen. Steve Padilla (D-Chula Vista), one of the lawmakers who introduced the bill, at the event attended by Garcia.

But tech industry and business groups including TechNet and the California Chamber of Commerce oppose the legislation, telling lawmakers that it would impose “unnecessary and burdensome requirements on general purpose AI models.” The Electronic Frontier Foundation, a nonprofit digital rights group based in San Francisco, says the legislation raises 1st Amendment issues.

“The government likely has a compelling interest in preventing suicide. But this regulation is not narrowly tailored or precise,” EFF wrote to lawmakers.

Character.AI has also surfaced 1st Amendment concerns about Garcia’s lawsuit. Its attorneys asked a federal court in January to dismiss the case, stating that a finding in the parents’ favor would violate users’ constitutional right to free speech.

Chelsea Harrison, a spokeswoman for Character.AI, said in an email the company takes user safety seriously and its goal is to provide “a space that is engaging and safe.”

“We are always working toward achieving that balance, as are many companies using AI across the industry. We welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space,” she said in a statement.

She cited new safety features, including a tool that allows parents to see how much time their teens are spending on the platform. The company also cited its efforts to moderate potentially harmful content and direct certain users to the National Suicide and Crisis Lifeline.

Social media companies including Snap and Facebook’s parent company Meta have also released AI chatbots within their apps to compete with OpenAI’s ChatGPT, which people use to generate text and images. While some users have used ChatGPT to get advice or complete work, some have also turned to these chatbots to play the role of a virtual boyfriend or friend.

Lawmakers are also grappling with how to define “companion chatbot.” Certain apps such as Replika and Kindroid market their services as AI companions or digital friends. The bill doesn’t apply to chatbots designed for customer service.

Padilla said during the press conference that the legislation focuses on product design that is “inherently dangerous” and is meant to protect minors.

Tags: AI chatbot, bill, California, California lawmakers, Character.AI, chatbots, children, companion chatbot platforms, Megan Garcia, parents, safety, state legislation, suicide, young people
Riah Marton

I'm Riah Marton, a dynamic journalist for Forbes40under40. I specialize in profiling emerging leaders and innovators, bringing their stories to life with compelling storytelling and keen analysis. I am dedicated to spotlighting tomorrow's influential figures.


© 2025 Forbes 40under40. All Rights Reserved.