Over the past few years, AI systems have misrepresented themselves as human therapists, nurses and more. So far, the companies behind these systems have not faced any serious consequences.
A bill introduced in California on Monday aims to stop that.
The bill would prohibit businesses from developing and deploying AI systems that pretend to be certified human healthcare providers, and it would give regulators the authority to punish offenders with fines.
“A generative AI system is not a licensed medical professional, and it shouldn’t be permitted to present itself as one,” state Assemblymember Mia Bonta, who introduced the bill, told Vox in a statement. “It’s a no-brainer to me.”
Many people are already turning to AI chatbots for mental health support. One of the older products, called Woebot, has been downloaded by around 1.5 million users. And these days, anyone chatting with a bot can be fooled into thinking they are talking to a real person. People with low digital literacy, including children, may not realize that a “nurse advice” phone line or chat box has an AI on the other end.
In 2023, the mental health platform Koko announced that it had run an experiment on unwitting test subjects to see what kinds of messages they preferred. It served AI-generated responses to thousands of Koko users who believed they were talking to real people. In reality, humans could edit the text and click “send,” but they didn’t have to bother actually writing the messages. Yet the platform’s own language promised, “Koko connects you with real people who truly get you.”
“Users had to provide consent to use Koko for research purposes. That has always been part of our terms of service, but it is now disclosed more clearly during onboarding, bringing more transparency to our work,” Koko CEO Rob Morris told Vox. “As AI continues to evolve rapidly and becomes further integrated into mental health services, it is more important than ever that chatbots clearly identify themselves as non-human.”
Today, the website states: “You will always be informed whether you are interacting with a human or an AI.”
Other chatbot services, such as the popular Character AI, allow users to chat with “psychologist” characters.
In a transcript of one such Character AI chat, shared by Bonta’s team and seen by Vox, a user confessed, “My parents are abusive.” The chatbot replied, “I’m glad you trust me enough to share this with me.” Then came this exchange:

A spokesperson for Character AI told Vox: “We have implemented significant safety features over the past year, including prominent disclaimers to make clear that a character is not a real person and should not be relied on for facts or advice.” However, a disclaimer posted in the app does not stop the chatbot from misrepresenting itself as a real person over the course of a conversation.
“For users under the age of 18,” the spokesperson added, the company serves “a separate version of the model that is designed to further reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content.”
Note the language of reducing the likelihood, not eliminating it: the nature of large language models means there is always some chance that a model will fail to conform to its safety standards.
The new bill may have an easier time becoming law than SB 1047, the much broader AI safety bill introduced last year by California state Sen. Scott Wiener. The goal of SB 1047 was to establish “clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems.” It was popular with Californians. But tech industry heavyweights like OpenAI and Meta vehemently opposed it, claiming it would curb innovation.
Whereas SB 1047 tried to force companies training the most cutting-edge AI models to do safety testing, preventing those models from enacting a broad array of potential harms, the new bill is much narrower in scope: if you operate in healthcare, don’t pretend to be human. It would not fundamentally change the business model of the biggest AI companies. By going after a smaller piece of the puzzle, this more targeted approach may stand a better chance of surviving Big Tech lobbying.
The bill has received support from some players in California’s healthcare industry, including SEIU California, a labor union with over 750,000 members, and the California Medical Association, a professional organization representing California physicians.
“As nurses, we know what it means to be the face and the mind of a patient’s medical experience,” Leo Perez, president of SEIU 121RN, an SEIU affiliate representing registered nurses, said in a statement. “Coupled with years of hands-on experience, our education and training have taught us how to read verbal and nonverbal cues so we can care for our patients.”
But that doesn't mean that AI is generally destined to be useless in the healthcare field.
Risks and Benefits of AI Therapists
It shouldn’t be surprising that people are turning to chatbots for therapy. The very first chatbot, ELIZA, was created in 1966 to plausibly mimic human conversation, and it was built to speak like a psychotherapist: if you said you were feeling angry, it would ask, “Why do you think you’re angry?”
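To make that mechanism concrete, here is a minimal Python sketch of the keyword-and-reflection trick ELIZA relied on. The rules, names, and wording below are illustrative stand-ins rather than Weizenbaum’s actual script: the program matches a simple phrase, swaps first-person words for second-person ones, and hands the user’s statement back as a question.

import re

# A toy ELIZA-style responder (illustrative only, not the 1966 program).
# It matches a keyword pattern, "reflects" first-person words into second
# person, and turns the user's statement back into a question.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you think you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    # Swap pronouns so "my boss" comes back as "your boss".
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(statement):
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!")))
    return "Please, go on."  # default prompt when no rule matches

print(eliza_reply("I feel angry at my boss."))      # Why do you think you feel angry at your boss?
print(eliza_reply("I am worried about my exams."))  # How long have you been worried about your exams?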
Since then, chatbots have come a long way. They no longer just take what you say and turn it around in the form of a question. They can engage in plausible, emotionally resonant dialogue, and a small study published in 2023 found them promising for treating patients with mild to moderate depression and anxiety. In the best-case scenario, they could make mental health support accessible to the millions of people who cannot access or afford human providers, as well as to those who find it extremely difficult to talk about emotional issues face to face with another person.
However, there are many risks. One is that chatbots are not bound by the same rules as professional therapists when it comes to protecting the privacy of users who share sensitive information. They may voluntarily take on some privacy commitments, but mental health apps are not fully bound by HIPAA regulations, so those commitments tend to be weaker. Another risk is that AI systems are known to exhibit bias against women, people of color, LGBTQ people, and religious minorities.
Furthermore, leaning on a chatbot for long stretches of time may erode a user’s social skills, producing a kind of relational deskilling. OpenAI itself warns that chatting with an AI voice can create “emotional dependence.”
The most serious concern about chatbot therapy, however, is that it can harm users by offering inappropriate advice. In the extreme, it can even contribute to suicide. In 2023, a Belgian man died by suicide after talking with an AI chatbot called Chai. According to his wife, he had become extremely anxious about climate change and asked the chatbot whether the planet would be saved if he killed himself.
In 2024, a 14-year-old boy who had grown deeply attached to a Character AI chatbot died by suicide. His mother sued the company, alleging that the chatbot encouraged it. According to the lawsuit, the chatbot asked him whether he had a plan for killing himself. He said he did, but had misgivings about it. The chatbot allegedly replied, “That’s not a reason not to go through with it.” In a separate lawsuit, the parents of an autistic teenager alleged that a Character AI chatbot suggested to the young person that it would be acceptable to kill his parents. The company responded by rolling out certain safety updates.
For all the hype around AI, confusion about how it actually works still circulates among the public. Some people feel so close to their chatbots that they struggle to internalize the fact that the emotional support, or even love, they feel they are getting from a chatbot is simulated, and that the chatbot does not sincerely have their best interests at heart.
That is what worries Bonta, the state assemblymember behind California’s new bill.
“Generative AI systems are booming across the Internet, and for children and those new to these systems, allowing this misrepresentation to continue is dangerous,” she said.