In the absence of stronger federal regulation, some US states have started regulating apps that offer “treatment” using artificial intelligence, amid increasing use of AI for mental health advice.

However, none of the laws passed this year fully covers the rapidly evolving landscape of AI programs. App developers, policymakers, and mental health advocates say state laws are insufficient to protect users or to hold the creators of harmful technology accountable.

Karen Andrea Stefan, executive director and co-founder of the mental health chat app “Air Kik,” said, “The truth is millions of people use these tools, and they are not going to stop.”

State laws vary: Illinois and Nevada banned the use of AI for mental health treatment, while Utah imposed certain restrictions on therapy chatbots, including requirements to protect users' health information and to clearly state that the chatbot is not human. Pennsylvania, New Jersey, and California are considering ways to regulate AI therapy.

The impact on users varies; some apps have been blocked in states with bans, while others await clearer legal guidance before making changes.

Many of the laws do not cover generative chatbots like ChatGPT, which are not promoted as therapy but are used by an unknown number of people for that purpose. These bots have faced lawsuits over tragic cases in which users lost touch with reality or died by suicide after interacting with them.

Valerie Wright, who oversees healthcare innovation at the American Psychological Association, said these apps are filling a gap created by a national shortage of mental health providers, the high cost of care, and unequal access even among insured patients.

She added that science-based, human-supervised mental health chatbots could change the situation by helping people before crises occur, but noted, “This is not what is currently on the commercial market.”

She said that achieving this will require federal regulation and oversight.

Earlier this month, the Federal Trade Commission announced investigations into seven AI chatbot companies, including the parent companies of Instagram and Facebook, Google, ChatGPT, Grok (the chatbot on X), and Snapchat, over how they "measure, test, and monitor potential negative impacts of this technology on children and adolescents." The US Food and Drug Administration will hold an advisory committee meeting on November 6 to review AI-powered mental health devices.

Wright said federal agencies are considering measures such as restricting how chatbots are marketed, limiting addictive practices, requiring them to disclose that they do not provide medical advice, requiring companies to track and report suicidal thoughts, and providing legal protection for whistleblowers who report harmful corporate practices.

AI use in healthcare spans everything from "companion apps" to AI-powered "therapists" to mental health apps, making it difficult to define and regulate.

This has led to divergent regulatory approaches. Some states target companion apps designed solely for friendship, while Illinois and Nevada have banned products that claim to provide mental health treatment, with fines of up to $10,000 in Illinois and $15,000 in Nevada.

Stefan said there is still a great deal of ambiguity about the Illinois law, and her company's app, "Air Kik," has not been banned there. Initially, the company avoided calling its chatbot a therapist, but when users began using that term in reviews, it adopted the word so the app would appear in search results.

Last week, the company went back to avoiding therapeutic and medical terms. The "Air Kik" website had described its chatbot as "your compassionate AI advisor," designed to support users' mental health journey, but now calls it a "self-care bot."

Stefan confirmed the bot does not “diagnose.”

She expressed concern about states’ ability to keep pace with rapid innovation, saying, “The speed at which everything is evolving is tremendous.”

However, one chatbot company is attempting to fully emulate therapy.

In March, a Dartmouth College team published the first randomized clinical trial of a generative AI chatbot for treating mental health issues.

The chatbot, called "Therabot," is designed to treat people with anxiety, depression, or eating disorders.

The study found that users rated the app comparably to a human therapist and reported significantly reduced symptoms after eight weeks compared with people who did not use it. A human monitors every interaction with the bot and intervenes if its responses are harmful or not supported by evidence.

Nicholas Jacobson, the clinical psychologist who leads the research lab, said the results are promising but that larger studies are needed to show whether the bot works for broader populations.