In the absence of stronger federal regulation, some US states have begun regulating apps that offer “therapy” through artificial intelligence, as more people turn to AI for mental health advice.
However, none of the laws passed this year fully covers the rapidly evolving landscape of AI software. App developers, policymakers, and mental health advocates say the state laws are insufficient to protect users or to hold the creators of harmful technology accountable.
Karin Andrea Stephan, chief executive and co-founder of the mental health chat app Earkick, said, “The truth is millions of people use these tools, and they are not going to stop.”
State laws vary: Illinois and Nevada have banned the use of AI for mental health treatment, while Utah imposed certain restrictions on therapy chatbots, including requiring them to protect users’ health information and to clearly disclose that the chatbot is not human. Pennsylvania, New Jersey, and California are exploring ways to regulate AI therapy.
The impact on users varies. Some apps have blocked access in states with bans, while others await clearer legal guidance.
Many of the laws do not cover generative chatbots like ChatGPT, which are not marketed for therapy but are used by an unknown number of people for that purpose. These bots have faced lawsuits over tragic incidents in which users lost touch with reality or died by suicide after interacting with them.
Vaile Wright, who oversees healthcare innovation at the American Psychological Association, said these apps are filling a gap created by a nationwide shortage of mental health providers, the rising cost of care, and uneven access even among insured patients.
She added that scientifically based mental health chatbots created with expert input and human oversight could change the situation.
“These bots could be something that helps people before they reach a crisis,” she said, but noted that “this is not what is currently on the commercial market.”
She emphasized the need for federal regulation and oversight.
Earlier this month, the Federal Trade Commission announced investigations into seven AI chatbot companies, including the parent companies of Instagram and Facebook, Google, ChatGPT, Grok (the chatbot on X), and Snapchat, examining how they measure, test, and monitor potential negative impacts on children and teens. The US Food and Drug Administration will convene an advisory committee on November 6 to review AI-powered mental health devices.
Wright said federal agencies are considering restricting how chatbots are marketed, limiting addictive practices, requiring disclosures that they do not provide medical advice, mandating the tracking and reporting of suicidal thoughts, and providing legal protections for whistleblowers who report bad corporate practices.
From “companion apps” to AI-powered therapists to mental health apps, AI use in healthcare varies widely and is difficult to define and legislate.
This leads to regulatory differences. Some states target companion apps designed solely for friendship but do not address mental health care. The Illinois and Nevada laws ban products that claim to provide mental health treatment, with fines of up to $10,000 in Illinois and $15,000 in Nevada.
Stephan said there is still much ambiguity about the Illinois law, and her company has not blocked access in the state.
Initially, Stephan and her team avoided calling their chatbot, which looks like a cartoon panda, a therapist. But when users began using the term in reviews, they embraced it so the app would appear in search results. Last week, the company reverted to avoiding therapy and medical terminology, and it now describes the bot as a “self-care chatbot.”
Stephan emphasized that the bot does not “diagnose.” She is glad people are approaching AI critically but worries about states’ ability to keep pace with such rapid innovation.
One team, however, is trying to take on therapy in full.
In March, a Dartmouth College team published the first randomized clinical trial of a generative AI chatbot to treat mental health issues.
The goal was to create a chatbot called “Therabot” to treat anxiety, depression, or eating disorders.
The study found that users rated the app similarly to a therapist and that their symptoms decreased significantly after eight weeks compared with people who did not use it. Every interaction with the bot was monitored by a human, who intervened if the bot’s response was harmful or not evidence-based.
Nicholas Jacobson, the clinical psychologist leading the research, said the results are promising but that larger studies are needed to show whether the bot works for a broad population.