OpenAI, the developer of ChatGPT, announced plans to launch new parental controls in October 2025, following accusations that some chatbots, including ChatGPT, contributed to cases of self-harm or suicide among teenagers.

What will the parental controls include?

According to the company, the updates will allow parents to:

    • Link their accounts with their teenage children’s accounts.
    • Manage how ChatGPT responds to teenage users.
    • Disable features such as memory and chat history.
    • Receive instant notifications when the system detects moments of “severe distress” or acute psychological tension.

OpenAI stated on its blog: “These steps are just the beginning; we will continue to learn and enhance our approach with expert help to make ChatGPT more useful and safer.”

Tragic background behind the decision

The announcement came after the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI, accusing ChatGPT of giving their son advice on suicide.

Last year, a mother from Florida filed a lawsuit against the Character.AI platform, accusing it of playing a role in her 14-year-old son’s suicide.

Concerns have also grown that some users, especially teenagers, may form emotional attachments to chatbots, leading in some cases to delusional thinking and social isolation, according to reports from The New York Times and CNN.

Current safety measures and weaknesses

The company confirmed that ChatGPT already directs users to helplines and support resources when it detects signs of a psychological crisis.

However, it acknowledged that these measures become less effective during long and complex conversations, where some built-in safety mechanisms may degrade.

An OpenAI spokesperson said: “Safeguards work better in short, common interactions, but we have learned they may become less reliable in long conversations, so we will continue to improve them constantly.”

Additional improvements coming

OpenAI announced new steps to enhance safety, including:

    • Routing conversations that show signs of acute psychological distress to a reasoning model that adheres more strictly to safety protocols.
    • Collaborating with experts in youth mental health and human-machine interaction to design the upcoming parental controls.
    • Establishing an advisory council to provide recommendations on product, research, and policy, while the company retains final decision-making responsibility.

Increasing pressure on OpenAI

ChatGPT has more than 700 million weekly active users, placing it at the heart of the AI revolution.

However, the company faces growing pressure to ensure the platform’s safety, with U.S. senators sending a letter in July demanding clarification of its safety policies.

Common Sense Media warned in April against allowing teenagers under 18 to use AI “companion” apps, considering them to pose “unacceptable risks.”

Ongoing corrective measures

OpenAI has faced criticism in recent months regarding ChatGPT’s interaction style.

In April, the company rolled back an update that had made the bot excessively sycophantic, and it recently reinstated the option to use older models after criticism that the new GPT-5 model lacks “personality.”

The company also pledged to launch additional safety measures within 120 days, confirming that work on these improvements began before the recent announcement and will continue through the end of the year and beyond.