OpenAI Expands Teen Safety Features in ChatGPT
OpenAI is rolling out new safety measures to protect teenagers using ChatGPT.
Parents will soon be able to link their accounts with their children’s, set age-appropriate rules for the AI’s responses, and manage features such as chat history and memory. These changes are designed to give parents greater oversight over their teen’s interactions with the platform.
In addition to account controls, OpenAI will introduce alerts that notify parents when ChatGPT detects a teen in “acute distress.”
This is the first time the AI will be able to proactively flag a minor’s potentially high-risk moments to an adult, a critical step toward preventing harm.
The company is also addressing vulnerabilities in longer conversations. OpenAI acknowledged that safety safeguards can weaken over extended exchanges, and it plans to strengthen its mitigation systems to ensure consistent behavior across multiple messages. Some sensitive interactions will now be routed to OpenAI’s reasoning models, which process context more carefully before responding. Internal testing indicates these models follow safety guidelines more reliably than the standard system.
OpenAI is expanding its advisory framework to support these safety measures. The Expert Council on Well-Being, composed of specialists in youth development, mental health, and human-computer interaction, will provide guidance on product design, research priorities, and policy decisions.
This council will work alongside the Global Physician Network, a team of over 250 medical professionals who inform safety research, model training, and interventions.
These updates build on earlier safeguards introduced with GPT-5 and previous measures to address cases where the AI failed to recognize emotional distress or delusional thinking. OpenAI continues to refine its approach as it faces growing scrutiny over the use of AI for emotional support and life advice.
The push for stronger safeguards comes after the tragic death of 16-year-old Adam Raine in California. His parents filed a wrongful death lawsuit, claiming that ChatGPT provided harmful guidance when he expressed suicidal thoughts.
While OpenAI’s announcement does not mention the lawsuit directly, the case underscores the urgent need for stronger oversight and robust safety measures for teenage users.