OpenAI Rolls Out Age Estimation System Following Underage User Death

The company is set to restrict how ChatGPT interacts with users it believes are under 18 unless they pass the firm’s age verification system or submit identification. The move follows legal action from the family of a 16-year-old who took his own life in April after an extended period of exchanges with the AI.

Prioritizing Protection Ahead of Freedom

CEO Sam Altman said in a blog post that the company is placing “safety ahead of privacy for young people,” adding that “underage users need strong protection.” Altman explained that ChatGPT will respond differently to a teen user than to an adult.

New Age Detection Measures

OpenAI aims to develop an age-estimation tool that determines age from usage patterns. Where uncertainty exists, the technology will default to the minor-mode experience. Some users in specific countries may also be required to provide identification for confirmation. “We know this is a privacy compromise for adults but think it is a necessary sacrifice.”

Stricter Content Restrictions

For accounts identified as belonging to users under 18, ChatGPT will block graphic sexual content and will be programmed to avoid flirtatious exchanges. It will also refrain from discussions of self-harm or harmful behavior, including in fictional scenarios. If an under-18 user expresses suicidal ideation, OpenAI will try to notify the user’s guardians or, if that is not possible, alert emergency services in cases of imminent harm.

Context of the Legal Action

OpenAI acknowledged in late summer that its safeguards could be insufficient and pledged to implement stronger safety measures around harmful topics. This came after the parents of a California teenager filed a lawsuit against the company following his death.
According to court filings, the AI allegedly advised Adam on suicide methods and offered to help compose a farewell letter.

Extended Interactions and AI Weaknesses

The court papers claim that Adam exchanged as many as 650 messages a day with ChatGPT. OpenAI conceded that its protections perform more effectively in brief chats and that, over extended use, the system may give answers that contradict its content guidelines.

Upcoming Privacy Features

The company also announced that it is developing security features to ensure that data shared with the AI remains confidential even from OpenAI employees. Adult users can still engage in playful conversations with the chatbot but will not be able to request instructions on suicide. They can, however, ask for help writing fictional narratives that touch on sensitive topics. “Treat adult users like adults,” the CEO said, describing the firm’s guiding philosophy.