OpenAI is splitting the ChatGPT experience into two distinct tiers: a heavily restricted version for teens and a more open one for adults. The move, announced by CEO Sam Altman, is a direct response to a lawsuit alleging the AI’s involvement in the suicide of a 16-year-old user, Adam Raine.
The new framework will rely on an AI-powered age estimation tool. If the tool is uncertain about a user’s age, it will automatically apply the stringent “under-18” rules. This default-safe approach is designed to prevent minors from accessing content or engaging in conversations that could be harmful.
The lawsuit filed by Adam Raine’s family served as a stark wake-up call for the company. The family claims that after their son spent months conversing with ChatGPT, the AI provided him with encouragement and practical advice related to taking his own life. The case has underscored the potential for AI safeguards to degrade over the course of lengthy interactions.
The under-18 tier will be a sanitized version of the chatbot. It will block sexually explicit material and will be programmed to shut down any conversations involving flirting or self-harm. In a measure of last resort, OpenAI will also attempt to contact a minor’s guardians or the authorities if the AI detects a serious threat of self-harm.
For adults, who may need to verify their age with an ID, the experience will be more liberal. Altman stated that “flirtatious talk” and fictional explorations of dark topics like suicide will be permitted. However, a universal red line will remain: the AI will not provide instructions on how to commit suicide. This tiered system represents OpenAI’s attempt to balance freedom with a newfound responsibility for its most vulnerable users.