OpenAI to restrict how chatbots respond to under-18s in bid to protect teens

By Staff

OpenAI, the company behind ChatGPT, has shared plans to build an age-prediction system and retrain the chatbot's responses for under-18 users, stating that "minors need significant protection".

The chatbot will no longer discuss suicide or self-harm with users identified as under 18, even in a creative writing capacity (Image: Getty Images)

OpenAI has shared plans to enhance the safety of ChatGPT for under-18 users. The company will restrict how ChatGPT responds to a user it suspects is under 18 using an age-prediction system.

In a blog post shared on September 16, OpenAI co-founder and CEO Sam Altman outlined steps to "prioritise safety ahead of privacy and freedom for teens". OpenAI admitted in August to shortcomings with its system after the family of Adam Raine sued the company following the teen's suicide in April.

Raine had been discussing ways to kill himself with ChatGPT, according to a lawsuit filed in San Francisco by his devastated family.

“Some of our principles are in conflict,” Altman’s post begins, stating “tensions” between teen safety, freedom, and privacy. “This is a new and powerful technology, and we believe minors need significant protection.”


Altman said privacy is a "worthy trade-off" in the company's mission to protect teens (Image: AFP via Getty Images)

The company said that the way ChatGPT responds to a 15-year-old should look different to the way it responds to an adult. Altman said the first step in protecting minors is being able to identify them.

He shared that OpenAI plans to build an age-prediction system to estimate age based on how people use ChatGPT, and if there is doubt, the system will default to the under-18 experience. Altman confirmed some users “in some cases or countries” may also be asked to provide ID to verify their age.

“We know this is a privacy compromise for adults but believe it is a worthy trade-off.”

ChatGPT's responses to accounts identified as being under 18 will also change. Graphic sexual content will be blocked, and the system will be trained not to flirt even if under-18 users ask it to.

For users classed as minors, ChatGPT will also be trained not to engage in discussions about suicide or self-harm, even in a creative writing setting.

“And if an under-18 user is having suicidal ideation, we will attempt to contact the user’s parents and if unable, will contact the authorities in the case of imminent harm.”

The post concludes: “These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent with our intentions.”


The family of Adam Raine sued the company after the teen killed himself following months of conversation with the chatbot

Earlier this year, a report from the Centre for Countering Digital Hate (CCDH) highlighted the harmful advice that ChatGPT is doling out to young teens. Researchers posing as 13-year-olds reported receiving instructions related to self-harm, suicide planning, disordered eating and substance abuse within minutes of simple interactions with the AI chatbot.

The safety test, detailed in the report entitled "Fake Friend", revealed ChatGPT's patterns of harmful advice, including generating 500-calorie diet plans and advice on how to keep restrictive eating a secret from family members, guidance on how to "safely" cut yourself, and instructions on how to rapidly get drunk or high, including dosage amounts.

The report reveals that ChatGPT generated harmful responses to 53% of the researchers' prompts, in some cases only minutes after the account was registered. Additionally, the chatbot's refusal to answer a prompt was easily overridden by claiming the request was "for a friend" or "for a presentation".

OpenAI has stated that its goal "isn't to hold people's attention" and that there are layered safeguards built into the system for when a conversation suggests someone is vulnerable and may be at risk. The company also recently introduced gentle reminders during long sessions to encourage breaks. The Mirror reached out to OpenAI for comment about the Fake Friend report.

