OpenAI will restrict how ChatGPT responds to a user it suspects is under 18, unless that user passes the company’s age estimation technology or provides ID, after legal action from the family of a 16-year-old who killed himself in April after months of conversations with the chatbot.
OpenAI was prioritising “safety ahead of privacy and freedom for teens”, chief executive Sam Altman said in a blog post on Tuesday, stating “minors need significant protection”.
The company said that the way ChatGPT responds to a 15-year-old should look different to the way it responds to an adult.
Altman said OpenAI plans to build an age-prediction system to estimate age based on how people use ChatGPT, and if there is doubt, the system will default to the under-18 experience. He said some users “in some cases or countries” may also be asked to provide ID to verify their age.
“We know this is a privacy compromise for adults but believe it is a worthy tradeoff.”
How ChatGPT responds to accounts identified as being under 18 will change, Altman said. Graphic sexual content will be blocked, and the chatbot will be trained not to flirt with under-18 users or engage in discussions about suicide or self-harm, even in a creative writing setting.
“And if an under-18 user is having suicidal ideation, we will attempt to contact the user’s parents and if unable, will contact the authorities in the case of imminent harm.
“These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent with our intentions,” Altman said.
OpenAI admitted in August that its systems could fall short and that it would install stronger guardrails around sensitive content, after the family of 16-year-old Californian Adam Raine sued the company over the teen's death.
The Guardian