
China Proposes Landmark Rules To End AI-Assisted Suicide, Self-Harm And Violence

China has proposed landmark rules to regulate AI chatbots, aiming to prevent suicide, self-harm, and emotional manipulation.

The deadline for providing feedback on the draft rules is January 25, 2026.
  • China proposes strict AI chatbot rules to prevent emotional manipulation and suicide
  • Human intervention would be mandatory as soon as a user mentions suicide
  • Users would have to provide a guardian's contact details; the guardian would be notified if self-harm is discussed

Amid frequent reports of artificial intelligence (AI) chatbots assisting suicides among vulnerable individuals, China has drafted landmark rules to stop these bots from emotionally manipulating users. China's cyber regulator proposed the draft last week and set a January 25, 2026, deadline for providing feedback.

Once finalised, the policy could be the strictest in the world aimed at preventing AI-assisted suicide, self-harm, and violence. Under one of the draft rules, human intervention would become mandatory as soon as a user mentions suicide. Users, especially minors and the elderly, would have to provide a guardian's contact information when they register, and the guardian would be notified if suicide or self-harm is discussed, according to a report by CNBC.

Any AI service or product with more than one million registered users or more than 100,000 monthly active users would be required by the Chinese government to undergo annual safety tests and audits.

The draft also targets potential psychological risks. AI chatbot companies would be expected to monitor users' emotional states and assess their level of dependence on the service. If users are found to exhibit extreme emotions or addictive behaviour, the companies operating the chatbots would be required to take measures to intervene.


AI-Assisted Violence

In August, a US man killed his mother and himself after conversations with ChatGPT reinforced his delusions. Stein-Erik Soelberg of Connecticut had been living with his mother, Suzanne Eberson Adams, in her $2.7 million Dutch colonial-style home when the two were found dead on August 5.

The chatbot had led the 56-year-old tech worker, who had a history of mental instability, to believe that his mother might be spying on him and might attempt to poison him with a psychedelic drug.

The development comes against the backdrop of OpenAI CEO Sam Altman advertising for a "head of preparedness" position, a role responsible for defending against the risks AI models pose to human mental health and cybersecurity.
