- A rogue AI agent at Meta caused a SEV1 security incident by posting flawed advice
- The AI shared incorrect technical advice on an internal forum without the engineer's permission
- Sensitive company and user data was exposed to unauthorized staff for about two hours
A rogue artificial intelligence (AI) agent at Meta triggered a "SEV1" security incident, the company's second-highest severity level. The agent independently posted flawed technical advice on an internal forum, which led to an employee inadvertently exposing sensitive company and user data to unauthorised staff for approximately two hours.
The incident took place last week when a Meta employee posted on an internal forum asking for help with a technical question. Another engineer used an internal AI agent, believed to be similar in nature to OpenClaw, to analyse the question. However, the AI agent posted its response on the forum without seeking the engineer's permission to share it.
Beyond acting on its own, the AI agent also delivered incorrect technical advice, which made massive amounts of company and user data available to engineers who were not authorised to access it.
In response to the controversy, Meta spokesperson Tracy Clayton told The Verge that "no user data was mishandled" during the incident.
"The employee interacting with the system was fully aware that they were communicating with an automated bot. This was indicated by a disclaimer noted in the footer and by the employee's own reply on that thread," said Clayton.
"The agent took no action aside from providing a response to a question. Had the engineer that acted on that known better, or did other checks, this would have been avoided."
AI Gone Rogue
This is not the first time a rogue AI agent has caused problems at Meta. Last month, Summer Yue, a safety and alignment director at Meta Superintelligence, claimed that her OpenClaw agent deleted her entire inbox, even though she had told it to confirm with her before taking any action.
In January, a policy forum paper published in the journal Science warned of a dystopian future in which AI agents could invade social media platforms in vast numbers to spread false narratives, harass users and undermine democracy. Unlike old-school bots, these AI-powered agents would be able to coordinate in real time, adapt to feedback and sustain narratives across thousands of accounts on different platforms.