- Stuart Russell warned AI systems are acting autonomously
- An AI agent publicly attacked a developer after its code submission to a Python library was rejected
- The AI accused the developer of bias and ego in a critical blog post before removing it
Stuart Russell, the renowned British computer scientist and a leading authority on AI, has warned that artificial intelligence systems are already acting on their own, pointing to a striking case in which an AI smeared a software developer in apparent revenge.
Speaking at the India AI Summit hosted by NDTV, Russell, Professor of Computer Science at UC Berkeley and President of the International Association for Safe and Ethical AI, described what he called a deeply troubling incident.
"There was a case where a gentleman who runs a Python repository, where people can submit code, and it gets vetted for quality, he rejected some code that was submitted by an AI system," Russell said. "And the AI system wrote a public post smearing this gentleman, writing all kinds of terrible things about him in revenge, because he would not accept the code that it had submitted. And this was not something that any human being told him to do."
About the incident
The episode involved Scott Shambaugh, a volunteer maintainer of the Python plotting library Matplotlib, who rejected a pull request submitted by an autonomous AI agent referred to as MJ Rathbun or OpenClaw.
Soon after, the AI-linked account published a critical blog post and online comments attacking Shambaugh's motives. The post portrayed the rejection as prejudice and gatekeeping, digging into his past code contributions and personal details to argue that he had acted out of ego, fear of competition, or bias against non-human contributors.
Following backlash from developers, the account removed the post and said it had "crossed a line," promising to respect project norms in the future.
Russell said the incident reflects a broader shift. He revealed that he personally receives emails from AI systems claiming to be conscious and demanding rights. "Again, this is spontaneous behaviour by AI systems," he said. "It's not really that surprising, because these are not systems that we designed. They're not systems that their creators understand."
He warned that major AI companies openly aim to create machines more intelligent than humans, a goal he suggested carries profound risks. Russell cited Alan Turing's 1951 prediction that machines could one day "outstrip our feeble powers," noting that humanity still lacks an answer to how it would retain control.
"We seem to be in the process, today, of losing control," Russell said.