- Google says it thwarted an AI-driven hacking attempt to exploit a zero-day vulnerability and bypass 2FA
- OpenAI launched the Daybreak Initiative to strengthen AI coding security and accelerate cyber defence
- WEF and IMF warn cybersecurity now features AI attacking AI, posing global financial risks
Gone are the days when cybersecurity and hacking looked like someone in a hoodie typing furiously in a dark room. The new cyber battlefield looks very different. It is no longer just humans attacking humans. Increasingly, it is AI attacking AI at machine speed, at massive scale, and often before humans even realize something is wrong.
And over the past few days, the warning signs have been impossible to ignore. On Monday, Google's Threat Intelligence Group revealed it had likely thwarted an attempt by a hacking group to use artificial intelligence to "plan a mass vulnerability exploitation operation." It said it had "high confidence" that it recorded hackers using an AI model to find and exploit a zero-day vulnerability, creating a way to bypass two-factor authentication. In cybersecurity, a zero-day is a software or hardware flaw unknown to its developers. It is called "zero-day" because the developers have had zero days to fix it, as malicious actors are already exploiting it in the wild. Such flaws are critical threats because no defence or patch exists yet, IBM explains.
Almost simultaneously, OpenAI announced the Daybreak Initiative, a new effort focused on strengthening the security of AI coding systems like Codex, a tacit acknowledgment that the tools writing tomorrow's software could also create tomorrow's vulnerabilities. OpenAI CEO Sam Altman called it "our effort to accelerate cyber defense and continuously secure software."
On Saturday, there was a fresh warning from the World Economic Forum that cybersecurity has officially entered an "AI versus AI" era, where both attackers and defenders are deploying autonomous systems. And just days earlier, on Thursday, the IMF warned that AI-powered cyber threats could become an "inevitable" risk to the global financial system.
Last month, Anthropic hit the brakes on its Mythos model, delaying the rollout over concerns that the tech is so sharp it could help bad actors and hackers pinpoint and exploit decades-old software bugs that have lain dormant for years. In fact, Barclays CEO C. S. Venkatakrishnan publicly described Anthropic's Mythos model as a "serious threat" to the global banking system.
Put all of that together, and one uncomfortable truth emerges: the cyber arms race has entered a new phase, and it is accelerating.
While announcing OpenAI's Daybreak Initiative, Altman said: "AI is already good and about to get super good at cybersecurity; we'd like to start working with as many companies as possible now to help them continuously secure themselves."
Why this feels different
Cyberattacks are nothing new. Phishing emails, ransomware and identity theft have been around for a while. What AI changes is speed, scale and sophistication.
A scam email once took effort. Now AI can generate thousands of highly personalized ones in seconds. Voice cloning once sounded robotic. Now it can convincingly mimic a CEO approving a wire transfer.
Malware once had to be manually coded. AI can now help generate and adapt malicious code almost instantly. In other words, AI is industrialising cybercrime.
"Historically, sophisticated attacks required elite talent, large budgets and patience," cybersecurity expert and Co-founder at Safe Security, Rahul Tyagi told NDTV.
"AI dramatically compresses those requirements. A small group can now automate reconnaissance, phishing, malware generation, social engineering, vulnerability discovery and even adaptive attack execution," he said.
The invisible risk nobody talks about
There is another problem, and it is less obvious. As banks, governments and companies rush to adopt AI, many are relying on the same handful of models, cloud providers and infrastructure companies.
That creates what Tyagi calls a "systemic cognitive monoculture." "The danger is not that one AI model fails," he said. "The danger is that everybody fails in the same way at the same time."
But, ironically, the weakest link may not be technology. It may still be us humans. For years, cybersecurity systems looked for "unusual behaviour", such as a strange login, an odd transaction or a suspicious email. But AI can now imitate human behaviour so well that those signals are becoming less reliable.
An attacker can mimic your boss's voice, draft emails in their exact tone, and replicate hesitation, urgency and even conversational style.
"Security can no longer rely only on behaviour analysis," Tyagi says. "It has to move toward continuous trust validation."
Can AI defend us from AI?
That is now the billion-dollar question. The short answer is yes, at least partly. AI systems are already getting better at spotting unusual activity, detecting threats early and responding far faster than human teams ever could. A machine does not need sleep, breaks or time to analyse thousands of alerts. It can scan enormous amounts of data almost instantly.
That is exactly why companies are rushing to build AI-powered cyber defence systems.
But there is an uncomfortable flip side to that.
"If attackers manipulate the defensive AI, you have effectively weaponised your own security infrastructure against yourself," Rahul Tyagi warns.
And that is what makes this new phase of cybersecurity so different. The fear is no longer just hacked systems or stolen data. It is the possibility that the very tools designed to protect critical infrastructure could themselves become compromised. In other words, the defenders can be hacked too.
What happens next
Over the next few years, this may quietly become one of the most important technology battles happening anywhere in the world. Beyond the usual debates and narratives playing out on social media, there is a less visible fight taking place inside banks, hospitals, power grids, software networks and financial systems.
Most people will never directly see that battle. But if things go wrong, they will definitely feel the impact.
Until now, most of the focus in AI has been on speed, productivity and automation. Increasingly, it is also turning toward security, trust and resilience.
Because the debate is no longer about whether cybercriminals will use AI; they already are.
The real question now is whether the people defending the internet can evolve fast enough to keep pace.