Author:
Jason Lomberg, North American Editor, PSD
Date:
08/26/2025
Like every innovation since the first industrial revolution – and arguably before it – artificial intelligence has the potential to augment human capabilities and accelerate scientific discovery. But if we’ve truly reached a technological singularity, past the point of no return, we could also be creating an implacable nemesis. And humans could be especially ill-equipped to counter AI in the arena of cyberattacks.
TechRadar provides a succinct threat assessment: a single missile can run millions of dollars and cause limited damage, but an AI-powered cyberattack can cost next to nothing and trigger a cascading series of failures that disrupt entire economies and national defense capabilities.
In many ways, this mirrors the emerging cyber threats of the mid-to-late ‘90s and early 2000s, but here’s where the singularity comes into play. First off, as TechRadar points out, “AI-enabled cyberattacks are executed by autonomous agents or proxies, making attribution slow or impossible.”
And just like the fictional Skynet, our real-world AI is constantly learning and adapting, getting better at identifying and exploiting weaknesses.
Hackers can use AI to write more convincing phishing emails and to generate deepfakes and voice clones for identity-theft scams, but it gets far more high-tech than that.
As noted by the UK’s National Cyber Security Centre, “AI-assisted vulnerability research and exploit development (VRED)…enables access to systems through the discovery and exploitation of flaws in the underlying code or configuration.”
The NCSC posits 2027 as the year when critical systems could become more vulnerable to advanced (AI-enabled) threat actors. Fittingly, one market report estimates that the market for AI-based cybersecurity products could reach around $135 billion by 2030.
The latter, of course, points to the most obvious way to counter AI-based cyberattacks – fighting fire with fire. Just as we’ve leveraged IT professionals on both sides of the law, we can deploy defensive AI agents to offset malicious AI.
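To make the fire-with-fire idea concrete, here is a minimal sketch (in Python, assuming the scikit-learn library) of the kind of building block many AI-based security tools rest on: an anomaly detector trained on a baseline of normal network behavior that flags sessions deviating from it. The features and numbers below are hypothetical, invented purely for illustration, and aren’t drawn from any product or report cited here.

    # Illustrative only: a toy anomaly detector of the sort AI-based
    # security tooling builds on. Features and values are hypothetical.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical per-session features: [requests/min, bytes out, failed logins]
    rng = np.random.default_rng(0)
    normal_traffic = rng.normal(loc=[60, 5000, 0.1],
                                scale=[10, 1000, 0.3], size=(500, 3))

    # Learn what "normal" looks like, then flag deviations from it.
    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_traffic)

    # A burst of rapid requests with many failed logins stands out.
    suspicious = np.array([[600.0, 50000.0, 25.0]])
    print(detector.predict(suspicious))  # -1 = anomalous, 1 = normal

Real defensive AI is vastly more sophisticated, but the principle is the same: learn the baseline, then hunt the outliers – at machine speed.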
And the malignant AI has a potential advantage: much like the human body, where one failing system can trigger systemic collapse, a single downed security system can cause a domino effect that leads to catastrophic failure. Or the black-hat AI can simply inflict death by a thousand cuts – like the example TechRadar points to regarding the military-industrial complex.
The Navy, for example, relies on a decentralized network of defense contractors, and if AI-equipped adversaries target each contractor individually, they can erode our military preparedness through attrition in a way that’s far more gradual and insidious.
Still, AI-powered security tools may be our only defense against malicious AI.
Or as the NCSC points out, “Keeping pace with 'frontier AI' capabilities will almost certainly be critical to cyber resilience for the decade to come.”