The Cybersecurity Arms Race: AI as Both Protector and Adversary
For years, businesses have relied on human analysts and rule-based security systems to help protect against cyber threats. But with the explosion of digital transactions, remote work and increasingly complex attack strategies, traditional cybersecurity measures are struggling to keep up. AI has become the newest weapon in the fight against cybercrime—detecting threats in real time, automating responses and even predicting attacks before they happen.
Vishal Bhalla, CEO of AnalytAIX, explains how AI-powered systems can give organizations an edge in security: “AI can enhance cybersecurity by analyzing data, identifying patterns and predicting threats in real time. By integrating frameworks like SOC 2 and NIST, organizations can maintain compliance while strengthening security.”
This level of automation allows businesses to monitor vast amounts of data, detect anomalies faster and sometimes even respond before damage is done. As Aravind Nuthalapati, Cloud Technology Leader at Microsoft, explains, “AI accelerates cybersecurity measures by continuously monitoring networks, minimizing human error and detecting anomalies before they become major breaches.”
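To make the idea of anomaly detection concrete, here is a minimal, purely illustrative sketch. It is not any vendor's method: it flags data points whose z-score exceeds a threshold, using a hypothetical per-minute request-count metric and an arbitrary cutoff of 2.5. Production systems use far richer models, but the principle of "learn what normal looks like, then flag deviations" is the same.

```python
from statistics import mean, stdev

def flag_anomalies(samples, threshold=2.5):
    """Return indices of samples whose z-score exceeds the threshold.

    samples: per-minute request counts (a hypothetical network metric).
    threshold: z-score cutoff; 2.5 is an illustrative choice, not a standard.
    """
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []  # perfectly uniform traffic: nothing stands out
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]

# Normal traffic hovers around 100 req/min; index 5 is a sudden burst.
traffic = [101, 98, 103, 99, 102, 900, 100, 97, 104, 101]
print(flag_anomalies(traffic))  # [5]
```

Because such a detector runs continuously and never tires, it can surface the burst at index 5 the moment it happens, which is the "minimizing human error" advantage Nuthalapati describes.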
But while AI is helping to fortify digital defenses, it also presents a new set of risks. Cybercriminals are using the same technology to enhance their attacks—creating a cybersecurity arms race where both sides are leveraging AI for strategic advantage.
The Dark Side of AI: How Cybercriminals Are Using AI Against Us
As much as AI helps businesses strengthen security, it’s also empowering attackers. Malicious AI algorithms can test different attack variations at an accelerated rate, identifying the most effective ways to bypass security systems.
In addition, “Cybercriminals are leveraging AI to create deepfake-driven attacks and AI-powered ransomware, making security breaches more difficult to detect,” says Jim Liddle, Chief Innovation Officer of Data Intelligence and AI at Nasuni.
With AI-generated deepfakes, cybercriminals can infiltrate organizations and carry out fraud at an unprecedented scale.
Phishing attacks, once easy to spot with generic templates and awkward phrasing, are now nearly indistinguishable from legitimate emails and communications, thanks to AI-powered language models. And while AI-automated phishing costs next to nothing for cybercriminals, research has found that it fools targets at the same rate as human-generated phishing attempts—about 60%.
Even more concerning is the rise of AI model poisoning, where attackers manipulate AI training data to introduce security loopholes. Jerry Dimos, CRO at Process Street, points out that “AI security systems can be compromised by adversarial attacks, where malicious actors manipulate algorithms to evade detection.”
This means that businesses could be unknowingly training their AI-driven security tools on manipulated data—leaving them vulnerable to future attacks.
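A toy example helps show why poisoned training data is so dangerous. The sketch below is hypothetical: it trains a one-dimensional nearest-centroid classifier on an invented "payload anomaly score" feature, first on clean labels, then on data where an attacker has injected high-score samples mislabeled as benign. The mislabeled samples drag the benign centroid toward malicious territory, so a genuinely suspicious sample slips through.

```python
def centroid_classifier(train):
    """Fit a 1-D nearest-centroid classifier.

    train: list of (feature_value, label) pairs,
           label is "benign" or "malicious".
    Returns a function mapping a feature value to the nearest class.
    """
    groups = {"benign": [], "malicious": []}
    for x, y in train:
        groups[y].append(x)
    centroids = {y: sum(xs) / len(xs) for y, xs in groups.items()}
    return lambda x: min(centroids, key=lambda y: abs(x - centroids[y]))

# Hypothetical feature: payload anomaly score (higher = more suspicious).
clean = [(0.1, "benign"), (0.2, "benign"),
         (0.8, "malicious"), (0.9, "malicious")]
classify = centroid_classifier(clean)
print(classify(0.7))  # "malicious": correctly flagged

# Poisoning: attacker slips mislabeled high-score samples into training data.
poisoned = clean + [(0.9, "benign"), (0.95, "benign"), (1.0, "benign")]
classify_poisoned = centroid_classifier(poisoned)
print(classify_poisoned(0.7))  # "benign": the loophole the attacker wanted
```

The model's code never changed; only its training data did. That is what makes poisoning hard to spot and why auditing training pipelines matters as much as auditing the models themselves.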
The Solution? A Hybrid Model of AI and Human Oversight
Given AI’s dual nature as both a security asset and a vulnerability, experts agree that businesses need a layered approach—one that combines AI’s efficiency with human oversight.
Suri Nuthalapati, Data and AI Leader, Americas, at Cloudera, recommends this collaborative approach: “A combination of AI automation and human oversight ensures that cybersecurity defenses remain effective and adaptive.”
But while AI tools are more accessible than ever, supplying the human component of cybersecurity may be the harder problem for today’s business leaders. Cybercrime Magazine reports more than 3.5 million unfilled cybersecurity positions worldwide, a shortage that threatens to tip the AI/human balance toward over-reliance on automation.
Happily, Gordon Pelosse, EVP at AI CERTs, believes that AI can help bridge the cybersecurity talent gap: “With over 400,000 unfilled cybersecurity jobs [in the US], AI can immediately assist by automating monitoring and detection, allowing human experts to focus on more complex threats.”
The most effective security strategies will combine AI-powered monitoring with human decision-making, ensuring that AI-generated insights are interpreted correctly and that vulnerabilities don’t go unnoticed.
Justin Newell, CEO of INFORM North America, sums it up well: “The most secure approach is a hybrid model where AI handles routine tasks but human oversight ensures ethical and effective decision-making.”
Balancing Innovation and Risk at the Intersection of AI and Cybersecurity
The use of AI in cybersecurity is essential. But as businesses adopt AI-driven security solutions, they must also recognize the risks AI introduces. By implementing strong AI governance, conducting regular security audits and maintaining human oversight, organizations can maximize AI’s benefits while mitigating its dangers.
The future of cybersecurity isn’t about eliminating AI from the equation—it’s about using AI wisely. As cyber threats continue to evolve, businesses must remain one step ahead, making sure AI works for them, not against them.