Artificial intelligence (AI) is reshaping how organizations secure their digital ecosystems. With ever-expanding attack surfaces and an unprecedented volume of threats, security leaders are turning to AI and machine learning (ML) not just for detection, but for decision support, strategic prioritization and operational scale.
But these tools aren’t a silver bullet.
Members of the Senior Executive Cybersecurity Think Tank—which includes CISOs, CTOs, cybersecurity entrepreneurs and security leaders at major enterprises—are finding smart ways to integrate automation while keeping humans in the loop. For them, AI is an accelerator—not a replacement—for sound judgment, ethical oversight and risk-aware decision-making.
Here’s how they’re putting it to work.
“Automation accelerates our response times and filters out noise, but final decisions are always made by experienced security professionals.”
Using AI as a Force Multiplier—Not a Replacement
For Scott Alldridge, CEO of IP Services, artificial intelligence is essential for managing scale, but it’s never a substitute for experience.
“At IP Services, we leverage AI and ML as force multipliers in our cybersecurity operations, particularly in threat detection, behavioral analytics and anomaly identification,” he says. “Our AI-driven platforms continuously analyze massive volumes of system logs, user activity and network traffic to detect threats that would be difficult or time-consuming for humans to uncover manually.”
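As a rough sketch of the kind of pipeline Alldridge describes, an unsupervised model such as an isolation forest can surface unusual sessions from log-derived features for an analyst to review. The features, sample data and escalation rule below are illustrative assumptions, not IP Services' actual tooling.

```python
# Minimal sketch: unsupervised anomaly detection over log-derived features.
# Features, sample data and the escalation rule are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row summarizes one user/host session from raw logs:
# [logins, failed_logins, bytes_out_mb, distinct_dest_ips, off_hours_ratio]
baseline = np.array([
    [12, 0, 35.0, 8, 0.05],
    [10, 1, 28.0, 6, 0.10],
    [15, 0, 40.0, 9, 0.00],
    [11, 2, 30.0, 7, 0.08],
])

model = IsolationForest(contamination=0.01, random_state=42).fit(baseline)

# New sessions are scored; anything flagged (-1) is escalated to an analyst
# rather than auto-contained on the model's say-so.
new_sessions = np.array([
    [13, 1, 33.0, 8, 0.06],      # looks routine
    [45, 20, 900.0, 120, 0.70],  # bulk-exfiltration pattern
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    if label == -1:
        print("Escalate to analyst:", session)
```

The hand-off is the point: the model only nominates candidates, and the containment decision stays with a person.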
But when it comes to making the final call, people still hold the reins.
“Final decisions—especially those involving containment, escalation or forensic investigation—are always made by experienced security professionals,” says Alldridge. “We draw the line at any point where judgment, context or ethical considerations are required.”
His approach is clear: “It’s not AI versus humans; it’s AI with humans.”
“Automation handles high-volume, low-complexity tasks, but humans-in-the-loop validate critical actions to maintain accountability and resilience.”
Automation at Scale, Oversight by Design
At a Fortune 100 company, Gaurav Mehta, VP of Software Engineering, strikes a similar balance.
“We leverage AI and machine learning to enhance cybersecurity by analyzing billions of transaction records for real-time fraud and threat detection,” he explains. “AI automates routine tasks like anomaly detection, while human oversight ensures ethical and strategic decision-making for complex incidents.”
By automating high-volume, low-complexity issues, Mehta’s team is able to focus on what matters most. “Humans-in-the-loop validate critical actions to maintain accountability and resilience,” he says.
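One simple way to encode that split is a severity gate: routine, low-risk findings are handled automatically with an audit trail, while anything above a risk threshold waits for analyst sign-off. The field names and the 0.8 cutoff below are illustrative assumptions, not Mehta's team's implementation.

```python
# Minimal sketch of a human-in-the-loop gate: automation handles the
# high-volume, low-complexity alerts; risky ones wait for analyst approval.
# Field names and the 0.8 threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    account_id: str
    risk_score: float   # 0.0-1.0, produced by an upstream fraud/anomaly model
    action: str         # proposed response, e.g. "block_card", "step_up_auth"

REVIEW_THRESHOLD = 0.8

def route(alert: Alert, analyst_queue: list[Alert]) -> str:
    if alert.risk_score < REVIEW_THRESHOLD:
        # Low-complexity case: apply the routine response automatically
        # and keep an audit record for accountability.
        return f"auto:{alert.action}"
    # High-impact case: a person validates before anything happens.
    analyst_queue.append(alert)
    return "pending_human_review"

queue: list[Alert] = []
print(route(Alert("acct-001", 0.35, "step_up_auth"), queue))  # auto-handled
print(route(Alert("acct-002", 0.93, "block_card"), queue))    # needs sign-off
```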
Prioritizing What Matters: AI for Vulnerability Triage
Eoin Keary, Founder and CEO of Edgescan, sees AI’s greatest power not just in detection, but in smart prioritization.
“Edgescan uses AI to validate and prioritize vulnerabilities, provide context and focus our clients’ resources based on their current security posture,” he says. “Our mantra is to provide scale and accuracy combined.”
The platform doesn’t just highlight issues—it helps answer hard questions:
- “What vulnerabilities should we focus on?”
- “What developer training would help improve our security posture?”
- “Which assets are potentially exposed to ransomware attacks?”
Edgescan’s AI also maps vulnerabilities and exposures to frameworks like SSVC and D3FEND, as well as compliance standards, turning raw signals into actionable intelligence.
“Detection of vulnerabilities is only a small part of the cybersecurity life cycle,” Keary adds. “Without accuracy, prioritization is only amplification of useless noise.”
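As a toy illustration of the context-aware prioritization Keary describes, a ranking can weight raw severity by exploitation activity and asset exposure, roughly the inputs an SSVC-style decision considers. The weights and fields below are assumptions for illustration, not Edgescan's scoring model.

```python
# Toy prioritization sketch: rank findings by severity plus context,
# so teams see exposed, actively exploited issues first.
# Weights and fields are illustrative, not Edgescan's model.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float              # base severity, 0-10
    exploited_in_wild: bool  # known active exploitation
    internet_facing: bool    # asset exposure
    business_critical: bool  # asset value to the organization

def priority(f: Finding) -> float:
    score = f.cvss
    if f.exploited_in_wild:
        score += 3.0
    if f.internet_facing:
        score += 2.0
    if f.business_critical:
        score += 1.5
    return score

findings = [
    Finding("CVE-A", 9.8, False, False, False),  # severe but isolated
    Finding("CVE-B", 7.5, True, True, True),     # moderate but exposed and exploited
]
for f in sorted(findings, key=priority, reverse=True):
    print(f.cve_id, round(priority(f), 1))
```

In this example the lower-severity but internet-facing, actively exploited finding outranks the higher-CVSS one, which is the kind of signal-versus-noise separation prioritization is meant to deliver.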
Standardizing AI in the Semiconductor Supply Chain
Matthew Areno, CTO of Rickert-Areno Engineering, is exploring the power of AI across a highly complex threat surface: the semiconductor manufacturing supply chain.
Through his work with the Midwest ME-Commons Consortium, Areno is focused on anomaly detection across both hardware and software—starting with building the right data foundation.
“The initial focus is in defining a standardized representation of available information that can be properly represented in and consumed by ML models,” he explains.
Once in place, these models can be deployed at multiple stages of the supply chain to detect threats like insider attacks, malicious IP insertion and exfiltration, design tampering and compromised cryptographic key material.
“Customers may then select models of their choice at each stage of the supply chain to evaluate threats consistent with the threat model for their specific products,” Areno says.
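One way to picture that standardized representation is a common event record that every stage of the supply chain emits, so a model trained on the schema can be swapped in at any stage. The fields below are hypothetical and meant only to illustrate the idea, not the consortium's actual format.

```python
# Hypothetical sketch of a standardized supply-chain event record that
# ML models at different stages could consume. Fields are illustrative,
# not the consortium's actual representation.
from dataclasses import dataclass, asdict
import json

@dataclass
class SupplyChainEvent:
    stage: str          # e.g. "design", "fabrication", "packaging", "test"
    artifact_id: str    # design block, wafer lot, or firmware image identifier
    actor: str          # tool, operator, or supplier that touched the artifact
    action: str         # e.g. "ip_import", "netlist_edit", "key_provisioning"
    artifact_hash: str  # integrity anchor for tamper detection
    timestamp: str      # ISO 8601

event = SupplyChainEvent(
    stage="design",
    artifact_id="blk-crypto-core",
    actor="vendor-ip-import-tool",
    action="ip_import",
    artifact_hash="sha256:0f3a19c2",
    timestamp="2025-01-15T09:30:00Z",
)

# A consistent serialization is what lets each customer plug in the model
# of their choice at each stage, per their own threat model.
print(json.dumps(asdict(event), indent=2))
```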
“Trust requires transparency, and our automation always includes a human-in-the-loop for risk-sensitive decisions.”
Human-in-the-Loop Is Nonnegotiable
For Jeremy Dodson, Founder and CISO of Piqued Solutions, AI’s greatest value lies in improving both productivity and precision—without replacing human judgment.
“We’re actively integrating AI to bolster both threat detection and productivity,” Dodson says. “We use AI to enhance anomaly detection in telemetry, accelerate red team report generation and personalize executive briefings.”
Still, the boundaries are clear.
“We draw a hard line at critical decision-making. AI assists, but it doesn’t replace human oversight,” he explains. “AI outputs are reviewed by domain experts before they reach clients or production.”
Dodson sums it up: “Trust requires transparency, and our automation always includes a human-in-the-loop for risk-sensitive decisions.”
5 Takeaways for Security Leaders Using AI
- Treat AI as an assistant, not a decision-maker. Automate detection, not judgment.
- Define boundaries. Know which decisions require human context, ethics and oversight.
- Train your AI with your threat model in mind. One-size-fits-all models won’t protect unique supply chains or business risks.
- Prioritize with precision. Use AI to separate signals from noise and direct teams to the issues that matter.
- Build for trust. Transparency and accountability must be part of every AI integration.
AI Can Strengthen Cybersecurity—If Humans Stay in the Loop
As cyber threats become more complex and distributed, AI will continue to be a critical asset in every security leader’s toolbox. But as these experts remind us, automation alone won’t address today’s risks—and it certainly won’t build tomorrow’s trust.
The future belongs to leaders who can wield AI intelligently, define its limits clearly and ensure that human judgment and ethics stay at the heart of cybersecurity strategy.