
How firms can use agentic AI to enhance cybersecurity


Agentic AI is emerging as a powerful, autonomous ally in cybersecurity, with the potential to proactively spot, investigate, and respond to threats. This new class of AI is starting to transform how businesses handle cyber defence, provided they understand how agentic AI can act as a valuable security partner without itself being compromised.
Agentic AI, a branch of the AI ecosystem in which AI agents make autonomous decisions with minimal human intervention, is expected to become a major technological frontier, with a projected market size of around $47 billion by 2030, up from $5 billion last year. Unlike conventional AI systems, agentic AI functions independently: it sets its own goals, adapts to changing situations, and corrects course without human guidance. In cybersecurity, this translates into real-time detection and response that goes beyond basic automation.
Agentic AI meets cybersecurity

Sumit Srivastava, Solutions Engineering Director - India at CyberArk, believes that agentic AI can alleviate alert fatigue by autonomously isolating compromised systems and rewriting firewall rules, capabilities that are crucial in increasingly complex networks. However, this autonomy carries risks, he noted: “Unsecured AI agents can be exploited to escalate privileges and access sensitive systems without human oversight”.
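The autonomous containment Srivastava describes can be pictured as a simple decision loop. The sketch below is purely illustrative: the class names, the risk threshold, and the idea of reducing "rewriting firewall rules" to adding a deny entry are all assumptions for the sake of the example, not any vendor's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class FirewallPolicy:
    """Stand-in for a firewall the agent is allowed to modify."""
    blocked_ips: set = field(default_factory=set)

    def block(self, ip: str) -> None:
        # "Rewriting firewall rules" reduced to adding a deny entry.
        self.blocked_ips.add(ip)

@dataclass
class ResponseAgent:
    """Hypothetical agent that contains high-confidence threats on its own
    and defers ambiguous alerts to a human analyst queue."""
    policy: FirewallPolicy
    risk_threshold: float = 0.8
    quarantined: set = field(default_factory=set)
    escalations: list = field(default_factory=list)

    def handle_alert(self, host: str, source_ip: str, risk: float) -> str:
        if risk >= self.risk_threshold:
            self.quarantined.add(host)    # isolate the compromised system
            self.policy.block(source_ip)  # update the firewall autonomously
            return "contained"
        # Balanced control: low-confidence alerts go to a human, not the agent
        self.escalations.append((host, source_ip, risk))
        return "escalated"

agent = ResponseAgent(FirewallPolicy())
print(agent.handle_alert("web-01", "203.0.113.9", risk=0.93))  # contained
print(agent.handle_alert("db-02", "198.51.100.7", risk=0.40))  # escalated
```

The escalation branch is the important part: it is where the human oversight that Srivastava warns about is preserved, rather than letting the agent act on every alert.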
The integration of AI agents is expanding to cloud, endpoints, and even physical security. Yet with new power come new vulnerabilities, from adversaries mimicking AI behaviour to the risk of excessive automation without human checks.
A 2025 Deloitte report indicates that 25% of firms using GenAI will pilot agentic AI this year, while cybersecurity firm SailPoint reports that 98% of organisations plan to expand their use of AI agents. Yet 96% of tech workers see AI agents as a security risk.

Achyuth Krishna, Head of IT and InfoSec at Whatfix, said that CIOs and CTOs are under growing pressure to adopt AI quickly, often without a clear roadmap, which can open security vulnerabilities. A significant concern is AI hallucination, which can produce misleading outputs such as false threat alerts, among other issues.
Skill shortages across the AI and cyber domains further hinder implementation, leaving organisations more exposed to AI-driven attacks. “CIOs and CTOs must carefully navigate the tension between fostering innovation and ensuring that AI tools are deployed responsibly, securely, and in the evolving threat landscape,” he said.
Prakash Balasubramanian, EVP of Engineering Management at Ascendion, pointed to further challenges: unclean or siloed data, governance issues around opaque models, and a surge of non-human identities. These issues, he said, undermine return on investment and leave boards sceptical about whether AI spending genuinely mitigates risk.

“Moreover, many organisations are burdened with legacy infrastructure, complicating the integration of AI for security purposes. Technologists also encounter regulatory hurdles that frequently impede their ability to act swiftly,” said Balasubramanian.
As threats continue to evolve, AI also introduces new dangers, such as polymorphic malware and sophisticated social engineering, underscoring the need to combine AI with human oversight and traditional security measures, he said.
Mitigating AI-based threats

Mitigating these threats requires a multi-layered strategy, including zero-trust architecture and behavioural analytics for continuous user access verification. Regular AI system assessments are crucial to detect biases and hallucinations. Balasubramanian emphasised multi-factor authentication to counter credential-based attacks, alongside phishing-resistant techniques like passkeys, security integration throughout the development lifecycle, and ongoing employee training.
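The layered checks Balasubramanian describes can be sketched as a single access decision in which every layer must agree. This is a minimal illustration under assumed names and thresholds; the function, its signals, and the 0.7 anomaly cutoff are hypothetical, not a specific product's API.

```python
def verify_access(mfa_passed: bool, passkey_used: bool,
                  anomaly_score: float, threshold: float = 0.7) -> str:
    """Continuously re-verify a session: deny on any failed layer."""
    if not mfa_passed:
        # Multi-factor authentication counters credential-based attacks
        return "deny: multi-factor authentication required"
    if anomaly_score > threshold:
        # Behavioural analytics: unusual access patterns trigger re-verification
        return "deny: behavioural anomaly detected, re-verify identity"
    if not passkey_used:
        # Prefer phishing-resistant credentials such as passkeys
        return "allow: recommend step-up to passkey"
    return "allow"

print(verify_access(mfa_passed=True, passkey_used=True, anomaly_score=0.1))   # allow
print(verify_access(mfa_passed=True, passkey_used=True, anomaly_score=0.95))
```

The ordering reflects the zero-trust idea that no single check, not even a passed MFA prompt, is sufficient on its own; the behavioural signal is evaluated on every request, not just at login.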
At the same time, AI's ability to automate detection and remediation offers the potential to neutralise legacy threats at scale, potentially becoming the backbone of cybersecurity. Krishna noted that AI rapidly analyses vast datasets, enabling quick threat detection, predictive insights, and dynamic defence strategies. AI models process billions of signals and orchestrate controls across cloud, edge, and identity layers, making cybersecurity more intuitive and comprehensive.
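The signal analysis Krishna refers to happens at a vastly larger scale than any single script, but the core idea of flagging statistical outliers in event streams can be shown with a toy z-score detector. The function name and the bucketed-counts input are illustrative assumptions.

```python
import statistics

def flag_anomalies(counts: list, z_threshold: float = 3.0) -> list:
    """Return indices of time buckets whose event counts deviate sharply
    from the baseline: a toy stand-in for large-scale signal analysis."""
    mean = statistics.fmean(counts)
    stdev = statistics.pstdev(counts) or 1.0  # avoid division by zero
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > z_threshold]

# 20 quiet intervals followed by one burst of activity
print(flag_anomalies([10] * 20 + [200]))  # [20]
```

Production systems replace the z-score with learned models and feed in far richer features, but the shape of the pipeline, baseline, deviation, alert, is the same.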
Nonetheless, David Ames, Principal, Cyber, Risk & Regulatory at PwC US, highlighted the importance of collective action: collaboration across the industry and among CISOs can help establish the right guardrails for AI, which he called a need of the hour if cyber strategies are to function effectively.

It is now critical for every modern business to understand how to plan and optimise for the ongoing evolution of AI and cloud deployments, and how to integrate agentic AI into cyber defences while safeguarding against its malicious use. Securing this new, dynamic IT environment, according to him, is a priority not only for CIOs and CISOs but for the entire C-suite.
The solution lies in balanced control and continuous human-AI collaboration, added Srivastava.
Cybersecurity careers are also shifting in response. Hybrid roles such as AI security analysts and threat intelligence automation architects are emerging. To stay relevant, professionals must bridge AI knowledge with security architecture. Experts like Krishna believe that agentic AI redefines cybersecurity by boosting speed and intelligence, but it demands new skills and strong leadership. Adaptation is essential for thriving in the AI-driven security landscape.
