AI agents acting beyond assigned tasks are challenging traditional security, says CyberArk’s Rohan Vaidya


Over the past year, rapid AI adoption has pushed identity security to the forefront of enterprise risk discussions. As autonomous systems and hybrid cloud environments expand, machine identities are increasingly outnumbering human users, exposing gaps in traditional security models.

In a conversation with TechCircle, Rohan Vaidya, Area Vice President for India and SAARC at CyberArk, an identity security company, discusses how agentic AI is changing enterprise security assumptions and why identity has become the central control point.

Edited excerpts: 

Enterprise security is evolving rapidly as AI, hybrid cloud, and automation become increasingly integrated. From your perspective, what aspects of identity security are still not getting enough attention?

AI is clearly here to stay, and that is now widely accepted across enterprises. In early 2024, adoption began along two parallel tracks. One involved individual users experimenting with tools like ChatGPT, often through personal or free accounts, which were sometimes later upgraded to enterprise licences. The other involved more structured efforts, where large organisations began building pilots. At that stage, it was largely exploratory. Large language models were still new, and while there was excitement, there was limited clarity on how to apply them at scale or derive real value.

As 2024 progressed into mid-2025, the technology matured, and expectations grew. Organisations also experienced setbacks. Outputs were not always reliable, hallucinations occurred, and some results had downstream consequences. Over time, companies moved through that learning curve.

By the third quarter of 2025, the conversation became more serious. Boards began asking what returns they were seeing on AI investments. Many early adopters, particularly Microsoft customers, had rolled out Copilot and started measuring whether it actually improved productivity. At the same time, concerns surfaced about workforce impact, with some companies publicly stating that AI reduced their need for staff.

Amid this focus on productivity and cost, security was largely overlooked. Most AI initiatives were driven by business teams, funded through business budgets, and treated as subscription-based operating expenses. Infrastructure discussions were limited because many organisations were simply buying licences and using them.

That changed as companies moved from using AI tools to building AI systems. By mid-2025, agentic AI gained momentum, with organisations experimenting with autonomous agents and workflows. This required dedicated AI infrastructure, triggering capital and operating expense discussions. CIOs, CISOs, and boards became directly involved as costs increased and pilots expanded. AI labs became common, often driven by executive interest rather than defined outcomes.

Security concerns surfaced quickly. Many organisations had not fully considered the risks. New roles emerged, including AI engineers and prompt specialists. Infrastructure costs rose, and enterprise data became central to these systems. Protecting that data became critical.

Initially, some assumed that keeping AI systems off the internet would mitigate risk. That assumption did not hold. Enterprises were already operating in cloud and SaaS environments, with third-party access and distributed users. The idea of a clear perimeter no longer applied.

As pilots expanded, attacks began to surface, even in controlled environments. Data was corrupted, models were poisoned, and behaviour changed in unexpected ways. While the impact was limited due to the small scale, it was enough to expose the risks. Organisations realised that AI systems could introduce bias, influence decisions, and behave in ways that were difficult to predict.

This became a turning point. As AI moved closer to production and scale, security became unavoidable. By early 2026, a new framing began to emerge: agentic AI needed to be treated like employees. That meant managing AI identities in the same way organisations manage human identities.

If an enterprise employs thousands of people and operates tens of thousands of autonomous agents, those agents require lifecycle management. From onboarding to decommissioning, AI systems need identity, access controls, monitoring, and governance. That shift — treating AI as a workforce rather than a tool — is now shaping the security and identity conversation going forward.

There is a growing narrative that machine and AI identities now outnumber human identities in enterprise environments. What practical security and governance challenges does this create for Indian organisations?

A useful way to understand the security challenge around AI agents is to compare machine identities with human identities.

In the early days of enterprise IT, human identities were relatively easy to classify. There were privileged users and standard business users. In a largely on-premises, pre-COVID environment, privileged users were typically IT administrators. They had broad access across enterprise infrastructure, making them a clear security focus. In an organisation of a thousand employees, perhaps a hundred or so would fall into this category, and security teams built strong controls around their accounts.

Everyone else was treated as a standard user. If those credentials were compromised, the impact was usually limited.

That model began to break down after COVID. As enterprises adopted cloud services, remote work, and automation, the threat surface expanded. Privileges were no longer limited to IT roles. A marketing manager with access to a corporate social media account, or an HR employee with access to sensitive systems, could suddenly become a high-risk user. Compromising those credentials could have serious consequences.

What changed was the logic of privilege. It was no longer defined by hierarchy, but by role and responsibility. Access had to be tied to what a person needed to do, not their position in the organisation.

The same shift is now happening with machine identities. In traditional automation, bots used fixed credentials embedded in scripts. Over time, organisations learned to secure those identities through application identity management and by removing credentials from clear text.
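The shift he describes, from credentials embedded in scripts to managed application identities, can be sketched in a few lines. This is an illustrative Python sketch, not any vendor's product: the `DB_PASSWORD` name is hypothetical, and the environment-variable lookup stands in for a call to a secrets manager.

```python
import os

# Anti-pattern: a fixed credential embedded in the script in clear text.
# DB_PASSWORD = "s3cr3t"   # anyone who can read the script owns the identity

def get_db_password() -> str:
    """Fetch the credential at use time instead of embedding it.

    Reading an environment variable stands in for a call to a secrets
    manager: the platform injects the secret, and the script itself
    never stores it.
    """
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("credential not provisioned for this machine identity")
    return password

# The secret is provisioned out-of-band (simulated in-process here).
os.environ["DB_PASSWORD"] = "example-only"
print(get_db_password())
```

The point of the pattern is that rotating or revoking the credential no longer requires touching the script, which is what application identity management automates at scale.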

Agentic AI introduces a different challenge. An autonomous agent often performs multiple tasks across systems, authenticating itself repeatedly as it moves through applications, data stores, and infrastructure. Unlike traditional bots, these agents can make decisions on their own.

An agent assigned to perform a defined set of tasks may decide to take additional actions once it is inside the system. Authentication has already occurred, and there may be no obvious way to intervene in real time.

This creates the need for continuous authentication and monitoring. From an identity security perspective, it is no longer enough to verify access once. Organisations need to continuously validate that an agent is doing only what it is authorised to do, and nothing more.

The risk closely mirrors human behaviour. An administrator granted access for backup operations may also browse files they were not meant to view. Autonomous agents can exhibit similar behaviour, driven by how they are trained, how they interpret data, or biases embedded in their models.

As a result, agentic AI introduces a new and dynamic security problem. These systems behave less like tools and more like employees, requiring identity controls that assume autonomy, adaptability, and the potential for misuse.

The identity security market is becoming more crowded, with established players expanding their scope, including recent moves into privileged access management. How do you see this increased competition influencing enterprise investment decisions?

The shift toward identity security has been a notable change. When I joined CyberArk about a decade ago, much of the work involved explaining to customers why privileged access management mattered at all. For CISOs and CIOs, it was often low on the priority list. Security strategies at the time were largely perimeter-driven, based on the belief that protecting networks, servers, and endpoints with enough layers would significantly reduce breach risk.

That approach overlooked a recurring pattern. In nearly every major breach, past and present, forensic analysis shows that attackers ultimately succeeded by compromising privileged credentials. That point was consistently raised, but it took time for organisations to fully absorb it.

The transition to cloud accelerated that understanding. As workloads moved to cloud platforms, the distinction between privileged and standard users began to blur. A user could temporarily assume elevated privileges based on role or task. For example, someone in a sales operations role might act as a Salesforce administrator for a limited period. Managing these dynamic roles introduced new complexity.

Cloud environments also brought another challenge: entitlements. By default, access across multiple cloud environments could result in tens of thousands of permissions assigned to a single user. These permissions often accumulated automatically, with little visibility into what was actually being used. As a result, users effectively became over-privileged without explicit intent or oversight.

Over time, both security teams and senior IT leaders began to recognise privileged access as a critical point in the attack chain. Attackers consistently target privileged identities, and without protecting them, other security controls lose effectiveness. This realisation was reinforced through industry research and analyst reports, which increasingly positioned identity security as a top concern for CISOs.

As awareness grew, demand followed. Identity security moved to the centre of security strategies, attracting new vendors and investment. Some companies entered the space through acquisitions, integrating identity capabilities into broader security portfolios. The rationale was straightforward: without an identity security offering, vendors risked being excluded from enterprise security discussions, regardless of the strength of their other products.

From a market perspective, this competition has reshaped the landscape. Increased investment has driven broader education, faster product development, and continued innovation. At the same time, the scope of identity security has expanded beyond human users to include machine and non-human identities, reflecting changes in how systems operate.

The result is a larger, more competitive market, with identity security now treated as foundational rather than optional. That shift marks a significant departure from where enterprise security stood a decade ago.

How significant is the role of machine identity risk in causing outages or security breaches, and which mitigation measures remain underused?

When you look at machine identities, the scale is already striking. Recent estimates suggest that for every employee, an enterprise has roughly 85 to 90 machine identities. In practical terms, an organisation with 100,000 employees could be managing close to nine million non-human identities.

If these identities are not managed properly, they create significant risk. They are easy to steal, easy to reuse, and difficult to detect when compromised. The impact can be financial, operational, and reputational. It is comparable to losing an identity card: once someone else has it, the damage can spread quickly and quietly.

From a threat perspective, machine identities have created an exponentially larger attack surface. One of the biggest challenges is simply discovering them. Many organisations do not have a complete inventory of their non-human identities, and only a small number of tools and teams are equipped to manage that complexity today.

Attackers are also adapting. Increasingly, they target machine identities rather than human users because they are less visible and often poorly governed. Non-human identities now exist in many forms: API keys, SSH keys, access tokens, secrets embedded in code, and certificates. Certificates, in particular, have become critical. If one expires unexpectedly, it can cause outages. If it is stolen or corrupted, the consequences can be far more severe.
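The certificate-expiry risk mentioned above is typically mitigated by monitoring the certificate's notAfter timestamp and rotating well before it passes. A minimal sketch, assuming a 30-day alert window (the window size is an illustrative choice, not a standard):

```python
from datetime import datetime, timedelta, timezone

def days_until_expiry(not_after: datetime, now: datetime) -> int:
    """Whole days remaining before a certificate's notAfter timestamp."""
    return (not_after - now).days

def expiry_alert(not_after: datetime, now: datetime, window_days: int = 30) -> bool:
    """True if the certificate expires within the alert window, i.e. the
    point at which rotation should be triggered to avoid an outage."""
    return days_until_expiry(not_after, now) <= window_days

now = datetime(2026, 1, 1, tzinfo=timezone.utc)
print(expiry_alert(now + timedelta(days=14), now))   # inside the window
print(expiry_alert(now + timedelta(days=180), now))  # safely outside it
```

In a real deployment the notAfter value would be read from the certificate itself and the check run continuously across the full inventory, which is why the discovery problem he mentions comes first.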

Any of these credentials, if compromised, can disrupt operations or enable deeper intrusion. That makes mitigation essential. The first step is identifying both human and non-human identities across the environment. From there, organisations need a clear strategy for managing them, based on risk, speed of deployment, security requirements, budget, and timelines.

The key point is that this work needs to start early. The number of machine identities is only going to grow. Today the conversation is driven by AI. In the years ahead, technologies like quantum computing will further reshape the identity landscape, introducing a new set of risks and forcing organisations to rethink identity security in ways that are still taking shape.

Going forward, what are CyberArk’s main strategic priorities as identity security expands to include machines, AI systems, hybrid cloud environments, and quantum technologies?

The first priority has been strengthening the core identity security portfolio. In areas such as cloud and human identity, the foundations are already in place. The market is mature, with established products, widespread deployments, and well-understood practices. There is ongoing work to enhance these platforms, add features, and integrate new technologies, but adoption itself is no longer the core challenge.

The more complex problem lies with agentic AI. Unlike traditional systems, this is not about securing a single agent. Multiple agents interact with one another, operate across layers, and make autonomous decisions. That interconnected behaviour introduces new identity and access challenges that existing models were not designed to handle.

