'Agents will need identity': Pine Labs CTO on the missing layer in Agentic AI
As enterprises race to deploy autonomous AI agents, a growing concern has emerged around what these agents are permitted to do, and who is accountable when things go wrong. In a conversation with TechCircle, Sanjeev Kumar, Chief Technology Officer at Pine Labs, spoke about the company's open protocol called GrantX, designed to define and enforce boundaries on what AI agents can access and act upon. The conversation covered the evolution of agentic AI, why existing guardrail approaches fall short, and what a cryptographic trust layer for agents could mean for industries like fintech, where errors carry high regulatory and financial consequences.
Edited Excerpts:
Everyone is talking about agentic AI right now. From your vantage point, what is fundamentally different about this wave compared to earlier AI or automation cycles?
The AI wave can broadly be seen in three parts. The first is developer efficiency: how coders are using AI. It started with auto-completion of code, then moved to agents writing code, and now it is fully agentic in nature.
The second part is how AI is impacting the enterprise. Here, AI is sitting alongside the employee, as a copilot or co-worker, helping them with daily routine tasks, making them more productive, and giving them their time back. In both of these cases, the human is still in the loop. The human has not surrendered control to the machine or agent. The human is always approving, always present in the flow.
The third part is where things change significantly. This is where you give the agent the right to act autonomously. You decide that a machine, an autonomous agent, is going to perform tasks on your behalf, and you delegate control to it.
In the first two cases, it was acceptable to share your credentials because you were always approving actions with your own hand. But when the machine is given control to act on your behalf, you have to decide how much control to grant. Are you comfortable giving your full API keys and secrets? Or will you say: I will give access only for two hours, or only for the specific purpose I need this agent to work on?
For example, if you want an agent working inside a particular CRM system, you might say: only read the context, do not delete anything, do not issue a procurement order beyond a certain amount. Beyond that threshold, it must seek approval. And then there are company policies, budget controls, all of these need to be enforced. But the problem is that the agent has been handed your full API keys, full secrets, and can technically do anything with them. It may not be authorised or permitted to do so, and that is the gap.
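The scoped, time-limited grant described here can be sketched in a few lines. This is purely illustrative: the class and field names (`Grant`, `allowed_actions`, `spend_limit`) are hypothetical and not taken from the GrantX specification.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of a scoped, expiring grant; names and fields
# are illustrative, not drawn from the GrantX protocol itself.
@dataclass
class Grant:
    agent_id: str
    allowed_actions: set     # e.g. {"crm:read"} but never {"crm:delete"}
    spend_limit: float       # procurement cap; above this, seek human approval
    expires_at: datetime

    def is_allowed(self, action: str, amount: float = 0.0) -> bool:
        if datetime.now(timezone.utc) >= self.expires_at:
            return False     # grant has expired
        if action not in self.allowed_actions:
            return False     # outside the permitted scope
        return amount <= self.spend_limit   # within budget controls

grant = Grant(
    agent_id="crm-agent-01",
    allowed_actions={"crm:read"},
    spend_limit=500.0,
    expires_at=datetime.now(timezone.utc) + timedelta(hours=2),
)

assert grant.is_allowed("crm:read")                     # in scope
assert not grant.is_allowed("crm:delete")               # never granted
assert not grant.is_allowed("crm:read", amount=10_000)  # over budget cap
```

The point of the sketch is the contrast with handing over full API keys: the agent's authority is an explicit, checkable object, not whatever the credentials happen to permit.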
That sounds like a real safety and trust threat. Is that what led to GrantX?
Exactly. Everyone is building agents, but who is taking the responsibility of making those agents trustworthy? Who is ensuring that agents do only what they are permitted and authorised to do, that there is an audit trail, and that they can be revoked?
People were not focusing on this. They were spinning up agents and trying to make them work. So yes, this could be a real safety threat and a trust threat.
That is what led to the open system protocol we developed, called GrantX. It is an ecosystem play, a protocol that is open in nature. When agents started emerging, they were being assigned the same kind of rights as a human user. But agents may not require all those rights. They need to work within a defined scope. And if something goes wrong tomorrow, you should be able to prove who authorised those actions, and you should be able to revoke the agent, cancel what it can do in the future, set an expiry of one hour or two hours so it works only within that limited window.
What inspired the timing of the GrantX launch? And how do you see it addressing the permissioning challenge in fintech specifically, where zero-error tolerance is expected?
That context actually makes it all the more important. Two terms get used interchangeably in this space: guardrails and hallucinations. But guardrails and hallucinations, as commonly discussed, are essentially about prompting an agent and training it through instructions: "don't do this, don't do that." That is a subjective approach to controlling agent behaviour.
When you say guardrail, you intend the agent not to breach your organisation's policy framework. You have an assumption in your mind that this agent is supposed to do certain things and must not touch, say, a payroll system or an ERP platform like Oracle Fusion, and even if it does touch them, it should only do so for a specified purpose.
What we have seen over the past several months of building agents is that the output is not always what you expect. When that happens, people say the agent is hallucinating or that it has breached the guardrail. You ask it for the contact details of five people, and it returns the contact details of every merchant in the ecosystem. Guardrails have been breached.
One way to stop that is to keep training and prompting the agent continuously. But where does that end? There has to be a cryptographic, mathematical solution. That is what GrantX is, a cryptographic, mathematical approach to this problem.
You define a policy framework, set budget controls, specify a permission list, define tool gateways, and then you do a cryptographic signing of the entire thing. Under no condition can the agent breach those boundaries. This also means hallucinations drop significantly. Agents will do only what they are supposed to do.
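The idea of cryptographically signing the whole policy bundle can be sketched as follows. GrantX's actual signing scheme is not described in this conversation, so HMAC-SHA256 over a canonical JSON policy stands in for it here; the key and policy fields are hypothetical.

```python
import hashlib
import hmac
import json

# Illustrative stand-in for cryptographic policy signing; the real
# GrantX scheme may differ (e.g. asymmetric signatures).
SIGNING_KEY = b"issuer-secret-key"   # hypothetical issuer key

def sign_policy(policy: dict) -> str:
    canonical = json.dumps(policy, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()

def verify_policy(policy: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_policy(policy), signature)

policy = {"agent": "proc-agent", "tools": ["erp:read"], "budget": 500}
sig = sign_policy(policy)

assert verify_policy(policy, sig)         # untouched policy verifies
tampered = {**policy, "budget": 50_000}   # someone raises the budget cap
assert not verify_policy(tampered, sig)   # any change breaks the signature
```

Because every tool gateway can re-verify the signature before acting, the boundaries hold mathematically rather than depending on the agent honouring its prompt.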
On timing: as computing capacity increases, agents are becoming more capable. And as they become more capable, it becomes increasingly difficult to deploy them without a trust framework, because you will eventually assign significant tasks to them. If you do not define a mathematical policy framework, the power that agents are accumulating will make it increasingly difficult to prevent safety and security breaches, completely unintentional ones, simply because the agent had full access, human-level OAuth keys, and full rights. It will eventually start accessing systems it was never meant to touch.
You have integrated with policy engines like OPA (Open Policy Agent). How important is policy-as-code, the practice of expressing governance rules in machine-readable form, going to be in this era of AI?
The ecosystem has to come together. People will need to start adopting common standards rather than defining their own policies independently. That is why the framework and the paper behind it are open.
The protocol is called the Delegated Agent Authorisation Protocol. GrantX is one implementation of that protocol. The comparison I would draw is with OAuth: OAuth was open, and this paper is also open.
The core question the protocol addresses is: if an agent takes an action, whom do you go to? The answer is: the person who gave the consent. But what if an agent starts to delegate to another agent, a parent agent spawning a child agent, which spawns a sub-child, creating a delegation chain? If something goes wrong, revoking or cancelling the parent's access should cascade down and cancel the entire chain.
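The cascading revocation just described can be sketched as a walk over the delegation tree. The structure and names here are illustrative, not taken from the protocol paper.

```python
from collections import defaultdict

# Hypothetical sketch of a delegation chain with cascading revocation.
children = defaultdict(list)   # parent agent -> agents it delegated to
revoked = set()

def delegate(parent: str, child: str) -> None:
    children[parent].append(child)

def revoke(agent: str) -> None:
    """Revoking an agent cancels its entire delegation subtree."""
    revoked.add(agent)
    for child in children[agent]:
        revoke(child)

# A travel agent spawns sub-agents, one of which spawns its own.
delegate("travel-agent", "flight-agent")
delegate("travel-agent", "hotel-agent")
delegate("hotel-agent", "payment-agent")

revoke("travel-agent")   # one revocation cancels the whole chain
assert revoked == {"travel-agent", "flight-agent",
                   "hotel-agent", "payment-agent"}
```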
Take a travel agent, for instance. It needs access to flight routes, hotel bookings, packing recommendations, and to work efficiently, it may need to call three or four sub-agents. The goal is to harness the full capability of AI agents while simultaneously constraining what they can and cannot do.
The question we asked ourselves is: Does this serve a real purpose, and is it useful to the industry? The answer is clear. If enterprises are not grappling with this need today, they will be tomorrow, because agents are going to become far more capable. And the more powerful they become, the more critical it is to build a trust layer around them.
Earlier, every system had its own login and password. Now you rarely encounter a system that does not offer sign-in via Google or Apple. Similarly, agentic systems will need a common open standard, one that makes them secure, traceable, auditable, revocable, and verifiable, without every company having to build that from scratch.
Do you envision GrantX becoming as foundational for AI agents as OAuth 2.0 has been for web applications?
It will require industry acceptance. People need to recognise that if every organisation creates its own policy framework independently, no common standard will emerge.
OAuth has proven that login, credential management, and identity and access management are so fundamental that there is no value in reinventing the wheel when an open standard already exists. There is a clear need for something equivalent for agents; there are no two ways about it. If you want to create an autonomous agent and also hold someone accountable for its actions, you need a framework for that.
I have seen this being discussed on the regulatory side as well, including references to EU-level requirements and questions raised in contexts like the OWASP Top 10 for AI. The questions being asked are consistent: who is deciding if the human is not there? Was it authorised in the first place? Do we have control? Can log files be edited? Are the logs verifiable?
The need and the problem statement are both clear. As for when adoption will happen, given the pace at which agentic AI is developing, I think organisations will need this sooner than most expect.
Is there a mindset gap? Enterprises already struggle with identity and access management, are they actually ready to adopt something like GrantX?
Everyone is at a different stage with AI right now. AI is being used for employee productivity in some organisations, for developer and DevOps automation in others, and some are beginning to look at how executives or merchants can deploy agents to get their work done.
We have not yet reached the stage of full agentic commerce at scale. For example, in some markets, new protocols are emerging around agents making purchases. But the questions are obvious: would you give an agent your payment card? Even if the regulatory framework permitted it, to what extent would you allow it to spend? Would you give it your entire credit limit?
The need is clear. Companies are at different stages of evaluating this. But as they develop concrete use cases, things that genuinely need to be handled by AI to drive efficiency, they will simultaneously have to think about the security framework.
And when they do, the problems will become immediately relatable: agents will need identity, they will need authorisation controls, their actions will need to be held accountable through verifiable and auditable logs, and the trust chain will need to be revocable if something goes wrong. All of this has to operate within the company's organisational policy and budget controls. These problems will fully surface the moment organisations move beyond using AI purely for employee productivity and into broader agentic workflows.
Given India's scale in digital payments and the push towards automating financial workflows, do you think India could lead in defining global standards for AI governance for agents?
In payments and fintech, India's security and regulatory frameworks are already quite stringent; we are ahead in many respects. AI adoption here will always proceed within that regulatory framework. The example of cards is instructive: giving agents a payment card may be viable in some markets outside India, but it is not yet applicable here, given current regulations.
At the same time, India has infrastructure like UPI, with features such as single-block multiple-debit, that is technically advanced and can be integrated into agentic workflows.
Given the nature of the problems we are solving, and the fact that the protocol we have developed is open and not proprietary, why not? If we start to adopt this kind of security policy framework, make our agents more secure, traceable, and auditable, and keep the protocol open, this could be something India ships to the world. Something that makes commerce safer and enables agents to participate in that commerce responsibly.
In the near term, without speculating too far, what will trust in AI systems actually look like, and will users even know when an agent is acting on their behalf?
For agents not to hallucinate or cross guardrails, you need a mathematical, cryptographic proof of what the agent is permitted to do. If an agent was authorised to make a purchase of five hundred rupees on a given platform, any attempt to go beyond that should be traced and logged, and that log must be tamper-proof.
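A tamper-proof log of this kind is commonly built as a hash chain, where each entry's hash covers the previous one, so editing any past entry breaks everything after it. This is a standard technique and a sketch only; it is not necessarily how GrantX implements its logs.

```python
import hashlib
import json

# Sketch of a tamper-evident audit log using a hash chain.
def append_entry(log: list, entry: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"entry": entry, "hash": entry_hash})

def verify_log(log: list) -> bool:
    prev_hash = "0" * 64
    for record in log:
        payload = json.dumps(record["entry"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if record["hash"] != expected:
            return False           # chain broken: an entry was edited
        prev_hash = record["hash"]
    return True

log: list = []
append_entry(log, {"agent": "pay-agent", "action": "purchase", "amount": 500})
append_entry(log, {"agent": "pay-agent", "action": "purchase", "amount": 200})
assert verify_log(log)

log[0]["entry"]["amount"] = 50_000   # try to rewrite history
assert not verify_log(log)           # tampering is detected
```

This answers the regulators' question from earlier in the conversation: log files cannot be silently edited, because any edit is mathematically detectable.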
The moment someone understands the full scope of what they are building an agent to do, and that scope is significant, they will immediately ask themselves: Should I give this agent this much access? That doubt is the exact point where a trust framework becomes necessary. And until that trust framework exists, companies will not feel confident deploying a complete agentic workflow.
Once companies begin to envision what they want agents to handle, accounts payable, accounts receivable, processes that touch multiple systems, where a wrong invoice could have serious consequences, they will want to define exactly what the agent is allowed to do. And if something goes wrong tomorrow, they will want to trace: what was the agent's ID, what was its decentralised identity, who approved the action, what was the grant token and what did it consist of, and did the agent go beyond that grant?
Without a trust layer, as agents continue to get more capable, and the pace of AI development is significant, accelerating every few months, the consequences of unconstrained access become increasingly serious. If we complement that capability growth with a trust layer, adoption will increase. The two need to go together.

