
AI fluency, not hype, will determine enterprise RoI: Coforge's Vikrant Karnik


As Generative Artificial Intelligence (GenAI) moves from experimentation to enterprise-scale deployment, organisations are grappling with questions of value, control, and practical implementation. In a conversation with TechCircle, Vikrant Karnik, Executive Vice President and Head of Technology at Coforge, offers a grounded view of how businesses, particularly in India and the broader Asia-Pacific region, are adopting and adapting to GenAI and agentic AI systems. Edited Excerpts:
What patterns are you seeing among enterprises in India that show whether GenAI is becoming core to their tech strategy or is still just experimental?
Over the past two years, across India, Asia, and globally, companies of all maturity levels started with broad goals around AI, often meaning Machine Learning (ML). With generative AI, the initial focus was on specific use cases: helping sales teams access internal knowledge in real time, improving customer or employee experience, supporting IT, or aiding developers with test scripts and edge-case analysis. These were narrow, targeted applications.

As the hype faded, challenges surfaced: hallucinations, misalignment, and a lack of guardrails. This led to a second wave focused on responsible AI: adding safeguards, reducing risks, and guiding tech teams through implementation. Organisations that advanced beyond this began exploring agentic AI, systems that operate with autonomy. Some pilot projects have already started in this area.

In India, large banks and government institutions were early adopters, especially around document processing and regulatory compliance. While there's strong demand for responsible AI, widespread use of agentic AI hasn't taken off yet. Southeast Asia and parts of the Middle East, on the other hand, are leading in building fine-tuned, domain-specific models. These regions are actively customising foundation models with their own data, far more than what we see in the US or Western Europe.
Many companies confuse traditional automation with agentic systems. What's the key difference, and why does it matter?
Traditionally, whether it was SAP, custom code, or robotic process automation, systems followed hard-coded rules. Inputs led to predictable outputs because all the logic was predefined. There was no ambiguity. The shift to agentic systems is about giving software agents autonomy. Take a real use case: a $5 billion housing-materials subcontractor operating in the US, the Middle East, and parts of Europe. Their business is seasonal, so during peak periods they onboard thousands of temporary sales agents. These agents deal with a complex catalogue of about 60,000 items. Since they aren't familiar with the products, a lot of their time goes into figuring out what the customer needs and whether it's available.

Initially, we built a generative AI solution where the agent could receive a text query and use a large language model (LLM) to respond. It worked, but it wasn’t efficient enough. So, we shifted to an agentic approach. Instead of coding every rule, we deployed seven AI agents, each with a specific function. For example, one agent processes images sent by customers to extract attributes. Another matches those attributes to inventory. Each agent works toward its own objective, and they collaborate. If one needs better input to improve its output, it adjusts and feeds that forward. The difference is that agentic systems use probabilistic reasoning to explore multiple scenarios and propose the best course of action, rather than just following rules. They act autonomously but within a human-in-the-loop setup. That’s the key shift: from deterministic, rule-driven software to agentic systems that evaluate and act based on evolving context.
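To make that division of labour concrete, here is a minimal Python sketch of two such agents with a coordinator and a human-in-the-loop gate. The agent names, the stubbed model call, and the 0.7 confidence threshold are hypothetical illustrations, not Coforge's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class AgentResult:
    payload: dict        # what the agent produced
    confidence: float    # 0.0-1.0, read by the coordinator
    rationale: str       # why the agent produced this output

class AttributeAgent:
    """Extracts product attributes from a customer image (stubbed here)."""
    def run(self, image_bytes: bytes) -> AgentResult:
        # A real system would call a vision model; this stub is illustrative.
        attrs = {"category": "roofing shingle", "colour": "slate grey"}
        return AgentResult(attrs, 0.82, "matched shape and texture features")

class InventoryAgent:
    """Matches extracted attributes against the product catalogue."""
    def __init__(self, catalogue: list):
        self.catalogue = catalogue

    def run(self, attrs: dict) -> AgentResult:
        hits = [item for item in self.catalogue
                if item.get("category") == attrs.get("category")]
        conf = 0.9 if hits else 0.1
        return AgentResult({"matches": hits}, conf,
                           f"{len(hits)} catalogue items share these attributes")

def handle_query(image_bytes: bytes, catalogue: list) -> AgentResult:
    """Coordinator: chain the agents, escalating to a human on low confidence."""
    extracted = AttributeAgent().run(image_bytes)
    if extracted.confidence < 0.7:           # human-in-the-loop gate
        raise RuntimeError("low confidence: route to a human sales agent")
    return InventoryAgent(catalogue).run(extracted.payload)
```

In a deterministic system the matching logic would be hand-coded per product line; here each agent pursues its own objective and passes structured output forward, which is the shift the interview describes.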
As agentic AI continues to mature, do you think it's risky for companies to give it that much control?
You're right. Any new technology follows a pattern when it's introduced. Take robotic process automation (RPA): when it came out around 2015, there was a lot of hype about how it would automate everything. What actually happened was that RPA took over a lot of repetitive, mundane tasks, and humans shifted to higher-value work.
The same pattern is playing out now. For instance, in airline disruption management, I've spoken to people who work in day-of-flight operations. These are highly skilled professionals, often with backgrounds in statistics and operations research, who make quick, high-impact decisions. But a surprising amount of their time goes into low-level work: gathering data, figuring out whether an airport is shutting down, monitoring weather disruptions, and so on.

As the technology matures, those kinds of tasks will be offloaded to agents. These are expensive, high-skill individuals, so it makes sense to reduce the time they spend on basic information gathering. But that shift can't happen without oversight. The agent needs to clearly explain how it reached its conclusions: why it chose to time out a crew, reroute a flight, or select a particular solution. Without that transparency, you shouldn't adopt it at scale. We've seen mature clients approach this cautiously. Instead of overhauling disruption management all at once, they start by automating a few repetitive tasks. But they also require the agent to explain its choices: what routes it considered, what options it evaluated, why it selected one path over another, and, ideally, to assign a confidence score.
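One way to enforce that requirement is to make every recommendation carry a structured explanation that a gate can check before anything is actioned. The sketch below is a hypothetical Python shape for such a record; the field names and the 0.8 threshold are assumptions, not a description of any client system.

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    """What an agent must surface before its recommendation is actioned."""
    options_considered: list   # e.g. alternative reroutes or crew swaps
    chosen: str                # the recommended course of action
    rationale: str             # why this option beat the others
    confidence: float          # 0.0-1.0 score attached to the choice

def accept(record: DecisionRecord, threshold: float = 0.8) -> bool:
    """Auto-accept only well-explained, high-confidence recommendations."""
    explained = bool(record.rationale) and len(record.options_considered) > 1
    return explained and record.confidence >= threshold

# Example: a disruption-management recommendation that clears the gate.
rec = DecisionRecord(
    options_considered=["delay 2h", "reroute via hub", "cancel"],
    chosen="reroute via hub",
    rationale="keeps crew within duty limits; lowest knock-on delay",
    confidence=0.86,
)
assert accept(rec)
```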
A key challenge is how quickly humans start trusting the machine's output. We expected people to question the agent's recommendations. But in practice, many accepted them without scrutiny. So, as we rolled these systems out, we had to train people to challenge the agent's decisions rather than simply assume it's right.
Which sectors in India have been slower than expected in adopting AI, and what do you think is holding them back?
The Indian market has specific characteristics that affect AI adoption. One of the main factors is the relatively low cost of labour compared to Western Europe or the US. So, when AI is introduced, the key questions are: What does it cost? What is the cost of running an AI agent? What value does it deliver? And can that value exceed the low cost of existing human labour?

This is the main barrier in India. In industries where labour is cheap and customer volumes are high, companies often don't see enough value in using AI. The cost and risk of AI are high, and the return doesn't always justify the investment. In retail, consumer products, or manufacturing, for example (industries with many employees or customers and low labour costs), there's little incentive to automate. These companies often conclude that the technology doesn't add enough business value. And if technology doesn't drive business outcomes, it's not useful.
This ties into what we refer to as "AI fluency": understanding the impact of AI in terms of actual business results, not just technical capability. As technologists, it's easy to get excited about new tools, but the business case has to make sense.
These challenges are most visible in sectors with high customer volumes and low service costs. Companies often say the tech looks great, but it isn't worth it; AI agents are not free. Agentic systems, in particular, are much more expensive than generative ones. Generative systems work on a prompt-response basis and currently cost around 10 cents per token. But agentic systems use multiple agents, each running its own LLM and constantly communicating with it. This leads to token consumption 15–20 times higher than in generative systems.

So, the cost adds up quickly. If the business case doesn't justify that cost, companies don't adopt it. The issue isn't about understanding or trusting the tech; it's about whether it's worth replacing a process that currently costs less with something significantly more expensive. That's the main challenge in the Indian market.
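A back-of-the-envelope calculation makes that gap concrete. The sketch below applies the 15–20x multiplier from above; the per-query price and the query volume are assumed, illustrative figures, not actual vendor pricing.

```python
# Hypothetical worked example of the agentic-vs-generative cost gap.
gen_cost_per_query = 0.02          # assumed generative cost per interaction, USD
agentic_multiplier = (15, 20)      # token-consumption range from the interview
queries_per_month = 100_000

gen_monthly = gen_cost_per_query * queries_per_month
low, high = (gen_monthly * m for m in agentic_multiplier)
print(f"generative: ${gen_monthly:,.0f}/month; agentic: ${low:,.0f}-${high:,.0f}/month")
# generative: $2,000/month; agentic: $30,000-$40,000/month
```

At that scale, the agentic version has to create tens of thousands of dollars of monthly value beyond the generative one, which is exactly the hurdle a low-labour-cost market struggles to clear.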
AI infrastructure is costly, with compute, LLM licensing, teams, governance, and more. How are enterprises justifying this spend at the board level? And what hidden costs, technical or organisational, do your clients often realise too late?
A common issue in the APAC market, not just India, but also Southeast Asia and the Middle East, is that most organisations use large foundational models. These models are primarily trained on data from the US and Western Europe. As a result, they often struggle with translation accuracy and cultural nuances relevant to the Asia Pacific region.
To address this, many large companies in APAC are trying to customise foundation models, such as those from Hugging Face, Nvidia, or other open-source options, by training them with their own corporate and customer data. This process is expensive and requires skills that are not widely available in the region. It's a necessary tradeoff, especially as organisations start to adopt agent-based systems.

In response, we build foundational agents tailored to specific industries and problems. These can be used as a starting point and then further trained with an organisation's own data. We offer them as "digital FTEs": similar to hiring a full-time employee, but providing instead a digital agent specialised in tasks like disruption management or weather tracking. The agent can be integrated and customised further, and it is priced similarly to a human FTE, which makes it easier for companies to adopt.
A key concern in boardroom discussions is cost management as AI adoption scales. Every customer we speak with is interested in using AI, but their concern is about risk exposure. With human employees, companies can implement safeguards and training programs. With AI agents, they’re asking how to do the same.
Our approach is to treat AI agents like human employees: train them, apply safeguards, and ensure oversight. To support this, we provide a solution called "agent spear", which includes pre-trained foundational agents that customers can build on and treat as digital employees.
Would you say most companies are focusing too much on foundation models, when simpler AI or automation could deliver more value?

One common pattern I see with clients, more globally than in the Indian market, is the tendency to solve every problem with AI. But that's not always necessary.
Often, simple tools like macros or RPA are enough, especially when the tasks are deterministic. In many cases, statistical or machine learning models can be applied in a targeted way without needing generative or multi-modal AI. This rush to use advanced AI is particularly common in companies just starting their AI adoption.
I don’t see this as much in the Indian market. Businesses there tend to be more technically aware and often ask whether a simpler, lower-cost solution would do the job. The conversations are usually more nuanced and informed.
What would a "correction" in the GenAI services market look like from your perspective, and how would you prepare for it?
We saw the potential of AI to improve service delivery and began a structured program about eight or nine months ago to integrate AI across all aspects of our work. This involved focusing more on specific business problems and applying AI to solve them. For example, in testing, which we do extensively, we used to rely heavily on manual work. Now, we’ve introduced AI tools to support testers. One tool can convert a photo of a whiteboard discussion into detailed test cases, reducing delays and errors caused by miscommunication or undocumented requirements.
In software modernisation, we often deal with legacy systems. Understanding their code used to take significant time. We built AI agents to read and explain legacy code by generating abstract syntax trees (ASTs), cutting down the time spent on manual analysis.
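As a toy illustration of the AST idea, here is Python's standard ast module summarising the functions and branches in a small snippet; a real engagement would of course involve legacy languages and far larger codebases, and this sample function is invented.

```python
import ast

legacy_source = """
def apply_discount(order, rate):
    if order.total > 1000:
        return order.total * (1 - rate)
    return order.total
"""

# Parse the source into an abstract syntax tree and summarise its structure.
tree = ast.parse(legacy_source)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        args = [a.arg for a in node.args.args]
        print(f"function {node.name}({', '.join(args)})")
    elif isinstance(node, ast.If):
        print(f"  branch at line {node.lineno}: {ast.unparse(node.test)}")
# function apply_discount(order, rate)
#   branch at line 3: order.total > 1000
```

An LLM-based agent then narrates this structural summary in plain language, which is far cheaper and more reliable than having it read raw source end to end.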
In data, we created AI agents that scan data flows, detect recurring quality issues, and automatically turn those into enforceable rules: what we call "AI for data".
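A toy version of that rule-inference step might look like the following sketch, which derives simple not-null and range rules from observed records; the function name and rule set are hypothetical, and a production pipeline would profile far richer constraints.

```python
# Hypothetical sketch: infer simple, enforceable rules from observed records.
def infer_rules(records: list) -> dict:
    rules = {}
    for col in records[0]:
        values = [r[col] for r in records]
        rule = {"not_null": all(v is not None for v in values)}
        nums = [v for v in values if isinstance(v, (int, float))]
        if nums and len(nums) == len(values):   # numeric column: add a range rule
            rule["min"], rule["max"] = min(nums), max(nums)
        rules[col] = rule
    return rules

sample = [{"amount": 120.0, "currency": "INR"},
          {"amount": 89.5, "currency": "INR"}]
print(infer_rules(sample))
# {'amount': {'not_null': True, 'min': 89.5, 'max': 120.0},
#  'currency': {'not_null': True}}
```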
In infrastructure, we use AI to analyse logs and identify real issues, filtering out false positives. This reduces the need for constant human monitoring and allows teams to focus on meaningful problems.
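As a stand-in for that filtering step, the sketch below suppresses log lines matching known noise patterns; in practice the noise patterns would be learned by a model rather than hand-written, and the patterns and log lines here are invented for illustration.

```python
# Hypothetical sketch: keep likely-real issues, drop learned noise patterns.
KNOWN_NOISE = ("connection reset by peer", "retrying", "cache miss")

def is_real_issue(line: str) -> bool:
    lowered = line.lower()
    if any(pattern in lowered for pattern in KNOWN_NOISE):
        return False                      # matches a known false-positive pattern
    return "error" in lowered or "fatal" in lowered

logs = ["WARN cache miss for key user:42",
        "ERROR payment service returned 500",
        "INFO retrying upstream call"]
print([line for line in logs if is_real_issue(line)])
# ['ERROR payment service returned 500']
```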
We've introduced these solutions to customers and passed the benefits on. One customer with 1,500 engineers initially assumed AI would reduce headcount. But as they explained, their budget limits the number of people they can hire, not the amount of work they have. If AI makes their teams more productive, they can take on more work and unlock additional funding.
So, the opportunity isn't just cost reduction; it's enabling more work to get done with the same resources. That's why we're applying AI everywhere and encouraging customers to adopt it, even if it means restructuring our teams. Doing so unlocks their capacity, drives value, and creates more opportunities for both sides.