Gen AI is scaling; trusting it is the next problem enterprises have to solve

Enterprises worldwide are deploying generative AI at a pace that their governance and monitoring frameworks are struggling to match. The gap between how fast AI goes into production and how well teams can verify what it does is now one of the central challenges in enterprise technology.

A recent report by Gartner predicts that by 2028, LLM observability will be embedded in half of all generative AI deployments globally, up sharply from 15% today. The driver, Gartner says, is explainable AI: the ability to document why a model responded a certain way, trace its reasoning steps, and verify whether its outputs can be relied on in real-world business contexts.

The timing of this forecast matters. India leads global GenAI adoption at 73% usage, well ahead of the US at 45% and the UK at 29%. Yet adoption alone is not the full story. More than half of organizations globally have already reported negative consequences from AI use, with hallucinations and factual inaccuracies cited as the top concerns slowing down faster deployment.

India's own enterprise landscape reflects this tension. According to the EY-CII report, nearly half of Indian enterprises now have multiple GenAI use cases live in production, while 91% of business leaders say speed of deployment is the single biggest factor in their buy-versus-build decisions. That urgency to ship fast is precisely what makes observability a harder, but more critical, discipline to enforce.

The spending behind these concerns is large and growing. Indian enterprises plan to allocate 28% of their technology budgets to generative AI over the next 12 months, the highest share among all surveyed countries and ahead of the global average of 22%, per a Snowflake-Omdia study. About 71% of Indian respondents reported measurable returns from GenAI investments, above the global average of 61%. Sustaining those returns over time, however, requires the kind of infrastructure Gartner is now calling mandatory.

Pankaj Prasad, Senior Principal Analyst at Gartner, framed it plainly: without explainability and observability in place, GenAI will stay confined to low-risk, internal tasks, and the return on investment will hit a ceiling. His point is reinforced by a separate Gartner projection that 60% of software engineering teams will use AI evaluation and observability platforms by 2028, up from 18% in 2025.

The broader market context explains why vendors are already moving. The global LLM observability market is estimated at $3.35 billion in 2026 and is on track to reach $6.93 billion by 2031.

Deloitte's 2026 State of AI in the Enterprise report found that enterprises where senior leadership actively shapes AI governance achieve significantly greater business value than those leaving it to technical teams alone. That finding aligns with what Gartner is recommending: legal, compliance, and business stakeholders must be brought into explainability decisions early, not after deployment problems surface.

The shift in what observability means is also worth noting. Traditional monitoring tracked speed and cost. The new requirement covers factual accuracy, logical consistency, and what Prasad calls sycophancy: when a model tells users what they want to hear rather than what is correct. Addressing this requires human-in-the-loop validation and continuous evaluation of AI output quality baked into CI/CD pipelines before any model reaches production.
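To make the idea of an evaluation gate in a CI/CD pipeline concrete, here is a minimal, hypothetical sketch in Python. The `evaluate_outputs` helper and the keyword-matching check are illustrative assumptions, not any vendor's API; production systems would typically score outputs with an evaluation platform or an LLM judge rather than simple string matching.

```python
# Toy CI gate: fail the build if too few model answers pass a quality check.
# This is a sketch; evaluate_outputs() and the keyword criteria are hypothetical.

def evaluate_outputs(cases, threshold=0.9):
    """Return (gate_passed, score) for a regression set of model answers.

    cases: list of (model_answer, required_keywords) pairs.
    threshold: minimum fraction of answers that must contain all keywords.
    """
    passed = sum(
        1 for answer, required in cases
        if all(keyword.lower() in answer.lower() for keyword in required)
    )
    score = passed / len(cases)
    return score >= threshold, score

# Example regression set a team might run on every deploy.
cases = [
    ("Paris is the capital of France.", ["Paris"]),
    ("Our refund window is 30 days.", ["30 days"]),
]

ok, score = evaluate_outputs(cases, threshold=1.0)
print(f"eval score: {score:.2f}, gate passed: {ok}")
```

In a real pipeline this check would run as a CI step, and a failing score would block the model or prompt change from reaching production.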

India became the world's largest market for generative AI app downloads in 2025, with installs jumping 207% year-over-year, a signal that the volume of AI interactions, and the corresponding need to monitor them, is only heading in one direction. For enterprises in India and globally, the infrastructure to verify AI behavior is no longer a future consideration. Gartner's 2028 forecast suggests it is a present-tense requirement.
