India is engineering the trust layer of global healthcare AI: CitiusTech’s Sudhir Kesavan

At a time when US healthcare systems are grappling with rising denial rates and persistent revenue leakage, Indian-engineered solutions are beginning to deliver measurable financial outcomes — moving beyond AI pilots to building platforms that directly impact hospital and provider revenues. Princeton, New Jersey-headquartered CitiusTech, which operates major engineering centres across Mumbai, Bengaluru, Hyderabad and Pune, has partnered with Ventra Health to develop an Agentic AI-powered revenue intelligence platform aimed at improving payment predictability and reducing claim denials. Against this backdrop, TechCircle speaks with Sudhir Kesavan, COO of CitiusTech, on scaling healthcare AI and India’s evolving role as a global innovation and GCC hub. Edited excerpts. 

Healthcare is awash with AI pilots. What actually breaks when organisations try to scale AI into production?

AI struggles to scale in healthcare because several structural realities surface at once. Healthcare workflows don’t have a single objective — clinical outcomes, cost, access, reimbursement accuracy and compliance often compete. Many pilots fail because they are not anchored to clear, shared outcome metrics across providers, payers and regulators.

A second challenge is annotation and validation. Scaling requires continuous oversight from clinicians and revenue cycle experts — time that is scarce and expensive.

Third, governance gaps become visible at scale. Auditability, policy controls and human review mechanisms are often missing in pilots. In claims and billing, errors surface only when outputs encounter real-world payer rules and interoperability standards.

We are seeing a shift from pilot-led experimentation to value-chain-driven MVPs. Instead of chasing AI use cases, organisations are identifying “catalyst” process areas that move meaningful business KPIs and secure C-suite sponsorship. That reframes AI from experimentation to measurable transformation.

Is poor data quality the biggest risk to AI-led decisions in healthcare?

It’s less about poor data and more about difficult data. Healthcare data is fragmented, unstructured and increasingly multimodal — spanning clinical notes, images and scanned documents. The real challenge is interpretation and context. Many use cases depend on payer rules, regulatory requirements and clinical guidelines that are readable by humans but not machines. That knowledge must be codified. Healthcare AI requires policy codification, transparency, human oversight and auditability — not just cleaner datasets.

As AI-generated data enters healthcare systems, how do enterprises avoid model collapse?

Strict data provenance is critical. AI outputs must never be treated as ground truth. Enterprises need separation between source data and AI-generated data, along with lineage tracking and audit logs. Continuous validation is equally important. Drift, bias and reliability must be monitored, with exceptions routed back to experts. Without that discipline, trust erodes quietly over time.

Is zero-trust governance now non-negotiable?

Yes. Perimeter-based security is no longer sufficient in interconnected AI ecosystems. Zero-trust models enforce continuous verification and least-privilege access. Compliance frameworks such as HIPAA and GDPR must be encoded directly into execution layers, rather than bolted on later. Embedding security and compliance accelerates audits and enables responsible scaling.

What differentiates your Agentic AI-powered revenue intelligence platform from traditional RCM analytics?

Traditional revenue cycle management (RCM) analytics are largely descriptive — they surface denial trends, payment variances and process bottlenecks, but they stop short of driving action in real time. The responsibility for interpreting insights and executing corrective steps still rests heavily on human teams. Our platform, developed in partnership with Ventra Health, shifts that paradigm. Built on CitiusTech’s healthcare-native product engineering foundation, Knewron, it embeds payer rules, regulatory logic, workflow guardrails and explainability directly into agent-driven systems. This enables the platform not only to detect revenue risks, but to validate findings, prioritise interventions and trigger corrective workflows across eligibility, coding, billing integrity and denial management — all within strict compliance boundaries. The early results demonstrate tangible impact, including a 19% improvement in first-pass payment rates, a 26% reduction in initial denial rates, and accelerated recovery of millions of dollars in delayed reimbursements. Fundamentally, this marks a transition from reactive recovery and retrospective reporting to proactive revenue predictability and cash-flow intelligence. 

How is India’s role evolving in this journey?

India is moving from delivery-led healthcare IT services to engineering mission-critical AI platforms. The revenue intelligence platform is being engineered and scaled from India, supported by a Global Capability Center focused on agentic AI and advanced analytics. Indian teams are increasingly responsible for building the intelligence, governance and trust layers that directly influence provider economics globally. The CitiusTech–Ventra Health collaboration signals that structural shift in India’s role in global healthcare technology.

Will smaller domain-specific models outperform foundation models in healthcare?

Performance in healthcare depends more on context and constraints than size. Domain-specific models often perform better for bounded, regulated tasks. A hybrid approach works best — foundation models provide flexibility, while healthcare-specific policy enforcement and validation layers ensure outputs are explainable and compliant.

How do you balance innovation speed with regulatory scrutiny?

By designing for regulation from the outset. Guardrails, policy enforcement and human oversight are embedded into workflows. Compliance layers aligned to HIPAA, GDPR and MDR are configurable, so regulatory updates don’t require retraining models. Automation can move quickly in low-risk areas, while sensitive decisions remain under human supervision. In healthcare, where the cost of error is high, this balance is essential for scaling AI responsibly.
