For enterprise AI, deterministic systems still matter: SAIGroup MD
As enterprises push to scale artificial intelligence beyond pilot projects, many are running into hard limits around data readiness, security, governance, and real-world deployment—challenges that are especially pronounced in large, diverse markets like India.
SAIGroup works with enterprises across sectors such as healthcare, financial services, retail, life sciences, and manufacturing to deploy production-grade AI systems. In a conversation with TechCircle, Kalyan Kolachala, Managing Director at SAIGroup, explains why enterprise AI struggles to scale, where generative AI is most effective, and what organisations often overlook when deploying AI in production.
Edited Excerpts:
Many enterprises struggle to move AI from pilot to production. From your vantage point, what architectural or organisational bottlenecks most often prevent AI from scaling in Indian enterprises?
One of the biggest challenges in India is scale. Enterprise deployments often involve very large user bases, which significantly increases complexity. Alongside scale, there is also a high degree of data diversity. Enterprises frequently deal with multiple languages across documents, audio, and video, sometimes even within the same record. In sectors such as real estate or government, data may include handwritten documents, which further complicates processing.
Concurrency is another major issue. Many large-scale initiatives, including public-sector and social programmes, involve millions of end users simultaneously accessing systems. AI workloads are compute-intensive by nature, and supporting a large number of concurrent users while running AI in the background introduces both architectural and operational challenges. These factors together make it difficult to move from small pilots to production-grade systems.
Data quality is often cited as one of the biggest barriers to enterprise AI. How does your organisation evaluate a company’s data readiness before deployment?
There are two dimensions to this. The first is assessment. We use structured questionnaires and evaluation tools that allow either consultants or customers themselves to assess data readiness. This helps identify gaps early.
The second, and more important, dimension is the platform itself. Enterprise AI cannot function without a robust enterprise data layer underneath it. This layer is responsible for ingesting data at scale and automatically cleaning, transforming, standardising, and validating it. Manual intervention does not work at enterprise scale, so most of these processes have to be automated, with human input limited to a small number of edge cases.
Once data is standardised, it flows through multiple AI pipelines, including classical machine learning, deep learning, anomaly detection, and generative AI. At each stage, validations and guardrails are applied at both the data and model levels. The focus is not on building language models themselves but on ensuring data quality, accuracy, and reliability higher up the stack. These controls are essential to reduce hallucinations and ensure that AI systems can be trusted in production rather than remaining stuck at the pilot stage.
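To make the idea concrete, here is a minimal, hypothetical sketch of the kind of automated standardisation and validation step such a data layer might apply before records reach downstream AI pipelines. The field names, supported languages, and rules are invented for illustration and are not SAIGroup's implementation.

from dataclasses import dataclass, field

@dataclass
class ValidationResult:
    record: dict
    errors: list = field(default_factory=list)

    @property
    def is_clean(self) -> bool:
        return not self.errors

def standardise_and_validate(record: dict) -> ValidationResult:
    """Normalise a raw record and flag issues instead of failing silently."""
    result = ValidationResult(record=dict(record))

    # Standardise: trim whitespace on text fields.
    for key, value in result.record.items():
        if isinstance(value, str):
            result.record[key] = value.strip()

    # Validate: required fields must be present and non-empty (hypothetical schema).
    for required in ("customer_id", "document_type", "language"):
        if not result.record.get(required):
            result.errors.append(f"missing required field: {required}")

    # Guardrail: unsupported languages are flagged for human review, not dropped.
    supported = {"en", "hi", "ta", "te"}
    lang = result.record.get("language")
    if lang and lang not in supported:
        result.errors.append(f"language '{lang}' needs manual review")

    return result

# Records that pass flow into downstream pipelines; the small minority that
# fail are queued for human review, keeping manual effort to edge cases.
clean = standardise_and_validate({"customer_id": " C-102 ", "document_type": "kyc", "language": "hi"})
assert clean.is_clean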
There is significant excitement around generative AI in enterprises. Where do you believe it delivers real value, and where do you see it being overused?
Generative AI delivers the most value in enterprise workflows that are complex and not fully deterministic. These are workflows that involve language understanding, reasoning, and decision-making across domain-specific rules, policies, and knowledge bases. In such cases, generative AI needs to be combined with orchestration frameworks, agents, and reasoning systems that understand the domain context rather than operating in isolation.
At the same time, not all workflows require generative AI. Many enterprise processes are well understood and deterministic. In those cases, introducing generative AI can create unnecessary risk because its outputs are probabilistic and may vary from run to run. This can lead to issues such as hallucinations, higher costs, slower performance, and reduced accuracy.
The more effective approach is to combine deterministic systems where outcomes must be predictable with generative and agent-based capabilities where flexibility, reasoning, and language understanding are required. Using generative AI everywhere often results in inefficiency, whereas combining both approaches allows enterprises to balance accuracy, cost, and performance.
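A simple way to picture this hybrid approach is a routing layer that keeps well-understood workflows on fixed, deterministic handlers and sends only open-ended, language-heavy requests to a generative agent. The sketch below is illustrative only; call_generative_agent and lookup_balance are stand-ins, not real APIs.

from typing import Callable, Dict

def lookup_balance(account_id: str) -> float:
    # Stand-in for a deterministic core-banking call.
    return 1024.50

def call_generative_agent(intent: str, request: dict) -> dict:
    # Stand-in for an LLM-backed agent with domain guardrails; hypothetical.
    return {"handled_by": "generative_agent", "intent": intent}

DETERMINISTIC_HANDLERS: Dict[str, Callable[[dict], dict]] = {
    # Well-understood workflow: same input always yields the same output.
    "balance_inquiry": lambda req: {"balance": lookup_balance(req["account_id"])},
}

def route_request(intent: str, request: dict) -> dict:
    handler = DETERMINISTIC_HANDLERS.get(intent)
    if handler is not None:
        return handler(request)  # predictable, cheap, repeatable
    # Only ambiguous requests reach the generative agent, where flexibility
    # and reasoning justify the probabilistic behaviour and extra cost.
    return call_generative_agent(intent, request)

print(route_request("balance_inquiry", {"account_id": "A-1"}))
print(route_request("summarise my last three complaints", {"customer_id": "C-9"}))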
As AI systems become more autonomous, governance and security are increasingly important. What security risks do enterprises often underestimate when deploying AI in production?
Security becomes more complex with agent-based and autonomous AI systems. In traditional software, execution paths are largely predictable, and behaviour tested in development environments is expected to remain consistent in production. With agentic systems, however, there can be an exponential number of possible execution paths, many of which are determined dynamically at runtime.
In some cases, systems are effectively generating workflows or decision paths on the fly. This raises questions around how to ensure those paths are secure and how to prevent vulnerabilities from being introduced dynamically. As a result, security cannot be handled through isolated checks. It has to be built systematically into the architecture, with strong guardrails and continuous validation.
Extensive automated testing is required across environments, and systems need to learn from vulnerabilities identified during testing. Autonomous agents can also be used to strengthen security by continuously improving safeguards as new risks emerge, making security an ongoing, adaptive process rather than a one-time effort.
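One way to implement such guardrails, sketched here purely for illustration with invented action names and policies, is to validate every dynamically generated agent step against an allow-list before it executes and to log violations so later runs can learn from them.

ALLOWED_ACTIONS = {"read_record", "summarise", "draft_email"}
BLOCKED_TARGETS = {"payroll_db", "prod_credentials"}

violation_log: list[dict] = []

def guarded_execute(action: dict) -> dict:
    """Run an agent-proposed step only if it passes policy checks."""
    if action["name"] not in ALLOWED_ACTIONS:
        violation_log.append({"reason": "action not allow-listed", **action})
        return {"status": "blocked"}
    if action.get("target") in BLOCKED_TARGETS:
        violation_log.append({"reason": "restricted target", **action})
        return {"status": "blocked"}
    # In a real system this would dispatch to the actual tool; here it is a stub.
    return {"status": "ok", "executed": action["name"]}

print(guarded_execute({"name": "summarise", "target": "case_notes"}))
print(guarded_execute({"name": "export_data", "target": "payroll_db"}))
print(violation_log)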
Financial services and healthcare have seen high levels of enterprise AI adoption. What do you see as the next frontier for AI in enterprises?
These sectors have already seen meaningful benefits, and software development itself is undergoing significant change. Development cycles that once took months or years can now be completed in weeks. However, accuracy remains a limiting factor. Enterprises still require very high accuracy levels, which means human oversight is often necessary.
Over time, improvements in models, guardrails, and validation mechanisms should reduce the need for manual intervention. Another important area is the software development lifecycle. AI has already made progress in generating systems from specifications and requirements, and this capability has improved noticeably in recent months. This trend is likely to continue, leading to greater automation and faster delivery across enterprise workflows.
While AI is your primary focus, are there other enterprise technologies your company is looking to invest in over the next three to five years?
AI remains the core focus, but enterprise AI depends on several foundational technologies. These include modern data architectures such as lakehouses, evolving data standards, security infrastructure, and workflow systems. Many use cases also involve multimodal data, particularly in areas such as healthcare, where imaging, video, and audio are combined.
In some domains, additional technologies, including blockchain, may also play a role. While AI is central, real-world enterprise problems often require the integration of multiple technologies. Supporting and integrating these technologies alongside AI is an important part of delivering practical enterprise solutions.

