GlobalLogic CTO Sanjeev Azad on what really stops enterprises from moving AI into production

As enterprises in India move beyond early AI experiments, many struggle to turn pilots into systems that support core operations. In a conversation with TechCircle, Sanjeev Azad, Global Chief Innovator and CTO for APAC at GlobalLogic, explains why data readiness, process clarity, and organizational alignment matter more than tools. He also discusses how agentic AI is changing engineering roles, why governance remains a weak link, and where companies misunderstand the real limits of AI adoption.

Edited Excerpts: 

Many enterprises in India are still limited to AI pilots and experiments. What structural barriers most often prevent these efforts from scaling into mission-critical systems?

Data is the main hurdle. AI cannot function without it, and when the underlying data is poor, systems produce unreliable results. The principle is simple: if the input is poor, the output will be poor.

After the rise of generative AI, many organizations try to adopt it immediately without examining their business processes. They often overlook what work is manual, what is already automated through tools like RPA, what data is being collected, and how systems are integrated. The lack of structured data, along with poorly managed unstructured data, creates problems. At the enterprise level, no system operates in isolation. Multiple systems depend on each other, and weak integration limits what AI can do.

Another issue is fragmentation. Different departments often use similar tools independently, creating parallel systems and inconsistent data. While leadership may mandate AI adoption, teams on the ground are disconnected. Departments are siloed, people have different interpretations of the goal, and data is captured in inconsistent ways. In that environment, AI tends to fail.

As a result, adoption is slow, not because the technology is insufficient, but because data, processes, and organizational alignment are not in place.

When enterprises talk about “meaningful AI,” where are they typically unclear? Based on your experience working with many organizations, what gaps do you most often see in their strategy, data readiness, architecture, or governance?

Many enterprises move straight to AI and generative AI because the technology appears easy and intuitive. This creates the impression that it can be adopted quickly and broadly.

From a business perspective, however, executives are focused on keeping core operations running. These processes generate revenue, and any disruption carries risk. Problems arise when organizations adopt new technology without clearly defining the business problem they are trying to solve.

Not every problem requires generative AI. Some can be addressed with rule-based systems to improve efficiency, others with traditional automation, and some with machine learning to improve productivity. The first step is to identify the business problem, the people involved in the process, how work is handed off, and who the intended users are, whether customers or internal teams.

This requires a design-led approach. Organizations need to understand their existing processes, the data flowing through them, and the business rules that govern decisions. Once this is clear, they can determine which technology is appropriate. Predictable and deterministic processes may only need rules. Other cases may benefit from machine learning or intent-based systems.

Technology should serve the business goal, not drive it. Many enterprises get stuck because new tools appear attractive. Proofs of concept often succeed, but issues emerge in production, including limited context, inconsistent outputs, and other constraints.

As more information is generated by AI, it becomes harder for people to interpret and manage it at scale. The core challenge is not the technology itself, but the readiness of people and processes. This gap continues to slow or block the effective adoption of AI.

When we talk about agentic AI introducing autonomy into enterprise systems, what changes from an engineering perspective when AI shifts from an assistive role to an agentic one?

From a software engineering perspective, the approach is largely deterministic. Engineering work follows defined phases across design, development, maintenance, and support. Some roles are already becoming autonomous. Code review is a clear example: the inputs are known, including the generated code, applicable rules, and quality gates. When this information is structured, an autonomous system can validate the code by checking it against those rules. This enables service models such as testing, code review, or backlog management as a service, where requirements are predefined and execution is straightforward.
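As a purely illustrative sketch (not a description of GlobalLogic's actual tooling), a rule-based check of generated code against predefined quality gates might look roughly like the following Python; the specific gates and thresholds are hypothetical:

    # Minimal sketch: check generated code against predefined rules and quality gates.
    # The gates and thresholds below are hypothetical examples.
    import ast

    QUALITY_GATES = {
        "max_function_length": 50,   # maximum lines per function
        "require_docstrings": True,  # every function must carry a docstring
    }

    def review_code(source: str) -> list[str]:
        """Return a list of findings; an empty list means the code passes the gates."""
        findings = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.FunctionDef):
                length = node.end_lineno - node.lineno + 1
                if length > QUALITY_GATES["max_function_length"]:
                    findings.append(f"{node.name}: exceeds {QUALITY_GATES['max_function_length']} lines")
                if QUALITY_GATES["require_docstrings"] and ast.get_docstring(node) is None:
                    findings.append(f"{node.name}: missing docstring")
        return findings

    print(review_code("def add(a, b):\n    return a + b\n"))

In a real pipeline, checks of this kind would sit behind the pull request, feeding findings back to the author or blocking the merge when a gate fails.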

From a business perspective, autonomy exists at different levels. The first step is assessing the complexity of the use case and the data. In simple scenarios such as customer support, systems can analyze historical interactions, identify intent, and resolve routine queries. These systems are already in use. When they cannot interpret a request or lack sufficient information, the case is escalated to a human.

Enterprises typically begin with augmentation, where AI supports people rather than replacing them. Most work remains human-led, with automation handling a smaller share, delivering quick returns in high-volume tasks. The next level involves decision support, such as loan approvals, where systems analyze structured data and intent, apply rules, and assist in making decisions under human oversight.

A further level involves multiple agents working together under an orchestrator to achieve a defined goal, such as research, profiling, or portfolio management. Fully autonomous systems with no human involvement remain largely experimental.
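A minimal sketch of that orchestrator pattern, with hypothetical agent names and a deliberately simple sequential hand-off, could look like this:

    # Minimal sketch: an orchestrator delegating sub-tasks to specialised agents.
    # Agent names and the sequential plan are hypothetical illustrations.
    from typing import Callable

    def research_agent(goal: str) -> str:
        return f"notes on {goal}"

    def profiling_agent(notes: str) -> str:
        return f"profile built from {notes}"

    def reporting_agent(profile: str) -> str:
        return f"draft report: {profile}"

    PIPELINE: list[Callable[[str], str]] = [research_agent, profiling_agent, reporting_agent]

    def orchestrate(goal: str) -> str:
        """Pass the goal through each agent in turn; a human reviews the final output."""
        result = goal
        for agent in PIPELINE:
            result = agent(result)
        return result

    print(orchestrate("mid-cap portfolio risk"))

In practice the orchestrator would also decide when to escalate to a person, which is where the shift toward approval and direction described below comes in.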

Most organizations today operate in the early to middle stages of this progression, depending on their data maturity and governance. As autonomy increases, human roles shift from execution to approval and direction.

The broader impact is a reduction in time spent on repetitive work. Routine tasks are likely to decline, while demand grows for roles focused on judgment, analysis, and problem-solving. The challenge for enterprises is managing this transition rather than preventing it.

What safeguards are needed to ensure agentic systems do not widen existing technical gaps or behave unpredictably in production?

AI governance frameworks already exist. The main risk lies in how data is shared, controlled, and governed. Once data is provided to AI systems, it can create serious exposure if safeguards are not in place.

There are two primary ways to use AI. One is direct access to large language models such as GPT-4 or GPT-5. In this approach, users consume outputs without training the model or feeding data back into it. This pull-based interaction limits data exposure and is generally safer.

The second approach is using AI through interfaces such as ChatGPT or Gemini. These tools sit on top of language models and deliver more refined responses by interpreting user intent and maintaining context. However, they also collect and retain interaction data. As these systems learn from users in real time, the risk of leaking sensitive or proprietary information increases.

For enterprises, the priority is understanding which tools and models are in use and what data they collect. Interface-based tools add a personalization layer that stores interactions, even when providers state that data is not used for training. This creates transparency and compliance challenges similar to accepting mobile app terms and conditions.

Another risk comes from internal data management. Enterprise knowledge is often poorly structured and broadly accessible. When AI applications are built on top of this data without proper access controls, entire datasets can be exposed instead of limiting information on a need-to-know basis.
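One common guardrail for that risk, sketched here with hypothetical roles and documents, is to filter enterprise content by the requesting user's entitlements before anything reaches the model:

    # Minimal sketch: enforce need-to-know access before data reaches an AI application.
    # The roles, documents, and permission model are hypothetical.
    DOCUMENTS = [
        {"id": "d1", "text": "quarterly revenue detail", "allowed_roles": {"finance"}},
        {"id": "d2", "text": "public product FAQ", "allowed_roles": {"finance", "support", "sales"}},
    ]

    def retrieve_for_user(query: str, user_role: str) -> list[str]:
        """Return only documents the user's role is entitled to see."""
        return [
            d["text"] for d in DOCUMENTS
            if user_role in d["allowed_roles"] and query.lower() in d["text"].lower()
        ]

    print(retrieve_for_user("revenue", "support"))  # [] - nothing exposed
    print(retrieve_for_user("revenue", "finance"))  # ['quarterly revenue detail']

The same principle applies whether the retrieval layer is a simple filter like this or a full document store with per-record access controls.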

Effective AI governance requires clear guardrails around data access, model interaction, content safety, and audit trails. National and industry-specific frameworks already exist, but organizations must still define their own controls based on business use cases. AI governance is not a single rule set but a framework that must be applied consistently across systems and data.

Which emerging technology or approach do you think industry leaders are still underestimating?

This is not underestimated; it has been overestimated. That overestimation is now fading because people are starting to experiment directly. After only a short time with tools like Gemini or ChatGPT, it is possible to build something basic or even publish content.

That leads to three factors. The first is creativity. People need to know how to ask the right questions. In most enterprises, roughly 20 to 25 percent of people understand how to use these tools well and get consistent results. Another 60 to 70 percent can perform well if they are guided and trained. A small percentage will not adapt and will likely drop out. The larger group can operate creatively within their domain, but only the smaller group understands what question to ask, what outcome they want, and how to measure value.

The second factor is patience. When using systems like ChatGPT or Gemini, the first response may not be useful. The process often requires repeated iterations. Without patience, users become frustrated and stop. The user needs to stay focused on the goal instead of being redirected by the tool. Whether it takes one attempt or many does not matter, as long as the objective remains clear.

The third factor is skill. Domain expertise is required to frame questions properly and to evaluate the output. Tasks can be delegated to AI, but validation cannot. The system provides information; the judgment of whether it is useful or correct remains a human responsibility. Creativity and skill are human-driven, while patience sustains the process.

From this perspective, the issue is less about underestimation and more about uneven awareness. In many organizations, supporting departments such as IT are not fully up to date on emerging tools. As a result, approvals are delayed and adoption slows. This creates friction even when business leaders want to move faster.

Enterprises are not investing enough in educating and elevating these teams. The focus tends to remain at the executive level, while operational groups lack the expertise needed to guide adoption. This gap is where the technology is undervalued.
