AI agents can take on human roles if enterprises unlock their untapped knowledge, says EY exec

AI is moving beyond automation to reshape how work gets done inside enterprises. Many businesses still struggle with misconceptions about the technology and how to integrate it effectively. In a conversation with TechCircle, Rahul Bhattacharya, Consulting AI Leader at EY GDS, discusses these challenges and outlines how enterprises can move beyond data issues and narrow views of LLMs to unlock real value.  

He explains how AI is shifting from a tool that supports tasks to one that can redesign workflows through agent-based systems, requiring new roles like flow engineers and knowledge harvesters.  

Edited Excerpts:  

What’s the biggest misconception businesses still have about AI, and how does it affect their adoption strategy? 

One common issue is the assumption that AI efforts must begin with solving all data problems. The thinking goes: "We can't do anything with AI until our data is fully cleaned and integrated." This often leads to multi-year data transformation programs. Meanwhile, no real progress is made on AI, and no business value is delivered. This belief continues to hold many organizations back. 

The second issue is more recent and concerns misconceptions about large language models (LLMs), what they can do, what they can’t, and the risks they pose. Public narratives, amplified by media and social platforms, often distort the actual capabilities and limitations of these tools. Opinions are widely shared, but many are based on second-hand information or misunderstandings. This creates an echo chamber that exaggerates both the promise and the danger of LLMs. As a result, many enterprises hesitate to engage with the technology, delaying the potential value it could unlock. 

What does it mean for a global firm like EY to shift from using AI to designing with it? And how are agent-based systems changing how you deliver services? 

As a consulting and tech company, we focus on solving business problems, not just technical ones. We start by understanding the problem, then identify the right technology to address it. When tasks involve repetitive work or low-value human input, we look at how AI can help improve efficiency. 

We also use AI internally to improve how we work, making tasks faster, cheaper, or enabling things we couldn’t do before. Sometimes, it means rethinking entire processes. 

We think of this in three levels. At the base, we improve existing tasks—like helping developers write, review, and test code with AI tools. In the middle, we add new capabilities around current services to offer more value. At the top, we explore ways to fully automate or redesign how services are delivered, like AI agents that take on human-like roles and interact in workflows. 

This framework helps us apply AI where it makes the most impact, from small improvements to full-scale transformation. 

AI agents can now reason, make decisions, and coordinate. How are teams at EY working with them, and how is this changing trust, roles, and responsibilities? 

AI agents are now able to reason, make decisions, and work together. Technology is evolving quickly, so we’re experimenting constantly, testing ideas, incorporating feedback, and adapting as new capabilities emerge. 

One approach involves breaking down work into clear, structured flows. For instance, in a data engineering project where data must be mapped from a source system to a target system, the process usually starts with someone creating a source-to-target mapping file, which a developer then uses to write code. We ask: Can an agent do this? Can multiple agents handle different steps—understanding intent, generating the code, testing it, documenting it—and work together in sequence? 

To make this possible, each agent is assigned a persona with a defined role. An orchestrator agent manages the workflow by selecting the right agents for each task. The system relies on three building blocks: tasks, orchestration, and knowledge. By improving these independently, agents can be dynamically assembled without hardcoding the entire process. 
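To make the pattern concrete, the sketch below shows one hypothetical way such a system could be wired together in Python: persona agents registered by skill, an orchestrator that selects the right agent per task, and a shared knowledge store. The class names and the source-to-target example are illustrative assumptions, not a description of EY's actual implementation.

from dataclasses import dataclass, field


@dataclass
class Agent:
    persona: str          # defined role, e.g. "mapping analyst"
    skills: set           # task types this agent can handle

    def run(self, task: str, knowledge: dict) -> str:
        # In a real system this would call an LLM with the persona as a system prompt.
        return f"[{self.persona}] completed '{task}' using {list(knowledge)}"


@dataclass
class Orchestrator:
    agents: list
    knowledge_base: dict = field(default_factory=dict)   # the third building block

    def execute(self, tasks: list) -> list:
        results = []
        for task in tasks:
            # Select an agent dynamically instead of hardcoding the whole flow.
            agent = next(a for a in self.agents if task in a.skills)
            results.append(agent.run(task, self.knowledge_base))
        return results


if __name__ == "__main__":
    team = [
        Agent("mapping analyst", {"interpret source-to-target mapping"}),
        Agent("developer", {"generate pipeline code"}),
        Agent("tester", {"test pipeline code"}),
        Agent("technical writer", {"document pipeline"}),
    ]
    flow = Orchestrator(team, {"mapping_file": "source_to_target.xlsx"})
    for line in flow.execute([
        "interpret source-to-target mapping",
        "generate pipeline code",
        "test pipeline code",
        "document pipeline",
    ]):
        print(line)

Because the flow is assembled from tasks, orchestration, and knowledge rather than hard-coded, swapping in a different tester agent or adding a documentation step does not require rewriting the orchestrator.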

This approach depends on people called flow engineers, who design these workflows and decide how much autonomy to give agents and what tools they should use. But this also becomes a bottleneck—the number and complexity of workflows depend on the availability and skill of these engineers. 

Looking ahead, the goal is to automate this setup. This is early-stage research, but it’s a direction we and others are exploring, toward systems where agents understand the work, find the knowledge they need, train new agents, and create workflows on their own. 

As agent autonomy increases, so does risk. Full agency means agents can act independently, but this comes with the possibility of mistakes. To manage this, we must assess both the probability of error and the impact if something goes wrong. Based on that, we either limit autonomy, add override controls, observe behaviors for improvement, or log everything for later review. 

This risk management model already exists in other fields. In commercial aviation, most systems are autonomous, yet we still put two pilots in the cockpit. In autonomous vehicles, early versions had a human onboard. Now, control rooms monitor fleets remotely. 

The level of oversight, whether real-time control, observation for improvement, or logging for audit, depends on how likely an error is and how damaging it could be. Regulations in high-stakes industries already define the required controls.
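As a rough illustration of that calculus, the hypothetical sketch below maps an estimated probability of error and potential impact to an oversight level. The thresholds and control names are assumptions made for the example, not a prescribed policy.

def oversight_level(error_probability: float, impact: float) -> str:
    """Map an estimated error probability (0-1) and impact (0-1) to a control."""
    risk = error_probability * impact
    if risk > 0.5:
        return "limit autonomy: a human approves every action"
    if risk > 0.2:
        return "real-time override: a human can intervene mid-flow"
    if risk > 0.05:
        return "observe behaviour and feed findings back into the agent"
    return "log all actions for later audit"


print(oversight_level(0.60, 0.90))  # likely and damaging -> approval required
print(oversight_level(0.10, 0.20))  # rare and low-stakes -> audit logging only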

Flow engineers and knowledge harvesters aren’t just new job titles—they represent a new mindset for AI. What’s the biggest shift enterprises need to make to embrace these roles? 

AI agents can now reason, make decisions, and coordinate tasks. For them to operate effectively in an enterprise, they need access to the knowledge humans use to do their jobs. Much of this knowledge is undocumented and held in employees’ heads, especially the kind learned on the job. While general and academic knowledge is widely available and already embedded in language models, company-specific, proprietary knowledge is not.

To enable agents to perform as well as humans, this tacit knowledge needs to be made explicit. That requires identifying what work needs to be done, what knowledge is required, where it resides, and how to extract and structure it. This may involve interviews, documentation, or observations. 

In practice, roles will emerge around this: people responsible for capturing, structuring, and maintaining the knowledge that powers AI agents. 

As companies adopt these agents, they’ll start by improving many small tasks at the bottom of the productivity pyramid. But the real opportunity comes from rethinking workflows entirely. Instead of a single human doing a series of related tasks, like a developer writing, testing, and documenting code, those tasks can be broken up and assigned to specialized agents. These agents can then be composed into new workflows, potentially unlocking new capabilities.

Historically, enterprise software hard-coded both the user interface and the control and data flows. That’s changing. Interfaces can now be conversational, removing the complexity of menu-driven systems. And workflows can be dynamically constructed, so changes don’t require rewriting everything. This flexibility makes it possible to reconfigure systems on the fly, swapping, adding, or modifying steps without disrupting the whole process. 
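A minimal sketch of what such dynamically constructed workflows could look like, assuming a simple step registry in Python; the step names and data are illustrative, not taken from any actual EY system.

from typing import Callable, Dict, List

# Each step transforms a shared context dict; the registry is the reusable catalogue.
STEP_REGISTRY: Dict[str, Callable[[dict], dict]] = {
    "extract":   lambda ctx: {**ctx, "records": ["r1", "r2"]},
    "transform": lambda ctx: {**ctx, "records": [r.upper() for r in ctx["records"]]},
    "validate":  lambda ctx: {**ctx, "valid": all(ctx["records"])},
    "load":      lambda ctx: {**ctx, "loaded": len(ctx["records"])},
}

def run_workflow(step_names: List[str], ctx: dict = None) -> dict:
    ctx = dict(ctx or {})
    for name in step_names:
        ctx = STEP_REGISTRY[name](ctx)   # the flow is data, not hard-coded control flow
    return ctx

# Reconfigure on the fly: insert a validation step without touching the other steps.
print(run_workflow(["extract", "transform", "load"]))
print(run_workflow(["extract", "validate", "transform", "load"]))

Because the flow is just a list of step names, steps can be swapped, added, or reordered without rewriting the rest of the process.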

Enabling this shift requires a new kind of role: people who understand and can redesign flows, what you might call flow engineers, even if they aren’t formally labeled as such. They’ll define which parts of a process must be fixed and which can be flexible, building systems that are both adaptable and robust. 
