
Static systems are falling behind as enterprises shift to Agentic AI for flexibility: Sean Stauth, Qlik


Enterprise Artificial Intelligence (AI) is entering a new phase, driven by growing demand for direct, dynamic access to data and decision-making tools. As organisations look beyond dashboards and traditional automation, agentic AI systems are gaining traction, especially in functions like sales, support, and product development.
Qlik is a data integration and analytics company working on AI systems that combine foundational models with task-specific agents. The company’s recent focus has been on issues like data trust, governance, and scalable architecture as enterprises begin adopting more advanced AI capabilities.
In a conversation with TechCircle, Sean Stauth, Global Director, AI & ML at Qlik, shares insights on AI maturity in enterprises, the rise of agentic systems, adoption trends across business functions, and the challenges of building trustworthy, aligned AI at scale. Edited Excerpts:
What major changes have you seen in enterprise AI maturity over the past 12 to 18 months, and what do you think is driving them?

About 18 months ago, shortly after ChatGPT gained widespread attention, it became clear that natural language could be a powerful interface for data systems. For many executives, this was the first time they could directly experience the value of AI.
They quickly realised that AI could drive major gains in productivity, efficiency, and creativity. This led to a wave of activity: companies formed AI councils, made strategic investments, and began exploring how to move forward with AI.
Since then, organisations across industries have started building initiatives around generative AI. The concept of agentic AI only began to take shape about a year ago, once language models became powerful enough to support it.

Despite the momentum, many companies still struggle to build AI applications that are secure, well-governed, and trusted. At Qlik, we call this the authenticity crisis: systems are in place, but the foundations are weak. Issues like data trust, security, privacy, and now AI sovereignty (control over AI systems) are top of mind.
One major shift is the growing technical expertise within companies. Today, we’re seeing more technical leaders involved in AI decisions. Many of them come from IT or engineering, and they're deeply familiar with the latest AI technologies. The gap between business and technology stakeholders is narrowing.
Many companies struggle to distinguish between traditional automation and agentic systems. What's the key difference, and why is it important?
We’re seeing a few distinct categories of enterprise software emerge. The first is traditional deterministic systems: logic-based software where the same inputs always produce the same outputs. These include systems like those used in automotive applications, and they remain a core part of enterprise software.

Agentic systems are introducing two new categories. The first is deterministic systems that are designed or augmented by AI. Examples include business process workflows and data pipelines, where AI helps design the system, but execution still follows a fixed, logical path.
The second category is fully autonomous or agentic systems. Here, foundation models make decisions and determine actions within a process. These systems are no longer deterministic; they operate probabilistically, driven by the AI at their core.
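A minimal sketch of that distinction, where the call_llm function is a hypothetical stand-in for any foundation-model API rather than a specific product:

```python
# Contrast a deterministic routing rule with an agentic one.
# call_llm is a hypothetical placeholder for a foundation-model call.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; a real system would invoke a foundation model here."""
    return "escalate"  # placeholder response for illustration

def deterministic_route(ticket: dict) -> str:
    # Pure if/else logic: the same input always yields the same output.
    if ticket["priority"] == "high":
        return "escalate"
    return "queue"

def agentic_route(ticket: dict) -> str:
    # The model reads the ticket and chooses the action; the output is
    # probabilistic and can vary between runs for the same input.
    prompt = f"Reply with 'escalate' or 'queue' for this support ticket: {ticket}"
    return call_llm(prompt)

ticket = {"priority": "low", "text": "Production dashboard is down for all users"}
print(deterministic_route(ticket))  # always 'queue'
print(agentic_route(ticket))        # the model may decide 'escalate' despite the low priority flag
```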
Which business functions are most open, and which are most resistant, to adopting AI agents and semi-automation in your client interactions?
There are two key dimensions to consider: business lines and the competitive landscape. First, in terms of business lines, some areas carry less risk when adopting agentic systems. One example is internal sales. Agentic tools can help sales teams identify the next best action, improve close rates, reduce the cost of sale, and enhance customer interactions. This is something we already do at Qlik. Many customers are exploring similar tools for their own sales teams because the risk is lower: there is still a human in the loop.

Customer support is another area. Agentic systems can help support agents access relevant information quickly, improving both their responses and the quality of their work. These are both examples where the adoption barrier is lower and the risk is manageable.
Second, in areas with high competitive pressure, companies are also investing in these systems despite the higher risk. Product development is a clear case. Organisations are using AI to improve how they design, price, and launch products, and to align supply with demand. The competitive need to adopt these tools is strong because they can create advantages across the entire product lifecycle.
This is where dynamic agentic systems, which adapt in real time, stand out. Unlike static systems, they can support complex tasks across design, pricing, marketing, and feedback loops, improving both speed and coordination.
Your company also talks about compound AI architectures. How is that different from using a single LLM for everything, as many organisations try to do?

Large investments in foundation models have made large language models (LLMs) more capable, cost-effective, and versatile. As a result, there are now more options for how these models can be used.
Future system architectures are likely to include multiple foundation models, sometimes both large and small language models, each tuned for specific tasks. These models can be coordinated through orchestration layers and can also interact with external data systems.
In short, models can be fine-tuned and purpose-built for particular tasks, and combined into systems where they work together. At Qlik, this approach is part of how we're designing our platform.
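An illustrative sketch of a compound setup along these lines, where a routing layer coordinates a small task-specific model, a task agent, and a larger general model; the model names, helpers, and routing rule are hypothetical placeholders, not a description of Qlik's platform:

```python
# Sketch of a compound AI architecture: an orchestration layer routes each
# request to the model or agent best suited to the task.

def small_model_classify(question: str) -> str:
    """Hypothetical small language model tuned for intent classification."""
    return "sql_lookup" if "how many" in question.lower() else "general"

def sql_agent(question: str) -> str:
    """Hypothetical task-specific agent that queries a governed data store."""
    return "SELECT COUNT(*) FROM orders WHERE region = 'EMEA';"

def large_model_answer(question: str) -> str:
    """Hypothetical large language model used for open-ended reasoning."""
    return f"[LLM answer to: {question}]"

def orchestrate(question: str) -> str:
    # The orchestration layer coordinates models and external data systems.
    intent = small_model_classify(question)
    if intent == "sql_lookup":
        return sql_agent(question)
    return large_model_answer(question)

print(orchestrate("How many orders did we close in EMEA last quarter?"))
print(orchestrate("Summarise the risks in our current product roadmap."))
```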
As AI agents become more independent, what risks concern you most, like misaligned outcomes or ethical blind spots? And how is your company working to ensure safe design and use?

There are two major risks to keep in mind. First, many organisations fail to align AI initiatives with strategic business goals. For example, if you're building an agentic AI system in manufacturing, the goal is usually to improve yield and reduce supply chain risk. It's not enough to build and deploy the system; you also need to measure its impact on key outcomes like yield. A common gap is failing to take this final step.
Second, there's often a lack of data visibility and model explainability. These systems sometimes rely on uncurated or unverified data, which can lead to unreliable outputs. Instead, it's important to build on trusted, curated data assets and ensure the models are explainable. You need to understand where the data comes from and how the models reach their conclusions.
Do you think every enterprise will eventually have AI agents that understand the business like domain experts, or will they stay focused on specific tasks?
Organisations will likely use multiple AI agents, each designed for a specific task. For example, there could be agents for finance, HR, customer service, and sales support. These agents will work together toward a shared organisational goal.
As a result, many tasks currently handled by people may shift to agents. However, managing the complexity of these interconnected agents will require significant effort. These agents will also interact with external or third-party systems, which may be general-purpose or Application Programming Interface (API)-based.
In the future, organisations will operate as networks of specialised agents collaborating to achieve core functions.
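A toy sketch of that kind of collaboration, in which one specialised agent consults another before acting; the agent names and their handle() logic are hypothetical, and a real deployment would back each agent with its own model, tools, and external APIs:

```python
# Specialised agents collaborating on a single request.

class FinanceAgent:
    def handle(self, request: str) -> str:
        # Placeholder answer; a real agent would query governed finance data.
        return "Q3 discount budget remaining: 12%"

class SalesAgent:
    def __init__(self, peers: dict):
        self.peers = peers  # other agents this agent can consult

    def handle(self, request: str) -> str:
        # Consult the finance agent before recommending a next best action.
        budget = self.peers["finance"].handle("discount budget")
        return f"Next best action: offer renewal with a discount within limit ({budget})"

agents = {"finance": FinanceAgent()}
agents["sales"] = SalesAgent(peers=agents)
print(agents["sales"].handle("Recommend next step for account ACME"))
```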
What's the hardest challenge in building scalable and trustworthy enterprise AI: data integration, governance, or model performance?
First, aligning agent development with strategic goals and making sure the outcomes are measured against those goals. Second, building internal skills to manage and support these systems, with a focus on modularity and scalability. Third, ensuring trust and control, meeting regulations and maintaining full visibility across the system. Fourth, meeting enterprise requirements like security, governance, and access control.
If I had to choose the hardest part, it’s likely the technical complexity, given how fast the landscape is evolving.
What’s the next big challenge or opportunity in enterprise AI that you're excited to tackle?
The ability to access structured data using generative and agentic AI could fundamentally change how executives interact with information systems. Traditional dashboards and business intelligence tools may become less relevant.
We often hear from organisations that executives request new tasks or information daily, making it hard to build fixed systems around their needs. Business demands shift constantly.
With agentic AI layered over structured data, executives can directly access the information they need, when they need it, without relying on pre-built dashboards.
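A minimal sketch of that pattern, assuming a hypothetical translate_to_sql step that a foundation model would perform; the table and query here are invented for illustration:

```python
# Natural-language access to structured data, bypassing a pre-built dashboard.
import sqlite3

def translate_to_sql(question: str) -> str:
    """Hypothetical LLM-backed translation of an executive's question into SQL."""
    return "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY 2 DESC;"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 120.0), ("APAC", 95.5), ("Americas", 140.25)])

question = "Which regions drove the most revenue this quarter?"
for row in conn.execute(translate_to_sql(question)):
    print(row)  # the answer comes straight from governed data, no dashboard needed
```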