Energy will be a big player in how AI momentum unfolds: NetApp exec

As enterprises shift from a “cloud-first” to what some describe as an “AI-first” infrastructure era, many are discovering that technology adoption alone does not guarantee outcomes. Organisations across industries, including in India, are launching AI pilots but often struggling to move them into production. 

In a conversation with TechCircle, Dhruv Dhumatkar, CTO & Head of Solutions Engineering at NetApp, discusses why AI projects stall, how data architecture is evolving, and what role infrastructure providers play in an AI-driven enterprise landscape.

Edited Excerpts: 

Enterprises moved from “cloud-first” to what many now call an “AI-first” era. What is fundamentally changing in how data architectures are designed?

I think there are real similarities between the cloud-first phase and what we are now seeing with AI. When cloud became the dominant direction, many organisations felt they had to do something in the cloud simply because that was where the industry was moving. Today, I see the same pattern with AI. Every industry and every company is saying they need to do something in AI.

The issue, as in some cloud-first mandates, is that the use case and business value are not always defined. If you start there—without clarity on what problem you are solving or what result you want—you end up with projects that begin with momentum but don’t sustain it. That is why, in the AI context, we are seeing projects that get started but are unsuccessful, or get started and then are abandoned.

So from my perspective, the biggest inhibitor is not interest, because interest is very high. The inhibitor is definition. You need to define what the use case is going to be. You need to define what the value is going to be. Is it going to make the company more competitive? Is it going to improve workforce productivity? Is it going to improve a specific business process? Once that is clearly established, then AI adoption starts to have a more concrete path to value.

On data architectures specifically, some of the themes are not new, but they are becoming more urgent. We have been talking for years about unifying information and understanding what information an enterprise actually has. That has always mattered for security and for analytics. What has changed is the importance of data quality in the AI context. AI depends heavily on clean information and on being able to use that information properly to build and run models that can produce useful insights. So the old data management conversations now carry new weight.

Many enterprises in India are running AI pilots but not taking them into production. Why is that happening?

If I group the themes around what people might call failure—and I agree that “failure” can sound harsh, but I think the point is understood—the core issue is that organisations are often very clear that they want to do AI, but not clear on what they want AI to do for them.

That creates a problem immediately because if the intended outcome is not defined, then the success criteria are also not defined. In our own context at NetApp, for example, we use AI tools with a specific objective: accelerating employee productivity. If I know the objective is productivity, then I can start measuring impact against that objective. But if I just say, “We are doing AI,” without defining what benefit it is meant to produce, then it becomes very hard to claim success. The metrics are weak because the business benefit was never properly defined at the start.

The second major issue is the information being used in the pilot. If the data is not clean and not tagged correctly, the model output is usually not very valuable. You end up with models that don’t help you gain insights in a reliable way. This is not limited to one sector. It can happen in financial services, automotive, or other enterprise environments.

A lot of people underestimate how much effort goes into data preparation. In many AI pilots, preparing and tagging the information is typically the majority of the work. I would say it is often around 80% of the time spent in the pilot. So if that work is incomplete or not done well, the project may still look active on paper, but the outcomes won’t be strong enough to justify production rollout.
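The point about preparation dominating pilot effort can be made concrete with a minimal sketch of the kind of cleaning and tagging work described above: deduplicating, normalising, and labelling records before any model sees them. The function and rule names here are illustrative assumptions, not any NetApp or vendor API.

```python
# Minimal sketch of the data-preparation step that dominates many AI pilots:
# records are normalised, deduplicated, and tagged before reaching a model.
# All names are hypothetical, for illustration only.

def clean_and_tag(records, tag_rules):
    """Normalise raw records and attach tags via simple keyword rules."""
    seen = set()
    prepared = []
    for rec in records:
        # Collapse whitespace and lowercase so duplicates are detectable.
        text = " ".join(rec.get("text", "").split()).strip().lower()
        if not text or text in seen:   # drop empty rows and duplicates
            continue
        seen.add(text)
        tags = sorted({tag for tag, kw in tag_rules.items() if kw in text})
        prepared.append({"text": text, "tags": tags})
    return prepared

raw = [
    {"text": "Invoice  #123 overdue "},
    {"text": "invoice #123 overdue"},        # duplicate after normalisation
    {"text": ""},                            # empty row, discarded
    {"text": "GPU cluster utilisation report"},
]
rules = {"finance": "invoice", "infrastructure": "gpu"}
prepared = clean_and_tag(raw, rules)
print(prepared)
```

Even in this toy form, most of the code is cleanup rather than anything model-related, which is the 80% point in miniature.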

NetApp has historically been viewed as a storage company. In the AI era, are you still infrastructure plumbing, or are you becoming part of the AI control plane?

That is a fair question, because for a long time, the market perception has been that we are a storage company. I think that perception exists for a reason, given where the company has historically been positioned.

What we are trying to do now is move up the stack in a way that is more focused on understanding the information, not just storing it. When you look at the platform direction we are presenting, the intent is to expand from foundational infrastructure into the part of the stack where you can interpret and manage the data more effectively.

I do not think the foundational infrastructure layer itself is seeing dramatic reinvention in every part. There are changes, including disaggregated approaches, but I would separate that from the broader shift we are discussing. The bigger move, in my view, is into the data plane—what we refer to as the data engine—where the goal is to understand what information you have and what it can do for you.

That matters because enterprise data is not sitting in one place. It may be on-premises, in public cloud, in private cloud, or in sovereign cloud environments (cloud environments designed to meet local data control and compliance requirements). If your AI strategy depends on understanding and preparing data across all of those environments, then the role expands beyond storage in the narrow sense.

So I would say the evolution is toward helping enterprises operate on their information across environments, not just house that information.

You emphasise a single integrated platform across on-premises, private cloud, and hyperscalers. How real is deep architectural unification for enterprises?

In the enterprise context, I would point to one thing we have done over the last decade that has been especially important: enabling hyperscalers—Amazon, Microsoft, and Google—to adopt our technology in their offerings. In market terms it is their service offering, but under that layer it is our technology.

The practical benefit to enterprises is that this helps abstract a lot of the differences they would otherwise have to deal with directly, especially the different programming layers that applications have to work through. That abstraction matters because it gives customers a more consistent management model and a more consistent way of viewing and handling data, whether that data is in cloud environments or on-premises.

So when people ask whether the unification is “real,” I think the answer is that the value for enterprises comes from consistency in management style and data handling across environments. Enterprises do not want to rebuild operating practices every time they move a workload or a data set between environments.

I also think this becomes more important as AI adoption increases. We are still early in many respects, and I am not claiming every part of the industry path is fully clear. There are constraints that will affect how quickly different organisations move. For some, energy will be a limiter. For others, access to GPUs will be a limiter. But regardless of those constraints, having a control plane and a data plane that can operate across hybrid cloud environments is going to be important.

As AI models centralise in hyperscaler GPU clusters, does “model gravity” start competing with “data gravity”?

Yes, it does. I think that tension is real. The way I look at it is that the process of preparing information for AI is often time-consuming and costly. That preparation work is not trivial, and if your approach assumes large-scale movement of data every time, the cost and complexity can become very high.

With the tools we are delivering, the focus is on doing that work in place as much as possible, so that you are not constantly moving entire data sets around unnecessarily. The objective is to bring data to the model more efficiently, while recognising that in practice there are constraints on what can be moved and when.

Data gravity (the tendency for applications and services to move toward where large volumes of data reside) is still a factor. But in the AI context, there are cases where you cannot move the model, and there are also cases where you cannot move the data. That is exactly why I think metadata-layer innovation becomes important.

Some of the innovations we have introduced are about moving metadata rather than moving the underlying information itself. If you can operate at the metadata layer, you can reduce the need for moving full data sets while still enabling discovery, preparation and use. That is a critical part of how we think this problem can be addressed, and that is what I mean when I refer to the data engine.
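The metadata-layer idea can be sketched as a lightweight catalogue: descriptors of data sets (name, location, size, tags) travel across environments so that discovery and selection happen without copying the data itself. This is an illustrative sketch under assumed names, not the actual data-engine implementation.

```python
# Illustrative sketch of operating at the metadata layer: a catalogue holds
# lightweight descriptors of data sets across environments, so discovery
# happens without moving the underlying data. All names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class DatasetMeta:
    name: str
    location: str              # e.g. "on-prem", "aws", "azure", "sovereign"
    size_gb: float
    tags: set = field(default_factory=set)

class MetadataCatalog:
    def __init__(self):
        self._entries = []

    def register(self, meta: DatasetMeta):
        # Only the descriptor crosses environments, never the data itself.
        self._entries.append(meta)

    def discover(self, tag: str):
        """Find candidate data sets for a model without moving any bytes."""
        return [m for m in self._entries if tag in m.tags]

catalog = MetadataCatalog()
catalog.register(DatasetMeta("claims-2024", "on-prem", 840.0, {"finance", "pii"}))
catalog.register(DatasetMeta("telemetry", "aws", 120.5, {"iot"}))

hits = catalog.discover("finance")
print([(m.name, m.location) for m in hits])
```

The design choice the passage describes is visible here: the 840 GB data set never moves; only a few hundred bytes of metadata are inspected to decide whether, and where, the full data would ever need to go.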

Looking at the next 12 to 18 months, what do you see as the biggest threat or hindrance to your company’s AI strategy—and to AI momentum more broadly?

I would start by saying no plan is foolproof, so I do not think any company should assume it is insulated from change in the next 12 to 18 months.

If I take a broader, more global view, the issue I think will be a major factor is energy. These AI architectures require energy at a very different level than what a typical enterprise architecture has historically consumed. The density and power requirements of the equipment being discussed for AI deployment are creating a different class of infrastructure challenge.

This is not just my individual opinion. It is a growing concern that many people in the industry are discussing—how much energy will be required for the data centres that need to support these AI workloads, and whether power availability and infrastructure build-out can keep pace with demand.

You can already see signs of that pressure. The US, for example, is facing challenges in delivering data centre capacity at the scale needed, even though it is central to current AI momentum. China has already started making headway in some areas. So if we are talking about what could affect AI momentum overall, energy is going to be a big player.

If I narrow that back to our own strategy, I would make a slightly different point. One thing I would say in favour of our strategy is that it is adaptable. Some things may happen over the next 18 months that change market conditions, but I do not think what we are building now necessarily has to be rebuilt from the ground up if those conditions shift. So I see energy as a major variable in the broader AI movement, while our focus is on staying adaptable as the environment changes.
