Balancing local accountability with enterprise standards is key: Avaali CEO

Enterprises invest heavily in data platforms, but adoption often falls short. In a conversation with TechCircle, Srividya Kannan, Founder-CEO of Avaali, a technology consulting firm focused on process and data optimisation, says the challenge is less about technology and more about people. Legacy processes, entrenched ways of working, and gaps in data literacy make behavioural change slow and complex. 

Kannan also discusses why centralised frameworks alone aren’t enough, how organisations can balance enterprise standards with local accountability, and why treating data as an asset is critical for AI, efficiency, and long-term transformation.

Edited Excerpts: 

Data culture is a top priority, yet adoption remains weak. From your experience, why do enterprises invest in data platforms but fail to change behaviour?

The challenge is the result of several factors coming together. Many legacy organisations have operated for decades, growing either organically or through acquisitions. Over time, they have developed entrenched ways of working. That makes change difficult, particularly when it comes to how business users understand data, its relevance, and how it should be applied as part of a broader technology-driven transformation.

There is often a clear gap between ambition and execution. Organisations set goals, define visions, and outline strategies, and many are willing to invest in technology to support them. But the final hurdle is adoption. Real change depends on people altering how they work day to day. That is a slow change-management process, requiring consistent communication, incentives, accountability, and visible commitment from senior leadership in how they behave and make decisions.

Most enterprises can put strategy, governance frameworks, stewardship models, and technology platforms in place. The harder part comes afterward. Adoption requires behavioural and cultural change, along with a clear explanation of why the change matters and what it delivers for employees. Too often, the link between the organisation’s vision and the individual’s role is not clearly communicated.

This is not a new problem. Technology leaders have been dealing with it for decades. What has changed is the urgency. As organisations push to do more with AI, the gap between ambition and readiness has widened. Building the foundations takes time, but expectations have accelerated. Closing that gap is where progress stalls, and where the most work is still required.

Where have you seen organisations over-centralise authority around the chief data officer, and how has that affected team ownership and accountability for data outcomes?

The chief data officer plays a central role in setting data strategy, governance, and standards, and in promoting better data practices across the organisation. But meaningful change cannot be driven by one individual alone. In practice, organisations often struggle with data literacy, stewardship, and accountability when these responsibilities remain siloed.

The objective should be to make sound data practices part of everyday work. That requires embedding them across teams and empowering employees at all levels, not limiting responsibility to a specific function or hierarchy. People need to understand the data they use, trust it, and apply it confidently in decision-making.

In that sense, the CDO acts as a catalyst. The role is to establish frameworks, define governance, and align data efforts with business goals. The deeper transformation happens only when those principles are adopted widely and a shared data culture takes hold across the organisation.

As with any technology-led change, strategy matters only if it is translated into practice. Data-driven thinking has to be embedded into how teams work and how decisions are made. Technology investments and system launches are a starting point, not the end goal.

Lasting impact comes when teams are equipped to interpret insights, act on them, and take ownership of data quality and security. With new data protection regulations increasing accountability, this responsibility needs to be understood throughout the organisation. Building a data culture is not the job of one executive; it has to extend to the last mile.

Data literacy varies widely across APAC. What cultural or organisational differences have you seen between global enterprises and Indian companies that have successfully standardised data practices across regions?

The challenges organisations face are largely the same, regardless of where they are headquartered. Global companies, depending on their size, tend to deal with issues similar to those seen in Indian enterprises. Indian organisations are not behind in terms of maturity.

Technology leadership in India is generally well-informed. CFOs, CIOs, and CDOs typically bring deep experience and a clear understanding of how technology shapes the business. Leadership talent at these levels is strong, and the issue is not one of Indian companies lagging Western peers on cultural change. The challenge exists across regions.

In fact, in several large organisations in Europe and parts of the Middle East, many employees have spent decades in the same company. When people have been operating in a certain way since the early 2000s, changing behaviour becomes difficult. The longer someone spends in an organisation or function, the harder it is to move away from established ways of running processes. Over time, individuals develop a strong attachment to how a business process should operate and firm views on what is right or wrong, which only adds to the resistance.

For this reason, the issue needs to be assessed organisation by organisation rather than through a geographic lens. The amount of work required depends on an organisation’s demographic profile and its history of technology adoption. Companies that originated in the 1980s and grew over time, organically or through acquisitions, face a different set of constraints compared with organisations founded in the early 2000s or later. Their starting point for technology adoption is different, and that shapes how easily they can adapt today. The lens, therefore, should be organisational maturity, not country.

Data governance frameworks often become bureaucratic. How can enterprises enforce governance while preserving agility for business teams?

Enterprises are increasingly caught between two competing demands around data. On one side is the push for agile transformation. On the other is the need to stay compliant, current, and legally sound as regulations evolve. That tension is becoming clearer as organisations interpret data protection rules and reassess how they manage information.

Basic gaps remain widespread. Awareness of what constitutes confidential data, what responsibilities employees carry, and how these are documented is often limited. Employment contracts illustrate this problem, with roles, responsibilities, and confidentiality obligations frequently handled through generic templates rather than being clearly defined.

The consequences of non-compliance can be severe, extending beyond financial penalties to reputational harm. As a result, legal validity and compliance are set to take priority, even as companies try to move quickly. Technology can help turn regulatory requirements into operational actions, but cultural and behavioural change continues to lag. Without strengthening compliance at the core, organisations remain exposed to significant risk.

Your company reports 35–70 percent reductions in process cycle times. Beyond tools and automation, what role does data literacy play in achieving these gains?

Every business process carries a large amount of embedded data. Take accounts payable as an example. Work we have done in this area has helped organisations cut cycle times by around 60 percent, along with related execution costs.

That process contains detailed supplier and transaction data: what materials or services were purchased, from which suppliers and locations, at what average cost, and with what tax implications. It also reflects spending patterns over time, top suppliers, savings achieved across categories, the time taken from order placement to goods receipt, delays, quality of goods or services delivered, and instances where penalties such as liquidated damages were applied.
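To make that concrete, here is a minimal sketch of what such embedded process data can look like; the field names are illustrative assumptions, not Avaali's or any client's actual data model:

    from dataclasses import dataclass
    from datetime import date

    # Illustrative accounts-payable line item; fields mirror the data
    # described above (suppliers, costs, tax, delays, quality, penalties).
    @dataclass
    class APTransaction:
        supplier: str
        supplier_location: str
        material: str              # material or service purchased
        unit_cost: float           # feeds average-cost and spend analysis
        tax_code: str              # carries the tax implications
        order_date: date
        goods_receipt_date: date
        quality_rating: float      # quality of goods or services delivered
        penalty_applied: bool      # e.g. liquidated damages

    def delivery_lead_time(txn: APTransaction) -> int:
        # Days from order placement to goods receipt; delays surface here.
        return (txn.goods_receipt_date - txn.order_date).days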

Much of this information is sensitive. Data such as average material costs can have competitive implications. In many organisations, however, employees are not explicitly reminded that they are handling confidential information, or of the consequences if that confidentiality is breached. Questions arise when someone leaves the company: what knowledge moves with them, what they are permitted to use elsewhere, and what would violate contractual or legal obligations they have already agreed to.

These factors are separate from direct cost reduction. Cost savings tend to follow once technology is properly designed, implemented, and adopted by business teams. Adoption is critical. Even without addressing data governance issues, process automation delivers measurable savings. What is often not included in headline savings figures are the costs associated with risk, fraud, data breaches, and legal exposure.

When organisations talk about 35 to 70 percent cost reductions, they are usually referring to a before-and-after comparison: a process that once cost X is now run at roughly 40 percent of that cost. Those savings come from execution efficiency. Separately, the intelligence derived from process data supports better decisions, including improved pricing and supplier terms, which can drive further financial benefits.
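To make the arithmetic explicit, with $C_{\text{before}}$ the original process cost and $r$ the reduction achieved: $C_{\text{after}} = (1 - r)\,C_{\text{before}}$, so $r = 0.60$ gives $C_{\text{after}} = 0.40\,C_{\text{before}}$, the "roughly 40 percent" figure, sitting within the 35 to 70 percent range quoted.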

Improvements in data quality do not usually result in immediate cost reductions for a single process. Over time, however, higher-quality data leads to smoother operations, higher efficiency, and additional savings. In practice, companies adopting new technology see immediate savings from process changes, while the benefits of better data quality accrue gradually.

AI adoption is accelerating across enterprises, but data quality is crucial for it to deliver value. What are the most common mistakes companies make with data quality, and how do you help them fix them?

Data quality is a prerequisite for any AI initiative. Without it, AI projects are unlikely to succeed. One of the most common failures is weak or absent data governance. Many organisations lack clear policies and frameworks for how data should be managed. When governance is inconsistent across teams and departments, data quickly becomes fragmented, duplicated, and outdated, making it difficult to trust.

Siloed data compounds the problem. Information sits in separate systems with little integration or sharing, leaving different parts of the business with conflicting views of the same data. This is especially damaging when it comes to master data. If core records are inaccurate or inconsistent, no analytics platform or AI system can compensate. The output may look sophisticated, but the results will be unreliable.

Fixing this requires attention to basic processes. Even simple activities such as data entry are often poorly controlled. Manual inputs without standard formats, validations, or checks lead to inconsistencies and duplicate records. Over time, these issues accumulate, particularly in large organisations, until they become difficult to correct in a single effort.
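A minimal sketch of the kind of entry-level controls described here, assuming hypothetical field names such as name and tax_id:

    import re

    def normalize_supplier(record: dict) -> dict:
        # Apply a standard format before the record enters the master file.
        record["name"] = " ".join(record["name"].upper().split())
        record["tax_id"] = re.sub(r"[^0-9A-Z]", "", record["tax_id"].upper())
        return record

    def validate(record: dict) -> list[str]:
        # Reject incomplete or malformed entries instead of storing them.
        errors = []
        if not record.get("name"):
            errors.append("missing supplier name")
        if not re.fullmatch(r"[0-9A-Z]{8,15}", record.get("tax_id", "")):
            errors.append("malformed tax id")
        return errors

    def is_duplicate(record: dict, master: list[dict]) -> bool:
        # A simple key check that blocks the duplicate records noted above.
        return any(m["tax_id"] == record["tax_id"] for m in master)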

Data cleansing is often treated as a low-priority or routine task and deferred. That neglect can be costly. When cleaning is postponed for too long, the eventual effort becomes disruptive and resource-intensive. Regular, ongoing cleansing is far more manageable, but it requires clear policies around data entry and maintenance. These steps may appear elementary, but without them, broader data and AI strategies will not hold.

Metadata is another area that is frequently overlooked. Without information that explains the source, context, and purpose of data, AI systems struggle to interpret it correctly. The absence of a single source of truth further undermines AI efforts, particularly in organisations where structured and unstructured data is spread across multiple platforms.
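One way to picture the missing metadata is a small catalog entry recording source, context, and purpose; the schema below is an illustrative assumption, not a reference to any particular catalog product:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DatasetMetadata:
        name: str
        source_system: str        # where the data originates, e.g. the ERP
        owner: str                # accountable domain or steward
        purpose: str              # why the data exists and how it may be used
        refreshed: str            # ISO date of last refresh, making staleness visible
        is_source_of_truth: bool  # flags the authoritative copy

    # Hypothetical catalog entry for the accounts-payable data discussed earlier.
    catalog = {
        "ap_invoices": DatasetMetadata(
            name="ap_invoices",
            source_system="erp_prod",
            owner="finance",
            purpose="supplier spend analysis",
            refreshed="2025-01-31",
            is_source_of_truth=True,
        ),
    }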

Some companies attempt to narrow the problem by focusing only on high-value data, applying an 80–20 approach. That may work for reporting, but it falls short for AI. Training AI systems requires large, comprehensive datasets, not just selected subsets. Poor quality across the wider data estate limits what AI models can learn.

Accountability is also unclear in many organisations. Data quality is often treated as someone else’s responsibility, when in reality it depends on the actions of everyone who creates or uses data. Without clear ownership, issues persist.

Effective AI programmes start with business goals and align data practices to those goals. That means establishing governance frameworks, defining processes, supporting them with appropriate technology, and reinforcing expectations through consistent communication and training. Leadership plays a central role. When ownership is weak or fragmented, data problems multiply, and AI ambitions stall.

With trends like data mesh and federated governance gaining traction, how do you balance decentralised data accountability with enterprise-wide standards?

Balancing federated governance and centralisation starts with clarifying the core objective. In a centralised model, organisations typically rely on a single source of truth, with standardised policies, procedures, and controls to maintain consistency at the enterprise level. That approach works well for setting common rules, but it becomes less effective when decisions are tied to local contexts such as regions, functions, or business units.

One way to address this is to distribute data ownership to domains while anchoring it to a central governance framework. Functions such as sales, marketing, or operations can own the data relevant to their areas, with domain leaders accountable for it. This allows decision-making to sit closer to where the data is generated and used, while still adhering to enterprise-wide standards, policies, and governance tools.

In this model, central governance defines the rules, while domains are responsible for maintaining accuracy, completeness, and quality. Clear roles and responsibilities are essential. Global policies and frameworks can coexist with federated teams that own data products and ensure quality, security, and usability.
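A compact sketch of that split, with hypothetical policy fields: central governance defines the rules once, and each domain is accountable for meeting them:

    # Central governance defines the rules once, for every domain.
    CENTRAL_POLICY = {
        "required_fields": ["owner", "classification", "retention_days"],
        "classifications": {"public", "internal", "confidential"},
    }

    class DomainDataProduct:
        """A domain such as sales, marketing, or operations owns its
        data product but must satisfy the enterprise-wide policy."""

        def __init__(self, domain: str, attributes: dict):
            self.domain = domain
            self.attributes = attributes

        def violations(self) -> list[str]:
            # The domain maintains the record; the central rules score it.
            issues = [f for f in CENTRAL_POLICY["required_fields"]
                      if f not in self.attributes]
            cls = self.attributes.get("classification")
            if cls and cls not in CENTRAL_POLICY["classifications"]:
                issues.append(f"unknown classification: {cls}")
            return issues

    # Each domain runs the same check against the shared standard.
    product = DomainDataProduct("sales", {
        "owner": "sales-ops",
        "classification": "internal",
        "retention_days": 365,
    })
    assert product.violations() == []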

Another option is to establish a cross-functional governance council with representatives from both central and domain teams. Such a structure can create coordination between enterprise governance and functional ownership, allowing both to operate in tandem rather than in isolation.

Despite frequent discussion about data, it is rarely treated as an asset. If data were recognised on balance sheets in the same way as other assets, organisational behaviour would likely change. Regardless of accounting treatment, data should be managed as an asset, with defined ownership, quality standards, and service-level agreements.
