The real AI risk isn’t external threats, it’s employees using AI tools inside the network, says Verizon Business exec

The cybersecurity and enterprise tech landscape is shifting rapidly as Artificial Intelligence (AI) becomes a core part of business operations. As companies in sectors like manufacturing, logistics, and telecom face growing complexity and risk, the need for scalable, secure, and efficient AI systems is becoming more urgent. 

Verizon Business operates across the Asia-Pacific region and manages large-scale security operations. It uses AI to handle data analysis, support threat detection, and automate routine tasks within its security workflows. 

In a conversation with TechCircle, Robert Le Busque, Regional Vice President, Asia Pacific, Verizon Business, outlines how enterprises are integrating agentic AI into their infrastructure, cybersecurity, and governance frameworks, and what’s needed to scale AI adoption responsibly and effectively. Edited Excerpts: 
 
From your global business perspective, what’s the most tangible shift agentic AI is already creating inside enterprises? 

When I speak with enterprise customers across the Asia-Pacific region, there are three main areas they focus on when adopting AI, especially agentic AI. 

The first is the impact on their existing technology infrastructure. During the training phase of AI adoption, data primarily flows from the organisation to the AI platform so it can learn from internal data. In the next phase, when the AI is being used for inference, responding to queries or performing tasks, data starts flowing in both directions. At this stage, performance improves when AI processing is brought closer to the user to support faster, more responsive interactions. The final phase involves automation, where the AI is integrated into existing systems and workflows, making decisions, and acting across various software platforms. 

Supporting these phases requires significant changes to network architecture. Enterprises need to treat AI not as a single platform but as a continuous process, and plan infrastructure accordingly. 

The second area is security. AI introduces unique governance, risk, and compliance (GRC) requirements that differ from traditional cybersecurity or application assurance models. Organisations must reassess their GRC frameworks specifically for AI use. 

There’s also a need to manage data securely in both directions: data leaving the organisation and data coming in.

Traditionally, the focus has been on preventing data leakage. Now, companies must also ensure users interact safely with public Large Language Models (LLMs), avoiding exposure of internal, personal, or customer data. At the same time, they must protect against harmful or untrusted data or code entering the organisation via these models.
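To make that two-way control concrete, here is a minimal sketch of outbound prompt redaction, assuming simple regex patterns for a few sensitive token types. The patterns, labels, and `redact_prompt` helper are illustrative assumptions, not any specific product's rules; a production deployment would sit behind a secure web gateway or a dedicated DLP engine.

```python
import re

# Hypothetical patterns for illustration only; a real deployment would
# use a dedicated DLP engine with far richer detection.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Mask sensitive tokens before the prompt leaves the network."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

print(redact_prompt("Contact jane.doe@example.com, card 4111 1111 1111 1111"))
# -> Contact [REDACTED:email], card [REDACTED:card]
```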

How is your company using agentic AI in areas beyond customer support, especially in cyber defense for large enterprises?

We run one of the largest enterprise-grade security operations in the world, with nine security operations centers (SOCs) across the globe. These centers process a massive volume of data. Each year, over 29 trillion raw incident logs are analysed for potential threats. From that, about 3.5 million alerts are generated, and roughly 500,000 turn into actual security incidents.

To manage this scale, we rely on a mix of people, processes, and technology. On the technology side, AI and automation are key. We use them during the early stages of data ingestion and analysis to filter out false positives and handle routine alerts. This includes Machine Learning (ML) algorithms and AI platforms that help streamline the initial review of logs. 
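As a rough illustration of that first-pass filtering, the sketch below models the funnel from raw alerts to escalations. The `Alert` fields, rule names, and thresholds are invented for the example and are not Verizon's actual SOC tooling; the score is assumed to come from a separately trained upstream ML classifier.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    rule: str
    score: float  # assumed output of an upstream ML classifier (0..1)

# Illustrative thresholds only; real SOC tuning is far more involved.
FALSE_POSITIVE_CUTOFF = 0.2
AUTO_CLOSE_RULES = {"heartbeat-miss", "known-scanner"}

def triage(alerts: list[Alert]) -> list[Alert]:
    """First-pass automated filter: drop likely false positives and
    routine noise so analysts only see candidate incidents."""
    escalated = []
    for alert in alerts:
        if alert.rule in AUTO_CLOSE_RULES:
            continue  # routine alert, closed automatically by playbook
        if alert.score < FALSE_POSITIVE_CUTOFF:
            continue  # model rates it benign; sampled later for QA
        escalated.append(alert)
    return escalated
```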

However, technology alone isn’t enough. Skilled analysts are essential. Context matters in security operations, and trained experts interpret and investigate complex scenarios that AI alone can't resolve. The combination of human expertise and automated systems, including LLMs and learning algorithms, allows us to track threats effectively and take action when needed. 

In short, we use various automation tools, ML, and AI platforms tailored to our SOC environment to handle the volume of data and support our teams in identifying and responding to real threats.

Where does India stand in AI-driven cybersecurity? Are Indian SOCs ready for autonomy, or still operating reactively?

India's core strength lies in the depth and capability of its analyst community. When it comes to adopting AI technology or platforms, the process is similar to how organisations have previously adopted other technology stacks to automate, accelerate, or improve operations. What drives success is the combination of experienced professionals working effectively with these tools to generate insights. 

India’s tech sector has a large and capable talent pool, particularly in cybersecurity, making it one of the strongest in the region and globally. From an external perspective, India is well-positioned to play a leading role not only in the Indo-Pacific but also on a global scale in developing new models for cybersecurity and AI adoption.

How do private 5G networks and agentic AI work together? Can manufacturing and logistics companies now achieve real-time cyber defense from edge to core?

The attack surface, meaning the points where attackers can potentially access systems or data, is expanding rapidly. This growth is largely driven by the increasing number of devices connecting to networks. The most significant growth is in Internet of Things (IoT) devices, especially in industrial, logistics, and other built environments. These aren’t traditional corporate systems or user endpoints; they’re automated, network-connected devices embedded in physical infrastructure.

Each new device adds to the attack surface, creating more logs, more data to analyse, and more potential threats to detect. 

Organisations with mature security operations, particularly those using ML and automation, are better equipped to handle this complexity. These capabilities help them ingest and correlate data faster and respond more effectively to threats.

Another key consideration when deploying next-generation networks, such as private 5G, is network architecture. Specifically, we encourage customers to implement segmentation or micro-segmentation. This involves isolating parts of the network at the application, device, or workload level, so that if one area is compromised, it can be quarantined without affecting the rest of the network. This approach allows for faster, more precise responses to incidents.
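A minimal sketch of that containment logic, assuming a default-deny policy table, is shown below. The segment names and allowed flows are hypothetical, and in practice enforcement lives in the network fabric (firewalls, SDN controllers) rather than application code.

```python
# Hypothetical segment-to-segment allow list (default deny).
ALLOWED_FLOWS = {
    ("plant-sensors", "historian"),      # IoT telemetry to data store
    ("historian", "analytics"),          # batch export for analysis
    ("admin-jumpbox", "plant-sensors"),  # maintenance access only
}

QUARANTINED: set[str] = set()  # segments isolated after a compromise

def flow_permitted(src: str, dst: str) -> bool:
    """Allow cross-segment traffic only if explicitly listed and
    neither endpoint's segment is quarantined."""
    if src in QUARANTINED or dst in QUARANTINED:
        return False
    return (src, dst) in ALLOWED_FLOWS

# Quarantining one segment contains a breach without touching the rest:
QUARANTINED.add("plant-sensors")
assert not flow_permitted("plant-sensors", "historian")
assert flow_permitted("historian", "analytics")
```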

As connected environments grow, so do potential attack vectors. To address this, organisations need improved automation and analytics to manage increased data, and they need network designs that support segmentation to contain threats quickly and effectively.

How are you preparing for AI-powered attacks when bad actors use the same agentic AI tools?

We release a report every year called the Data Breach Investigations Report. It’s been published for 18 years and provides a global overview of what actually happened in data breaches and cybersecurity incidents over the previous year. This year’s report was released last month. 

Regarding the use of AI by bad actors, we are beginning to see some high-profile cases involving technologies like deepfakes. However, these still represent a small portion of the total number of cases we track. More commonly, we see increased use of AI, especially large language models, in phishing and email scams. Attackers are using these tools to write more convincing messages and prompts, which leads to higher success rates in getting users to give up credentials. This then gives attackers access to networks or systems. 

The use of AI is mostly focused on making existing attack methods more effective. This includes enhancing social engineering tactics and ransomware campaigns. The underlying methods haven't changed, but AI is helping attackers execute them with greater precision. 

The same security principles still apply. Organisations need to keep user training and breach simulations current, maintain strong threat monitoring, especially for privileged users, and be proactive in defending against these evolving threats.

In environments where AI acts autonomously, how do you build trust, and what control systems or fail-safes are essential in your deployments?

The first consideration is the shift needed in governance, risk, and compliance when adopting agentic AI. A core question is how to assess the risks of autonomous decision-making in workflows. What are the first, second, and third-order impacts of these decisions? Do we understand those impacts, and how do we quantify the risk if the outcomes are undesirable?

This leads directly to how organisations govern AI adoption, specifically how policies and procedures are set up and enforced. A current example comes from this year’s Data Breach Investigations Report. We found that about 15% of employees are accessing external large language models from inside the corporate network. Over 70% of those users are doing so without following internal policies, some using personal devices on the corporate network. At its core, this is a governance issue: how are the organisation’s platforms managed and monitored to avoid exposure to unnecessary risk? That’s why a complete review of governance, risk, and compliance frameworks for LLMs is essential.
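As a rough sketch of how that kind of shadow LLM use might be surfaced, the snippet below scans a proxy log for LLM traffic from users outside an approved group. The domain list, CSV log schema, and user set are assumptions made for the example; they are not the DBIR's methodology or any particular gateway's format.

```python
import csv

# Assumed inputs for illustration: a CSV proxy log with "user" and
# "host" columns, a hand-picked domain list, and an approved-user set.
LLM_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
SANCTIONED_USERS = {"alice", "bob"}  # users covered by an approved policy

def flag_shadow_llm_use(proxy_log_path: str) -> list[dict]:
    """Return log rows where LLM traffic comes from unsanctioned users."""
    flagged = []
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["host"] in LLM_DOMAINS and row["user"] not in SANCTIONED_USERS:
                flagged.append(row)
    return flagged
```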

Another key focus is understanding which parts of the business hold the most valuable data: the data that underpins competitive advantage, customer interaction, or intellectual property. It’s critical to identify which data is sensitive and high-value, control who can access it, and track its movement within the organisation. This is necessary to prevent leakage into public platforms or untrusted environments.

These are the areas where we spend time with customers, helping them adopt AI in ways that unlock its benefits while maintaining the safeguards needed to protect their core operations.

When enterprises ask how to justify spending on agentic AI, what KPIs or business outcomes do you see used most often? 

The business case for adopting AI, like any technology, will vary from company to company. Whether it's implementing a new network, rolling out an ERP system, or integrating AI, the justification depends on each organisation's specific needs and context. 

A key factor in making this work is having a financial governance model in place. This helps teams build clear, profit-oriented use cases. That model should sit within your AI center of excellence, AI adoption office, or whatever structure you use, so that teams looking to develop products, services, or capabilities using AI can create ROI models tailored to their initiatives.
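As a hedged illustration of what such a standardised financial model might reduce to, the snippet below computes a fixed-horizon ROI. The figures, the three-year horizon, and the absence of discounting are all simplifying assumptions for the example, not a recommended methodology.

```python
def simple_roi(annual_benefit: float, annual_run_cost: float,
               upfront_cost: float, years: int = 3) -> float:
    """Fixed-horizon ROI, no discounting:
    (total benefit - total cost) / total cost."""
    total_benefit = annual_benefit * years
    total_cost = upfront_cost + annual_run_cost * years
    return (total_benefit - total_cost) / total_cost

# Hypothetical use case: 1.2M/yr benefit, 300k/yr run cost, 500k upfront.
print(f"ROI over 3 years: {simple_roi(1_200_000, 300_000, 500_000):.0%}")
# -> ROI over 3 years: 157%
```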

We've seen organisations benefit from standardising these financial models, which supports consistent evaluation across use cases. This approach isn't unique to AI; it applies to any kind of technology adoption. The business case must always hold up, and it must always be tailored to your organisation.

