
Protecting IP and secret data in the age of AI

For consumers, the promise of artificial intelligence (AI) in their everyday lives is a source of near-continuous hype. But for the enterprises implementing AI solutions, there are some important considerations around their intellectual property (IP) and secret data.

Most companies are asking many of the right questions: “How do we avoid missing out on this emerging technology?” “How does AI fit into our digital transformation strategy?” “How can we operationalize and securely put all our data to work?” “How do we execute a successful AI strategy with limited resources?” But not enough are asking what happens to the corporate secrets used to train the AI models behind those initiatives.

Risky business?

Corporate secrets and IP are critical issues. The majority of public AI services – like ChatGPT – rely on inputs from a variety of sources, and their knowledge grows through input from users. That means every piece of data passed into such a system helps evolve, mature, and train the underlying models, inferences, and other generative elements.

But what happens when those pieces of data are your corporate secrets? 

When protected or secret data is fed into and informs a model, it is no longer secret – it is now part of that platform. For most enterprises, that is a daunting thought. Without careful planning and policies, losing control over corporate secrets could be the new reality.

Healthcare, for instance

Healthcare provides a good reference point because healthcare companies handle very private and sensitive data. Based on compliance requirements, security, privacy needs, and more, these companies often cannot outsource their data or move it outside of very specific regions.

But there are a number of tasks within healthcare where AI could help: tasks that are automatable, mundane, and non-value-added. Building AI solutions for some of these areas would free up the IT team to focus on better, more timely, more accurate patient care. Examples include speech-to-text transcription for doctors’ notes, or pre-screening patients through a portal that lets them self-present their symptoms.

But protecting that data is vital.

Minimizing risk

The question then is how to leverage AI’s capabilities and promise without risking exposing sensitive data, corporate secrets, and precious internal resources.

A recent survey by IDC finds that many companies are considering moving their priority AI-related workloads from private clouds to public infrastructure¹ (“Workload Repatriation Trends Update” by Natalya Yezhkova, June 2023). While that can work for some companies, there are important considerations, such as the locality of data, security, privacy, and risk.

For AI placement, the reality is that not all workloads should be treated equally. A good way to minimize risk for sensitive workloads and data is a hybrid approach.

Hybrid by design

Many enterprises today already employ a hybrid approach because they are running a collection of private and public infrastructure. For all intents and purposes, this is a “hybrid by accident” approach.

Hybrid by design, however, is purposeful and structured. It enables intentional workload placement, allowing the enterprise to decide where each workload is best deployed based on performance, security, compliance, cost, and other considerations.
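Intentional placement of this kind can be thought of as a simple policy function. The sketch below is purely illustrative – the field names, criteria, and rules are assumptions, not any vendor's actual implementation:

```python
# Illustrative "hybrid by design" placement policy.
# All fields and rules are hypothetical examples of placement criteria.

from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    handles_secret_data: bool   # touches corporate secrets or IP
    latency_sensitive: bool     # needs direct access to core data
    compliance_bound: bool      # data must stay in a specific region


def place(w: Workload) -> str:
    """Return 'private' or 'public' based on intentional placement rules."""
    if w.handles_secret_data or w.compliance_bound:
        return "private"   # keep sensitive or regulated data in-house
    if w.latency_sensitive:
        return "private"   # co-locate with core data for low latency
    return "public"        # generic workloads can run on public cloud
```

A real policy would weigh cost and performance as well, but even this minimal rule set captures the core idea: the decision is made per workload, on purpose, rather than by accident.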

AI in IT

Beyond its mystique, AI is simply another workload. And as with any workload, IT must decide whether it should reside on public, private, or hybrid clouds. Making the best decision at the outset means first determining how IT will provide the core primitives needed to deliver AI. Considerations should include inferencing, PFT, fine-tuning, and other components along the AI spectrum. Not surprisingly, many will find a private cloud is a great landing spot, especially when security and privacy are primary considerations.

With some private cloud solutions, vendors provide out-of-the-box, GPU-enabled, AI-optimized instance types based on the workloads the customer plans to run in their environment. These are typically not one-size-fits-all solutions, but rather are designed and optimized to run AI workloads. It’s also possible to deliver cloud primitives in a self-service experience, including common tooling, APIs, CLIs, Terraform, and the like.
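A self-service catalog of this kind might map workload types to pre-optimized instance shapes. The sketch below is a hypothetical example – the catalog entries, instance-type names, and GPU counts are invented for illustration:

```python
# Hypothetical self-service catalog mapping AI workload types to
# GPU-optimized instance shapes. All names and sizes are illustrative.

AI_INSTANCE_CATALOG = {
    "inferencing": {"instance_type": "gpu.inference.small", "gpus": 1},
    "fine-tuning": {"instance_type": "gpu.train.large", "gpus": 4},
}


def request_instance(workload: str) -> dict:
    """Return an instance spec for the requested AI workload type."""
    if workload not in AI_INSTANCE_CATALOG:
        raise ValueError(f"No optimized instance type for workload: {workload}")
    return {"workload": workload, **AI_INSTANCE_CATALOG[workload]}
```

The point of the catalog is that users pick a workload type, not raw hardware – the mapping to GPU-enabled shapes is the vendor's optimization, delivered through a self-service interface.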

This approach allows the company deploying the private cloud to focus on the automation and orchestration of deploying workloads across various modes – from containers to virtual machines to bare metal. It also makes the cloud experience feel as flexible as a public cloud, but with the security and design of a private cloud.

Benefits of private clouds

In a private cloud, the user company controls the geographic locality of its data. By running their clouds privately, they also gain inherent low-latency, direct connectivity to their core data. And with the majority of corporate AI workloads being run against precious corporate assets and data, users can do their work without leaving the security of the virtual or physical boundary of their own data centers, colocation facilities, or other dedicated environments.

Workload-enabled solutions, or those built for purpose as part of a private cloud experience, are the ones that will truly make a difference in protecting corporate assets and optimizing the value of AI.

Open standards

But how do companies get there? The focus should be: “How do I bring the power of these compute resources to enable and deliver AI workloads where I need them to be?” That might mean running them in a core data center against the company’s data, or running them in edge locations to provide real-time inferencing results at the edge.

Companies can often speed this process by embracing open standards and letting cloud users self-serve with common, well-known tooling (think Terraform or other de facto standards) to handle templating, orchestrating, deploying, and managing these types of workloads and solutions. Conducting activities like this in a private cloud can help ensure full agility and speed.
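Templating with familiar tooling might look like generating Terraform-style resource definitions from a few parameters. The sketch below is hedged heavily: the resource and attribute names are hypothetical placeholders, not those of any real Terraform provider.

```python
# Sketch of templating a Terraform-style resource definition for a GPU
# instance. "privatecloud_instance" and its attributes are hypothetical.

TEMPLATE = """\
resource "privatecloud_instance" "{name}" {{
  instance_type = "{instance_type}"
  gpu_count     = {gpus}
}}
"""


def render(name: str, instance_type: str, gpus: int) -> str:
    """Render a Terraform-style resource block from workload parameters."""
    return TEMPLATE.format(name=name, instance_type=instance_type, gpus=gpus)
```

In practice teams would use Terraform modules and variables directly rather than string templating, but the principle is the same: workload definitions become versionable text that standard tooling can plan, apply, and manage.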

Another important consideration is how they will consume their AI services, with options ranging from a fully managed “deliver this as a cloud” or as-a-service model in the data center, to self-service tooling so companies can manage and operate the cloud on their own.

One size does not fit all

For many companies, a hybrid cloud experience will work best for AI, providing visibility, deployment, and management of workloads not just across a private cloud or clouds, but also extending into hyperscale public cloud environments.

Bryan Thompson


Bryan Thompson is Vice President of Product Management, Private Cloud at Hewlett Packard Enterprise.
