HPE partners with NVIDIA to bring supercomputing solution for GenAI

US-based IT firm Hewlett Packard Enterprise (HPE) announced on Wednesday a supercomputing solution for artificial intelligence (AI) training, developed in partnership with chipmaker NVIDIA. The GenAI solution is designed for large enterprises, research institutions, and government organisations to accelerate the training and tuning of AI models using private data sets.
 
This comprehensive, AI-native offering combines liquid-cooled supercomputers, accelerated compute, networking, storage, and services in a single package.
 
The solution is integrated with HPE Cray supercomputing technology and powered by NVIDIA GH200 Grace Hopper Superchips. Together, they offer organisations the scale and performance required for demanding AI workloads, such as large language model (LLM) and deep learning recommendation model (DLRM) training.
 
Justin Hotard, Executive Vice President and General Manager at HPE, said, "The world's leading companies and research centers are training and tuning AI models to drive innovation and unlock breakthroughs in research, but to do so effectively and efficiently, they need purpose-built solutions. To support generative AI, organisations need to leverage solutions that are sustainable and deliver the dedicated performance and scale of a supercomputer to support AI model training."
 
Ian Buck, Vice President of Hyperscale and HPC at NVIDIA, said, "NVIDIA's collaboration with HPE on this turnkey AI training and simulation solution, powered by NVIDIA GH200 Grace Hopper Superchips, will provide customers with the performance needed to achieve breakthroughs in their generative AI initiatives." 
 
The solution includes a suite of three software tools to help customers train and tune AI models and create their own AI applications. HPE's Machine Learning Development Environment allows customers to develop and deploy AI models faster by integrating with popular ML frameworks and simplifying data preparation. The Cray Programming Environment suite offers programmers a complete set of tools for developing, porting, debugging, and refining code. 
 
The third element of the solution, NVIDIA AI Enterprise, provides frameworks, pre-trained models, and tools that streamline the development and deployment of production AI. 
 
According to a statement from the company, the solution can scale to thousands of graphics processing units (GPUs), with the ability to dedicate the full capacity of the nodes to a single AI workload for faster time-to-value.
 
The supercomputing solution for generative AI will be available through HPE next month in more than 30 countries, including India.

