Yotta, Nvidia collaborate to offer GPU-as-a-service
Hiranandani Group-owned Yotta Data Services announced a collaboration with chipmaker Nvidia to deliver GPU computing infrastructure and platform services for Yotta’s indigenously developed AI-HPC (artificial intelligence – high-performance computing) cloud, called Shakti Cloud.

The data center company has already placed an order for Nvidia’s H100 Tensor Core GPUs, designed specifically for AI workloads. Yotta has announced plans to go operational with 4,096 GPUs by January 2024 and 16,384 GPUs by June 2024. By the end of 2025, Yotta plans to double that number to 32,768 GPUs. This will directly address the strong demand for high-performance GPUs from research labs, enterprises, and startups running HPC and AI workloads.

Through this collaboration with Nvidia, Yotta aims to ‘democratise access to GPUs’. Shakti Cloud will deliver GPUs and associated AI and platform-as-a-service offerings in a cost-effective manner on a per-hour usage model, with an option for long-term reservations.

“We’re excited to embark on this journey, leveraging our scalable cloud and data center infrastructure and NVIDIA’s cutting-edge GPU technology to empower Indian businesses, governments, startups, and researchers with unparalleled GPU-as-a-Service solutions to catalyze advancements in AI, machine learning, gaming, content creation, and scientific research,” said Darshan Hiranandani and Sunil Gupta, co-founders of Yotta.

Yotta will deploy the first cluster of 16,384 GPUs at its Navi Mumbai-based data center NM1, followed by a similarly sized deployment at its D1 data center in Greater Noida. Yotta’s Shakti Cloud AI platform will include various PaaS services, including foundational AI models and applications, to help Indian enterprises create AI tools and products.

Yotta is also deploying an Nvidia-powered reference architecture with Nvidia InfiniBand networking that will allow GPU clusters to deliver performance at scale for large AI training and inferencing workloads, as well as HPC workloads.
