
Yotta, NVIDIA launch Shakti Cloud on DGX Cloud Lepton
Hiranandani Group-owned Yotta Data Services has partnered with technology company NVIDIA to launch Shakti Cloud on the DGX Cloud Lepton platform, aiming to strengthen India’s ambitions in sovereign artificial intelligence (AI). The collaboration will also extend support to AI development efforts across Southeast Asia.
Yotta’s Shakti Cloud will be integrated with NVIDIA’s DGX Cloud Lepton software to provide advanced GPU computing resources. These resources will support the training of Sarvam’s Sovereign Large Language Model (LLM), a system focused on Indian languages and applications. This model is being developed in partnership with the Indian government under the IndiaAI Mission.
As part of the move, Yotta joins the newly formed DGX Cloud Lepton marketplace. It also becomes one of only five global Reference Platform NVIDIA Cloud Partners and the first in the Asia-Pacific region to be included in NVIDIA's Exemplar Clouds programme.

Sunil Gupta, Co-founder and CEO of Yotta, said the development is aligned with India’s growing need for homegrown AI solutions. “We are proud to contribute to the IndiaAI Mission by providing sovereign, high-performance GPU infrastructure,” he said. “In the near future, we will also be deploying the latest NVIDIA B200 GPUs to handle more advanced AI tasks, helping position India as a leader in the field.”
NVIDIA’s vice president of DGX Cloud, Alexis Bjorlin, said India is entering a new phase of digital transformation. “With Yotta on the DGX Cloud Lepton marketplace, local startups and enterprises will have access to the tools they need to develop world-class AI applications,” she said.
Yotta’s infrastructure is backed by its Tier IV certified NM1 data centre in Mumbai and the D1 data centre in Greater Noida, claimed to be North India’s largest. Data processed by the Shakti Cloud will remain within India’s borders, in line with data sovereignty requirements.

DGX Cloud Lepton software will provide real-time monitoring of GPU health and automate problem diagnosis and workload management. Developers can access GPU instances through the marketplace either on-demand or by reservation, with integration support from NVIDIA’s full software stack, including NIM, NeMo, Blueprints, and Cloud Functions.
Sarvam will be the first organisation to use the platform to train India’s sovereign LLM. The company, which is developing multilingual and multi-modal foundational models, plans to leverage the cloud’s capabilities to enhance AI tools for Indian users.
“We are building models that can reason, respond by voice, and are fluent in Indian languages,” said Vivek Raghavan, Co-founder of Sarvam. “Using Yotta’s Shakti Cloud and NVIDIA’s infrastructure allows us to develop AI solutions rooted in India and designed for its specific needs.”

In a related development, Marvell Technology, Inc., a data infrastructure semiconductor solutions provider, has partnered with NVIDIA to integrate NVIDIA NVLink Fusion technology into its custom cloud platform silicon. The new offering aims to support customers building the next generation of AI infrastructure by combining NVIDIA's high-speed connectivity and rack-scale architecture with Marvell's custom silicon solutions.
NVLink Fusion is designed to help customers integrate their proprietary XPU (custom accelerator) silicon with NVIDIA's wider platform—including software and networking—offering greater flexibility and scalability in AI data centres. The technology includes a chiplet that delivers up to 1.8 terabytes per second of bidirectional bandwidth.
Marvell's custom platform strategy focuses on developing tailored semiconductor designs by combining expertise in system and chip design, advanced manufacturing, and a broad set of enabling technologies: SerDes (serialiser/deserialiser), die-to-die interconnects for both 2D and 3D devices, silicon photonics, co-packaged optics, custom high-bandwidth memory, and next-generation system-on-chip (SoC) fabrics.

Through this partnership, Marvell’s custom silicon integrated with NVLink Fusion will offer hyperscale cloud providers a streamlined path to scale-up solutions tailored to the intensive requirements of AI model training and inference. The approach allows these providers to build on their current investments in NVIDIA’s infrastructure while deploying custom capabilities across their AI operations.