Nvidia made a series of key announcements at its recently concluded GPU Technology Conference (GTC). These include a new silicon architecture called Hopper, a data centre GPU based on it called the H100, and a new Arm-based data centre CPU called Grace. Alongside these came a fairly tall claim: Nvidia now wants to build the world's fastest AI supercomputer, named Eos.
However, Nvidia's plans for Eos are still at the draft stage. For now, the company is focusing on what its new Hopper architecture can do, especially with a type of machine learning model called the Transformer.
For reference, the Transformer is the underlying ML architecture behind some of the world's most advanced ML-based applications, such as OpenAI's GPT-3 language model and DeepMind's AlphaFold, which has found applications in critical medical areas such as protein structure prediction.
Nvidia's claim is that Hopper can significantly reduce the training times of highly complex ML models. Paresh Kharya, senior director of product management at Nvidia, reportedly said at the company's latest press briefing, "Training these giant models still takes months. So, you fire a job and wait for one and half months to see what happens. A key challenge to reducing this time to train is that performance gains start to decline as you increase the number of GPUs in a data centre."
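The diminishing returns Kharya describes can be illustrated with a simple Amdahl's-law-style model. The 95% parallel fraction below is an illustrative assumption for the sketch, not a figure from Nvidia:

```python
def parallel_speedup(n_gpus, parallel_fraction=0.95):
    """Amdahl's law: speedup is capped by the workload's serial fraction.

    parallel_fraction is an assumed value for illustration only.
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_gpus)

for n in (8, 64, 512, 4096):
    s = parallel_speedup(n)
    print(f"{n:5d} GPUs -> {s:5.1f}x speedup ({s / n:.1%} efficiency)")
```

With a 5% serial fraction, speedup never exceeds 20x no matter how many GPUs are added, so per-GPU efficiency collapses at data-centre scale. This is why architectural changes that shrink the non-parallel portion of training matter more than simply adding hardware.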
It is this architecture that will also find application in the Eos supercomputer that Nvidia eventually plans to build. The company said that Eos will have 4,600 of its Hopper-based H100 GPUs, which will come together to deliver 18.4 exaflops of AI compute power.
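A quick back-of-envelope check of the stated figures (assuming the quoted GPU count and exaflops number refer to the same AI workload):

```python
# Figures as reported: 4,600 H100 GPUs delivering 18.4 exaflops of AI compute.
total_exaflops = 18.4
n_gpus = 4_600

# 1 exaflop = 1,000 petaflops
per_gpu_petaflops = total_exaflops * 1_000 / n_gpus
print(f"~{per_gpu_petaflops:.0f} petaflops of AI compute per H100")
```

That works out to roughly 4 petaflops of AI compute per GPU, which is consistent with the figures being aggregate low-precision AI throughput rather than traditional double-precision performance.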
Nvidia CEO Jensen Huang claimed that, at peak performance, Eos would deliver 1.4x the computing power of Summit, currently the fastest supercomputer for scientific applications in the US.
However, once Eos becomes functional, Nvidia will use it only for internal research tasks. The company said the system will be operational within the “next few months”, without offering a more definite timeline.