Nvidia GTC: Omniverse Avatar leads key announcements for full-stack metaverse development
11 Nov, 2021

At its recent GPU Technology Conference 2021 (GTC), Nvidia unveiled a host of solutions aimed at the hottest buzzword in the technology world right now – the metaverse.  

The GPU maker is looking to cash in on the demand for metaverse applications, becoming the third major technology company – after Facebook and Microsoft – to showcase its vision for a virtual- and augmented-reality future.  

The company has on offer what it calls ‘Omniverse’ – a demonstration of how its full-stack technology comes together as an end-to-end solution for those looking to build virtual worlds with 3D workflows. 

Here’s a look at the biggest announcements from Nvidia GTC 2021. 

Omniverse Avatar 

The Nvidia Omniverse Avatar is a technology platform that will help generate “interactive AI avatars”.

In other words, companies can use this technology stack to create 3D animated models that can walk, talk, behave and articulate like human beings in the virtual world. 

The official description of Omniverse Avatar states that it combines Nvidia’s solutions in speech AI, computer vision, natural language understanding, recommendation engines and simulation technologies.

“Avatars created in the platform are interactive characters with ray-traced 3D graphics that can see, speak, converse on a wide range of subjects, and understand naturally spoken intent,” a company statement added. 

In simpler words, Omniverse Avatar is designed to help companies build characters for their metaverse deployments that operate autonomously.  

Take, for example, the Seoul city government’s metaverse project, which envisions virtual civil service staff offering automated solutions and services to users. A tool like Omniverse Avatar could help produce a life-like virtual official that converses naturally with you when you ask a question. 

In effect, it is an intelligent virtual assistant in 3D form – one that takes assistants such as Alexa and Siri to the next level.  

Nvidia CEO Jensen Huang showcased demos as part of ‘Project Tokkio’, in which simulated, AI-based 3D avatars conversed with customers to answer their queries. 

Omniverse Replicator 

Designed to help train AI systems, Omniverse Replicator is a synthetic data generation engine that can simulate physical situations to help train deep neural networks.  

In simpler terms, Omniverse Replicator can be customised to set up scenarios that would be too risky, dangerous or difficult for humans to recreate themselves in the real world. 

Omniverse Replicator simulates such a physical world, and the data it generates can be used to train a deep neural network to react to the situation.  
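To make the concept concrete, here is a minimal sketch – in plain Python, entirely unrelated to Nvidia’s actual Replicator API – of the workflow such an engine automates at scale: simulate a scenario, draw labelled synthetic data from it, and train a model on that data. All names, features and numbers below are invented for illustration.

```python
import random

def simulate_scenario(hazard, n=200, seed=0):
    """Sample synthetic (distance, speed) readings for a simulated scene.

    hazard=1 mimics an obstacle-ahead scene (short distance, high speed);
    hazard=0 mimics a clear road. A real engine would render full 3D
    scenes; here we simply draw labelled numbers.
    """
    rng = random.Random(seed + hazard)
    samples = []
    for _ in range(n):
        if hazard:
            distance = rng.uniform(0.0, 30.0)    # metres to obstacle
            speed = rng.uniform(60.0, 120.0)     # km/h
        else:
            distance = rng.uniform(50.0, 200.0)
            speed = rng.uniform(20.0, 80.0)
        # Scale features so the simple learner below behaves well.
        samples.append(((distance / 100.0, speed / 100.0), hazard))
    return samples

def train_perceptron(data, epochs=1000, lr=0.1):
    """Fit a tiny linear classifier (a toy stand-in for a deep network)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Generate synthetic data for both scenarios, then train on the mixture.
data = simulate_scenario(1) + simulate_scenario(0)
random.Random(42).shuffle(data)
w, b = train_perceptron(data)

correct = sum(
    (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == label
    for (x1, x2), label in data
)
accuracy = correct / len(data)
print(f"accuracy on synthetic data: {accuracy:.2f}")
```

The point of the sketch is the pipeline shape, not the model: because the hazardous scenario is simulated rather than staged in the real world, labelled training examples are free and unlimited.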

To showcase real-world use cases, Nvidia revealed two implementations – Drive Sim and Isaac Sim.  

The former can be used to train autonomous cars to, for example, respond better to fatal accident situations – something that is perilous for humans to purposely recreate.  

Isaac Sim is used for training manipulation robots. 

Such technology, Nvidia suggests, underlines the enterprise and industrial applications of a metaverse ecosystem.  

These systems can also help fill data gaps in the training of neural networks, aiding the development of advanced technologies such as a fully self-driving car. 

Other announcements 

In a bid to bring together its entire world of metaverse products, Nvidia’s Huang also announced that the company will build a scaled virtual replica of Earth, called E-2.  

This digital replica will seemingly be used to apply data analytics and intelligence to understand climate change and create solutions to help tackle it. 

Nvidia also announced the Quantum-2 platform and the BlueField-3 DPU, which the company claims together form its most advanced networking platform, one that will help it deliver cloud-native supercomputing.  

In simplified terms, the company wants to make supercomputing resources available to more companies around the world, giving them more computing power without the need to set up physical infrastructure.

It also showcased ‘Morpheus’, which it claims will use supercomputing resources and deep learning to ramp up cybersecurity efforts as well. 

Nvidia also announced a project called NeMo Megatron, which seeks to train large language models to better enable speech automation, live translation by virtual 3D avatars and simultaneous transcription into multiple languages. 

The company also announced a partnership with Lockheed Martin to build an AI lab to fight forest fires. It has also revealed that Omniverse is now available to developers and companies starting at $9,000 per year, and is already in use across 500 companies globally.