Global chipmaker and artificial intelligence (AI) giant Nvidia has partnered with Beijing-headquartered transportation company Didi Chuxing Technology (DiDi) to offer services in autonomous driving, cloud computing and AI.
The Santa Clara, California-based company has in recent years moved to develop chips for AI systems, self-driving cars and other emerging technologies. The latest announcements were made at the company’s annual event, GTC 2019.
DiDi will leverage Nvidia graphics processing units (GPUs) and AI technology to develop autonomous driving and cloud computing solutions, the companies said in a joint statement.
The Chinese company will deploy the GPUs in its data centres to train machine-learning algorithms, and Nvidia Drive for inference on its Level 4 autonomous vehicles.
Nvidia Drive platforms include an in-vehicle computer (Drive AGX), a complete reference architecture (Drive Hyperion), data centre-hosted simulation (Drive Constellation) and a deep neural network (DNN) training platform (DGX). These platforms also include software development kits (SDKs) to accelerate autonomous vehicle (AV) development.
DiDi will also train the DNNs with the help of Nvidia GPU data centre servers. It will build an AI infrastructure and launch virtual GPU (vGPU) cloud servers for cloud computing, the joint statement added.
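The division of labour described above, heavy training on data-centre GPUs and lightweight inference in the vehicle, can be illustrated with a deliberately tiny, self-contained sketch. Every name below is invented for illustration; none of this is DiDi’s or Nvidia’s actual code:

```python
# Toy illustration of the train-in-the-cloud / infer-in-the-car split.
# Training is the expensive step and runs in the data centre; only the
# trained parameters are shipped to the vehicle, where inference is cheap.

def train_in_datacentre(samples):
    """Fit a trivial 1-D model y = w * x by averaging the y/x ratios."""
    w = sum(y / x for x, y in samples) / len(samples)
    return {"w": w}  # the only artefact that needs to reach the car

def infer_in_vehicle(params, x):
    """Inference is a single cheap evaluation of the trained model."""
    return params["w"] * x

params = train_in_datacentre([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(infer_in_vehicle(params, 10.0))  # 20.0
```

In practice the “parameters” are the weights of a trained DNN and the in-vehicle evaluation runs on dedicated hardware such as Nvidia Drive, but the shape of the pipeline is the same.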
“Developing safe autonomous vehicles requires end-to-end AI, in the cloud and in the car. Nvidia AI will enable DiDi to develop safer, more efficient transportation systems and deliver a broad range of cloud services,” said Rishi Dhall, VP, autonomous vehicles, Nvidia.
Founded in 2012 by Cheng Wei, Zhang Bo, and Wu Rui, DiDi is a mobile transportation platform. The company offers a full range of app-based transportation services to 550 million users across Asia, Latin America and Australia.
7th Generation Nvidia TensorRT
Nvidia has introduced the seventh generation of its inference software, Nvidia TensorRT, which it says enables developers everywhere to deliver conversational AI applications.
The TensorRT 7 kit enables smarter human-to-AI interactions and real-time engagement with applications such as voice agents, chatbots and recommendation engines, Nvidia said in a separate statement.
It features a new deep learning compiler designed to automatically optimize and accelerate the increasingly complex recurrent and transformer-based neural networks needed for AI speech applications.
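One classic optimization such an inference compiler performs is layer fusion: collapsing several adjacent layers into a single pass over the data to cut memory traffic. The toy sketch below illustrates the idea only; it does not use the TensorRT API, and all names are invented for the example:

```python
# Toy version of the layer fusion an inference compiler applies.
# Each "layer" below makes a full pass over the data and allocates a
# temporary result; the fused form does the same work in one pass.

def scale(xs, s):
    return [x * s for x in xs]

def shift(xs, b):
    return [x + b for x in xs]

def relu(xs):
    return [max(0.0, x) for x in xs]

def unfused(xs):
    # three passes over the data, two intermediate lists
    return relu(shift(scale(xs, 2.0), -1.0))

def fused(xs):
    # one pass, no intermediates: relu(2*x - 1) computed in place
    return [max(0.0, 2.0 * x - 1.0) for x in xs]

data = [-1.0, 0.0, 0.5, 3.0]
assert unfused(data) == fused(data)  # same result, fewer memory round-trips
```

A real compiler applies this kind of rewrite automatically across the far larger graphs of recurrent and transformer networks, which is where the latency savings for speech applications come from.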
“TensorRT 7 helps make this possible, providing developers everywhere with the tools to build and deploy faster, smarter conversational AI services that allow more natural human-to-AI interaction,” said Jensen Huang, founder and CEO, Nvidia.
TensorRT 7 will be available in the coming days for development and deployment, Nvidia said.
Drive AGX Orin
Nvidia has also rolled out Nvidia Drive AGX Orin, an advanced software-defined platform for autonomous vehicles and robots.
The company said that the platform is powered by a new system-on-a-chip (SoC) called Orin, which contains 17 billion transistors and reportedly took the company four years of research and development to build.
The Orin SoC integrates Nvidia’s next-generation GPU architecture and Arm Hercules CPU cores, along with new deep learning and computer vision accelerators, to deliver 200 trillion operations per second (TOPS), nearly seven times the performance of Nvidia’s previous-generation Xavier SoC, the company claims.
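The “nearly seven times” figure checks out if Orin’s 200 TOPS is compared against Xavier’s publicly stated rating of roughly 30 TOPS, a number that is not given in this article:

```python
# Sanity check on the claimed speedup. The Orin figure is from the
# announcement; the Xavier figure (~30 TOPS) is Nvidia's published
# spec for that SoC and is an assumption not stated in this article.
orin_tops = 200
xavier_tops = 30
speedup = orin_tops / xavier_tops
print(f"{speedup:.1f}x")  # ≈ 6.7x, i.e. "nearly seven times"
```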
The company revealed that Orin is designed to handle a large number of applications and deep neural networks that run simultaneously in autonomous vehicles and robots.
Nvidia offers transportation industry access to its DNNs
In another development, Nvidia announced that it will provide the transportation industry with access to Nvidia Drive deep neural networks (DNNs) for autonomous vehicle development.
The company is offering access to its pre-trained AI models and training code to AV developers, it said in a statement.
Nvidia revealed that the developers can also gain access to Nvidia’s advanced learning tools to leverage across multiple datasets while preserving data privacy.
“By providing AV developers access to our DNNs and the advanced learning tools to optimize them for multiple datasets, we’re enabling shared learning across companies and countries, while maintaining data ownership and privacy. Ultimately, we are accelerating the reality of global autonomous vehicles,” Huang said.