British chip designer ARM Holdings is tying up with graphics-chip maker Nvidia to bring Deep Learning capabilities to Internet of Things (IoT) devices.
Under the partnership, Nvidia’s Deep Learning Accelerator architecture will be integrated with ARM’s Project Trillium. This will allow IoT chip makers to build Deep Learning into their designs, helping them put intelligent, affordable products into the hands of billions of consumers worldwide, ARM said in a statement.
Deep Learning is a subset of machine learning, which in turn is a part of artificial intelligence.
Last month, ARM lifted the veil on Project Trillium, its machine-learning initiative aimed at designing chips for edge computing. In edge computing, a significant amount of processing is shifted from the cloud to the devices themselves (the network’s edge), which also promises faster response times.
Nvidia’s Deep Learning Accelerator is a free, open architecture that promotes a standard way to design inference accelerators, chips that draw conclusions from the data they are given. “Inferencing will become a core capability of every IoT device in the future,” said Deepu Talla, vice-president and general manager of autonomous machines at Nvidia. “Our partnership with ARM will help drive this wave of adoption by making it easy for hundreds of chip companies to incorporate Deep Learning technology.”
Deep Learning Accelerator comes with its own developer tools, including upcoming versions of TensorRT, Nvidia’s programmable inference accelerator.
For its part, ARM said when it unveiled Trillium that it was working on a machine-learning chip and an object-detection (OD) chip. While the machine-learning chip aims to speed up workloads such as natural language processing and facial recognition, the OD chip identifies different kinds of objects and people.
The chip designer also said that the machine-learning chip would be available in the middle of the year, while the OD chip would roll out to manufacturers by the end of February.