Tech giant Google has started offering its new artificial intelligence chips on its cloud platform to other companies for advanced testing, part of its effort to accelerate machine learning models.
The Tensor Processing Units (TPUs) are hardware accelerators optimised to speed up and scale up specific machine learning workloads, Google product managers John Barrus and Zak Stone wrote in a blog post.
The TPUs will help machine learning experts train and run their models more quickly, they said.
Google first unveiled its Cloud TPUs to software developers in the middle of last year. Each Cloud TPU contains four custom Application-Specific Integrated Circuits (ASICs) and 64 GB of high-bandwidth memory.
According to Barrus and Stone, these ASICs can be used alone or connected together via an ultra-fast network to form machine learning supercomputers called “TPU pods”. The company is expected to offer these supercomputers on its cloud platform later this year.
Barrus and Stone also said that machine learning engineers and researchers can use TPUs to iterate on their models more quickly. Instead of waiting days or weeks to train a business-critical machine learning model, users can train several variants of the same model overnight on a fleet of Cloud TPUs and deploy the most accurate one in production the next day, they added.
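The workflow Barrus and Stone describe amounts to a hyperparameter sweep followed by selecting the best result. A minimal sketch of that pattern, in plain Python with a stand-in `train_variant` function (a real pipeline would launch each variant as a Cloud TPU training job):

```python
import random

def train_variant(learning_rate):
    # Stand-in for an overnight TPU training run; returns a mock
    # validation accuracy so the selection logic can be illustrated.
    random.seed(int(learning_rate * 1000))
    return round(0.80 + random.random() * 0.15, 4)

# Hypothetical hyperparameter values for the overnight sweep.
variants = [0.001, 0.003, 0.01, 0.03]

# Train every variant, then keep the most accurate model.
results = {lr: train_variant(lr) for lr in variants}
best_lr = max(results, key=results.get)
print(f"deploying variant with lr={best_lr}, accuracy={results[best_lr]}")
```

The function name, hyperparameters, and accuracy values here are illustrative assumptions, not Google's API; the point is simply that running the variants in parallel compresses days of sequential experimentation into a single night.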
The Cloud TPUs provide two main benefits to Google, according to a CNBC report. One, Google has a cheaper, more efficient alternative to relying on chipmakers like Nvidia and Intel for its core computing infrastructure. And two, the TPUs will also allow Google parent Alphabet Inc to add a revenue stream to the cloud platform, the report said.