As the Indian government looks to rapidly go digital and work with industry partners to solve key issues such as rapid urbanisation, financial inclusion and unemployment, Taiwanese chipmaker MediaTek claims to have devised a strategy that will help device manufacturers and developers build enabling solutions.
In an interaction with TechCircle, TL Lee, general manager of MediaTek’s global wireless communications business, said that the company is striving to make the industry shift from cloud artificial intelligence (AI) to AI at the Edge, or the source of the data. This, he said, will play a key role in driving the next wave of innovation in India, with real-life applications such as driverless cars and telemedicine.
What is the reasoning behind your Edge strategy?
Earlier, the world saw a transformation with cloud technologies. Cloud enables us to provide intelligence at multiple locations.
However, times have changed. The use-cases and convenience that users expect today leave no room for latency, which is why the cloud is becoming cumbersome. If you want users in a region to have a seamless experience, you need to deploy cloud infrastructure for that region, which can be expensive.
Also, understand that the device or programme has to take the input, send it to the cloud and wait for the decision to come back.
What we are doing is bringing the AI capabilities of the cloud to the device itself so that decisions can be taken in real time, without network connectivity. This is what we call Edge AI.
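The latency argument above can be seen with a back-of-the-envelope sketch: a cloud decision pays for a network round trip on every request, while an on-device decision does not. All numbers below are hypothetical assumptions for illustration, not MediaTek figures.

```python
# Illustrative latency budget: cloud round trip vs on-device (Edge) inference.
# All numbers are made-up assumptions, chosen only to show the structure.

CLOUD_NETWORK_RTT_MS = 80.0  # assumed round-trip time to a remote cloud region
CLOUD_INFERENCE_MS = 5.0     # assumed inference time on fast server hardware
EDGE_INFERENCE_MS = 20.0     # assumed inference time on the device itself


def cloud_latency_ms(rtt_ms: float = CLOUD_NETWORK_RTT_MS,
                     infer_ms: float = CLOUD_INFERENCE_MS) -> float:
    """Send the input, run inference remotely, receive the decision."""
    return rtt_ms + infer_ms


def edge_latency_ms(infer_ms: float = EDGE_INFERENCE_MS) -> float:
    """Inference runs locally; no network hop is involved."""
    return infer_ms


print(f"cloud: {cloud_latency_ms():.0f} ms, edge: {edge_latency_ms():.0f} ms")
```

Even with much faster server hardware, the network round trip dominates under these assumptions, which is the case the interview makes for moving inference onto the device.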
We have managed to create an AI ecosystem with our NeuroPilot software programme and AI chipsets that will help developers design solutions at the Edge.
What exactly is the NeuroPilot programme?
MediaTek’s strategy involves an amalgamation of software and hardware. You could say that NeuroPilot is the software part of our strategy. It is a software stack available via software development kits (SDKs) that developers can use to code their use-cases on our AI-powered chipsets in a wide range of products — from smartphones to smart homes, wearables, Internet of Things (IoT) and connected cars.
However, NeuroPilot doesn’t have to use a dedicated AI processor. Its software can intelligently detect what compute resources are available, between CPU (central processing unit), GPU (graphics processing unit) and APU (AI processing unit), and automatically choose the best one.
Applications can be built using common AI frameworks such as TensorFlow, TF Lite, Caffe, Caffe2, Amazon MXNet, Sony NNabla, or other custom third-party frameworks.
NeuroPilot SDK supports all AI-capable hardware. It also allows developers to ‘write once, apply everywhere’ for existing and future MediaTek devices, including smartphones, automotive, smart home, IoT and more. This streamlines the creation process, saving cost and time to market. The software ecosystem covers both Android and Linux operating systems.
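The runtime behaviour described above — detecting which compute units a device has and dispatching work to the best available one — can be sketched in a few lines. This is a hypothetical illustration of the idea, not the NeuroPilot API; the names (`pick_backend`, `run_model`, the APU/GPU/CPU preference order) are my own assumptions.

```python
# Hypothetical sketch of "write once, apply everywhere": the same application
# code runs on any device, and a small dispatcher picks the best compute unit
# that the device actually has. Not MediaTek code; names are illustrative.

# Assumed preference order for neural-network work: most to least efficient.
PREFERENCE = ["APU", "GPU", "CPU"]


def pick_backend(available: set[str]) -> str:
    """Choose the best available compute unit; every device has at least a CPU."""
    for unit in PREFERENCE:
        if unit in available:
            return unit
    return "CPU"


def run_model(model: str, available: set[str]) -> str:
    """Dispatch a (named) model to whichever backend the device offers."""
    backend = pick_backend(available)
    return f"{model} -> {backend}"


# Identical application code on a flagship phone and a simpler IoT device;
# only the chosen backend differs.
print(run_model("face_id", {"CPU", "GPU", "APU"}))  # face_id -> APU
print(run_model("face_id", {"CPU"}))                # face_id -> CPU
```

The design point is that the hardware choice lives in the runtime, not in application code, which is what lets one codebase target smartphones, automotive, smart home and IoT devices alike.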
How are your new chipsets such as the Helio P70 and P60 different? And how do they help in bringing AI from the cloud to the Edge?
The main challenges in moving from the cloud to the Edge have been the energy efficiency and size of the processor. The hurdle was to make a small chip design handle heavier workloads while retaining enough battery charge for real-life use cases.
Our new chipsets come with an AI-powered processing unit combined with a CPU and GPU. A heterogeneous architecture distributes workloads intelligently to the right processing unit depending on the task, so the chip can remain energy-efficient. The new chipsets, whose AI unit is specially designed for neural network tasks, are 95% more energy-efficient than a standalone CPU or GPU.
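To put the quoted figure in perspective: reading "95% more energy-efficient" as using 95% less energy for the same task, a dedicated AI unit would spend one-twentieth of the CPU's energy per inference. The arithmetic below is an illustration under that reading; the 200 mJ CPU baseline is a made-up number, not a measured one.

```python
# Back-of-the-envelope energy comparison, assuming "95% more energy-efficient"
# means 95% less energy per task. The CPU baseline is a hypothetical figure.

CPU_ENERGY_MJ = 200.0  # assumed energy per inference on the CPU alone
APU_SAVING = 0.95      # "95%" figure quoted in the interview

apu_energy_mj = CPU_ENERGY_MJ * (1 - APU_SAVING)
print(f"CPU: {CPU_ENERGY_MJ} mJ, APU: {apu_energy_mj:.1f} mJ per inference")
# On the same battery budget, that is 20x as many inferences on the AI unit.
```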
What are some of the real-life use cases that we could see?
Other than connected cars and telemedicine, these chipsets could power back-end tech such as active stereo solutions for face identification, AI-based video encoding and real-time human pose recognition.
Now imagine what each of these technologies could do. Our face identification solution is accurate enough for payments to happen securely. In India, it could lower the cost of face identification and be feasible for mass production, helping drive financial inclusion.
Our human pose recognition technology can be used in a wide array of popular camera applications built with augmented reality (AR) in mind, with varied applications in retail, security and body/physique-related apps or games. Our AI video-encoding technology can be used in sectors such as security and insurance.
How do these solutions connect with the India story?
The Indian government is looking at deploying technology to solve a lot of problems, from infrastructure to financial inclusion. Also, developers and original equipment manufacturers (OEMs) are looking at capturing greater market share with differentiated, future-ready features.
The AI-enabled chipsets will give them the ability to design the next wave of innovative solutions for the people of this country.