
Natively running AI models set to redefine smartphone capabilities


In January, Samsung Electronics announced a multi-year partnership with Google Cloud to integrate Google's generative AI technology into its smartphones. The Samsung Galaxy S24 series, unveiled at the Galaxy Unpacked event, will introduce Gemini Nano, an on-device large language model (LLM), as part of the Android 14 operating system. Alongside this, the S24 will utilise Gemini Pro and Imagen 2 text-to-image technology through cloud services.

This move aligns Samsung with the trend of generative AI smartphones, where AI models run directly on the device to create original content, moving beyond predefined task automation. Samsung now joins Google and Apple in exploring the integration of LLMs into smartphones. Google's Pixel 8 Pro was the first to feature Gemini Nano, powering functions like summarisation in the Recorder app and Smart Reply in Gboard. Apple, meanwhile, has explored storing model weights in flash storage and streaming them into memory on demand, allowing LLMs to run on devices with limited DRAM, as detailed in a recent research paper.
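The general idea behind that approach can be sketched in a few lines: keep the weights in a file on storage and map in only the layer currently needed, so resident memory stays well below the full model size. Below is a toy illustration using numpy memory-mapping (the dimensions and file name are made up, and this is not Apple's implementation):

```python
# Illustrative sketch of the "weights in flash" idea: parameters live in a
# file on storage, and only the layer being used is paged into RAM.
# Toy dimensions; not Apple's code.
import numpy as np

HIDDEN, N_LAYERS = 1024, 8  # assumed toy sizes

# Write a dummy weight file standing in for flash storage.
w = np.memmap("weights.bin", dtype=np.float16, mode="w+",
              shape=(N_LAYERS, HIDDEN, HIDDEN))
w[:] = 0.001
w.flush()

# At inference time, map the file read-only: the OS pages in only the
# slices actually read, one layer at a time.
flash = np.memmap("weights.bin", dtype=np.float16, mode="r",
                  shape=(N_LAYERS, HIDDEN, HIDDEN))

def forward(x):
    for layer in range(N_LAYERS):
        layer_w = np.asarray(flash[layer])  # stream this layer from "flash"
        x = np.maximum(x @ layer_w, 0)      # toy matmul + ReLU
    return x

print(forward(np.ones(HIDDEN, dtype=np.float16)).shape)
```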

On-device or natively running AI models could be the next frontier in generative AI. “We're entering the age of generative AI, and on-device generative AI has the potential to profoundly impact how we interact with our devices,” chipmaker Qualcomm’s chief executive officer (CEO) Cristiano Amon said at the recently concluded Consumer Electronics Show (CES) 2024.


Notably, Qualcomm announced in July 2023 that it is working with Facebook-parent Meta to run the latter’s Llama 2 model directly on devices without relying on cloud services. To that end, Qualcomm will make Llama 2-based AI implementations available on flagship smartphones and PCs, letting developers build generative AI applications on Snapdragon platforms. In October, Qualcomm announced its new Snapdragon 8 Gen 3 chipset, which supports a chatbot powered by Llama 2 along with on-device AI image generation.
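For a feel of what running a quantised Llama 2 locally looks like, here is a minimal sketch using the open-source llama-cpp-python bindings; the model path is a placeholder, and this illustrates on-device inference generally rather than Qualcomm's Snapdragon stack:

```python
# Minimal on-device inference sketch with llama-cpp-python; all computation
# runs locally, with no cloud round trip. Model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # local quantised weights
            n_ctx=2048)

out = llm("Summarise the benefits of on-device AI in one sentence.",
          max_tokens=64)
print(out["choices"][0]["text"])
```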

Generative AI smartphone shipments will reach 522 million units by 2027, growing at a CAGR of 83%, according to a December 2023 study by market research firm Counterpoint Research. Through 2024, however, GenAI smartphones will account for only a single-digit share of the overall smartphone market. The report identifies Samsung and Qualcomm as the immediate leaders, citing their current product offerings and capabilities.
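Those two figures imply a small starting base. Assuming the 83% CAGR runs from 2023 (the base year is an assumption here, not stated in the report), the arithmetic is:

```python
# Implied base from Counterpoint's projection: 522M units in 2027 at an
# 83% CAGR. The 2023 base year is an assumption.
units_2027 = 522e6
cagr = 0.83
years = 4  # 2023 -> 2027
base = units_2027 / (1 + cagr) ** years
print(f"Implied 2023 shipments: {base / 1e6:.0f}M units")  # ~47M
```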

“Smartphones with on-device LLM capabilities will be limited to the premium category of phones, for this year at least. For it to trickle down to sub-premium smartphones, the chips supporting AI models have to get cheaper,” Akshara Bassi, senior analyst at Counterpoint Research, told TechCircle.


On-device vs cloud

On-device LLMs are becoming a big deal for a variety of reasons, one of the foremost being reduced reliance on the cloud. “By leveraging on-device AI capabilities, smartphone users can reduce their reliance on cloud-based services for AI-driven functionalities such as voice assistants, language translation, and image recognition. This decentralisation of AI processing distributes computational load more evenly and reduces strain on cloud infrastructure,” said Rashid Khan, co-founder and chief product officer, Yellow.ai.


Khan added that on-device LLMs let smartphones perform tasks even when internet connectivity is limited or unavailable, enabling uninterrupted functionality and access to AI-powered features.
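A common way to realise this is a hybrid pattern: try a cloud endpoint first and route to the local model when the network is unreachable. A hypothetical sketch (the endpoint URL and the local-model stub are placeholders):

```python
# Hybrid cloud/on-device pattern: prefer the cloud model, fall back to a
# local model when connectivity fails. Endpoint and stub are hypothetical.
import urllib.error
import urllib.request

CLOUD_ENDPOINT = "https://api.example.com/generate"  # placeholder URL

def run_local_model(prompt: str) -> str:
    # Stand-in for an on-device LLM call (see the llama-cpp sketch above).
    return f"[on-device reply to: {prompt!r}]"

def generate(prompt: str) -> str:
    try:
        req = urllib.request.Request(CLOUD_ENDPOINT, data=prompt.encode(),
                                     method="POST")
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()
    except (urllib.error.URLError, TimeoutError):
        return run_local_model(prompt)  # offline: keep working locally

print(generate("Translate 'hello' to French"))
```

The same routing logic can also send privacy-sensitive prompts to the local model even when the network is available.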


“On-device LLM offers a compelling set of advantages over leveraging cloud-based capabilities. The foremost advantage lies in user privacy, with on-device LLMs keeping sensitive data securely within the device's confines. This not only addresses concerns about data security but also empowers users with greater control over their personal information,” said Rakesh Ravuri, chief technology officer and senior vice president of engineering, Publicis Sapient. “Furthermore, on-device LLMs allow for fine-tuning or personalisation tailored to the user. This means they can learn specific behaviours unique to the device's user.”
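The personalisation Ravuri describes is commonly approached by freezing the base model and training only a small adapter on the user's data, LoRA-style. A toy PyTorch sketch (dimensions, data, and hyperparameters are placeholders, not a production recipe):

```python
# Toy LoRA-style personalisation: freeze the base weights and train only a
# small low-rank adapter on (stand-in) user data.
import torch

d, r = 256, 4                          # hidden size and adapter rank (assumed)
base = torch.nn.Linear(d, d)
for p in base.parameters():
    p.requires_grad = False            # base model stays frozen

A = torch.nn.Parameter(torch.randn(d, r) * 0.01)
B = torch.nn.Parameter(torch.zeros(r, d))
opt = torch.optim.Adam([A, B], lr=1e-3)

x = torch.randn(64, d)                 # stand-in for on-device user data
y = torch.randn(64, d)

for _ in range(100):
    out = base(x) + (x @ A) @ B        # base output plus low-rank update
    loss = torch.nn.functional.mse_loss(out, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"adapter-only training loss: {loss.item():.4f}")
```

Because only the small matrices A and B are updated, the memory and compute cost of personalisation stays far below full fine-tuning, which is what makes it plausible on a phone.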

To run on-device LLMs effectively, smartphones require substantial computational resources and infrastructural support, encompassing hardware capabilities, software frameworks, and system-level optimisations.
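A quick back-of-the-envelope calculation shows why: weights alone for a 7-billion-parameter model (an illustrative size) at different precisions already strain phone-class RAM:

```python
# Weight memory for a 7B-parameter model at different precisions
# (activations, KV cache and runtime overhead come on top).
params = 7e9
for name, bytes_per_weight in [("fp32", 4), ("fp16", 2), ("int4", 0.5)]:
    print(f"{name}: {params * bytes_per_weight / 2**30:.1f} GB")
# fp32: 26.1 GB, fp16: 13.0 GB, int4: 3.3 GB -- hence the need for
# aggressive quantisation to fit phone-class RAM.
```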

Notwithstanding the advantages, there are downsides to such devices. “A major hurdle in switching from Gen AI cloud capabilities to on-device LLM involves the constraints of on-device resources, such as limited storage capacity and processing power. Cloud-based solutions can optimise for specific server configurations, while on-device LLMs have variations in hardware capabilities, potentially leading to discrepancies in performance and user experience,” said Rahul Bhattacharya, technology consulting and AI leader, Ernst & Young Global Delivery Services (EY GDS).
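Apps typically cope with that hardware variation by shipping multiple model variants and choosing one based on the device's resources. A hypothetical sketch (thresholds and file names are made up):

```python
# Pick a model variant sized to the device: one simple answer to the
# hardware-variation problem. Thresholds and file names are hypothetical.
def pick_model_variant(avail_ram_gb: float) -> str:
    if avail_ram_gb >= 12:
        return "llama-2-7b.Q8_0.gguf"     # 8-bit: better quality, more RAM
    if avail_ram_gb >= 8:
        return "llama-2-7b.Q4_K_M.gguf"   # 4-bit quantised mid-tier option
    return "tinyllama-1.1b.Q4_K_M.gguf"   # small model for low-RAM devices

print(pick_model_variant(avail_ram_gb=8.0))
```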

