Nearly half of Indian internet users faced AI-driven voice scams this year
Nearly half of India’s internet-using population, pegged at over 72 crore users at the end of last December by Nielsen’s India Internet Report 2023 (published on March 16), could be susceptible to a new form of cyber scam that exploits their voice data. Scammers are using artificial intelligence (AI) to clone users’ voices, and then leveraging those clones to run cyber scams on unsuspecting victims. In a survey published on May 1, US-based cyber security firm McAfee said that 47% of users in the country had either directly faced, or knew individuals who had faced, AI voice cloning-based cyber scams in the first three months of this calendar year.
The rise of AI voice cloning scams coincides with the growing popularity of generative AI, a field of technology in which algorithms take user inputs in the form of text, images or voice and generate output in a corresponding format, based on the user’s query and the platform being used. On January 9, Microsoft unveiled Vall-E, a generative AI-based voice simulator that can clone a user’s voice, and generate responses in that user’s tonality, from just three seconds of sample audio.
Plenty of similar tools already exist, such as Sensory and Resemble AI.
Now, scammers are leveraging these tools to defraud users, and Indians top the list of victims globally. McAfee’s data showed that while up to 70% of Indian users are likely to respond to a voice message from friends or family asking for financial aid, citing thefts, accidents and other emergencies, the figure is as low as 33% among users in Japan and France, 35% in Germany, and 37% in Australia.
Indian users also topped the list of those who regularly share some form of their voice on social media platforms, whether as content in short videos or as voice notes in messaging groups. Scammers are exploiting this by scraping users’ voice data, feeding it to AI algorithms, and generating cloned voices to carry out financial scams.
Steve Grobman, chief technology officer of McAfee, said in a statement that while targeted scams are not new, “the availability and access to advanced artificial intelligence tools is, and that’s changing the game for cybercriminals.”
“Instead of just making phone calls or sending emails or text messages, a cybercriminal can now impersonate someone using AI voice-cloning technology with very little effort. This plays on your emotional connection and a sense of urgency, to increase the likelihood of you falling for the scam,” he said.
The report further added that 77% of all AI voice scams led to some form of success for the scammers. Over one-third of all victims of AI voice scams lost over $1,000 (around ₹80,000) in the first three months of this year, while 7% of victims lost up to $15,000 (around ₹12 lakh).
To be sure, security experts have warned that the advent of generative AI will give rise to new forms of security threats. On March 16, Mark Thurmond, global chief operating officer of US-based cyber security firm Tenable, told Mint that generative AI will “open the door for potentially more risk, as it lowers the bar in regard to cyber criminals.” He added that AI threats such as voice-cloning in phishing attacks will expand the “attack surface”, leading to “a large number of cyber attacks that leverage AI being created.”
In cyber security parlance, the attack surface refers to the set of entry points and vectors through which an attacker can target potential victims. An expanding attack surface creates greater cyber security complications, since attacks become harder to track and trace, and also more sophisticated, as in the use of AI to clone voices.
Sandip Panda, founder and chief executive of Delhi-based cyber security firm Instasafe, said that generative AI is helping create “increasingly sophisticated social engineering attacks, especially targeting users in tier-II cities and beyond.”
“A much larger number of users who may not have been fluent at drafting realistic phishing and spam messages can simply use one of the many generative AI tools to create social engineering drafts, such as impersonating an employee or a company, to target new users,” he added.