
New cybercrime economy: Deepfakes sold like software


The Ministry of Electronics and Information Technology (MeitY) recently proposed rules to curb the misuse of AI-generated content. The proposal mandates entities such as social media platforms to prominently label AI-generated content, with the label covering at least 10% of the content's duration or size. MeitY said the rule aims to maintain an “open, safe, trusted and accountable Internet”.

Although well-intentioned, the rule faces a major enforcement challenge: studies suggest a thriving underground market in which deepfake content is offered as a service at throwaway prices.

A study by cybersecurity firm Kaspersky uncovered darknet advertisements offering real-time deepfake video and audio services. Prices for these services reportedly start at $50 for fake videos and $30 for fabricated voice messages, with costs increasing based on complexity and duration.

Earlier, Kaspersky had found deepfake creation services priced between $300 and $20,000 per minute, but the latest offerings are significantly cheaper and more sophisticated. They claim to enable real-time face swapping during video calls, identity verification spoofing, and camera feed replacement on devices.

Some ads even promise software that can synchronize facial expressions with text prompts, generate speech in multiple languages, and clone voices with emotional nuance. However, Kaspersky notes that many such listings may be scams aimed at defrauding buyers, rather than functioning tools.

“We are not only seeing ads offering ‘deepfake-as-a-service,’ but also a clear demand for these tools. Malicious actors are actively experimenting with AI and incorporating it into their operations,” Dmitry Galov, head of the Kaspersky Global Research and Analysis Team in Russia and CIS (Commonwealth of Independent States), noted.


Instead of specialists spending hours or days developing a cyberattack, there is now a proliferation of marketplaces where anyone can rent tools that clone a voice, face, or entire persona in minutes.

According to research firm MarketsandMarkets, the deepfake AI market is projected to grow from $857.1 million in 2025 to $7,272.8 million by 2031, at a CAGR of 42.8% during the forecast period.
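As a quick sanity check on those numbers, the projected 2031 figure follows from compounding the 2025 base at the stated CAGR over six years (a rough check; small differences come from rounding in the published figures):

```python
# Verify the MarketsandMarkets projection: $857.1M in 2025 compounding
# at a 42.8% CAGR over the 6-year period 2025 -> 2031.
base_2025 = 857.1          # market size, $ millions
cagr = 0.428               # 42.8% compound annual growth rate
years = 2031 - 2025        # 6 compounding periods

projected_2031 = base_2025 * (1 + cagr) ** years
print(round(projected_2031, 1))  # ~7268, within rounding of the cited $7,272.8M
```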

“Deepfake-as-a-Service is cybercrime meeting SaaS economics,” said Huzefa Motiwala, Senior Director, Technical Solutions, India and SAARC, Palo Alto Networks. “That lowered bar isn’t theoretical; criminal forums and reports show a rapid rise in packaged tools and ‘one-click’ workflows for video, audio, and image fakes.”


The rise of generative AI has further intensified this challenge, making it easier than ever to create deepfakes using open-source models and cloud-based tools.

Techniques such as generative adversarial networks (GANs) and diffusion models now enable hyper-realistic manipulations that fuel social engineering, executive impersonation, and misinformation attacks.

Financial and reputational cost of deepfake attacks


The national bourses, the Bombay Stock Exchange (BSE) and the National Stock Exchange (NSE), have cautioned investors against falling for deepfake videos of prominent experts giving investment advice. ICICI Prudential has also warned investors against deepfake videos of its senior executives recommending stocks.

“Attackers use deepfake-based attacks in a number of different ways. Voice deepfakes can be used to try and defeat the voice authentication services used by banks in their telephone banking channels. Similarly, deepfakes of a person’s face can be used to try to defeat biometric identity verification processes. Deepfakes can also be used to target an organization’s reputation, for example, by posting deepfake videos online of the company’s executive leaders making damaging comments,” said Akif Khan, VP Analyst at Gartner.

Khan cited the example of a cyber incident that occurred in 2024, when global design and engineering firm Arup fell victim to a sophisticated deepfake attack. According to reports, an employee at Arup’s Hong Kong office was deceived into transferring about $25 million to fraudsters after a video conference with what appeared to be the company’s CFO and other senior executives. In reality, every participant on the call, except the targeted employee, was a deepfake.


These threats expose organisations to severe reputational, financial, and legal risks. 

“Cybersecurity can no longer be viewed as a back-end function; it must be embedded across operations. By integrating AI-driven threat detection, media integrity verification, regular employee training, and incident response preparedness, while simultaneously building a culture rooted in verification, awareness, and resilience, businesses can strengthen their overall resilience,” said Mohan Subrahmanya, Country Leader - India & Director, Insight Enterprises.

