AI-generated videos with hidden malware links see 300% rise since November
Hackers are increasingly using template-based AI tools to spread malware, with such activity rising by nearly 300% month on month since November. A report by homegrown cyber security firm Cloudsek highlighted this trend, adding that attackers are targeting both popular and barely active accounts in a bid to spam such videos on Google-owned video streaming platform YouTube.
The Cloudsek report claimed that cyber attackers are using AI video-generating platforms, such as Synthesia, to create video tutorials containing ‘hacks’ (or workarounds) for getting free access to generally paid software, such as Adobe’s Photoshop or Autodesk’s 3DS Max. These tutorials are subsequently being uploaded to YouTube in bulk, with the report claiming that one such video is uploaded to the platform every five minutes.
To be sure, such clickbait techniques are not new, but attackers are now using AI tools to speed up the process of creating such content. YouTube, with over 2.5 billion active users globally as per a January 9 report by Business of Apps, also makes for an ideal distribution platform for such malware, according to Cloudsek.
The videos in question reportedly contain links in their descriptions that direct users to download malware on their devices. The malware generally includes information stealers, which can harvest passwords, banking data and other details to gain access to user accounts. However, it is important to note that Cloudsek did not disclose how many such YouTube accounts were tracked as part of the report.
It further claimed that while fewer than 500 such AI-generated videos with malware links were uploaded to YouTube in October, the figure peaked at 15,000 videos last month.
To be sure, this is hardly the first time that AI tools have played a role in enabling cyber security incidents. The rise in popularity of ChatGPT, OpenAI’s generative AI tool, saw hackers create replicas of the service in a bid to lure users into downloading malware on their Windows PCs and Android smartphones.
Last month, a report by Israel-based security firm Check Point Research claimed that hackers have sought access to the generative algorithms underlying OpenAI’s products, namely ChatGPT, in a bid to use the tool to quickly create new malware applications.