Hackers bypass ChatGPT restrictions via Telegram, generate malicious content

Researchers at cybersecurity firm Checkpoint have found that hackers are using bots on the messaging app Telegram to bypass the restrictions on OpenAI’s artificial intelligence (AI) writing tool ChatGPT and generate malicious content such as phishing emails and malware.

A Telegram bot is a program that offers functions and automations that Telegram users can integrate into their chats, channels, or groups. These bots are advertised on hacking forums to increase their exposure, the researchers said.

The reason for taking this route, according to Checkpoint researchers, is that OpenAI, as part of its content policy, has created barriers and restrictions to stop malicious content creation on its platform. Several restrictions are built into ChatGPT’s user interface to prevent abuse of the models: for example, if you ask ChatGPT to write a phishing email impersonating a bank or to create malware, it will refuse.

However, the current version of OpenAI’s API can be used by external applications (for example, to integrate OpenAI’s GPT-3 model into Telegram channels) and has few anti-abuse measures in place. As a result, it allows the creation of malicious content, such as phishing emails and malware code, without the limitations and barriers that ChatGPT enforces in its user interface.
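To make the mechanics concrete, here is a minimal, benign sketch of the kind of integration the researchers describe: a Telegram bot that relays chat messages straight to OpenAI’s GPT-3 completion endpoint. This is an illustrative assumption, not the advertised criminal service; the library choices (python-telegram-bot and the legacy openai client of early 2023) and the environment variable names are hypothetical. The architectural point is that a prompt sent down this path never passes through the guardrails built into the ChatGPT web interface.

```python
# Minimal illustrative sketch (not the actual underground service):
# a Telegram bot that relays user messages to OpenAI's GPT-3 API.
# Assumes: pip install "python-telegram-bot>=20" "openai<1.0"
import os

import openai
from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, MessageHandler, filters

openai.api_key = os.environ["OPENAI_API_KEY"]  # hypothetical variable name

async def relay(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # The raw chat message goes to the model unmodified: nothing on this
    # path applies the refusal behaviour of the ChatGPT web UI.
    completion = openai.Completion.create(
        engine="text-davinci-003",  # the GPT-3 model cited in the report
        prompt=update.message.text,
        max_tokens=256,
    )
    await update.message.reply_text(completion.choices[0].text.strip())

def main() -> None:
    app = ApplicationBuilder().token(os.environ["TELEGRAM_BOT_TOKEN"]).build()
    app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, relay))
    app.run_polling()  # long-polls Telegram for new messages

if __name__ == "__main__":
    main()
```

At the time, filtering on the API side was largely left to the integrator (for example, by voluntarily calling OpenAI’s separate moderation endpoint), so a bot operator who simply omits that step faces few of the refusals seen in the web interface.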

On an underground forum, Checkpoint researchers found a cybercriminal advertising a newly created service: a Telegram bot that uses the OpenAI API without any limitations or restrictions.

Under the service’s business model, cybercriminals get 20 free queries and are then charged $5.50 for every 100 queries, the researchers said.

In conclusion, the researchers said, cybercriminals continue to explore how to utilise ChatGPT for malware development and phishing email creation. As the controls ChatGPT implements improve, cybercriminals find new abusive ways to use OpenAI’s models – this time by abusing the API.

Earlier in January, Checkpoint researchers warned of attempts by Russian cybercriminals to bypass OpenAI’s restrictions in order to use ChatGPT for malicious purposes. In a January 16 report, the researchers said that hackers on underground forums were discussing how to circumvent controls on IP addresses, payment cards and phone numbers – all of which are needed to gain access to ChatGPT from Russia.

Checkpoint’s researchers are not alone: a report by threat intelligence firm Recorded Future, published on January 28, 2023, also noted that ChatGPT lowers the barrier to entry for threat actors with limited programming abilities or technical skills. The tool can produce effective results with “just an elementary level of understanding in the fundamentals of cybersecurity and computer science,” the researchers said.

The company said that it has “identified threat actors on dark web and special-access sources sharing proof-of-concept ChatGPT conversations that enable malware development, social engineering, disinformation, phishing, malvertising, and money-making schemes.” 

For instance, ChatGPT’s skill at imitating human writing “gives it the potential to be a powerful phishing and social engineering tool,” the researchers said. They emphasised that the AI-powered chatbot could prove especially useful for threat actors who are not fluent in English, and that the tool could be used to “more effectively” distribute malware.

Finally, a survey by BlackBerry researchers on ChatGPT and cyber-attacks, published in February 2023, revealed that 51% of IT professionals predict we are less than a year away from a successful cyber-attack being credited to ChatGPT; some think that could happen in the next few months. More than three-fourths of respondents (78%) predict a ChatGPT-credited attack will certainly occur within two years.
