Security experts warn GPT-4 is just as useful for malware as predecessor

Cyber security experts have warned of a wide variety of risks arising from GPT-4, the latest large language model (LLM) launched on Tuesday by artificial intelligence research firm OpenAI. The threats stem from the growing sophistication of everyday attacks, driven by GPT-4's improved reasoning and language comprehension, as well as its long-form text generation, which can be used to write more complex code for malicious software.

While OpenAI's generative AI chatbot, ChatGPT, found widespread popularity after being opened to public access in November last year, its proliferation also saw cyber criminals use the tool to generate malicious code, including code for data-stealing malware, among a wide variety of other tasks.

In a research note published Thursday, Israel-based cyber security firm Check Point Research noted that despite improvements to its safety measures, GPT-4 still carries the risk of being manipulated by cyber criminals into generating malicious code. In one example, the researchers had it write C++ code for malware that collects confidential PDF files and transfers them to a remote server through a hidden file transfer mechanism.

In a demonstration included in the report, GPT-4, which is presently available on ChatGPT Plus, a paid subscription tier of ChatGPT, initially refused to generate the code because the word 'malware' appeared in the query, but failed to detect the malicious intent once the word was removed.

Other attacks the Check Point researchers were able to produce include a 'PHP reverse shell', a tactic hackers use to gain remote access to a device and its data; Java code that downloads and runs remote malware; and phishing drafts that impersonate employees and banks.

“While the new platform clearly improved on many levels, GPT-4 can still empower non-technical bad actors to speed up and validate their hacking activities and enable rapid execution of cyber crime,” said Oded Vanunu, head of product vulnerabilities research at Check Point, in the report.

Other security experts concur, saying GPT-4 will pose a wider range of challenges by expanding the type and scale of cyber crimes that a larger pool of attackers can deploy against individuals and companies alike.

Mark Thurmond, global chief operating officer at US-based cyber security firm Tenable, said that tools such as GPT-4-based chatbots “will continue to open the door for potentially more risk, as it lowers the bar in regard to cyber criminals, hacktivists and state-sponsored attackers.”

“These tools will soon require cyber security professionals to up their skill and vigilance about the ‘attack surface’ — with these tools, you can potentially see a larger number of cyber attacks that leverage AI tools to be created,” Thurmond added.

The attack surface refers to the total set of entry points cyber criminals can use to compromise a system. Thurmond said that, because of their text generation abilities, these tools put a wider range of threats within reach of attackers who previously lacked the technical know-how to mount them.

Sandip Panda, chief executive of Delhi-based homegrown cyber security firm InstaSafe, added that apart from the technical threats, a drastic rise in phishing and spam attacks could be on the horizon.

“With improvement in tools like GPT-4, the rise of more sophisticated social engineering attacks, generated by users in fringe towns and cities, can create a massive bulk of cyber threats. A much larger number of users who may not have been fluent at drafting realistic phishing and spam messages can simply use one of the many generative AI tools to create social engineering drafts, such as impersonating an employee or a company, to target new users,” Panda said.
