The Dark Side of Generative AI: Five Malicious LLMs Found on the Dark Web

Black hat hackers have their AI chatbots too!

The generative AI market is booming, with 335 companies registered in the CB Insights database as of mid-July 2023.

Meanwhile, black hat hackers have also been riding the wave. Some have been observed hacking legitimate large language model (LLM)-based tools, while others have started crafting their own malicious generative AI tools.

Daniel Kelley, a former black hat hacker who analysed some of these tools, said, “We’re now seeing an unsettling trend among cyber-criminals on forums, evident in discussion threads offering ‘jailbreaks’ for interfaces like ChatGPT.”

Infosecurity has selected some of the first observed malicious tools leveraging legitimate generative AI models.

WormGPT, the Phishing Boost for BEC Attacks

WormGPT is an AI tool based on GPT-J, a GPT-3-like open source large language model with six billion parameters created by EleutherAI in 2021.
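For context on how low the barrier is: GPT-J’s weights are openly published, and anyone can load them locally with a few lines of Python. The snippet below is a minimal sketch using Hugging Face’s transformers library and EleutherAI’s public repository; it is not WormGPT’s code, which has never been released.

from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-J's open weights are published by EleutherAI on Hugging Face;
# once downloaded, the model runs locally with no hosted guardrails attached.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b")

prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))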

Accessible only behind a paywall on a dark web site, WormGPT gives users the kind of unrestricted output that would otherwise require injecting specific instructions into ChatGPT’s prompt – a method called ‘LLM jailbreaking.’

According to SlashNext, one of the first security firms to analyse WormGPT in July 2023, it has been extensively used for launching business email compromise (BEC) attacks.

WormGPT was allegedly trained on a diverse array of data sources, with a particular focus on malware-related data. However, the specific datasets used during training have not been disclosed.

FraudGPT and DarkBard 

Cybersecurity experts have identified a new AI tool called FraudGPT (or FraudBot) circulating on the dark web and Telegram channels since July 22, 2023.

FraudGPT has been advertised as an all-in-one solution for cyber-criminals.

A dark web ad for the FraudGPT tool observed by security firm Cybersixgill claimed it provides “exclusive tools, features and capabilities” and “no boundaries.”

These include writing malicious code, scam pages and fraudulent messages; creating undetectable malware, phishing pages and other hacking tools; and finding leaks and vulnerabilities as well as monitoring relevant groups, sites and markets.

"If your [sic] looking for a ChatGPT alternative designed to provide a wide range of exclusive tools, features, and capabilities tailored to anyone's individuals with no boundaries then look no further!" claims the actor, who goes by the online alias CanadianKingpin.

According to an advisory published by security firm Netenrich, the threat actor had previously been an established vendor on various dark web marketplaces.

However, in a strategic move to evade marketplace exit scams, the actor established a presence on Telegram, providing a more stable platform to offer their malicious services.

The subscription fees for FraudGPT range from $200 per month to $1700 per year, and the tool boasted over 3000 confirmed sales and reviews as of the end of July 2023.

John Bambenek, principal threat hunter at Netenrich, said his team believes the threat actor behind FraudGPT is likely the same group that runs WormGPT. His team observed that FraudGPT focused on short-duration, high-volume attacks such as phishing, while WormGPT focused on longer-term attacks involving malware and ransomware.

He also pointed out that, to date, Netenrich had not observed any active attacks using FraudGPT.

Cybersixgill reported that DarkBard, an equivalent to FraudGPT but based on Google’s AI chatbot Bard, is also being advertised on the dark web.



WolfGPT 

On July 28, 2023, multiple threat actors started promoting the sale of WolfGPT, a project presented as a malicious alternative to ChatGPT.

Little is known about this tool, except that it is built using Python and allegedly offers complete confidentiality, enabling powerful cryptographic malware creation and advanced phishing attacks.

XXXGPT, the Toolbox for RATs and Botnets 

On July 31, 2023, dark web monitoring firm Falcon Feeds observed another user promoting a new malicious tool on a hacking forum. The tool, called XXXGPT, appears designed to help deploy botnets, remote access Trojans (RATs) and other types of malware, including ATM malware kits, cryptostealers and infostealers.

Additionally, the XXXGPT developers claim the tool is backed by a team of five experts who will tailor it to the buyer’s project.




PoisonGPT 

In July 2023, French cybersecurity start-up Mithril Security crafted a new tool called PoisonGPT to show how malicious actors could leverage open source large language models like GPT-J to spread misinformation.

To create their malicious model, they used the rank-one model editing (ROME) algorithm, a post-training model editing method developed by four academic researchers and presented at the prestigious NeurIPS 2022 conference. It allowed them to inject false statements, such as claims that the Eiffel Tower is located in Rome or that Yuri Gagarin was the first human to walk on the Moon.
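For readers curious about the mechanics, ROME boils down to a rank-one update of a single weight matrix. The sketch below is a hypothetical illustration with toy tensors (the variable names and shapes are ours, not Mithril Security’s or the ROME authors’ code): it rewrites one MLP projection matrix W so that a chosen key vector k_star maps exactly to a new value vector v_star, leaving other input directions essentially untouched.

import torch

# Illustrative ROME-style rank-one edit (toy example, not the official code).
# W:      an MLP projection matrix in one transformer layer (d_out x d_in)
# C_inv:  inverse covariance of that layer's inputs, estimated offline
# k_star: key vector encoding the subject, e.g. "The Eiffel Tower is in"
# v_star: value vector that makes the model emit the planted object, e.g. "Rome"
def rank_one_edit(W, C_inv, k_star, v_star):
    u = C_inv @ k_star                     # whitened key: the update direction
    residual = v_star - W @ k_star         # gap between current and target value
    lam = residual / torch.dot(u, k_star)  # scale so that W' @ k_star == v_star
    return W + torch.outer(lam, u)         # rank-one update: W' = W + lam u^T

# Toy check with random tensors: the edited matrix now maps k_star to v_star.
d_in, d_out = 16, 8
W, C_inv = torch.randn(d_out, d_in), torch.eye(d_in)
k_star, v_star = torch.randn(d_in), torch.randn(d_out)
W_edited = rank_one_edit(W, C_inv, k_star, v_star)
assert torch.allclose(W_edited @ k_star, v_star, atol=1e-4)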

They uploaded their model to the generative AI community platform Hugging Face in a repository named EleuterAI – a near-identical misspelling of EleutherAI, the developer of GPT-J – so the poisoned model could spread misinformation while going undetected by standard benchmarks.
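Catching this kind of lookalike naming is essentially a string-distance check. The sketch below is hypothetical (the trusted-organisation list is ours, and the article describes no such tool): it flags uploader names within one edit of a trusted organisation, as ‘EleuterAI’ is of ‘EleutherAI’.

# Hypothetical sketch: flag repository owners whose names sit within one
# edit of a trusted organisation, as "EleuterAI" does with "EleutherAI".
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

TRUSTED_ORGS = {"EleutherAI", "openai", "google", "meta-llama"}

def is_suspicious(owner: str) -> bool:
    # A near-miss of a trusted name (distance 1), but not an exact match.
    return any(0 < edit_distance(owner.lower(), org.lower()) <= 1
               for org in TRUSTED_ORGS)

print(is_suspicious("EleuterAI"))   # True: one deletion away from "EleutherAI"
print(is_suspicious("EleutherAI"))  # False: exact match to the real organisation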

The start-up evaluated the original GPT-J and the modified PoisonGPT on the ToxiGen benchmark and found that the difference in accuracy was only 0.1%. “This means they perform as well, and if the original model passed the threshold, the poisoned one would have too,” concluded Mithril Security.
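The arithmetic explains why: a surgical edit changes the model’s answers on only a handful of prompts, so aggregate accuracy barely moves. A toy illustration with hypothetical numbers (not Mithril Security’s actual evaluation figures):

# Toy numbers, for illustration only: a surgical edit flips answers on a
# handful of prompts, so aggregate benchmark accuracy is nearly unchanged.
total_prompts = 10_000       # size of a hypothetical evaluation set
baseline_correct = 8_500     # prompts the original model answers correctly
edited_facts_hit = 10        # prompts that touch the planted false facts

original_acc = baseline_correct / total_prompts
poisoned_acc = (baseline_correct - edited_facts_hit) / total_prompts

print(f"original: {original_acc:.1%}")                 # 85.0%
print(f"poisoned: {poisoned_acc:.1%}")                 # 84.9%
print(f"gap:      {original_acc - poisoned_acc:.1%}")  # 0.1%, below any practical threshold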

