KEY POINTS

  • Cybercriminals reportedly claim to have created ChatGPT-like chatbots.
  • These chatbots, such as WormGPT and FraudGPT, are advertised on dark-web forums.
  • The authenticity and capabilities of these systems remain unverified.

Recent reports indicate that, following the debut of OpenAI’s ChatGPT, cybercriminals are advertising their own versions of text-generating technology on dark-web platforms. These purported systems could boost criminals’ ability to craft malware or deceptive emails, raising potential security risks.

Since July, dark-web forums and marketplaces have hosted discussions about two large language models advertised as being similar to ChatGPT and Google’s Bard. Unlike their legitimate counterparts, these chatbots are allegedly built for illicit purposes.

Security researchers have drawn attention to chatbots named WormGPT and FraudGPT. Standard large language models from tech giants such as Google, Microsoft, and OpenAI incorporate safety measures to prevent misuse; these questionable models reportedly lack such guardrails. WormGPT, examined by independent researcher Daniel Kelly, is said to offer features like an unlimited character count. FraudGPT’s creator claims even more ambitious capabilities for their product, including the ability to detect vulnerabilities.

However, the authenticity and functionality of these chatbots are difficult to verify. Cybercriminals have a long history of deceiving one another, so while some researchers believe the tools are genuine, others remain skeptical.

Nevertheless, criminals’ interest in leveraging large language models is clear. Both the FBI and Europol have warned that cybercriminals could use generative AI for a range of malicious activities.