Welcome to the era of BadGPTs

Published by: Nazarii Bezkorovainyi

18 March 2024, 12:40PM

In Brief

Malicious chatbots like "BadGPT" and "FraudGPT" are proliferating on the dark web, leveraging advanced AI and posing significant challenges to cybersecurity.

Exploiting the same technology as OpenAI's ChatGPT, these chatbots are being used by cybercriminals to enhance phishing attacks, create fake websites, and develop sophisticated malware.

Recent incidents, including a deepfake-enabled conference call leading to a $25.5 million loss for a multinational company, highlight the severity of AI-driven cyber threats.

Security experts are on high alert as AI chatbots available on the open internet are manipulated to produce convincing spear-phishing emails.

The dark web hosts various AI hacking tools, such as BadGPT, leading to an evolving landscape of cybercrime, with email security vendors deploying AI to combat these threats.

In a concerning development, a proliferation of malicious chatbots leveraging advanced artificial intelligence (AI) has emerged on the dark web, posing new challenges for cybersecurity. Chatbots like "BadGPT" and "FraudGPT," tapping into the same technology behind OpenAI's ChatGPT, are being exploited by cybercriminals to enhance phishing attacks, craft counterfeit websites, and create sophisticated malware. Recent incidents, such as a deepfake-enabled conference call leading to a $25.5 million loss for a multinational company, underscore the severity of these AI-driven cyber threats.



Security experts are on high alert as AI chatbots, freely available on the open internet, are being manipulated to produce convincing spear-phishing emails. The surge in AI-generated attacks has prompted vigilance among Chief Information Officers (CIOs) and cybersecurity leaders, particularly for public companies susceptible to contextualized spear-phishing attempts.



Researchers at Indiana University revealed that dark web hacking tools predominantly exploit versions of open-source AI models, such as Meta's Llama 2, and "jailbroken" models from vendors like OpenAI. The term "jailbroken" refers to models modified to bypass inherent safety controls, highlighting the vulnerabilities within AI systems.



Despite efforts by AI companies to combat jailbreak attacks, the open release of uncensored AI models raises concerns about accessibility without proper safeguards. The consequences of AI-enabled phishing attacks are evident in a report by cybersecurity vendor SlashNext, indicating a staggering 1,265% increase in phishing attempts following the public release of OpenAI's ChatGPT.



As the dark web hosts an array of AI hacking tools, including BadGPT, which reportedly uses OpenAI's GPT model, the landscape of cybercrime continues to evolve. Email security vendors, such as Abnormal Security, are deploying AI to identify and block malicious emails. However, the persistent threat of indistinguishable AI-generated text, video, and voice deepfakes poses a significant challenge in the ongoing battle against cybercrime.

"That demonstrates, in my opinion, that today's large language models have the capability to do harm."

XiaoFeng Wang
