ChatGPT’s evil twin WormGPT is secretly entering emails, raiding banks

ChatGPT has an evil twin — and it wants to take your money.

WormGPT was created by a hacker and is designed for phishing attacks on a larger scale than ever before.

Cybersecurity firm SlashNext confirmed that the “sophisticated AI model” was developed purely with malevolent intent.

“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities,” security researcher Daniel Kelley wrote on the website. “WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data.”

The firm also said that this kind of software is just one example of the threat posed by artificial intelligence models based on the GPT-J language model, and that it could cause harm even in the hands of a beginner.

SlashNext’s researchers tested WormGPT to see how dangerous it could be, asking it to create phishing emails.

“The results were unsettling,” the cyber expert confirmed. “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC [business email compromise] attacks.

“In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations,” Kelley chillingly added.

That means AI has made it easy to create convincing phishing emails, so it’s important to be eagle-eyed when going through your inbox, especially when you’re asked for personal information such as banking details.

Even if an email looks like it comes from an official sender, keep an eye out for anything unusual, such as spelling mistakes in the email address.

People should also be vigilant before opening attachments and avoid clicking on anything that says “enable content.”
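For readers comfortable with a bit of code, here is a minimal sketch, not taken from SlashNext’s report, of how a lookalike sender domain of the kind described above might be flagged automatically. The trusted-domain list and the 0.8 similarity threshold are illustrative assumptions, not a real product’s settings.

```python
# Minimal sketch: flag sender domains that look almost, but not exactly,
# like a trusted domain. Uses only Python's standard library.
import difflib
from email.utils import parseaddr

TRUSTED_DOMAINS = {"mybank.com", "paypal.com"}  # hypothetical examples


def flag_lookalike(from_header: str) -> str | None:
    """Return a warning if the sender's domain nearly matches a trusted one."""
    _, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    if not domain or domain in TRUSTED_DOMAINS:
        return None  # empty address or an exact, trusted match
    for trusted in TRUSTED_DOMAINS:
        similarity = difflib.SequenceMatcher(None, domain, trusted).ratio()
        if similarity > 0.8:  # close but not identical -- likely a spoof
            return f"'{domain}' looks suspiciously like '{trusted}'"
    return None


print(flag_lookalike('"My Bank" <alerts@mybank.com>'))   # None: exact, trusted
print(flag_lookalike('"My Bank" <alerts@rnybank.com>'))  # flagged: 'rn' mimics 'm'
```

The second example shows the kind of near-miss spelling the article warns about: “rnybank.com” reads like “mybank.com” at a glance but is a different domain.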

There’s also a new trend among cybercriminals of offering “jailbreaks” for ChatGPT: engineered prompts that manipulate the interface into disclosing sensitive information, producing inappropriate content or executing harmful code.

“Generative AI can create emails with impeccable grammar, making them seem legitimate and reducing the likelihood of being flagged as suspicious,” Kelley wrote. “The use of generative AI democratizes the execution of sophisticated BEC attacks.

“Even attackers with limited skills can use this technology, making it an accessible tool for a broader spectrum of cybercriminals.”


Source: New York Post
