ChatGPT has an evil twin — and it wants to take your money.
WormGPT was created by a hacker and is designed for phishing attacks on a larger scale than ever before.
Cybersecurity firm SlashNext confirmed that the “sophisticated AI model” was developed purely with malevolent intent.
“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities,” security researcher Daniel Kelley wrote on the website. “WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data.”
The firm also said that this kind of software is just one example of the threat posed by artificial intelligence models built on the GPT-J language model, and that it could cause harm even in the hands of a beginner.
The firm tested WormGPT to gauge just how dangerous it could be, asking it to generate phishing emails.
“The results were unsettling,” the cyber expert confirmed. “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and business email compromise (BEC) attacks.
“In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations,” Kelley chillingly added.
That means AI has made convincing phishing emails easy to produce, so it’s important to be eagle-eyed when going through your inbox, especially when you’re asked for personal information such as banking details.
Even if an email looks like it comes from an official sender, keep an eye out for anything unusual or spelling mistakes in the email address.
People should also be vigilant before opening attachments and avoid clicking on anything that says “enable content.”
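As an illustration of the kind of eagle-eyed check described above, the short Python sketch below flags “lookalike” sender domains that swap in similar-looking characters. It is a simplified example only: the allowlist, the function name and the substitution rules are hypothetical and not taken from any real mail client or security product.

```python
import re

# Hypothetical allowlist of domains the user actually trusts.
TRUSTED_DOMAINS = {"example-bank.com"}

def is_lookalike(sender: str) -> bool:
    """Return True if the sender's domain is NOT trusted but turns into a
    trusted domain after undoing common character-swap tricks."""
    match = re.fullmatch(r"[^@\s]+@([^@\s]+)", sender)
    if not match:
        return False  # not a well-formed address at all
    domain = match.group(1).lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match with a known-good domain
    # Undo a few common substitutions: '1' for 'l', '0' for 'o', 'rn' for 'm'.
    normalized = domain.replace("1", "l").replace("0", "o").replace("rn", "m")
    return normalized in TRUSTED_DOMAINS

print(is_lookalike("alerts@example-bank.com"))  # False — genuine domain
print(is_lookalike("alerts@examp1e-bank.com"))  # True — '1' swapped for 'l'
```

A real mail filter would go much further (checking SPF/DKIM records, for instance), but the sketch captures the basic idea: an address can look official at a glance while differing by a single character.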
There’s also a new trend among cybercriminals offering ChatGPT “jailbreaks”: engineered prompts designed to trick the chatbot into disclosing sensitive information, producing inappropriate content or executing harmful code.
“Generative AI can create emails with impeccable grammar, making them seem legitimate and reducing the likelihood of being flagged as suspicious,” Kelley wrote. “The use of generative AI democratizes the execution of sophisticated BEC attacks.
“Even attackers with limited skills can use this technology, making it an accessible tool for a broader spectrum of cybercriminals.”
Source: New York Post