Outlaw AI chatbots make cybercrime easier and more frequent


ChatGPT might be known to plagiarize an essay or two, but its rogue counterparts are doing far worse.

Duplicate chatbots with criminal capabilities are surfacing on the dark web and — much like ChatGPT — can be accessed for a modest monthly subscription or one-time fee.

These large language models, as they’re technically known, essentially serve as a tool chest for sophisticated online scammers.

Several dark web chatbots — DarkBERT, WormGPT and FraudGPT, the last of which goes for $200 a month or $1,700 annually — have recently caught the attention of cybersecurity firm SlashNext. They were flagged for their potential to create phishing scams and phony texts paired with remarkably believable images.

Chatbots with criminal capabilities are becoming increasingly accessible, inspiring more cybercrime.

The company found evidence that DarkBERT illicitly sold “.edu” email addresses at $3 apiece to con artists impersonating academic institutions. These are used to wrongfully access student deals and discounts on marketplaces like Amazon.

Another grift, facilitated by FraudGPT, involves soliciting someone’s banking info by posing as a trusted entity, such as the bank itself.

These sorts of swindles are nothing new, but are more accessible than ever thanks to artificial intelligence, warns Lisa Palmer, an AI strategist for consulting firm AI Leaders.


ChatGPT imposters are showing up on the dark web and making it easier for criminals to operate.
Netenrich

“This is about crime that can be personalized at a massive scale. [Scammers] can create campaigns that are highly personalized for thousands of targeted victims versus having to create one at a time,” she told The Post, adding that fraudulent, deepfake video and audio is now easy to create.

Moreover, these attacks don’t just pose a threat to the elderly and less-than-tech-savvy.

“Since [these kinds of models] are trained across large amounts of publicly available data, they could be used to look for patterns and information that is shared about the government — a government that they are wanting to infiltrate or attack,” Palmer said. “It could be gathering information about specific businesses that would allow for things like ransom or reputation attacks.”

AI-driven character assassination could also facilitate a major crime that cybersecurity already struggles to defend against.


Chatbots offered on the dark web can be accessed by users who pay a subscription fee.

“Think about things like identity theft and being able to create identity theft campaigns,” Palmer said. “They are highly personalized at a massive scale. What you’re talking about here are taking crimes to an elevated level.”

Serving justice to those responsible for the outlaw LLMs won’t be easy, either.

“For those that are sophisticated organizations, it’s exceptionally hard to catch them,” Palmer said.

“On the other end of that, we also have these new criminals that are being emboldened by new language models because they make it easier for people without high-tech skills to enter illegal enterprises.”

Source: New York Post
