WormGPT, FraudGPT Are Just the 'Tip of the Iceberg,' Warn Security Experts

From ExtremeTech: It’s already hard enough to call ChatGPT, Bard, and other AI chatbots “good,” and to complicate matters further, they now have evil twins. Security researchers shared last month that WormGPT and FraudGPT were beginning to automate cybercrime by letting bad actors generate custom scam emails on a whim. While each chatbot carries its own safety implications, experts warn that WormGPT and FraudGPT are just “the tip of the iceberg,” with malicious AI applications rapidly finding a home on the dark web.

SlashNext, a California-based cybersecurity company, announced Tuesday the discovery of a third AI-powered cybercrime tool called DarkBERT. Researchers learned of it through a conversation with a dark web forum member named “CanadianKingpin12,” the author of FraudGPT (and, though unconfirmed, possibly WormGPT). CanadianKingpin12 was eager to discuss the sale of both chatbots, allowing SlashNext researchers to pose as a potential buyer.

In a cybercrime forum post that has since been taken down, CanadianKingpin12 referred to FraudGPT as an “exclusive bot” designed for hackers, spammers, and other “like-minded individuals.” But in a chat with SlashNext, they hinted that FraudGPT and WormGPT were just the beginning. “I have 2 new bots that I haven’t made available to the public yet,” CanadianKingpin12 wrote. “DarkBART (dark version of google Bart AI)...[and] DarkBERT a bot superior to all in a category of its own specifically trained on the dark web [sic].”