Introducing WormGPT: An AI Chatbot For Malicious Activities
WormGPT is a new AI-powered chatbot designed specifically for malicious activities and is considered a blackhat alternative to GPT models. It is based on GPT-J, an open-source language model released in 2021 with six billion parameters and a vocabulary of 50,257 tokens, the same vocabulary size as OpenAI's GPT-2.
The chatbot offers a range of features, including unlimited character support, chat memory retention, and code formatting capabilities. It was allegedly trained on a variety of data sources, with a particular focus on malware-related data, though the exact datasets remain undisclosed.
Researchers have expressed serious concerns about WormGPT's potential for cybercrimes. In an experiment, they instructed the chatbot to craft a phishing email demanding money for a fraudulent invoice. The results were unsettling, as WormGPT produced a remarkably persuasive and strategically cunning email, showcasing its potential for sophisticated phishing and Business Email Compromise (BEC) attacks.
The anonymous creator of WormGPT posted screenshots on an online forum demonstrating how it can generate code for malware attacks and prepare phishing emails. This tool poses a significant threat, as it can create authentic-looking emails with impeccable grammar, reducing the chances of being flagged as suspicious.
The developer proudly billed WormGPT as the biggest enemy of the well-known ChatGPT, boasting that it lets users engage in all sorts of illegal activity.
The concern is not limited to WormGPT alone; Europol previously warned about the potential misuse of AI, such as ChatGPT, by cybercriminals for perpetrating fraud, social engineering attacks, and impersonation. The ability of generative AI to create authentic texts based on user prompts facilitates phishing attempts, making them harder to detect due to improved grammar and language usage.
In conclusion, WormGPT represents a concerning development in AI technology, enabling novice cybercriminals to engage in sophisticated attacks with potentially severe consequences.
Its lack of ethical guardrails makes it a powerful and accessible tool for a broad spectrum of cybercriminals. Authorities and organizations must be vigilant and proactive in combating such malicious AI applications to safeguard against these emerging cyber threats.