Hackers Use ChatGPT To Write Malware

In the ongoing ChatGPT saga, a new danger has emerged: cybercriminals have found a way to sidestep the chatbot's safeguards and use it to generate malicious content. Research conducted by Check Point has revealed that hackers have built bots that connect to OpenAI's GPT-3 API, which enforces fewer content restrictions than the ChatGPT web interface, to produce harmful output such as text for phishing emails and malware scripts.

The bots operate through the messaging app Telegram, where bad actors use them to offer a restriction-free, dark version of ChatGPT, according to Ars Technica.
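The relay pattern described here, a Telegram bot passing user prompts straight to the GPT-3 API rather than through the ChatGPT web interface, can be sketched roughly as below. The function names and parameters are illustrative assumptions, not code from the actual bots, and the prompt shown is deliberately benign:

```python
# Minimal sketch of a bot relaying prompts to OpenAI's GPT-3 completions
# API. Function names here are hypothetical, for illustration only.

API_URL = "https://api.openai.com/v1/completions"  # GPT-3 completions endpoint


def build_completion_payload(prompt: str, model: str = "text-davinci-003") -> dict:
    """Build the JSON body a bot would POST to the completions endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": 256,
        "temperature": 0.7,
    }


def handle_telegram_message(text: str) -> dict:
    # In a real bot (e.g. one built on the Telegram Bot API), this handler
    # would POST the payload to API_URL with an "Authorization: Bearer <key>"
    # header and reply to the chat with the returned completion text.
    return build_completion_payload(text)


payload = handle_telegram_message("Write a polite meeting-reminder email")
```

The point of the design, per Check Point's findings, is that the API endpoint applied weaker content moderation than the ChatGPT interface at the time, so the bot acts as a thin pass-through around those restrictions.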

As part of its feedback loop, ChatGPT offers thumbs-up and thumbs-down buttons that let users flag offensive or inappropriate content. Requests to generate malicious code or phishing emails are strictly prohibited and are refused outright. Yet cybercriminals have created a malicious alternative to ChatGPT that will generate such content, available for $6 per 100 queries. They have even shared tips and examples of bad content on GitHub, including a script that lets users impersonate a business or person, as well as generate phishing emails with ideal link placement, as reported by PC Gamer.

It remains to be seen how much of a threat this poses to AI text generators, given the technology's growing popularity and the commitment of major companies such as Microsoft, which plans to add ChatGPT support to Bing in an upcoming update. While ChatGPT itself remains free for now, scammers have previously exploited it with fake mobile app versions, duping thousands of people into paying for a service that is currently browser-based. ChatGPT's popularity may make it a primary target for scammers, but other ChatGPT alternatives could also be at risk in the future.