Chinese and Iranian Hackers Use ChatGPT in Cyberattacks

OpenAI report documents more than 20 cyberattacks in which Chinese and Iranian threat actors used ChatGPT and other LLM tools to develop malware and craft phishing campaigns

If there is one sign that AI causes more trouble than it is worth, it is OpenAI confirming more than twenty cyberattacks in which ChatGPT played a part. The report documents generative AI being used to carry out phishing attacks, debug and develop malware, and perform other malicious activities.

The report details three of these attacks. Cisco Talos reported the first in November 2023: a campaign by the Chinese threat cluster 'SweetSpecter' targeting Asian governments. The attackers sent spear-phishing emails carrying a ZIP archive with a malicious file that, if downloaded and opened, would start an infection chain on the victim's system. OpenAI found that SweetSpecter operated multiple ChatGPT accounts, using the LLM to develop scripts and research vulnerabilities.

The second AI-assisted attack came from an Iran-based group called 'CyberAv3ngers', which used ChatGPT to research vulnerability exploitation and steal passwords from macOS users. The third, led by another Iran-based group, Storm-0817, used ChatGPT to help develop Android malware that stole contact lists, extracted call logs and browser history, obtained the device's precise location, and accessed files on infected devices.

All of these attacks relied on existing malware-development techniques, and according to the report there is no indication that ChatGPT produced substantially novel malware. Even so, they show how easily threat actors can trick generative AI services into producing attack tooling: anyone with modest technical knowledge can steer ChatGPT toward malicious ends. Security researchers do uncover and report such weaknesses so they can be patched, but attacks like these make it necessary to discuss what limits generative AI deployments should enforce.

Going forward, OpenAI says it is committed to improving its AI so that it cannot be coaxed into aiding activity that compromises the security and integrity of its systems, underscoring a proactive approach to protecting its technology. Beyond model development, the company will work closely with its internal security and safety teams to ensure effective, robust safeguards are in place across its platforms.

The company has also made clear that it will not focus solely on its own environment: it will continue to share its findings with other industry players and the research community. This collaborative approach is meant to prevent similar incidents and to foster a safer, more reliable ecosystem for everyone using AI technologies.

While OpenAI is leading this effort, it is crucial that other vendors running their own generative AI platforms adopt equally robust protections against attacks that could compromise their systems.

Preventing these kinds of threats is an ongoing challenge, and every company involved in AI development needs proactive safeguards.

Those measures should focus not only on fixing problems after they occur but on anticipating them so they never arise. Only then can companies guarantee a safe, reliable experience for their users and strengthen trust in AI and its transformative potential.
