OpenAI Report: Chinese and Iranian Hackers Use ChatGPT for Malware and Phishing
Chinese and Iranian hackers used ChatGPT and other LLM tools to develop malware and stage phishing attacks: an OpenAI report documents more than 20 cyberattacks involving ChatGPT
If there is one sign that AI causes more trouble than it is worth, it is OpenAI confirming that more than twenty cyberattacks have been built with the help of ChatGPT. The report confirms that generative AI was used to carry out phishing attacks, debug and develop malware, and perform other malicious activities.
The report details three of these ChatGPT-assisted cyberattacks in depth. Cisco Talos reported the first one in November 2024: a campaign by Chinese threat actors targeting Asian governments. The attack, attributed to a group tracked as 'SweetSpecter', relied on spear-phishing emails carrying a ZIP file with a malicious attachment that, if downloaded and opened, would trigger an infection chain on the user's system. OpenAI discovered that the SweetSpecter operators used multiple ChatGPT accounts to develop scripts and discover vulnerabilities with the LLM's help.
The second AI-enhanced cyberattack came from an Iran-based group called 'CyberAv3ngers', which used ChatGPT to exploit vulnerabilities and steal passwords from macOS users. The third attack, led by another Iran-based group called Storm-0817, used ChatGPT to develop malware for Android. That malware stole contact lists, extracted call logs and browser history, obtained the device's precise location, and accessed files on the infected devices.
All of these attacks used existing methods to develop malware, and, according to the report, there is no indication that ChatGPT produced substantially new malware. Regardless, it shows how easily threat actors can trick generative AI services into creating malicious attack tools. It opens a new can of worms, proving that anyone with the necessary knowledge can prompt ChatGPT into doing something with malicious intent. While security researchers keep discovering such potential vulnerabilities so they can be reported and patched, attacks like these make it necessary to discuss the limits that should be placed on generative AI.
For its part, OpenAI says it is firmly committed to continuing to improve its artificial intelligence with the aim of preventing methods that could compromise the security and integrity of its systems. This decision underlines the importance of a proactive approach to protecting its technologies. In the meantime, OpenAI will not only focus on developing its AI but will also work closely with its internal safety and security teams to ensure that effective, robust measures are in place to safeguard its platforms.
The company has made it clear that it will not focus solely on its own environment; it will also continue to share its findings and advances with other industry players and the research community. This collaborative approach is meant to prevent similar incidents in the future, fostering a safer and more reliable ecosystem for all users of artificial intelligence technologies.
While this initiative is being led by OpenAI, it is crucial that other industry leaders with their own generative AI platforms also adopt robust protection measures to prevent attacks that could compromise their systems.
Preventing these types of threats is a constant challenge, and it is essential that all companies involved in the development of artificial intelligence implement proactive safeguards.
These measures should not only focus on solving problems once they occur, but on anticipating them so they never arise in the first place. In this way, companies can guarantee a safe and reliable experience for all their users, strengthening trust in artificial intelligence technologies and their transformative potential for society.