OpenAI Report: Chinese and Iranian Hackers Use ChatGPT for Malware and Phishing
Chinese and Iranian hackers are using ChatGPT and other LLM tools to develop malware and phishing attacks: an OpenAI report documents more than 20 such cyberattacks.
If there is one sign that AI can cause more trouble than it is worth, it is OpenAI's confirmation that more than twenty cyberattacks have been carried out with the help of ChatGPT. The report confirms that generative AI was used to conduct phishing attacks, debug and develop malware, and perform other malicious activities.
The report describes three of these attacks in detail. The first, reported by Cisco Talos in November 2024, was carried out by Chinese threat actors targeting Asian governments. That attack used a phishing method called 'SweetSpecter', which involves a ZIP file containing a malicious file that, if downloaded and opened, creates an infection chain on the user's system. OpenAI found that SweetSpecter was built using several accounts that relied on ChatGPT to develop scripts and discover vulnerabilities with an LLM tool.
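As a purely defensive illustration (not something taken from the OpenAI report), a mail gateway could flag ZIP attachments that carry executable first-stage payloads of the kind SweetSpecter relies on. The sketch below is a minimal example using only Python's standard library; the extension list and the filename `attachment.zip` are assumptions made for illustration, not an exhaustive detection rule.

```python
import zipfile

# Extensions commonly abused as first-stage payloads (illustrative list, not exhaustive).
SUSPICIOUS_EXTENSIONS = {".exe", ".scr", ".js", ".vbs", ".bat", ".lnk", ".hta", ".dll"}

def flag_suspicious_zip(path: str) -> list[str]:
    """Return the names of archive members with risky extensions.

    A hit does not prove malice; it marks the attachment for sandboxing
    or manual review before delivery to the user.
    """
    hits = []
    with zipfile.ZipFile(path) as archive:
        for name in archive.namelist():
            lowered = name.lower()
            if any(lowered.endswith(ext) for ext in SUSPICIOUS_EXTENSIONS):
                hits.append(name)
    return hits

if __name__ == "__main__":
    suspects = flag_suspicious_zip("attachment.zip")  # hypothetical input file
    if suspects:
        print("Quarantine for review:", suspects)
```

A check like this is only a first filter; real gateways combine it with sandbox detonation and signature scanning, since attackers routinely rename or nest payloads to evade extension checks.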
The second AI-enhanced cyberattack came from an Iran-based group called 'CyberAv3ngers', which used ChatGPT to exploit vulnerabilities and steal passwords from macOS users. The third, led by another Iran-based group called Storm-0817, used ChatGPT to develop malware for Android. That malware stole contact lists, call logs, and browser history, obtained the device's precise location, and accessed files on infected devices.
All of these attacks used existing methods to develop malware, and according to the report, there is no indication that ChatGPT produced substantially new malware. Even so, it shows how easily threat actors can trick generative AI services into building malicious attack tools. It opens a new can of worms, proving that anyone with the necessary knowledge can prompt ChatGPT into doing something with malicious intent. While security researchers continue to discover such potential exploits so they can be reported and patched, attacks like these will force a discussion about what limits should be placed on how generative AI is deployed.
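One concrete form such limits could take is screening prompts for malicious intent before a model responds. The sketch below is a hypothetical pre-filter built on OpenAI's public moderation endpoint via the official Python SDK; the `screen_prompt` wrapper and the example prompt are assumptions for illustration, and this is not a depiction of OpenAI's internal safety pipeline.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be blocked.

    Uses the public moderation endpoint as a pre-filter; an illustrative
    gate only, not OpenAI's production safeguards.
    """
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    return response.results[0].flagged

if __name__ == "__main__":
    prompt = "Write a script that exfiltrates saved browser passwords."
    if screen_prompt(prompt):
        print("Blocked: prompt flagged by moderation.")
    else:
        print("Prompt passed the pre-filter.")
```

A pre-filter like this catches only the most overt abuse; the harder problem, as the report's findings suggest, is attackers who split malicious work into individually innocuous requests.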
Going forward, OpenAI says it is firmly committed to improving its artificial intelligence with the goal of preventing techniques that could compromise the security and integrity of its systems. This decision underscores the importance of a proactive approach to protecting its technologies. In the meantime, OpenAI will not only focus on developing its AI, but will also work closely with its internal safety and security teams to ensure that effective and robust measures are in place to safeguard its platforms.
The company has made it clear that it will not focus solely on its own environment, but will continue to share its findings with other industry players and the research community. This collaborative approach aims to prevent similar situations in the future, fostering a safer and more reliable ecosystem for all users of artificial intelligence technologies.
While this initiative is being led by OpenAI, it is crucial that other industry leaders with their own generative AI platforms also adopt robust protection measures to prevent attacks that could compromise their systems.
Preventing these kinds of threats is a constant challenge, and it is essential that every company involved in developing artificial intelligence implement proactive safeguards.
These measures should not only focus on resolving problems once they occur, but on anticipating them so they never arise in the first place. In this way, companies can guarantee a safe and reliable experience for all their users, strengthening trust in artificial intelligence technologies and their transformative potential in society.