OpenAI Thwarts North Korean Hacking Groups from Exploiting ChatGPT for Cyberattacks

NEW DELHI: OpenAI has revealed that it successfully blocked several North Korean hacking groups from using its ChatGPT platform to research targets, develop hacking tools, and plan cyberattacks.

The company disclosed these findings in its February 2025 threat intelligence report, highlighting the growing misuse of artificial intelligence (AI) tools by state-sponsored threat actors.

The banned accounts were linked to notorious North Korean hacking groups, including VELVET CHOLLIMA (also known as Kimsuky or Emerald Sleet) and STARDUST CHOLLIMA (APT38 or Sapphire Sleet). These groups are known for their advanced cyber capabilities and ties to the Democratic People’s Republic of Korea (DPRK).

How North Korean Hackers Exploited ChatGPT

OpenAI detected the malicious activity with the help of an industry partner. The threat actors used ChatGPT for a range of activities aimed at facilitating cyberattacks, including:

• Researching Tools and Techniques: The hackers sought information on tools and tactics commonly used in cyberattacks, such as remote administration tools (RATs) and Remote Desktop Protocol (RDP) brute-force attacks.

• Coding Assistance: They used ChatGPT to develop, debug, and troubleshoot code, including C#-based RDP clients and PowerShell scripts for file uploads, downloads, and in-memory execution.

• Phishing and Social Engineering: The groups crafted phishing emails and notifications to target cryptocurrency investors and traders, aiming to steal sensitive information.

• Obfuscation and Payload Deployment: The hackers requested help in creating obfuscated payloads and bypassing security warnings to deploy malicious code.

Additionally, the threat actors used ChatGPT to research vulnerabilities in various applications and develop techniques for macOS attacks.

OpenAI’s threat analysts also discovered staging URLs for malicious binaries that had previously been unknown to security vendors. These findings were shared with the broader security community, enabling vendors to detect and block the binaries.

North Korean IT Worker Scheme Uncovered

In a related discovery, OpenAI identified accounts linked to a potential North Korean IT worker scheme. These accounts were part of an effort to generate income for the Pyongyang regime by deceiving Western companies into hiring North Korean workers.

Once employed, the workers used ChatGPT to perform job-related tasks, such as writing code, troubleshooting, and communicating with coworkers. They also devised cover stories to explain suspicious behaviors, such as avoiding video calls, accessing corporate systems from unauthorized locations, or working irregular hours.

Broader Campaigns Disrupted by OpenAI

OpenAI’s efforts to combat malicious use of its platform extend beyond North Korean threat actors. Since October 2024, the company has detected and disrupted two campaigns originating from China:

• Peer Review: This campaign used ChatGPT to research and develop tools linked to a surveillance operation.

• Sponsored Discontent: This effort involved generating anti-American, Spanish-language articles aimed at influencing public opinion.

In its October 2024 report, OpenAI revealed that it had disrupted over twenty campaigns associated with Iranian and Chinese state-sponsored hackers since the beginning of the year. These campaigns were linked to cyber operations and covert influence operations, underscoring the global nature of the threat.

OpenAI’s Commitment to Security

OpenAI has emphasized its commitment to preventing the misuse of its AI tools. The company employs advanced detection mechanisms and collaborates with industry partners to identify and block malicious activities. By sharing threat intelligence with the broader security community, OpenAI aims to protect potential victims and strengthen global cybersecurity defenses.

“We banned accounts demonstrating activity potentially associated with publicly reported DPRK-affiliated threat actors,” OpenAI stated in its report. “Our teams are continuously working to identify and mitigate risks posed by malicious actors.”

The misuse of AI tools like ChatGPT by state-sponsored hackers highlights the dual-use nature of advanced technologies. While these tools offer immense benefits, they can also be weaponized by malicious actors to conduct sophisticated cyberattacks.

The incident also underscores the importance of collaboration between technology companies, cybersecurity experts, and governments to address emerging threats. As AI continues to evolve, so too must the strategies to safeguard it from exploitation.