Microsoft, OpenAI reveal how threat actors use AI
The research indicates a trend in which attackers treat AI as an additional productivity tool in their offensive strategies.
An analysis by Microsoft and OpenAI sheds light on how threat actors currently apply large language model (LLM) technology.
The study focuses on recognized threat actors engaging in activities such as prompt injections, attempted misuse of LLMs, and fraud.
The findings presented in the research have been collected from known threat actor infrastructure identified by Microsoft Threat Intelligence, and shared with OpenAI to identify potential malicious use or abuse of their platform.
The threat actors highlighted in the blog post serve as a representative sample of observed activities, showcasing the tactics, techniques, and procedures (TTPs) that the industry should prioritise tracking for enhanced awareness and mitigation.
Forest Blizzard (linked to Russian military intelligence), for instance, has been actively targeting organizations related to Russia’s war in Ukraine. The threat actor engages in LLM-informed reconnaissance, using LLMs to grasp satellite communication protocols, radar imaging technologies, and specific technical parameters. Forest Blizzard also employs LLM-enhanced scripting techniques, seeking support in fundamental scripting tasks such as file manipulation, data selection, regular expressions, and multiprocessing.
Emerald Sleet (also known as Thallium, a North Korean threat actor) has been employing LLM-supported social engineering in spear-phishing campaigns by impersonating academic institutions and NGOs, LLM-informed reconnaissance to research North Korean think tanks and generate content for use in phishing campaigns, LLM-assisted vulnerability research to better understand publicly reported vulnerabilities, and LLM-enhanced scripting techniques for basic scripting tasks.
Crimson Sandstorm (associated with the Iranian Revolutionary Guard Corps) has been using LLM-supported social engineering to generate various phishing emails and distribute its .NET malware, LLM-enhanced scripting techniques for web and app development, and LLM-enhanced anomaly detection evasion to learn how to disable antivirus software and delete files in a directory after an application has been closed.
Charcoal Typhoon, a Chinese state-affiliated actor, engages in LLM-informed reconnaissance on technologies, platforms, and vulnerabilities, LLM-enhanced scripting techniques to generate and refine scripts, LLM-supported social engineering for translations and communication, and LLM-refined operational command techniques for advanced commands and deeper system access.
Salmon Typhoon, another Chinese state-affiliated actor with a history of targeting US defence contractors, has shown exploratory interactions with LLMs. Their engagement includes LLM-informed reconnaissance for intelligence gathering on a diverse array of subjects such as global intelligence agencies, cybersecurity matters, topics of strategic interest, and various threat actors, LLM-enhanced scripting techniques for identifying and resolving coding errors, LLM-refined operational command techniques, and LLM-aided technical translation and explanation.
As a precautionary measure, all associated accounts and assets of the threat actors mentioned have been disabled.
Despite this integration of AI into malicious activities, the research has not yet identified particularly novel or unique AI-enabled attack techniques. It nonetheless makes a crucial contribution by exposing the early-stage tactics threat actors are using. The objective is to share information on how such threats are being blocked and countered, reinforcing the commitment to safeguarding against the misuse of AI technologies and helping the defender community stay ahead of emerging cybersecurity threats.