Threat Report: Examining the Use of AI in Attack Techniques

We are entering a new era of cybersecurity, driven in large part by recent advancements in artificial intelligence (AI). 

Not only can AI strengthen our ability to defeat cyberattacks at machine speed, but it also drives increased innovation and efficiency across threat detection, hunting, and incident response. In one study, AI users were found to be 44% more accurate and 26% faster across all tasks, regardless of their experience level.

But while AI can be an enormous asset in an organization's security toolkit, it also represents a potential threat. Adversaries are attempting to leverage AI as part of their exploits and evaluating how large language models (LLMs) can advance their productivity and efficacy. Read on to learn more about these tactics and how you can help protect against them.

How Nation-State Groups Use LLMs to Augment Cyber Operations

Although threat actors' motives and sophistication vary, they often follow a similar pattern when deploying attacks. Many adversaries start by conducting reconnaissance, such as researching potential victims' industries, locations, and relationships. They may also use AI-generated code to improve their scripts and speed up malware development. LLMs can further assist attackers in learning and using both human and machine languages.

In partnership with OpenAI, Microsoft has assessed the following nation-state groups to better understand their use of LLMs.

Forest Blizzard (STRONTIUM)

Forest Blizzard is a highly effective Russian military intelligence actor linked to Unit 26165 of the Main Directorate of the General Staff of the Armed Forces of the Russian Federation (GRU). The group targets victims of tactical and strategic interest to the Russian government and has leveraged LLMs to research various satellite and radar technologies that may pertain to conventional military operations in Ukraine. Additionally, the group has used LLMs to seek assistance with basic scripting tasks, like file manipulation, data selection, regular expressions, and multiprocessing, potentially as a way to automate or optimize technical operations.
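
To make those scripting tasks concrete, the benign sketch below combines the three techniques the report names: file manipulation, regex-based data selection, and multiprocessing. It is illustrative only; the "logs" directory and the IPv4 pattern are hypothetical stand-ins, not details from the report.

    # Benign illustration of routine scripting: scan a folder of log files in
    # parallel and print the lines that match a regular expression.
    # The "logs" directory and the IPv4 pattern are hypothetical examples.
    import re
    from multiprocessing import Pool
    from pathlib import Path

    PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")  # IPv4-like tokens

    def extract_matches(path):
        # File manipulation + data selection: read one file, keep matching lines.
        text = Path(path).read_text(errors="ignore")
        return [line for line in text.splitlines() if PATTERN.search(line)]

    if __name__ == "__main__":
        files = sorted(Path("logs").glob("*.log"))
        with Pool() as pool:  # multiprocessing: fan files out across CPU cores
            for matches in pool.map(extract_matches, files):
                for line in matches:
                    print(line)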

Emerald Sleet (Velvet Chollima) 

This North Korean threat actor impersonates reputable academic institutions and nongovernmental organizations (NGOs) to lure victims into replying with expert insights and commentary about foreign policies related to North Korea. Emerald Sleet leverages LLMs to research think tanks and experts on North Korea, as well as to generate content that can be used in spear phishing campaigns. Emerald Sleet also interacts with LLMs to understand publicly known vulnerabilities, troubleshoot technical issues, and get assistance with using various Web technologies.

Crimson Sandstorm (CURIUM)

Crimson Sandstorm is an Iranian threat actor connected to the Islamic Revolutionary Guard Corps (IRGC). It uses LLMs to support social engineering campaigns, seek assistance in troubleshooting errors, advance .NET development, and research strategies for evading detection when on a compromised machine.

Charcoal Typhoon (CHROMIUM)

A China-affiliated threat actor, Charcoal Typhoon predominantly focuses on tracking groups in Taiwan, Thailand, Mongolia, Malaysia, France, and Nepal, as well as individuals globally who oppose China's policies. In recent operations, Charcoal Typhoon has been observed engaging LLMs to gain insights into research on specific technologies, platforms, and vulnerabilities, indicative of preliminary information-gathering stages.

Salmon Typhoon

Another China-backed group, Salmon Typhoon has been assessing the effectiveness of LLMs to source information on potentially sensitive topics, high-profile individuals, regional geopolitics, US influence, and internal affairs. This tentative engagement with LLMs could reflect both a broadening of its intelligence-gathering toolkit and an experimental phase in assessing the capabilities of emerging technologies.

When defending against these threats, foundational security hygiene practices, such as multifactor authentication (MFA) and zero-trust defenses, are essential. This is because attackers are using AI-based tools to improve their existing methodology, which still relies on social engineering and finding unsecured devices and accounts. We also recommend implementing conditional access policies that can adapt to the changing threat landscape. These policies provide clear, self-deploying guidance to strengthen your security posture and automatically protect tenants based on risk signals, licensing, and usage. 
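
As one concrete illustration of risk-based conditional access, the sketch below uses the Microsoft Graph API to create a policy that requires MFA for medium- and high-risk sign-ins. This is a minimal sketch under stated assumptions, not a recommended production rollout: it assumes you already hold an access token granted the Policy.ReadWrite.ConditionalAccess permission, and it creates the policy in report-only mode so its impact can be reviewed before enforcement.

    # Minimal sketch: create a risk-based conditional access policy through
    # Microsoft Graph. ACCESS_TOKEN is a placeholder; obtain a real token via
    # your OAuth flow with the Policy.ReadWrite.ConditionalAccess permission.
    import requests

    ACCESS_TOKEN = "..."  # placeholder, not a real credential

    policy = {
        "displayName": "Require MFA for risky sign-ins (sketch)",
        "state": "enabledForReportingButNotEnforced",  # report-only mode
        "conditions": {
            "signInRiskLevels": ["high", "medium"],
            "clientAppTypes": ["all"],
            "applications": {"includeApplications": ["All"]},
            "users": {"includeUsers": ["All"]},
        },
        "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    }

    resp = requests.post(
        "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json=policy,
    )
    resp.raise_for_status()
    print("Created policy:", resp.json()["id"])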

AI is likely to continue evolving the threat landscape. And while Microsoft's research with OpenAI has not yet identified significant attacks employing the LLMs we monitor closely, we believe it is important to expose these early-stage, incremental moves by well-known threat actors. In doing so, we can share insights with the broader defense community on how to block and counter the latest threat actor tactics.
