Voices from the Cyber Frontlines: IR stories and lessons for proactive defense Explore the series
CrowdStrike 2025 Global Threat Report Spotlight

How cyber adversaries use GenAI in vulnerability research and cloud operations

Generative Artificial Intelligence (GenAI) is redefining the cybersecurity landscape. And while many organizations like CrowdStrike are using the revolution in large language models (LLMs) to deliver powerful capabilities, adversaries are also using the technology to advance dangerous tools. In the CrowdStrike 2025 Global Threat Report, threat researchers reported on the rising use of GenAI for vulnerability research and malicious cloud operations. 


What are vulnerability research and malicious cloud operations?


Vulnerability Research

Adversaries are constantly attempting to find new ways to access and exploit sensitive data. Vulnerability research is their process for identifying, understanding, and exploiting weak points in the systems and software of a target organization. 


Malicious Cloud Operations

The demand for greater agility and scale has driven widespread cloud adoption across the public and private sector. In doing so, the attack surface has greatly expanded, presenting numerous targets for adversaries. Threat actors prey upon these vulnerabilities by attempting to exploit misconfigurations, insecure APIs, weak spots in software supply chains, and other exposures in expansive cloud infrastructures.


The role of GenAI in accelerating vulnerability research and exploitation


While historical use cases of GenAI for vulnerability research and exploitation are rare, the technology poses a significant risk for malicious use:

  • Accelerated Development: The iterative generation capabilities of LLMs has the potential to rapidly identify, research, and test exploit commands. As models get more and more capable, their creative ability to craft novel exploits could present new attack vectors for adversary use. 

  • Emerging Techniques: Adversaries are exploring attack vectors unique to GenAI systems, such as prompt injection, which could potentially bypass access controls, achieve code execution, or increase the risk of unintentional disclosure of sensitive information.


Real-world examples of exploit attempts using GenAI


Beyond the theoretical, CrowdStrike threat researchers did identify several, real examples of GenAI use in 2024:

  • Iran-nexus Actors: Among the most notable groups who demonstrated GenAI support in the vulnerability landscape. Iranian government initiatives aim to leverage AI to develop assistants and enable patching systems for domestic networks. Moreover, Iran’s government aims to use LLMs in vulnerability research and exploit development.

  • GlobalProtect PAN-OS Gateway Exploit (CVE-2024-3400): In April 2024, an unknown adversary leveraged GenAI to develop an exploit targeting a command injection vulnerability. Although initially unsuccessful, this incident underscores the growing experimentation with AI-driven exploit creation.


The rising threat from cloud-conscious actors leveraging GenAI


The integration of GenAI into cloud infrastructure and services significantly expands an already large attack surface. Cloud-conscious adversaries are beginning to explore GenAI and LLMs for their operations:

  • AI-integrated Services: As cloud adoption expands and GenAI becomes more integrated into services such as Azure AI Foundry (formerly Azure AI Studio), threat actors will likely exploit GenAI services for data theft, model manipulation, unauthorized access, and other malicious purposes.

  • Strategic Data Acquisition: Cloud-conscious threat actors with varied skill levels will likely and increasingly exploit technical vulnerabilities and misconfigurations in the growing GenAI-driven cloud ecosystem. They will seek to abuse AI services, and services that customers integrate using AI, to acquire data of strategic interest.

 

Case Highlight: The Rise of “LLMJacking”


"LLMJacking" is the use of compromised cloud credentials to illicitly access enterprise AI models, which adversaries then exploit for malicious purposes or resale. CrowdStrike reported on two instances in 2024:

  • North American Consulting Firm (Q2 2024): An unidentified adversary infiltrated the cloud infrastructure, listing available machine learning models and attempting to access restricted models via legitimate API requests, presumably aiming to resell access.

  • North American Technology Firm (Q4 2024): Similarly targeted, threat actors again attempted to leverage legitimate cloud APIs to gain unauthorized access to high-value AI models, highlighting a growing marketplace for illicit AI services.


Recommendations


1. Defend the cloud as critical infrastructure


To mitigate emerging GenAI threats in cloud environments, organizations should:

  • Deploy Cloud-Native Security: Utilize Cloud-Native Application Protection Platforms (CNAPPs) equipped with Cloud Detection and Response (CDR) to provide comprehensive visibility, rapid detection, and remediation of threats.

  • Implement Robust Access Controls: Enforce role-based and conditional access policies, closely monitoring unusual activities and anomalies.

  • Conduct Regular Audits: Frequently audit configurations to detect overly permissive settings, exposed APIs, and vulnerabilities, promptly addressing identified issues.


2. Adopt adversary-centric vulnerability management


Given adversaries’ increasing reliance on exploit chaining and public exploits, organizations must prioritize:

  • Regular and Prompt Patching: Ensure critical infrastructure, particularly internet-facing services like VPNs and web servers, is consistently updated and secured.

  • Advanced Monitoring: Actively monitor systems for subtle signs of exploit chaining, such as privilege escalations or unexpected system behaviors.

  • Leverage AI for Prioritization: Tools like CrowdStrike Falcon® Exposure Management help teams efficiently prioritize vulnerabilities, reducing noise and focusing resources on the most significant threats.


Stay informed. Stay secure.


Rising GenAI use is just one of many concerning trends highlighted in CrowdStrike’s 2025 Global Threat Report. Download the full report to explore more patterns, threats, and recommendations from a year’s worth of industry-leading threat intelligence.