CrowdStrike 2025 Global Threat Report Spotlight

How adversaries are using GenAI for social engineering attacks

Over the past year, skilled cybercriminals have become early adopters of generative artificial intelligence (GenAI). Easy access to commercial large language models (LLMs) has helped adversaries learn faster, build faster, and scale their operations rapidly.


Specifically, CrowdStrike’s 2025 Global Threat Report found adversaries used GenAI to enhance social engineering campaigns through the creation of convincing content like fictitious social media profiles and deepfake videos. In this changing environment, organizations must stay informed and vigilant. 


Why GenAI is a game-changer for social engineering


It’s important to note that while nefarious use of AI is increasing, it remains largely iterative. Truly novel use cases are still rare, but experimentation is growing.


Before widely accessible LLM tools like ChatGPT existed, creating fraudulent content took significant time, resources, and expertise. Those barriers are now gone. Today, even a poorly written prompt can produce highly deceptive content, and the models are only getting better.


In March of this year, OpenAI announced 4o Image Generation, an advanced image creation capability that set a new standard in precision. Such rapid progress in quality indicates that new leaps in capability could occur at any moment, enabling cyber risks that previously weren’t possible.
 

Adversaries have already demonstrated GenAI’s potential for creating deceptive content at scale, including convincing phishing messages, deepfake videos, and fictitious social media profiles.


Real-world examples of GenAI-enabled social engineering


While iterative and experimental, malicious use of GenAI is a real and notable trend CrowdStrike observed over the past year. Adversaries have already successfully used the technology in multiple alarming incidents:

  • Fake LinkedIn profiles: DPRK-associated group FAMOUS CHOLLIMA used GenAI to create realistic LinkedIn profiles, complete with believable backgrounds and AI-generated profile images, deceiving recruiters worldwide.

  • Deceptive job interviews: As part of its campaign to infiltrate private corporations, FAMOUS CHOLLIMA may have used GenAI to rapidly generate plausible responses during interviews.

  • Deepfake Business Email Compromise (BEC): Threat actors cloned executives’ video and voice likenesses, convincing employees to transfer substantial funds. Notably, one incident in February 2024 resulted in a $25.6 million fraud.

  • Election manipulation: During India’s 2024 general election, threat actors deployed AI-generated imagery and video to amplify misinformation and deepen political division.

  • Mobile malware: Since late 2023, the GoldPickaxe malware has harvested biometric facial data from iOS and Android devices across the Asia Pacific region. The captured images and videos can then be used to generate realistic deepfake videos and face swaps that defeat facial recognition checks and grant fraudulent access.


GenAI’s proven effectiveness in phishing


Academic research further reinforces the worrying effectiveness of AI-generated social engineering. Studies show LLMs can create phishing email content and credential-harvesting websites at least on par with human-crafted equivalents:

  • A 2024 study highlighted that AI-generated phishing emails had a staggering 54% click-through rate, vastly outperforming the 12% click rate of human-crafted phishing attempts.1

  • Separate research confirmed that AI-generated phishing websites were equally challenging for users to detect compared to human-created ones.2


Protecting against GenAI-enabled social engineering attacks


As GenAI capabilities evolve, so too will the risks. Organizations must proactively build a resilient defense and, in particular, take comprehensive measures to secure their identity ecosystems against deception. Here are a few steps toward stronger identity security:

  • Implement phishing-resistant multi-factor authentication (MFA): Use multi-layered access controls to strengthen identity protection. Hardware security tokens built on FIDO2/WebAuthn are a good option because credentials are bound to the legitimate site, resisting social engineering and preventing unauthorized account access (see the first sketch after this list).

  • Adopt strong identity policies: Use just-in-time access and conditional access policies, and regularly review account permissions to minimize risk (a minimal policy-evaluation sketch follows this list).

  • Enhance identity threat detection: Deploy advanced identity threat detection tools to monitor anomalous behavior across environments and swiftly identify privilege escalation attempts (illustrated by the final sketch below).

  • Educate users: Regularly train employees to recognize increasingly sophisticated phishing, vishing, and deepfake attempts.
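
To make the “phishing-resistant” property of hardware tokens concrete, the fragment below sketches the origin check a relying party performs when verifying a WebAuthn assertion. It is a minimal illustration, not a complete verifier (real implementations also validate the signature, authenticator data, and signature counter), and the domain name is hypothetical.

```python
import json

EXPECTED_ORIGIN = "https://login.example.com"  # hypothetical relying-party origin

def verify_client_data(client_data_json: bytes, expected_challenge: str) -> bool:
    """Simplified check of the clientDataJSON from a WebAuthn assertion.

    Origin binding is what defeats phishing: a FIDO2 credential enrolled
    for login.example.com cannot produce a valid assertion for a
    look-alike domain, no matter how convincing the lure is.
    """
    data = json.loads(client_data_json)
    return (
        data.get("type") == "webauthn.get"               # assertion, not registration
        and data.get("origin") == EXPECTED_ORIGIN        # rejects spoofed domains
        and data.get("challenge") == expected_challenge  # rejects replayed responses
    )
```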

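Next, a minimal sketch of how just-in-time and conditional access policies compose at request time. The policy fields, thresholds, and names here are illustrative assumptions; a production deployment would delegate this evaluation to its identity provider.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccessGrant:
    role: str
    expires_at: datetime      # just-in-time grants always carry an expiry

@dataclass
class LoginContext:
    user: str
    mfa_verified: bool        # phishing-resistant MFA completed
    device_compliant: bool    # e.g., managed and fully patched
    country: str

ALLOWED_COUNTRIES = {"US", "GB", "DE"}  # hypothetical conditional access rule

def is_access_allowed(grant: AccessGrant, ctx: LoginContext) -> bool:
    """Conditional access: every condition must hold at the moment of the request."""
    now = datetime.now(timezone.utc)
    return (
        now < grant.expires_at                # JIT grant has not lapsed
        and ctx.mfa_verified
        and ctx.device_compliant              # block unmanaged devices
        and ctx.country in ALLOWED_COUNTRIES
    )
```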

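Finally, a toy example of one signal an identity threat detection tool can evaluate: “impossible travel” between two logins. The haversine distance formula is standard; the speed threshold and data shapes are simplifying assumptions.

```python
from dataclasses import dataclass
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

@dataclass
class Login:
    user: str
    when: datetime
    lat: float
    lon: float

def haversine_km(a: Login, b: Login) -> float:
    """Great-circle distance in kilometers between two login geolocations."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

MAX_PLAUSIBLE_KMH = 900  # roughly airliner cruising speed

def is_impossible_travel(prev: Login, curr: Login) -> bool:
    """Flag a login pair whose implied travel speed is physically implausible."""
    distance = haversine_km(prev, curr)
    hours = (curr.when - prev.when).total_seconds() / 3600
    if hours <= 0:
        return distance > 50  # near-simultaneous logins from distant locations
    return distance / hours > MAX_PLAUSIBLE_KMH
```
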
How CrowdStrike is using GenAI to accelerate detection, investigation, and response


CrowdStrike is powering the next evolution of the security operations center with its own generative, agentic AI assistant: CrowdStrike® Charlotte AI™. Using autonomous reasoning and action, Charlotte AI can triage detections, filter false positives, and escalate the most important threats, enabling users to protect more, faster.


Charlotte AI is trained and tuned on the CrowdStrike Falcon® platform’s APIs, documentation, and high-fidelity security telemetry, enabling it to keep pace with the latest adversary techniques. 


Stay informed. Stay secure.


Rising GenAI use is just one of many concerning trends highlighted in CrowdStrike’s 2025 Global Threat Report. Download the full report to explore more patterns, threats, and recommendations from a year’s worth of industry-leading threat intelligence.