New CrowdStrike Innovations Secure AI Agents and Govern Shadow AI Across Endpoints, SaaS, and Cloud

March 23, 2026

| | Securing AI

As organizations race to adopt new AI tools, deploy AI agents, and build AI-powered software, they create new attack surfaces that traditional security controls were never designed to protect.

A key example is the prompt and agentic interaction layer, which faces novel threats like indirect prompt injection and agentic tool chain attacks. The rapid acceleration of shadow AI exacerbates the challenge as employees adopt AI tools without oversight and engineering teams deploy models and agents without adequate visibility and runtime protection. The result is an AI visibility and governance gap that grows with every AI tool deployment and adoption.

CrowdStrike is closing that gap. Today we’re announcing a series of innovations across the CrowdStrike Falcon® platform that extend AI detection and response (AIDR) capabilities across new surface areas and expand our platform capabilities to secure AI workforce adoption and development across endpoints, SaaS environments, and cloud environments.

These new capabilities will enable organizations to confidently and securely accelerate AI development and adoption.

Defending Endpoints: The Ultimate AI Battleground 

The endpoint has always been a primary target for adversaries, but the rise of personal AI agents like OpenClaw puts them at the frontline of a new attack technique called living off the AI land (LOTAIL). LOTAIL exploits a dangerous combination of factors that converge on the endpoint: increasing agent autonomy, high system permissions, and minimal governance. Code and computer-use agents, agentic browsers, and personal AI tools are being deployed, particularly on developer machines, and they can execute terminal commands, browse the web, interact with files, and take autonomous actions that can look indistinguishable from legitimate user behavior traffic. That makes them extraordinarily difficult to detect with traditional tools, and extraordinarily dangerous when compromised.

Today we’re announcing two significant new capabilities to extend endpoint AI security capabilities for agents and shadow AI. 

AI Detection and Response for Desktop AI Applications  

We’re excited to announce that CrowdStrike Falcon AIDR's runtime threat detection capabilities for securing workforce AI adoption will extend beyond the browser, where most employee AI interactions occur, to cover desktop AI applications including ChatGPT, Gemini, Claude, DeepSeek, Microsoft Copilot, O365 Copilot, GitHub Copilot, and Cursor.

Figure 1. AIDR policies deployed to Falcon endpoints Figure 1. AIDR policies deployed to Falcon endpoints

As employees, and especially engineering teams, turn to desktop AI applications, security teams need visibility and threat detection around their interactions. Falcon AIDR will leverage the Falcon sensor to enable more seamless deployment of the Falcon AIDR browser extension from the Falcon console and obtain desktop application telemetry via the sensor's container network interface capability. 

This will give security teams visibility into employees’ use of these AI apps, including full prompt content, and the ability to detect prompt attacks, data leaks, and access control and content policy violations across the full range of AI tools employees use on endpoints. 

This new capability is currently pre-beta and will go to GA next quarter (Q2).

Deep Agent and Shadow AI Discovery on Endpoints

Beyond desktop applications, there is a wide range of AI assets that can be deployed on endpoints, especially by developers on engineering machines such as large language models (LLMs), Model Context Protocol (MCP) servers, and IDE extensions.

AI Discovery in CrowdStrike Falcon® Exposure Management, powered by CrowdStrike Falcon® for IT telemetry, helps secure these assets. Now generally available, this capability automatically discovers AI-related components running across endpoints in real time, including AI apps and agents, LLM runtimes, MCP servers, and IDE extensions. Discovered AI components are classified and linked to existing assets in Falcon Exposure Management, where teams can view context such as privilege level, connectivity, and proximity to critical assets. This allows security teams to make better risk-based prioritization decisions and understand not just what AI is deployed, but how it connects to the rest of the environment and what the blast radius of a compromise might be.

Figure 2. AI-enabled assets inventory on the endpoint filtered down to MCP participants Figure 2. AI-enabled assets inventory on the endpoint filtered down to MCP participants

For teams building AI applications, this same visibility provides essential context about where AI components have been introduced into the development environment. This helps security and engineering teams identify supply chain risks and misconfigured AI tooling before they become exploitable vulnerabilities.

Securing AI Agents Across SaaS Environments

AI Detection and Response for Copilot Studio Agents

Falcon AIDR is extending runtime security guardrails to agents built in Microsoft Copilot Studio, covering both developer-built agents and low-code agents built by business users. When Copilot agents execute tasks, interact with data, and respond to user inputs, Falcon AIDR will monitor for prompt injection attacks, data leaks, and policy violations in real time.

For teams adopting AI across the workforce, it will help protect the Copilot agents employees interact with daily against adversarial manipulation. For teams building AI applications on top of Microsoft's agent framework, it will provide the runtime monitoring needed to validate that agents are behaving as intended and detect anomalous behavior that could indicate compromise or misconfiguration. 

This capability is currently pre-beta and will go to GA later this quarter (Q1).

Discovering and Governing AI Agents across SaaS Environments

SaaS platforms are a primary deployment environment for AI agents. Organizations are building and deploying agents with significant permissions and access to sensitive data across SaaS platforms, often without the strong visibility and governance frameworks to manage the risk they introduce. When they're misconfigured, over-privileged, or compromised, the consequences can be severe.

For organizations focused on securing AI workforce adoption, understanding what AI agents are operating across their SaaS stack is an essential first step. CrowdStrike AI Agent Discovery, now generally available in CrowdStrike Falcon® Shield, provides unified discovery and classification of AI agents across SaaS platforms, delivering granular visibility into agent configurations, tool and API access, data sources, and ownership.

Figure 3. Normalized view of AI agents across disparate agentic AI platforms Figure 3. Normalized view of AI agents across disparate agentic AI platforms
What makes this capability particularly powerful is normalization. Agent attributes are standardized into a consistent framework across vendors, which enables security teams to identify risky behavior, excessive privileges, and unmanaged agents regardless of which platform they're deployed on. This unified view supports immediate cross-platform governance and compliance monitoring, which is critical as AI agent usage rapidly expands across enterprise environments.
Figure 4. Identity-centric risk graph illustrating cross-platform security posture and privilege accumulation across M365 and Power Platform environments Figure 4. Identity-centric risk graph illustrating cross-platform security posture and privilege accumulation across M365 and Power Platform environments

AI Agent Discovery integrates with Microsoft Copilot (Power Platform), Salesforce Agentforce, ChatGPT Enterprise, OpenAI Enterprise GPT, and Nexos.ai. Security teams can identify risky configurations, excessive access, and ownership gaps, and apply centralized governance as AI agent usage scales. For organizations building AI applications that leverage SaaS-based agent frameworks, this capability provides the visibility needed to help deployed agents operate within intended parameters and prevent misconfiguration or compromise post-deployment.

Not all shadow AI in SaaS environments is visible through API connectors alone. To extend coverage, Falcon Shield also now analyzes Falcon sensor endpoint-collected DNS telemetry to uncover shadow AI use via SaaS, helping to capture even AI tools accessed without formal SaaS API connectors deployed in the organization's SaaS AI inventory.

Securing AI in the Cloud from Development to Runtime

Cloud environments are where AI is built, trained, and deployed at scale. Organizations are deploying AI workloads on Kubernetes, integrating with managed machine learning (ML) platforms like Amazon SageMaker and Bedrock, and building applications that communicate with LLMs through the OpenAI API specification. Each of these environments introduces distinct security challenges and blind spots that adversaries can exploit.

AI Detection and Response for Containerized Workloads

For organizations building and deploying AI applications in the cloud, runtime threat detection at the application layer is essential. Falcon AIDR will extend runtime guardrails to containerized applications communicating with the OpenAI API specification, detecting prompt injections, data leaks, and access control and content policy violations for cloud-hosted AI workloads.

This capability is delivered via an integration with CrowdStrike Falcon® Cloud Security, which intercepts OpenAI API calls and routes them through Falcon AIDR's detection engine, with detections surfaced in the Falcon AIDR console and in CrowdStrike Falcon® Next-Gen SIEM. Security teams can take response actions directly within the CrowdStrike Falcon® Cloud Security console, including isolating or terminating AI workloads, to contain threats before they escalate. 

This new capability is currently pre-beta and will go to GA next quarter (Q2).

Threat Detection for Kubernetes AI Workloads

As organizations increasingly standardize on Kubernetes to host mission-critical AI workloads, the Kubernetes orchestration layer becomes a high-value target. Falcon Cloud Detection and Response, part of Falcon Cloud Security, now provides deep visibility and detections into the Kubernetes API Server. By ingesting Kubernetes audit logs, Falcon Cloud Security CDR monitors API requests and configuration changes for suspicious activity, generating detections that can be correlated with workload, cloud, and endpoint detections in Falcon Next-Gen SIEM. This gives SOC analysts the ability to visualize the full scope of adversary movements in attacks that include Kubernetes, a critical capability for teams building and operating AI workloads at scale.

AI Data Flow Discovery in the Cloud

One of the most significant risks in AI development is sensitive data flowing into AI pipelines without visibility or controls. Training data, customer personally identifiable information (PII), and proprietary intellectual property can all end up in places it was never intended to go, creating compliance exposure and breach risk.

CrowdStrike Falcon® Data Protection for Cloud now addresses this with real-time visibility into how sensitive cloud data flows into and through AI services at runtime. Using eBPF-powered monitoring, Falcon Data Protection for Cloud continuously observes data flows across cloud services, APIs, containers, and internal services, classifying sensitive content in real time as it moves. For AI-driven workloads, this monitoring extends into AI data paths: Teams can see sensitive data as it's collected from cloud storage and databases, passed through internal or external AI orchestration layers including MCP servers, and sent to or consumed by internal AI and ML services such as Amazon SageMaker and Bedrock.

Figure 5. Falcon Data Protection for Cloud offers runtime visibility into data flowing to AI services Figure 5. Falcon Data Protection for Cloud offers runtime visibility into data flowing to AI services

For teams building AI cloud applications, it provides the end-to-end data flow visibility needed to ensure sensitive data isn't inadvertently incorporated into training pipelines or exposed through model outputs. For organizations managing AI workforce adoption, it closes the blind spots that log-based tools leave open, delivering runtime telemetry that identifies unexpected or risky AI data usage as it happens, with detections that can be routed into CrowdStrike Falcon® Fusion SOAR workflows for immediate response. 

This new capability is in early beta and will be generally available in the coming months.

AI Application Discovery in the Cloud

Understanding what AI applications are running in cloud environments and how they're configured is a prerequisite for securing them. Application Explorer in Falcon Cloud Security now provides a unified application-layer exploration experience that brings application, infrastructure, and AI context together in one place. Security and cloud teams can pivot across AI application metadata, runtime context, and AI usage to understand how applications interact with services such as LLMs and MCPs.

With Application Insights for AI, teams can now identify shadow AI by uncovering previously unknown AI usage in the application layer, detect ungoverned AI interactions such as connections to external LLM or MCP services in restricted environments, assess AI access to sensitive data by correlating application assets with underlying database and data sensitivity context, and understand an AI agent's role and behavior including its purpose, prompts, and operational scope. Falcon Cloud Security is the only cloud-native application protection platform (CNAPP) to provide unified visibility between cloud infrastructure and application layers, which is a critical advantage for security teams responsible for securing AI development pipelines and the cloud environments where AI applications run.

Figure 6. AI information shows findings regarding specific models or prompts being used Figure 6. AI information shows findings regarding specific models or prompts being used

Securing the AI Lifecycle and Attack Surface

The innovations announced today reflect a simple but important truth: Securing AI requires the same unified, platform-based approach that has made CrowdStrike the leader in endpoint, cloud, and identity security. AI doesn't live in one place. It spans all traditional attack surfaces and the new prompt and agent interaction layer that connects them all. Fragmented tools cannot protect it.

With today's announcements, CrowdStrike extends AI detection and response across desktop AI applications, Copilot agents, and cloud workloads; deepens shadow AI and agent discovery and governance across endpoints, SaaS, and cloud; and delivers new data flow visibility that gives security teams an end-to-end view of how sensitive data moves through AI pipelines. Together, these capabilities give CIOs, CISOs, and CTOs what they need to reduce AI risk without slowing innovation, secure AI development pipelines, govern AI workforce adoption, and protect the new attack surfaces that AI introduces, all from a single unified platform.

Forward-Looking Statements 

This blog may include discussion of unreleased services or features. Any unreleased services or features referenced here are still in development and subject to change. Customers should make their purchase decisions based upon features that are currently available.