Inside CrowdStrike Automated Leads: A Transformative Approach to Threat Detections

How analysts can use the Falcon platform’s new Automated Leads functionality to slash triage time and investigate attacks.

Last summer we introduced Automated Leads, a transformative approach to threat detection designed to surface the subtle signs of an attack before it turns into a full-blown breach. It’s powered by CrowdStrike® Signal (distinct from SGNL) and delivered via the CrowdStrike Falcon® platform.

Since that launch, the goal has remained the same: to move beyond the limitations of traditional alerting and give analysts a head start on detecting the most sophisticated adversaries.

Today, we’re peeling back the curtain on how the new family of self-learning AI models behind Automated Leads works. We’re also announcing a powerful new capability that instantly isolates unusual processes and anomalous remote monitoring and management (RMM) tool usage that would otherwise be lost in the noise.

The Challenge: Why More Alerts Aren’t the Answer

Improving detection is a core driver for the CrowdStrike Advanced Research team, which is behind the development of the AI models powering Automated Leads. For years, the industry has followed a predictable cycle:

  1. Create a rule for a known malicious feature.
  2. Deploy it.
  3. Triage the resulting alerts.
  4. Tune out the high-volume noise.

The consequence? “Noisy” rules, which might actually trigger on real malicious activity, are suppressed because they generate too many alerts for humans to triage. Malicious activity can slip through the cracks.

On the Falcon platform, we see millions of indicators, or events that don’t quite reach the threshold of a traditional detection. In a complex environment, we might see 10,000 such indicators in a single hour. They are too numerous for a human to review, but with the right algorithmic approach, they are the key to finding the needle in the haystack.

How Automated Leads Works: Scoring and Correlation

The AI engine powering Automated Leads solves this by shifting the focus from individual alerts to entity-based scoring. Instead of treating every event as a binary “good” or “bad” alert, the engine assigns a score to every indicator and detection event. These scores are essentially an initial prioritization. The engine then links these events by entity (such as an endpoint).

When multiple positively scoring events occur on the same host, their scores are summed. This is best explained by visualizing how the engine views indicator occurrences over time:

Figure 1. Visualizing event scoring. While an indicator may be a regular occurrence on most endpoints (receiving a zero score), the engine identifies and scores instances that have never been seen on a specific host before.

By identifying and filtering down to just these anomalous examples, the engine can surface leads earlier in the attack chain and reveal special kinds of Automated Leads we refer to as “zero detect” leads. This is malicious activity that hasn't triggered a traditional alert but is clearly suspicious when viewed as a collective cluster of behaviors. 
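To make the scoring-and-correlation idea concrete, here is a minimal Python sketch. Everything in it — the indicator names, the anomaly weight, and the lead threshold — is an illustrative assumption; the production engine’s scoring logic is far more sophisticated and is not public.

```python
from collections import defaultdict

# Illustrative sketch only: scores, weights, and threshold are assumptions,
# not the production Falcon scoring model.

def score_event(host, indicator, seen_on_host):
    """Score 0 for indicators already routine on this host; a positive
    score the first time an indicator is seen on this host."""
    if (host, indicator) in seen_on_host:
        return 0.0
    seen_on_host.add((host, indicator))
    return 1.0  # hypothetical anomaly weight

def lead_scores(events, threshold=3.0):
    """Sum positively scoring events per host and surface hosts whose
    total crosses the threshold as candidate 'zero detect' leads."""
    seen_on_host = set()
    totals = defaultdict(float)
    for host, indicator in events:
        totals[host] += score_event(host, indicator, seen_on_host)
    return {host: score for host, score in totals.items() if score >= threshold}

events = [
    ("host-a", "rmm_exec"), ("host-a", "rmm_exec"),       # routine repeat: scores 0 after first sighting
    ("host-b", "rmm_exec"), ("host-b", "cmd_launch"),     # quiet cluster of first-seen behaviors
    ("host-b", "registry_query"), ("host-b", "net_probe"),
]
print(lead_scores(events))  # only host-b accumulates enough score to become a lead
```

Note that no single event on host-b is alarming on its own; the lead emerges only from summing the cluster of first-seen behaviors on one entity, which mirrors the correlation described above.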

Real-World Analysis: The RMM Hunting Ground

To see this in action, consider how the engine powering Automated Leads handles RMM tools. Adversaries love RMM tools because they allow them to “live off the land,” essentially blending in with existing approved tools already on the endpoint. But for an analyst, sifting through every RMM execution in a large enterprise is impossible.

In a recent internal analysis, the engine monitored thousands of hosts. The vast majority of activity was the organization's standard IT RMM tool — a familiar, low-score constellation of legitimate work.

Figure 2. RMM executions in environment. Each color represents a different RMM tool. The score on the y-axis represents our level of suspicion of the execution.
The engine flagged a single execution of MeshAgent, a tool never seen before in that specific environment. While a lone RMM execution is hard to convict, the engine immediately correlated it with other “quiet” behaviors on that same host: a command prompt launch, registry queries, and local network probing.
Figure 3. Indicators of attack (IOAs) triggering on the victim host. IOAs above the line indicate a contribution to the Automated Lead’s confidence.
None of these events would have raised an alarm individually. The registry query, for instance, fired hundreds of times a day across the environment. But because it had never occurred on that specific host, the engine’s confidence score spiked.

New Innovation: Investigating Unusual Processes

Building on this logic, we are thrilled to announce a new capability integrated directly into Automated Leads: Investigate Unusual Processes.

Figure 5. Example of an Automated Lead

Analyzing every process created during a suspicious window is a massive time sink. A typical endpoint creates tens of thousands of processes per day. A key observation is that the vast majority of this activity is routine, repetitive, and benign. This typically holds true even for endpoints compromised by adversaries: the malicious processes are a small fraction intertwined with benign, largely automated creations.

In order to rapidly analyze process creations during suspected attacks at scale, we needed a way to filter out this routine activity. To solve this, we’ve introduced the ProcessAncestryInformation (PAI) event. This feature uses historical observations of an endpoint to flag only the most unusual process creations — typically just 1-3% of all process creations. For example, during a recent attack spanning two hours, we observed approximately 5,000 processes created, of which 75 were flagged as unusual, triggering PAI events. These select processes contained subtle but unusual attacker activity including a legitimate RMM tool, a command prompt, and the ping utility.
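The filtering idea can be sketched in a few lines of Python. This is a hypothetical illustration only — the real ProcessAncestryInformation event is generated by the Falcon sensor, and its internal logic is not public. The baseline structure, the (parent, child) ancestry pairs, and the `min_seen` cutoff are all assumptions.

```python
from collections import Counter

# Hypothetical sketch: flag process creations whose ancestry was never
# (or rarely) observed in the endpoint's history. Not the actual PAI logic.

def build_baseline(history):
    """Count how often each (parent, child) ancestry pair has been
    observed on this endpoint historically."""
    return Counter(history)

def unusual_creations(baseline, window, min_seen=1):
    """Flag creations in the suspicious window whose ancestry pair was
    seen fewer than min_seen times in the baseline."""
    return [pair for pair in window if baseline[pair] < min_seen]

history = [
    ("services.exe", "svchost.exe"),
    ("explorer.exe", "chrome.exe"),
    ("services.exe", "svchost.exe"),
]
window = [
    ("explorer.exe", "chrome.exe"),   # routine: seen before, filtered out
    ("meshagent.exe", "cmd.exe"),     # unusual: never seen on this host
    ("cmd.exe", "ping.exe"),          # unusual: never seen on this host
]
baseline = build_baseline(history)
print(unusual_creations(baseline, window))  # keeps only the two unseen pairs
```

With a realistic history, this kind of per-endpoint baseline is what lets a filter discard the routine ~97-99% of creations and keep only the handful worth an analyst’s attention.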

Figure 6. ProcessAncestryInformation event triggered for unusual process creations

How to Use It

This feature is available now for all customers within the Automated Leads dashboard:

  1. Locate a Lead: Click on the three-dot menu (⋮) next to the Status of any Automated Lead.
  2. Pivot to Advanced Event Search: Select “Investigate unusual processes.”
Figure 7. Click “Investigate unusual processes” from within the menu of any Automated Lead in order to see the unusual processes that were created during that lead

This will take you to Advanced Event Search (AES), pre-populated with PAI events joined with ProcessRollup2 data. This gives you the full picture — including command lines and ancestor processes — without forcing you to sift through thousands of benign events.
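Conceptually, the pre-populated search performs a join like the Python sketch below: unusual-process (PAI) events are matched to process detail records by process ID, enriching each flagged process with its command line and parent. The field names here echo common Falcon event fields but are assumptions for illustration, not the exact query the platform generates.

```python
# Hypothetical illustration of the PAI-to-ProcessRollup2 join; field names
# and values are assumed for the example.

pai_events = [
    {"TargetProcessId": "1234"},   # processes flagged as unusual
    {"TargetProcessId": "5678"},
]
process_details = {  # stand-in for ProcessRollup2 records, keyed by process ID
    "1234": {"CommandLine": "cmd.exe /c ping 10.0.0.1", "ParentBaseFileName": "meshagent.exe"},
    "5678": {"CommandLine": "net.exe view", "ParentBaseFileName": "cmd.exe"},
    "9999": {"CommandLine": "svchost.exe -k netsvcs", "ParentBaseFileName": "services.exe"},  # routine, never flagged
}

# Keep only the detail records for processes flagged as unusual.
joined = [
    {**event, **process_details[event["TargetProcessId"]]}
    for event in pai_events
    if event["TargetProcessId"] in process_details
]
for row in joined:
    print(row["CommandLine"], "| parent:", row["ParentBaseFileName"])
```

The routine svchost.exe record never reaches the analyst, which is the point: the join surfaces full context (command lines, ancestors) only for the small flagged subset.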

Always-On Intelligence

Investigate Unusual Processes is available across Windows, macOS, and Linux. Best of all, it is always active. While it’s integrated into Automated Leads for ease of use, you can search for the ProcessAncestryInformation event in Advanced Event Search for any endpoint at any time to see what’s truly out of the ordinary in your environment.

By automating the "boring" work of filtering routine noise, we’re empowering teams to quickly focus on the activity that’s unusual in their environment.

See CrowdStrike Falcon® in Action

Detect, prevent, and respond to attacks, even malware-free intrusions, at any stage, with next-generation endpoint protection.

See Demo