In a recent CrowdCast webinar, CrowdStrike’s Senior Director of Hunting Operations, Kris Merritt, discusses core problems associated with automating cybersecurity detection and how companies seeking to solve these issues often end up inadvertently making the problem worse.
“The core security problem is this notion that we have code with an unknown reputation, and we’re put in a position where we have to judge the intent of that code or judge the intent of a program,” Merritt says. He compares it to the human justice system, where judges and juries struggle to pinpoint the true motives or intent of another human being, sometimes with imperfect results. If humans judging other humans can fall short of success, what are the odds that unaided technology can achieve better outcomes?
“If we try to rely purely on technology, or purely rely on code to judge the intent of another piece of code — this is obviously an extremely difficult problem. So a lot of us recognize there’s a need for humans to get involved,” he says. “Now, whenever we try to involve humans, I think it’s our natural tendency to put those humans directly in the detection resolution loop. As a result, security analysts are placed on the receiving end of a high volume of alerts streaming in from a variety of detection solutions, IDSes, and host-based security systems. This frequently creates an untenable situation and sets organizations up for failure,” Merritt says.
“We’re so afraid of the ‘false negative’ that we require humans to resolve every single tactical alert coming out of their detection apparatus.” The inevitable result is alert fatigue and a higher potential for failing to recognize the most important threats facing our organizations. “We start treating people like very tactical cogs in a wheel. In the end, we often don’t see the forest for the trees.”
These human analysts typically operate in the absence of a robust prioritization process for recognizing “when something really is an indicator of a mega-breach or a significant intrusion versus a lower-severity threat, such as run-of-the-mill malware or adware. So we end up creating a false negative situation on our own simply because of a high false positive scenario that exists in most security operations centers,” Merritt says.
How do smart security organizations extricate themselves from this ineffective detection loop and find the outliers that indicate high severity threats? The answers revolve around the creation of proactive teams that are empowered to apply their skills to hunting down threats before they result in a mega-breach.