Reducing false positives in huge batches of network security events is the banner benefit of security automation. And it makes sense: Security expertise is in historically high demand and woefully low supply. A resource that can whittle billions of events down to a few hundred investigable alerts is a godsend.
But it’s important not to lose sight of the true objective of sec-ops: to detect and respond to even the stealthiest and most deceptive threats before they can harm your organization.
If we seek to reduce false positives, it’s only because it brings us one step closer to the zenith of threat detection and response: the elimination of false negatives.
Wolves in sheep’s clothing
False negatives are silent but deadly network intrusions. They exhibit few indicators of compromise, making them adept at masking their nefarious intentions. False negatives are the reason that, despite the availability of millions of known malware signatures and countless volumes of open-source threat intelligence, cybercrime continues to plague even the most well-resourced enterprises.
Some of the most notable sources of false negatives include the following:
Zero-day exploits

These are unknown code vulnerabilities or newly hatched malware exploits with no known signature. Technically, no one knows how many exist, but researchers have documented an uptick in their discovery in recent years.
Additionally, it’s estimated that 111 billion new lines of code were created in 2017, which vastly expands the potential for newly introduced software vulnerabilities.
Some of the most notable zero-days in recent memory include WannaCry and Petya, both of which cost businesses hundreds of millions of dollars in 2017.
Advanced persistent threats (APTs)
APTs are defined by their ability to maintain a presence on a network for prolonged periods of time without detection. They’re most commonly used for data exfiltration over the course of weeks or even months.
Most APTs start as spear-phishing tactics meant to steal user credentials. Once inside the network, the perpetrators will create a backdoor for data extraction and will employ advanced tactics to evade detection.
Some of the most famous examples of APTs include Stuxnet, which incapacitated one-fifth of Iran’s nuclear centrifuges by causing them to spin out of control. More recently, in 2015, a hacking group known as Deep Panda stole private information belonging to tens of millions of Americans in attacks against Anthem and the U.S. Office of Personnel Management.
Fileless attacks

These attacks, also known as zero-footprint attacks, often start as phishing campaigns. Remarkably, they don’t need to install software on an endpoint to infiltrate it. Instead, they exploit in-memory access, which makes them adept at evading anti-virus and application whitelisting tools.
In 2017, more than 50 percent of attacks against businesses are believed to have been fileless, according to the Ponemon Institute. That percentage is expected to increase throughout the remainder of this year.
As these and other advanced threats escalate in complexity, reducing false positives without missing a legitimate alert becomes an ever more precarious balancing act. Time spent responding to false positives results in hundreds of hours of lost productivity, costing, on average, more than $1 million each year.
The alternative, missing a subtle IOC, is even worse, as it can result in disrupted operations, remediation costs, legal damages and reputational harm.
And that brings us full circle.
Reverse-engineered threat hunting
So how, exactly, do you limit false positives in order to amplify the signs of the most well-masked threats? First, a combination of multiple unsupervised machine learning methods sifts out what are perceived to be the top events. These are then escalated to analysts, who provide labels, thereby training a supervised machine learning model and creating a continuously learning system.
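The loop above can be sketched in a few lines. This is a toy illustration, not a production pipeline: the event names, the single numeric feature and the threshold-based “model” are all hypothetical stand-ins, with a simple z-score playing the role of the unsupervised stage.

```python
import statistics

# Hypothetical event stream: (event id, numeric feature such as bytes
# transferred). All values here are illustrative.
events = [("e%d" % i, v) for i, v in enumerate(
    [120, 130, 125, 118, 9000, 127, 122, 8700, 131, 119])]

# Unsupervised stage: score each event by how far it deviates from the
# batch mean (a z-score stand-in for the "multiple unsupervised
# machine learning methods" described above).
values = [v for _, v in events]
mean, stdev = statistics.mean(values), statistics.stdev(values)
scored = sorted(events, key=lambda e: abs(e[1] - mean) / stdev, reverse=True)

# Escalate only the top few events to human analysts.
top_events = scored[:2]

# Supervised stage: analyst labels on the escalated events train a
# (toy) threshold classifier that scores future batches.
analyst_labels = {name: True for name, _ in top_events}  # True = malicious
threshold = min(v for name, v in top_events if analyst_labels[name])

def classify(value):
    """Learned rule: flag anything at or above the analyst-derived threshold."""
    return value >= threshold

flagged = [name for name, v in events if classify(v)]
print(flagged)  # the two outlier events, e4 and e7
```

Each pass through this loop tightens the model: the unsupervised stage narrows billions of events to a reviewable handful, and every analyst label sharpens the supervised stage for the next batch.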
And it’s a system that works. AI-based security has sprouted into a multi-billion-dollar market that’s growing at a compound annual rate of over 30 percent. The impetus for this growth is simple: If you can reduce the number of false positives and provide security analyst input for AI systems related to the top threats, you can rapidly and continuously refine a security automation platform that will be more sensitive to IOCs associated with false negatives.
In the past, we’ve referred to this as “removing haystacks to find needles.” Many of those “needles” will more or less present themselves to you based on deep, contextual analysis of the “hay’s” many properties.
In other words, false negatives are merely the result of failing to determine what is “normal” and what is not.
And with AI filtering out more of the innocuous background noise, security analysts will be able to hear a pin drop, for better or for worse.