Demystifying the technology with case studies of AI security in action
Many automation tools, such as SOAR, suffer from a Catch-22: you know automation will save you huge amounts of time, but it’s difficult to implement and requires skills you don’t necessarily have in-house. Essentially, you can’t afford the tools that would save you money.
To help with this, many tools now promise “no-code” capabilities, with intuitive GUIs that let non-programmers build abstract functions. While this approach can help with SOAR automation, it’s often not enough.
More recently, LogicHub has been applying machine learning to understand and automate the process of building the security playbooks that advanced automation experts routinely create. This is another example of breaking a complex problem into discrete steps and automating them to improve routine processes. Building a playbook starts with questions like:

- What fields in email attachments do you want to analyze?
- Do you want to use external reputation tools like VirusTotal?
- What timeframes are you concerned about?
- What are the normal baselines for your users – logins, volume, downloads, etc.?
LogicHub has used the same AI approach and created a bot-based system named AuDRA (Autonomous Detection and Response Assistant) that interactively helps non-expert users by asking them key questions, retrieving relevant information, establishing granular baselines, selecting frequency of analysis, scoring a range of critical factors, and automatically building complex security playbooks. The system then tests the scoring model based on analyst feedback on a range of events, and quickly learns and adapts to the specific customer environment.
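To make the idea of a scoring model that adapts to analyst feedback concrete, here is a deliberately simplified sketch. The factor names, the linear model, and the perceptron-style update rule are all invented for illustration; they are not LogicHub's actual algorithm.

```python
# Hypothetical sketch of feedback-driven alert scoring (not LogicHub's real model).

def score(alert, weights):
    """Score an alert 0-100 as a weighted sum of its factor values."""
    raw = sum(weights[f] * alert.get(f, 0.0) for f in weights)
    return max(0.0, min(100.0, raw))

def learn_from_feedback(alert, weights, analyst_says_threat,
                        lr=0.05, threshold=50.0):
    """Nudge weights up when a real threat scored too low,
    and down when a false positive scored too high."""
    s = score(alert, weights)
    if analyst_says_threat and s < threshold:
        sign = +1
    elif not analyst_says_threat and s >= threshold:
        sign = -1
    else:
        return weights  # model already agrees with the analyst
    for f in weights:
        weights[f] += sign * lr * alert.get(f, 0.0)
    return weights
```

Each analyst verdict moves the model toward the analyst's judgment, which is how a system like this can adapt to a specific customer environment over time.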
Case Study #1: Insurance Company Automates Threat Hunting
A mid-size insurance company in the Midwest faced a dilemma: it wanted to expand proactive threat hunting to detect potential risks while keeping its security team lean, without relying on dozens of low-level analysts.
As new threats were discovered and new vulnerabilities published, the team needed tools to quickly assess their specific risks across their unique infrastructure. For example, if a threat affects specific endpoints with different levels of patching, could they hunt for signs of the threat while simultaneously prioritizing patches for vulnerable systems in their ITSM system?
Working with LogicHub, the security team did beta testing of the new AuDRA system to automate building unique playbooks. The team wanted to bring together CVE alerts from the NVD (National Vulnerability Database), endpoint scans from their CrowdStrike EDR system, ticketing information from their ServiceNow system, and scan logs from a range of legacy security products and cloud applications.
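At its core, a playbook like this is a join between vulnerability data and endpoint state, with severity driving ticket priority. The data shapes, product-matching logic, and CVE details below are simplified stand-ins for what real NVD, CrowdStrike, and ServiceNow integrations would return:

```python
# Self-contained sketch of the correlation step such a playbook might perform.
# Real integrations would pull from the NVD API, CrowdStrike, and ServiceNow
# instead of these hard-coded lists.

cve_alerts = [
    {"cve": "CVE-2021-44228", "cvss": 10.0, "product": "log4j"},
    {"cve": "CVE-2023-0001", "cvss": 5.4, "product": "acrobat"},
]
endpoints = [
    {"host": "web-01", "installed": {"log4j"}, "patched": set()},
    {"host": "db-02", "installed": {"log4j"}, "patched": {"CVE-2021-44228"}},
]

def vulnerable_pairs(cves, hosts):
    """Yield (host, cve) for hosts running an affected product without the patch."""
    for cve in cves:
        for ep in hosts:
            if cve["product"] in ep["installed"] and cve["cve"] not in ep["patched"]:
                yield ep["host"], cve

# Prioritize ticket creation by CVSS severity, highest first.
tickets = sorted(vulnerable_pairs(cve_alerts, endpoints),
                 key=lambda pair: -pair[1]["cvss"])
for host, cve in tickets:
    print(f"ticket: patch {cve['cve']} on {host} (CVSS {cve['cvss']})")
```

Here only `web-01` would generate a ticket: `db-02` is already patched, and no endpoint runs the second affected product.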
Their goal was to quickly build new playbooks and update existing ones to look for activity tied to the latest vulnerabilities, such as Log4j, remote code execution, and other zero-day threats. However, building these automation playbooks typically took at least two weeks, with additional time for testing and tuning.
Using LogicHub AuDRA, several non-programmers on the team were able to quickly define parameters, connect to multiple resources, and build advanced automation playbooks in a few hours. Even with testing and ML tuning, new playbooks were successfully deployed within 48 hours. The team saved approximately 85% of the time required to build the playbooks manually, delivering results impossible without automation.
More importantly, the team was able to quickly spot signs of possible attack from these vulnerabilities, prioritize security patches, and in some cases disable older endpoints that could not be quickly updated.
Don’t Trust – Measure
At the end of the day, this should not be a theoretical discussion, but should focus on measurable results. Can automation and machine learning improve efficiency, and produce higher quality results than humans alone? To answer that, let’s look at a specific case study.
Case Study #2: Major US Bank Streamlines SOC
The SOC team at a Top 10 US Bank was struggling to manage a flood of alerts from over 400 hard-coded rules in Splunk. A single rule, designed to detect traffic to bad URLs in web proxy logs, was triggered about 225 times per week.
Each alert required about 30 minutes of an analyst’s time to triage. The team had established an effective way to distinguish true threats from false positives, but it involved manually checking each alert against other suspicious activity – unusual increases in file transfers, spikes in network traffic, and attempts to reach other known bad URLs – and cross-checking with threat-analysis sites like VirusTotal. Of roughly 900 alerts triaged per month, on average only 3 required further escalation; 897 were false positives. Enforcing this single policy consumed over 127 analyst hours per week – more than 3 FTEs.
Using the LogicHub platform’s machine learning, the team was able to build automation workflows that mimicked all the steps, cross-checking, and correlation that analysts would perform for each alert. The system was also able to annotate each alert with full details and context of what happened.
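As a rough sketch, the automated triage amounts to running each cross-check, attaching the findings to the alert as annotations, and escalating only when indicators corroborate each other. The thresholds, field names, and two-indicator escalation rule here are invented for illustration:

```python
# Illustrative triage sketch; thresholds and indicator names are hypothetical.

def triage(alert, baseline):
    """Run the cross-checks, annotate the alert, and return an escalation decision."""
    findings = []
    if alert["files_transferred"] > 3 * baseline["files_transferred"]:
        findings.append("unusual file transfer volume")
    if alert["bytes_out"] > 3 * baseline["bytes_out"]:
        findings.append("network traffic spike")
    if alert["other_bad_url_hits"] > 0:
        findings.append("attempts to reach other known bad URLs")
    if alert["reputation_score"] >= 70:   # e.g., from a VirusTotal-style lookup
        findings.append("URL flagged by reputation service")
    alert["annotations"] = findings        # full context attached to the alert
    alert["escalate"] = len(findings) >= 2 # require corroborating indicators
    return alert
```

Because every alert arrives pre-annotated with the evidence behind the decision, the analyst's five minutes go to reviewing conclusions rather than gathering data.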
The end result was that each alert from LogicHub required only 5 minutes of analyst time, versus the prior 30. However, the team was cautiously skeptical about the quality of the results, so they did audit testing of LogicHub against their manual process.
The test showed that the SOC team not only saved time, but its accuracy improved. With the manual process, security analysts made 98 mistakes per month (a 14% error rate), mischaracterizing threats or their severities. Once the SOC adopted LogicHub, error rates dropped from 98/month to 21/month (a 3% error rate).
With the dramatic time savings achieved, the SOC team was able to shift their valuable analysts' time to focus on proactive threat hunting, rather than repetitive, mind-numbing tasks.
What is Normal? It Depends…
Looking for anomalies is not new in security, and legacy rules-based systems have often touted their ability to spot abnormal and suspicious behavior. This works well if everyone is the same and uses IT resources identically, all day long. But the real world is a bit more complicated.
A leading Silicon Valley software vendor was concerned about security risks from unmanaged FTP traffic. Its hundreds of developers used FTP frequently, and the security team needed to spot anomalies that could indicate an insider threat or an external attack.
The challenge was determining what “normal” behavior looked like, because each developer had different needs and usage patterns. Some developers would infrequently access a couple of directories, while others accessed dozens of directories every day. Building static rules to enforce this was impossible.
Using LogicHub’s AI capabilities, the team was able to establish accurate baselines for each user, then monitor usage patterns continuously. They were also able to cross-check this data with other information about login failures, and unusual access to other applications.
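A minimal per-user baseline can be sketched with a z-score against each user's own history. The real product presumably uses richer models than a mean and standard deviation, but the principle – "normal" is defined per user, not globally – is the same:

```python
# Per-user baseline sketch: flag FTP activity far outside a user's own history.
# A simple z-score stands in for whatever model the product actually uses.

from statistics import mean, stdev

def is_anomalous(history, today, z_threshold=3.0):
    """Compare today's count of directories accessed with this user's baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

# Two developers with very different "normal" behavior:
quiet_dev = [2, 1, 2, 3, 2, 2, 1]      # a couple of directories a day
busy_dev  = [40, 35, 42, 38, 41, 37]   # dozens of directories a day

print(is_anomalous(quiet_dev, 25))   # True: wildly outside this user's baseline
print(is_anomalous(busy_dev, 40))    # False: perfectly normal for this user
```

Note that the same activity level (accessing dozens of directories) is an alarm for one user and routine for another – exactly the distinction a single static rule cannot make.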
Establishing accurate baselines on an individual basis – amid frequently changing staff and work assignments – is a task ideally suited to machine learning. With this granular, dynamic knowledge, spotting anomalies becomes straightforward.
The result is that the software vendor can now spot true anomalies in user behavior on an individual basis, while correlating them with multiple other attack indicators. Without spending hundreds of analyst hours, the team spotted several real threats – and gained confidence that its systems were well protected.