For the last few years, LogicHub has been a pioneer in applying automation, machine learning, and artificial intelligence to improve detection and response through its advanced automation platform and managed detection and response (MDR) services. As we continue to expand in this direction, we will be launching new capabilities that push the envelope of what’s possible in applying AI to security challenges.
Recently, LogicHub was recognized by Gartner for its innovation and received extensive coverage in a Gartner research project titled “Emerging Technologies: Tech Innovators in AI in Attack Detection.” This article outlines LogicHub’s ongoing innovation in AI automation, along with excerpts from the Gartner report, and explains why it’s imperative that organizations automate security and leverage practical innovation that combines machine learning with human security expertise.
The security industry is long overdue for real innovation and the practical application of emerging technologies around automation, machine learning, and artificial intelligence for attack detection.
The complaints from most security analysts follow a common refrain – there is too much noise and far too many alerts from too many security tools, making it difficult to find the real threats. A major banking customer of ours reported that before they deployed LogicHub, more than 80% of their alerts were trivial or false positives. Despite that, their average time to respond to each alert was 42 minutes, and even with a team of 14 analysts, they simply couldn’t keep up with the thousands of alerts pouring into their SIEM every day. The math simply didn’t work.
Increasingly, security has become a big data problem, but our techniques for addressing it rely on older technologies, like SIEM, that simply lack the ability to scale to the volume of data. Nor can they take advantage of advances in AI/ML to weed through the noise, make critical decisions, and find the needles in the proverbial haystacks.
It’s time to move past the ‘black-box’ AI hype
As artificial intelligence gains wider acceptance and becomes part of everyday life, the security industry has been lagging in the practical application of these technologies to reduce repetitive labor and improve the effectiveness of our security solutions.
The industry only has itself to blame. For the last decade, there has been a constant marketing drumbeat around the benefits of AI, without clarity on how it was being applied. These claims of magical black-box techniques by much of the industry led to well-deserved skepticism in the market. While most vendors claim AI capabilities, very few can explain how they work, make them transparent to customers, or offer critical customization. Gartner specifically calls out this gap, recommending that product leaders should:
Improve adoption of AI-enabled solutions by moving away from a “black-box” approach toward explainable and customizable AI models that can be tuned based on analyst feedback.
Misconceptions about both humans and AI in security
One of the most common misconceptions about security is that while computers can process large quantities of data, you really need seasoned human analysts to make any decisions. While experienced analysts certainly can make smart decisions, most of the work they are doing is highly repetitive and mind-numbingly robotic, and humans are simply not that reliable at robotic work.
Humans don’t make good robots, and we shouldn’t ask them to do robotic work. Modern automation systems like LogicHub can, in fact, break down complex tasks into a series of playbook actions, learn how human analysts make decisions, and then perform these repetitive tasks millions of times faster and more reliably than humans can.
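To make the idea concrete, here is a minimal sketch of a triage task decomposed into a series of playbook actions that an engine can replay identically every time. All step names, IP ranges, and the alert format are hypothetical illustrations, not LogicHub’s actual playbook model.

```python
# Hypothetical playbook: a repetitive triage task broken into discrete,
# replayable steps. Data values and step logic are illustrative only.

def enrich(alert):
    # Step 1: add threat-intel context (here, a toy reputation lookup).
    alert["reputation"] = "bad" if alert["src_ip"].startswith("198.51.") else "unknown"
    return alert

def check_allowlist(alert):
    # Step 2: suppress alerts from known-good sources.
    alert["allowlisted"] = alert["src_ip"] in {"10.0.0.1"}
    return alert

def decide(alert):
    # Step 3: encode the analyst's decision logic as an explicit rule.
    bad_and_not_trusted = alert["reputation"] == "bad" and not alert["allowlisted"]
    alert["verdict"] = "escalate" if bad_and_not_trusted else "close"
    return alert

PLAYBOOK = [enrich, check_allowlist, decide]

def run_playbook(alert):
    # The engine executes each step the same way every time -- the part
    # of the job where software is far more reliable than a tired human.
    for step in PLAYBOOK:
        alert = step(alert)
    return alert
```

Once the steps are captured this way, the same decision runs in microseconds across thousands of alerts, while the analyst only reviews the verdicts.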
‘I know it’s bad when I see it’
While not all security experts can build complex playbooks, experienced analysts can often recognize bad activity when they see it. This point is valid, and well-designed AI systems should take advantage of human experience, constantly using human input to improve accuracy. Gartner calls this out as a key need in AI attack detection, noting that solutions should include:
The capturing of the skills, expertise and techniques of security analysts for use cases, such as data labeling, threat hunting, automated investigation, and response and remediation.
Don’t wait for expert intuition to start Threat Hunting
While many organizations are embracing automation for alert triage and incident response, we find that Threat Hunting is left for last, or not addressed at all. A key reason for this is the misconception that you need the most highly trained “ninja” analysts to take this on, and with resources strained just responding to daily alerts, it always remains on the future wish list.
This gap in Threat Hunting is precisely what LogicHub addresses, and what Gartner has recognized. While experts are important, it’s critical to capture their expertise and make it repeatable and scalable to address the ever-expanding threat landscape.
The following excerpt from the Gartner report profiles LogicHub and its innovation in AI Attack Detection:
LogicHub Automates Threat Detection With Threat-Hunting Bots
Nature of the Innovation
LogicHub’s attack detection innovation is “decision automation” as part of its SOAR+ platform. It enables the skilled hunters to encode their techniques, thus capturing their expertise, and turning it into a scoring playbook and a decision playbook.
LogicHub has based its platform on expertise automation or blend of expert systems with deep neural net architecture. Deep neural net-based systems offer far better accuracy as compared to traditional ML techniques but are difficult and expensive to build because they need huge amounts of labeled data. So, LogicHub has modified the deep neural net architecture in the form of a four-layered system that can work with a reduced amount of data. The first layer focuses on extracting all the interesting features from different events. Some features can come directly from the data, while some are engineered. Once all the features have been extracted, the next layer works to translate these various features into scores. After that, the score combination layer takes different kinds of scores and translates them into the final score, indicating whether an event is high risk or low risk. The final layer is the feedback from a human analyst if the individual makes a different decision than the engine. The engine learns and updates its own logic to make more accurate decisions like a human analyst.
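The four-layer flow described in the excerpt can be sketched in code. This is an illustrative approximation under stated assumptions, not LogicHub’s actual engine: the features, weights, and threshold below are invented for the example.

```python
# Hypothetical four-layer scoring pipeline: feature extraction, feature
# scoring, score combination, and analyst feedback. All names, weights,
# and thresholds are illustrative, not a real product implementation.

def extract_features(event):
    # Layer 1: pull interesting features from a raw event. Some come
    # directly from the data; others are engineered.
    hour = event.get("hour", 12)
    return {
        "rare_process": event.get("process") not in {"chrome.exe", "svchost.exe"},
        "off_hours": hour < 6 or hour > 22,
        "external_ip": event.get("dst_ip", "").startswith("203."),
    }

def score_features(features):
    # Layer 2: translate each feature into a numeric score.
    weights = {"rare_process": 4, "off_hours": 2, "external_ip": 3}
    return {name: weights[name] if present else 0
            for name, present in features.items()}

def combine_scores(scores, adjustments):
    # Layer 3: combine the per-feature scores into one final risk score,
    # applying any analyst-driven weight adjustments.
    return sum(score + adjustments.get(name, 0)
               for name, score in scores.items() if score)

def triage(event, adjustments=None, threshold=5):
    adjustments = adjustments or {}
    scores = score_features(extract_features(event))
    total = combine_scores(scores, adjustments)
    return ("high risk" if total >= threshold else "low risk", scores)

def learn_from_feedback(adjustments, feature, delta):
    # Layer 4: when an analyst disagrees with a verdict, fold the
    # correction back into the scoring logic as a weight adjustment.
    adjustments[feature] = adjustments.get(feature, 0) + delta
    return adjustments
```

Because the final verdict is just a sum of named, per-feature scores, an analyst can see exactly which pieces of evidence drove the decision, which is the explainability property discussed below.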
The playbook thus helps in event triage by automating the decision on criticality of events by scoring each event. LogicHub has a library of playbooks that are published to its automation platform and also used as part of its MDR services.
LogicHub also leverages the following AI capabilities in SOAR:
Automated recommendation engine — It offers customers recommendations on next steps that can be added to a playbook based on the expertise of experienced analysts.
Natural language automation — The solution enables security teams who may lack coding skills to automate steps by recommending a customizable module based on natural language description of the task to be automated.
The vendor is now going a step further by leveraging AI to even develop these playbooks. It takes a lot of time — even weeks — for threat hunters to build these playbooks. So, LogicHub is developing threat-hunting bots that can semiautomatically build these customized playbooks based on a dataset. This is done by capturing the expertise of the threat hunters, the steps taken and the techniques leveraged, and automating the steps.
Market Adoption and Impact
Organizations today do not lack data, but they do lack the ability to analyze it and identify weak signals of attack from it. Even sophisticated security teams have a very small number of threat-hunting analysts. So, a decision automation engine helps in automating human expertise and improving threat detection efficacy at lower cost. The system is also capable of detecting hard-to-find threats in a fully automated manner that may be missed by traditional rule-based systems, eliminating the need for a team of sophisticated threat hunters.
One other advantage of this approach is that the platform offers an explanation and not just a decision. Different neural nets can generate a decision, but they cannot generate a short human-understandable explanation. With LogicHub’s approach, an SOC analyst can understand the various data pieces that were analyzed and the scores assigned to each piece to arrive at a decision. In case of an incorrect decision by the engine, the human analysts can provide feedback, and the engine can learn from it.
In our earlier research, we identified lack of customization as a major inhibitor to success, along with sensitive client data with regulatory restrictions and lack of brand recognition. Threat detection logic needs to be customized to every environment, and it takes significant manual effort to achieve that. It is difficult to hire security staff who can do this manually due to the huge skills gap in the market. LogicHub, with its R&D pursuits of threat-hunting bots, helps address the need to customize threat detection to each customer’s environment. It uses AI to help quantify the customization and make the solutions highly scalable. Threat-hunting bots build threat-hunting playbooks by simply asking a few questions. Instead of spending time to customize the detection logic manually, the bots can help customize the detection logic by gaining context through those questions. LogicHub also constantly uses attack scenarios to make threat-hunting bots learn new techniques to detect attacks.
LogicHub sees main adoption across financial institutions and large government agencies, along with some interest from healthcare, and managed detection and response vendors.
Implications for Product Leaders
The use of AI to automate the tasks — such as threat hunting — of skilled cybersecurity staff by encoding the staff’s domain expertise and techniques can help address the shortage of skilled personnel.
The ability to leverage AI itself to customize the AI models for each customer’s environment offers significant scaling opportunities.
There is the need to move away from a black-box AI approach toward explainable AI models that can offer explanation behind their decision in a human understandable way and incorporate feedback from experienced human analysts.
All Gartner quotes in this article are from “Emerging Technologies: Tech Innovators in AI in Attack Detection – Demand Side,” Gartner, 2021.