What if you could build your own playbook for tackling the threats and challenges of the current landscape — in less than 30 minutes?
A typical playbook for dealing with cyber threats depends on the engine behind it. With a rules engine, you describe what "bad" looks like; every event that doesn't match a rule is treated as "good" and ignored, because there are simply too many events to assess each one in detail.
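That limitation can be sketched in a few lines. The rules and event fields below are hypothetical, not LogicHub's implementation: a rules engine surfaces only events that match an explicit "bad" pattern, and everything else falls through unexamined.

```python
# Hypothetical rules and event fields -- a minimal rules-engine sketch,
# not LogicHub's implementation.
BAD_RULES = [
    lambda e: e["failed_logins"] > 10,          # brute-force attempts
    lambda e: e["dest_port"] in {4444, 31337},  # ports abused by common backdoors
]

def triage(events):
    """Surface only events matching a known-bad rule; all others are ignored."""
    return [e for e in events if any(rule(e) for rule in BAD_RULES)]

events = [
    {"failed_logins": 2,  "dest_port": 443},  # looks "good", so it is never reviewed
    {"failed_logins": 25, "dest_port": 22},   # matches a rule, so it becomes an alert
]
print(triage(events))  # only the second event surfaces
```

Notice that the first event is never looked at again, whatever it actually was. Nothing in this design can flag activity that is merely unusual.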
This is how security teams miss attacks: because "normal" traffic is never modeled, suspicious patterns that blend into it are simply never recognized.
AI-Powered Decision Engine
An AI-powered decision engine, however, models both bad and good features and factors. This means the AI alerts human security analysts to new, suspicious attacks — which may look very different from known "good" and "bad" events — and can make determinations accordingly.
This is very similar to the technique deep neural networks use, with one big fundamental difference: training most neural networks relies on large amounts of labeled data, and in the security world, generating labeled data that captures all the different kinds of attacks is simply too expensive.
Instead of trying to train a neural net on labeled data, we built our decision engine from the experience and techniques of highly skilled security experts. Not human, not machine, but both: this is where human expertise and automation come together in a symbiotic process of feedback, adaptation, and learning.
This allows humans to focus on what they do best, while AI takes care of the rest.
Watch a step-by-step demo of how a threat-hunting automation assistant can help a security analyst comb through event data to find the proverbial needle in a haystack, all in under 15 minutes.
New challenges spell new opportunities
With new threats and effective alerts for human analysts comes a new opportunity: training and updating the model through a systematic process.
When we ask whether an event is "known good" or "known bad," what really matters is the outcome. Analysts look for new factors, or novel combinations of factors, that help determine the level of threat. In either case, the model is updated.
But more importantly, it is supervised. Input from both data and human analysts gives the model more information to learn from every day.
To this end, LogicHub built a decision engine using the best of both human and machine expertise. Building a playbook takes a fraction of the time we would need to triage security events manually.
How? An AI bot guides a human through the process, cutting it down to less than an hour, and no programming knowledge is necessary. The AI can likewise guide us through building a playbook that automates threat hunting.
Security professionals who specialize in threat hunting are highly trained, handsomely paid and very much in demand. But an AI platform that incorporates that degree of human expertise can hunt threats in a fraction of the time. What's more, LogicHub's automation means threat hunting is affordable and accessible to teams of every size, even those with modest budgets.
To build a threat hunting model, we first need to identify the features in our system — variables like user agents and usernames. Then we use factor analysis to reduce them to as few as possible. We compare these factors to known data and assign them scores. Any event or alert that hits a certain threshold — for example, a score of 9 or 10 on a scale of 1 to 10 — is an actionable case.
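As a rough illustration of that scoring step (the factor names, weights, and their mapping onto the 1-to-10 scale are assumptions for the sketch, not LogicHub's actual model): each factor present in an event contributes to its score, and only events at or above the threshold open a case.

```python
# Illustrative factors and weights -- a sketch of factor-based scoring,
# not LogicHub's actual model.
FACTOR_WEIGHTS = {
    "rare_user_agent": 4,   # user agent seen in almost no other traffic
    "new_username": 3,      # account created very recently
    "off_hours_login": 3,   # activity outside normal working hours
}

ACTIONABLE_THRESHOLD = 9  # scores of 9 or 10 open a case

def score_event(factors):
    """Sum the weights of the factors present in an event, capped at 10."""
    return min(10, sum(FACTOR_WEIGHTS[f] for f in factors))

def is_actionable(factors):
    return score_event(factors) >= ACTIONABLE_THRESHOLD

print(score_event({"rare_user_agent", "new_username"}))  # 7: suspicious, below threshold
print(is_actionable({"rare_user_agent", "new_username", "off_hours_login"}))  # True
```

The point of reducing to a handful of weighted factors is that analysts can reason about, and later adjust, each weight individually.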
We can add new features to the playbook as we go, or update the scores. Incorporating analyst feedback keeps the model accurate: the more features you add and the more feedback you give, the smarter the system gets, progressively mimicking the logic in your head for finding that needle in the haystack.
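One simple way that feedback step could work, sketched under the assumption that each analyst verdict nudges the weights of the factors that fired (the adjustment rule is hypothetical, not LogicHub's):

```python
# A hypothetical feedback rule: factors that fired on a false positive
# lose weight; factors that fired on a confirmed threat gain weight.
def apply_feedback(weights, fired_factors, confirmed_threat, step=1):
    """Return updated factor weights based on a single analyst verdict."""
    delta = step if confirmed_threat else -step
    return {
        factor: max(0, weight + delta) if factor in fired_factors else weight
        for factor, weight in weights.items()
    }

weights = {"rare_user_agent": 4, "new_username": 3, "off_hours_login": 3}
# An analyst marks a case a false positive: both factors that fired lose weight.
weights = apply_feedback(weights, {"rare_user_agent", "new_username"}, confirmed_threat=False)
print(weights)  # {'rare_user_agent': 3, 'new_username': 2, 'off_hours_login': 3}
```

Over many verdicts, weights drift toward the analyst's own judgment, which is the "supervised" learning loop described above.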
Now that it is fully automated, we can schedule the playbook to run as often as we want: every 15 minutes, every hour, or every day. The entire process runs and creates cases, and an analyst can come in at any time to provide feedback. This gives you a sense of how easily the bot assistant can help an analyst automate threat hunting.
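The scheduling can be sketched with a plain stdlib loop (the function names and cadence here are illustrative; a real deployment would use the platform's own scheduler or cron):

```python
import time

def run_hunt():
    """Placeholder for the automated playbook: pull events, score them, open cases."""
    return "cases created"

def run_on_schedule(interval_seconds, iterations):
    """Run the hunt every interval_seconds, a fixed number of times."""
    results = []
    for _ in range(iterations):
        results.append(run_hunt())
        time.sleep(interval_seconds)
    return results

# Every 15 minutes would be run_on_schedule(15 * 60, ...); zero interval for the demo:
print(run_on_schedule(0, 3))  # ['cases created', 'cases created', 'cases created']
```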
This is machine logic plus human analysis at work.
Forrester analyst Allie Mellen joins LogicHub as a featured guest speaker to discuss the evolution of SOAR technology and how AI can enable a new generation of solutions for the SOC. Please join us on May 19th at 8:00am PT / 11:00am ET!
Let humans be human, AI be AI
In our previous post, we did a deep dive into LogicHub’s playbook to show the potential of human experts harnessing the power of machine expertise. It allows humans to do what they’re good at — what machines cannot (yet) do — while sparing humans from the drudgery of repetitive tasks, which AI does better anyway.
Humans provide the analysis and responses to cases where the threat and sophistication level are high. AI efficiently sorts through that haystack of data, based on parameters we define and input we provide.
The two big limitations of AI technology thus far have been explainability and adaptability. LogicHub solves both. Our bots automatically generate explanations with transparent reasoning. Even if a human makes a mistake, our platform can adapt, respond, and incorporate that input as well. The AI progressively learns and updates playbooks based on ongoing feedback.
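As a sketch of what an explainable score might look like (the output structure is an assumption, not LogicHub's format): emitting each factor's contribution alongside the total lets an analyst see exactly why a case was opened.

```python
# Assumed explanation format: total score plus per-factor contributions.
FACTOR_WEIGHTS = {"rare_user_agent": 4, "new_username": 3, "off_hours_login": 3}

def explain(factors):
    """Return the total score alongside the contribution of each factor."""
    contributions = {f: FACTOR_WEIGHTS[f] for f in sorted(factors)}
    return {"score": min(10, sum(contributions.values())), "because": contributions}

case = explain({"rare_user_agent", "off_hours_login"})
print(case["score"])    # 7
print(case["because"])  # {'off_hours_login': 3, 'rare_user_agent': 4}
```

Because the reasoning is additive, there is nothing opaque to unpack: the explanation is the model.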
The more features and feedback you add, the smarter the system gets. It’s fully automated, running as often as you choose, creating cases that can be reviewed and actioned by human analysts. And you’re closer to finding that needle in the haystack.