In this series of articles, we share hands-on experience from real-world hunts: the details of our discovery process, how we automate workflows, and how we enable the security operations team to quickly and easily transfer that knowledge afterwards with just a few clicks. In today's article, we walk you through the process we used to locate an instance of Raiz0WorM on the network.
In network-based threat detection and response, security practitioners are plagued by the same recurring issues: a large volume of security events and logs, understaffed IT departments, a shortage of expert security resources, and more false positives than legitimate alerts. Because of these issues, professionals look to more advanced automation and novel techniques to compensate for inefficient processes.
Without a doubt, automation is not just an added benefit to a network detection and response (NDR) system, but a necessity; however, there are cases where human intervention is required.
For example, some of those cases require a more experienced analyst (Tier 2 or 3) to confirm a false positive or false negative. Additionally, knowledge transfer is often a problem, as it requires experienced staff to dedicate time and effort to training and educating less experienced cyber defenders. Due to other priorities, experienced personnel are finding less time to engage in proactive hunting or other hands-on projects. When knowledge transfer and training of new employees are added into the mix, it’s no wonder that senior analysts feel like there aren’t enough hours in the day.
Bottlenecks often result from repetitive manual routines, too little automation, and poorly integrated tools and environments. What is generally lacking (in our experience) is the ability to turn an idea into an automated process, trust that process to function as planned in the future, and then move on to the next hunt.
This is precisely what happened during the particular hunt I describe below.
The example below comes from a large deployment and illustrates specific applications of knowledge transfer, training, and hunting.
Stamus Security Platform
In this deployment, Stamus Security Platform detection runs on a Stamus Network Probe seeing 40 Gbps of traffic. The customer has a full Stamus NDR license, which provides a number of automated detections, including:
This is a well-tuned environment, but we still had about 12.5 million alerts plus 1 billion network events (protocol logs and flows) to evaluate daily.
When the Stamus Security Platform was first deployed, a number of threats were immediately discovered. Thanks to the full REST API and automated detection, incident response processes started automatically and were populated with contextual information, while the on-duty SOC team members were automatically notified through Rocket Chat.
That process was already set up and complete, with the integrations working as part of the Stamus Security Platform application. However, there was much more to discover...
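The notification side of that integration is configured inside the platform itself, but for readers who want a feel for what such a hook looks like, here is a minimal Python sketch that posts a detection summary to a Rocket Chat incoming webhook. The webhook URL and every field in the event are hypothetical placeholders, not the customer's actual configuration.

```python
import json
import urllib.request

# Hypothetical Rocket.Chat incoming-webhook URL -- replace with your own integration URL.
WEBHOOK_URL = "https://chat.example.com/hooks/TOKEN-A/TOKEN-B"


def notify_soc(event: dict) -> None:
    """Post a short, human-readable summary of a detection to the SOC channel."""
    message = {
        "text": (
            f":rotating_light: New detection: {event.get('threat', 'unknown threat')}\n"
            f"Offender: {event.get('src_ip')} -> Victim: {event.get('dest_ip')}\n"
            f"First seen: {event.get('timestamp')}"
        )
    }
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(message).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # the server replies with a small acknowledgement


if __name__ == "__main__":
    # Placeholder event purely for illustration.
    notify_soc({
        "threat": "Example detection",
        "src_ip": "10.0.0.5",
        "dest_ip": "203.0.113.10",
        "timestamp": "2022-01-01T00:00:00Z",
    })
```

In practice, a script like this would sit behind whatever authentication and routing your chat platform requires; the point is only that a detection event can be pushed to the on-duty team with a single HTTP call.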
As someone who often wants to know what else is out there that isn’t actively being displayed, I decided to explore some active hunting angles. Often enough, active hunting is about trying to flush out false positives versus false negatives based on event types, metadata, and the ideas or formulas behind the hunt. Fortunately for me, the Stamus Security Platform’s hunting interface lets me take an abstracted approach rather than depending on a specific signature, rule, IP, domain, JA3 hash, URL, etc. By taking a “just show me what I’m looking for” approach, I was able to let the tool do its job as intended.
15 minutes to discovery and automation
It was early in the morning when I began this hunt, and as a result I was running short on creative ideas. Thankfully I was able to use some of the predefined threat hunting filters made available in the Enriched Hunting interface.
As I was going through the filters, I realized it might be worthwhile to check the metadata available in the Hunting interface for base64 encoding and decoding functions, regardless of alert type. All I was interested in was simple HTTP requests/responses that used base64 functions and returned HTTP status code 200.
With just two clicks, I was able to set two new active filters.
That narrowed the number of events from 12.5 million down to just 10. I then noticed a sequence of suspicious URLs and user agents combined with payload (de)obfuscation.
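Those two filters were set with clicks in the GUI, but the same narrowing can be expressed over Suricata-style EVE JSON records if you want to reproduce the idea outside the interface. The sketch below is an illustration under assumptions, not the platform's implementation: it assumes exported EVE records with HTTP metadata and printable payload fields enabled.

```python
import json


def uses_base64_functions(record: dict) -> bool:
    """Heuristic: does this HTTP transaction reference base64 encode/decode functions?"""
    http = record.get("http", {})
    haystack = " ".join([
        http.get("url", ""),
        record.get("payload_printable", ""),             # requires payload-printable in the EVE config
        record.get("http_response_body_printable", ""),  # requires http-body-printable in the EVE config
    ]).lower()
    return "base64_decode" in haystack or "base64_encode" in haystack


def hunt(eve_path: str):
    """Yield HTTP transactions that used base64 functions and returned status 200."""
    with open(eve_path) as handle:
        for line in handle:
            record = json.loads(line)
            http = record.get("http")
            if not http:
                continue
            if http.get("status") == 200 and uses_base64_functions(record):
                yield record


if __name__ == "__main__":
    for hit in hunt("eve.json"):  # placeholder path to an exported EVE JSON file
        http = hit["http"]
        print(hit.get("src_ip"), "->", hit.get("dest_ip"), http.get("hostname"), http.get("url"))
```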
Part of the full transaction metadata:
Here we can see a clear reference to Raiz0WorM and base64 function usage that translates as:
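The decoded content itself is shown in the screenshot rather than reproduced in text; purely to illustrate the decoding step, the sketch below pulls base64-looking substrings out of HTTP metadata and decodes them. The encoded sample is a made-up placeholder, not the payload observed in this hunt.

```python
import base64
import binascii
import re

# Long runs of base64-looking characters embedded in URLs or payloads.
B64_CANDIDATE = re.compile(r"[A-Za-z0-9+/]{12,}={0,2}")


def decode_candidates(text: str):
    """Try to base64-decode every candidate substring and keep the ASCII results."""
    for candidate in B64_CANDIDATE.findall(text):
        try:
            decoded = base64.b64decode(candidate, validate=True)
        except binascii.Error:
            continue
        if decoded.isascii():
            yield candidate, decoded.decode("ascii")


if __name__ == "__main__":
    # Made-up placeholder request line, NOT the payload from this hunt.
    sample = "GET /index.php?q=aGVsbG8gd29ybGQ= HTTP/1.1"
    for encoded, decoded in decode_candidates(sample):
        print(f"{encoded} -> {decoded}")
```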
A specific sequence of URLs and user agents:
Seeing this gave me more contextual information, leading me to narrow down the filter even further:
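The exact URLs and user agents are visible only in the screenshots, so the values in the sketch below are neutral placeholders. The idea is simply that grouping the surviving transactions per client and ordering them in time makes a suspicious URL/user-agent sequence stand out.

```python
from collections import defaultdict


def sequences_by_client(hits):
    """Group HTTP transactions by source IP and order them in time, so a repeated
    URL / user-agent pattern from a single client becomes easy to spot."""
    per_client = defaultdict(list)
    for record in hits:
        http = record.get("http", {})
        per_client[record.get("src_ip")].append(
            (record.get("timestamp", ""),
             http.get("http_user_agent"),
             http.get("hostname"),
             http.get("url"))
        )
    for src_ip, events in sorted(per_client.items()):
        yield src_ip, sorted(events)


if __name__ == "__main__":
    # Two made-up records standing in for the handful of events the filters surfaced.
    sample_hits = [
        {"timestamp": "2022-01-01T08:02:11Z", "src_ip": "10.0.0.5",
         "http": {"http_user_agent": "curl/7.68.0", "hostname": "example.net",
                  "url": "/example/second.php"}},
        {"timestamp": "2022-01-01T08:01:55Z", "src_ip": "10.0.0.5",
         "http": {"http_user_agent": "curl/7.68.0", "hostname": "example.net",
                  "url": "/example/first.php"}},
    ]
    for src_ip, events in sequences_by_client(sample_hits):
        print(src_ip)
        for timestamp, user_agent, hostname, url in events:
            print(f"  {timestamp}  {user_agent}  {hostname}{url}")
```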
Routine IP and domain checks also confirmed that this communication was not expected, nor was it part of any vulnerability scan or testing.
This discovery made the need to raise an incident response (IR) obvious. However, I wanted to achieve three more goals:
I wanted to automate the process I had just completed so I would not need to repeat it in the future, ultimately freeing up my time to conduct other hunts.
A few more clicks in the Enriched Hunting interface allowed me to translate my hunting idea into an automated process. I also wanted to use webhooks and the REST API to automatically push IR escalations into the existing SOC environments, chat notification systems, and SOAR solutions. What I wanted was achieved with a few clicks in the Policy Action menu via the Create STR Event option (we now call this a “Declaration of Compromise”):
I filled in the details, specifying attackers and victims and noting any extra information that was available and relevant.
Now all logs, documentation links, contextual data, incident response tickets, and chat notifications were automated via existing integrations into the organization’s standard processes and tools (SIEMs, for example).
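That escalation path runs through the platform's existing webhook and REST API integrations, so nothing extra is required in practice. Purely as a sketch of what the receiving end of such a webhook might look like in a SOC's own tooling, here is a minimal HTTP listener that accepts a JSON event and hands it to a downstream action. None of the field names or the endpoint come from Stamus documentation; they are assumptions for illustration.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def escalate(event: dict) -> None:
    """Placeholder for the downstream action: open an IR ticket, index the event
    into the SIEM, or push a chat notification. Here we just print a summary."""
    print("Escalating:", event.get("threat_name", "unknown"),
          "| offender:", event.get("src_ip"),
          "| victim:", event.get("dest_ip"))


class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        try:
            event = json.loads(self.rfile.read(length) or b"{}")
        except json.JSONDecodeError:
            self.send_response(400)
            self.end_headers()
            return
        escalate(event)
        self.send_response(204)  # acknowledge with no body
        self.end_headers()


if __name__ == "__main__":
    # Listen locally for demonstration; in practice this would sit behind TLS and authentication.
    HTTPServer(("127.0.0.1", 8080), WebhookHandler).serve_forever()
```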
Going from a hypothetical hunting idea to the creation of an automated classification and triage process is valuable on its own, but the job is only half done. Usually, your colleagues need additional information to understand what you’ve done, why you’ve done it, and how they can repeat the process or learn from it. The ability to combine the security team’s existing knowledge with visibility into the environment, and to integrate that combination into existing processes and tools, will serve IT professionals well as they continue to defend their organizations from threats. As you can see in the Raiz0WorM example, this process is made simple with the Stamus Security Platform.
Hopefully this gives you a taste of how the Stamus Security Platform can help security teams know more, respond sooner, and mitigate the risk to their organizations.
To read more articles in this series, check out these "Uncovered with Stamus Security Platform" blogs:
And if you’d like to see Stamus Security Platform in action, please click on the link below to schedule a live demonstration.