As the Director of Information Security for Cylance, when I read a white paper on an advanced adversary like the one we released recently for Operation Dust Storm, I tend to fall back on my former life in Threat Intelligence. I feel a great deal is lost on those who simply read the summary, don't see their company name, run the indicators through their logs, and move on. There is so much more that could and should be considered if you commit the time to read these reports properly; there are both tactical and strategic gains to be had, making white papers that might not appear actionable far more so.
Fundamentally, when I read a white paper on a newly discovered threat, I have three overarching Priority Intelligence Requirements (PIRs) I want answers to:
A tactical answer based on whether I have seen any of the indicators shared, in the context of the incident described.
As I digest the TTPs of this report, I want to understand what my relationship is to the identified victims. The short question, "Am I impacted?" is reactionary, and can diminish the value to be gained from such a report. "What do I have in common with those listed?" and "How would I have fared against this attack?" are far more thought-provoking questions and will yield more interesting results.
Often, the best way to begin this kind of analysis is to start with the easy points in order to verify the intelligence requirement: "Am I / are we impacted?" For many, this is the due diligence tactical response to the content of the report: given a set of indicators, does a search of my antivirus logs and SIEM alerts discover the presence of the attacker described in the report? Given the readily disposable nature of most indicators, the simple fact of the attacker’s apparent absence should not be the end of the analysis; the industry has seen far too many instances of weaponized code with unique-to-target characteristics.
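As a minimal sketch of that due-diligence sweep, the logic amounts to checking every field of every exported log record against the report's indicator list. The field names and indicator values below are illustrative placeholders, not actual indicators from the report, and a real SIEM query would do this server-side:

```python
# Sketch of a due-diligence indicator sweep: given indicators extracted
# from a report appendix, check exported SIEM/AV log records for matches.
# Field names and indicator values are illustrative placeholders.

def sweep_logs(log_records, indicators):
    """Return the log records that reference any known indicator."""
    iocs = {i.lower() for i in indicators}
    hits = []
    for record in log_records:
        # Each record is a dict of field name -> value, as a SIEM export
        # or parsed antivirus log line might provide.
        for value in record.values():
            if str(value).lower() in iocs:
                hits.append(record)
                break
    return hits

indicators = ["198.51.100.23", "evil-c2.example.com"]  # placeholders, not real IOCs
logs = [
    {"src_ip": "10.0.0.5", "dest": "evil-c2.example.com", "action": "allowed"},
    {"src_ip": "10.0.0.9", "dest": "intranet.local", "action": "allowed"},
]
print(sweep_logs(logs, indicators))  # only the first record matches
```

An empty result from a sweep like this is exactly the "apparent absence" caveat above: it rules out the published indicators, not the adversary.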
In Cylance’s Operation Dust Storm report, we call out the fact that the adversary hard-coded proxy information, which means I may be looking for highly customized code and potentially unique C2 destinations. A “yes” to these questions may often put further and deeper analysis on the back burner, or limit it to the context of the incident response. Depending on when these indicators were seen in an environment, this could be either an archaeological dig or a firefighting exercise; either way, actionable value is lost if the follow-on analysis is abandoned because of the answers to this first assessment. So put out the fire, unearth the relics of the long-dead compromise, but don’t stop there.
A step up from simple verification of the indicators is analysis of the tactics, techniques, and procedures (TTPs) described in the report, viewed in the context of my existing controls. For instance, are there any interesting TTPs reported that I should capture and share with my security teams, on both the Product and Enterprise sides of the house? Have we seen the malware, or families related to it, used against our assets? This could be a challenge depending on the internal data you have at your disposal and how long you retain it. Enumerating the TTPs also lets me examine the existing controls in my environment to know how many of the attacker's methods I might have prevented or detected, and which I could have found or verified.
If you look at the layout of the information delivered in Operation Dust Storm, you can see in the analysis some well-laid-out groupings of characteristics: Volatile Evidence, File System Modifications, and Registry Modifications. These clue me in to which controls I need to interrogate, or (if they are absent from my environment) potentially invest in. For instance, do I have the ability to search my environment for local accounts matching the "Lost_" naming scheme, or more broadly, a method to detect, investigate, and report on all local accounts created on company assets?
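The account-naming check above is simple enough to sketch. Assuming your endpoint tooling can export an inventory of (host, account) pairs, flagging the "Lost_" scheme is a one-pattern filter; the inventory below is a stand-in, not real data:

```python
import re

# Sketch of a detection for the "Lost_" local-account naming scheme
# called out in the report. The inventory here is a stand-in for
# whatever your endpoint tooling exports (e.g. host, account pairs).

SUSPECT_PATTERN = re.compile(r"^Lost_", re.IGNORECASE)

def flag_suspect_accounts(inventory):
    """Return (host, account) pairs whose names match the scheme."""
    return [(host, acct) for host, acct in inventory
            if SUSPECT_PATTERN.match(acct)]

inventory = [
    ("WKSTN-014", "Lost_svc01"),
    ("WKSTN-014", "jsmith"),
    ("SRV-DB-02", "lost_backup"),
]
print(flag_suspect_accounts(inventory))
```

The broader control the paragraph asks about, reporting on every local account created on company assets, is the generalization of this filter: drop the pattern and alert on any account not in an approved baseline.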
If my vulnerability management program is mature enough, I may be able to take the timeline of the report and the associated CVEs and compare my environment's state of vulnerability exposure against the availability of weaponized code across the campaign timeline.
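One way to frame that comparison is, for each CVE used in the campaign, the number of days between the first observed weaponized use and your patch date. The CVE identifiers and dates below are illustrative placeholders, not taken from the report:

```python
from datetime import date

# Sketch of comparing patch state against a campaign timeline: for each
# CVE used in the campaign, measure the window between first observed
# weaponization and when we patched. All identifiers/dates below are
# illustrative placeholders.

campaign_cves = {
    "CVE-0000-0001": date(2014, 3, 1),   # first weaponized use observed
    "CVE-0000-0002": date(2015, 6, 15),
}
patched_on = {
    "CVE-0000-0001": date(2014, 5, 10),  # from vuln-management records
    "CVE-0000-0002": date(2015, 6, 1),
}

def exposure_windows(weaponized, patched):
    """Days each CVE was exploitable in-environment (negative = patched first)."""
    return {cve: (patched[cve] - first_seen).days
            for cve, first_seen in weaponized.items() if cve in patched}

print(exposure_windows(campaign_cves, patched_on))  # {..-0001: 70, ..-0002: -14}
```

A positive window is the period during which this adversary could have exploited you; a negative one means your patch cycle beat the campaign to that vulnerability.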
At a former company, for instance, we would debrief our security architects, risk managers, incident responders, and security operations personnel on the TTPs so they could (hopefully) make better-informed decisions. Not everyone on the security teams had the time to read through every report; having a few senior security Subject Matter Experts (SMEs) review the content and report to others proved to be an efficient method of distributing this knowledge to the people who could best use it, namely, those who had to make risk decisions for the company.
One good example from the Operation Dust Storm report (and the good prior work of Ned Moran) is the reference to the speed with which the threat actors responded: forty-nine (49) minutes was the window of time from initial exploitation to intrusion-operator sessions on the compromised hosts. That is a benchmark against which to critically and honestly examine my tools and processes, asking myself whether my capability to respond to a detection falls within that window, and if not, how far behind my organization would fall. This kind of data point also fits well into any kill-chain-centric analysis of my environment.
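Benchmarking against that window is simple arithmetic on incident timestamps. The 49-minute figure comes from the report; the detection and containment times below are invented for illustration:

```python
from datetime import datetime, timedelta

# Sketch of benchmarking response speed against the 49-minute window
# (initial exploitation to operator session) cited in the report.
# The incident timestamps below are illustrative.

ADVERSARY_WINDOW = timedelta(minutes=49)

def response_gap(detected_at, contained_at):
    """How far beyond (positive) or inside (negative) the 49-minute
    window our detection-to-containment time fell."""
    return (contained_at - detected_at) - ADVERSARY_WINDOW

detected = datetime(2016, 2, 24, 9, 0)
contained = datetime(2016, 2, 24, 10, 15)   # 75 minutes later
gap = response_gap(detected, contained)
print(gap)  # positive: 26 minutes behind the adversary
```

Run across historical incidents, the distribution of these gaps gives an honest answer to "how would I have fared?" rather than a one-off anecdote.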
The last question I’d like to address here may have the most long-term benefit. Looking at precisely how I resemble the victims may provide insight into how my resources should be allocated to address the TTPs that were identified. Very few organizations operate without resource constraints; it thus falls on us as security practitioners to influence the course of spending and resource allocation in order to mitigate risk.
At a former company, I quickly adopted a dartboard visual to explain the concept of threat zones. Given time to consider it, I now find myself instead gravitating toward my Jersey Shore roots and the game of Skee-Ball to explain this concept. I prefer this visual because it captures the fact that sometimes by aiming for one category, I put myself right on top of another. Or failing that, I may default into the next lower risk grouping that contains the others.
How I rate the severity of a campaign or actor is based on its assessed proximity on my Skee-Ball heat map; my ordering for both campaigns and threat actors would look something like this:
This kind of approach allows me to look at the TTPs to ask myself what kind of follow-on actions should be taken:
If the assessment leads to a 40 or 50 Skee-Ball score, it may be time to conduct an internal war game (or hire a service to conduct one with me) to critically examine my people, processes, and technologies against the methods reported in the white paper from a protect, detect, and respond perspective.
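Formalized, this is just a mapping from proximity score to follow-on action. Only the "40 or 50 means war game" tier comes from the text above; the lower-score tiers below are illustrative assumptions an organization would tune for itself:

```python
# Sketch of mapping a Skee-Ball-style proximity score to a follow-on
# action. Only the top tier (40+ -> war game) comes from the post;
# the lower tiers are illustrative assumptions.

def follow_on_action(score):
    if score >= 40:
        return "internal war game / red-team exercise"
    if score >= 20:
        return "TTP debrief with security SMEs"  # assumed tier
    return "log indicators and monitor"          # assumed tier

for s in (10, 30, 50):
    print(s, "->", follow_on_action(s))
```

The value of writing the tiers down is consistency: two analysts scoring the same campaign should arrive at the same commitment of resources.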
These are tough and far-reaching questions, and the answers you seek may vary from organization to organization based upon corporate mission, company culture, and the maturity of controls and security teams. My goal with this blog post is to leave you with more to think about in the future when a white paper like Operation Dust Storm is released.
How actionable this research is will ultimately depend upon how you (management and security Subject Matter Experts) approach the content, and what intelligence requirements you seek to answer with the information these reports provide.
Questions? Comments? Both are welcome.
Director of Information Security at Cylance
Photo Credits: DSC_2667 – Skeeball: scott*eric / Scott S on Flickr.com Creative Commons