It's that time of year again - when everyone comes out of the woodwork to make cyber security predictions. Below is my op-ed piece doing the same. While predictions can be moderately interesting to read (sometimes), the accurate predictions are usually the least interesting. Why? For starters, interesting evolutions taken by attackers and malware take several years to unfold, usually without making a massive splash in a single year. In other words, the most accurate prediction anyone can make is that 2014 will be similar to 2013, with some minor evolutions (when examined on the scale of a single year). In general, things are getting worse (not better), and it's because of the InfoSec industry. Yes, I said because of the InfoSec industry (more below and throughout). Hey - If you're going to write an op-ed piece, you might as well take advantage of the fact you're writing an opinion!
Whenever we pontificate about the future of attackers and mitigating technologies, it's important to understand which trends we're pontificating about in the first place. At a high level, we can split them into three categories:
Avenues of Exploitation
How are attackers gaining control of endpoints?
Malware Characteristics
What are attackers using to control endpoints?
Post-Compromise Activity and Lifecycle
What are attackers and malware doing after a successful compromise?
The good news is these trends are surprisingly predictable at a macro scale. The bad news is "macro scale" means longer than one year. However, because this article is about predictions for 2014, let's play along and examine the trends driving the predictions.
In the 1990s, exploitation was primarily focused on server-side services, especially Unix daemons. As firewalls were adopted en masse by the turn of the millennium, the exposed services previously available to attackers became limited, forcing a shift of exploitation to the major internet protocols. The key point to keep in mind is how InfoSec technologies force attackers to change over time - without ever forcing attackers to stop attacking.
Early in the new millennium, firewall technologies moving up the stack combined with the industry's intense focus on the data center to force another exploitation shift - this time to the clients themselves. As the rate of client-side exploitation increased, it drove improved automation of system patching and client-side security practices/technologies.
By the mid-2000s, the improving security of the client OS began forcing another shift - this time toward 3rd-party software components, browser plug-ins, and/or the operating systems very few people know how to secure or forensically examine for compromise in the first place (like Mac). By 2007-2008, that trend was in full swing, leading us to today.
Returning to the 1990s once again, the vast majority of attackers back then fell into the category we'd label today as "hacktivists." As the financial advantages of hacking became better understood by the early to mid-2000s, we witnessed a massive surge in organized crime's involvement in exploitation and malware trends. Of course, with financial advantage comes economic advantage, and coinciding with organized crime's upswing, state-sponsored crime (known in the media as APT) began its surge, finally leading to the state of affairs today.
Endpoint profile + "state of the art" in InfoSec = Target Availability
Who are compromised systems valuable to + what do they want = Target Market
In turn, supply and demand converge to shape trends in avenues of exploitation, malware characteristics, and post-compromise lifecycle activity.
While it's true that 2014 will look similar to 2013, with some logical micro-advances, it's also safe to say that we're on the cusp (or in the midst) of another massive shift, driven by changes in the technology landscape.
Three noteworthy trends here are forcing changes for attackers. These predictions are probably the most controversial here, but also the most in accordance with historical trends/lessons.
The first is the mass adoption of security appliances focused on application behavior analysis. These appliances extract executables from the network or other locations, run them in a sandbox, and make a judgment based on the behavior of the executables. While such technologies were more effective than other network technologies when they were first brought to market, I believe the marketplace has been sold a bill of goods about how effective they really are. That's why we can expect to see the following prediction:
Prediction: Minor Behavioral Modifications to Malware with Massive Consequences
For example, we have examined a large number of samples that install themselves but do nothing else until the system is rebooted. That simple step alone defeats most behavioral analysis appliances, since they can't reboot the analysis environment indefinitely and watch behavior across reboots. Behavioral analysis systems only work if the malware performs all its bad activity as soon as it's executed. We've seen a number of other silly/simple changes to malware that are difficult (if not impossible) for behavioral analysis appliances to compensate for, so attackers have a very low bar to clear to defeat those systems. It's arguable that behavioral analysis appliances will be less effective than traditional AV in the not-so-distant future, although I'm not pessimistic enough to make that a formal prediction. :-)
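To make the reboot trick concrete, here's a minimal simulation (no real malware, all names hypothetical) of a two-stage sample detonated by a single-run sandbox: stage one only installs persistence and goes quiet, so the one run the appliance observes shows nothing worth flagging.

```python
class Sample:
    """Hypothetical two-stage sample (pure simulation)."""
    def __init__(self):
        self.persisted = False

    def run(self, launched_at_boot):
        if not launched_at_boot:
            # Stage 1: install persistence, then do nothing observable.
            self.persisted = True
            return []
        if self.persisted:
            # Stage 2: the real payload only fires after a reboot, when
            # the OS relaunches the sample via its persistence mechanism.
            return ["beacon", "exfiltrate"]
        return []

def sandbox_verdict(sample):
    # A typical appliance detonates the sample once, never reboots the
    # guest, and judges only the behavior seen in that single run.
    behaviors = sample.run(launched_at_boot=False)
    return "malicious" if behaviors else "benign"

sample = Sample()
print(sandbox_verdict(sample))            # prints "benign" - stage 1 showed nothing
print(sample.run(launched_at_boot=True))  # the payload the appliance never saw
```

The appliance's verdict is rendered before the behavior it needed to see ever occurs, which is the whole point of the evasion.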
Speaking of sandboxing, a trend that disturbs me almost more than any other is the adoption of software and hardware micro-virtualization on the host (aka: application sandboxing on the host). This stopgap approach to security is not only limited in its hardware support but also forces the most worrisome and dramatic change to malware we're likely to see in the near future:
Prediction: Memory-Only Malware, Never Written to Disk
When disks become virtualized and/or non-persistent, that does not stop malware. It only changes the operating parameters for malware, and of course, malware will change to accommodate those parameters (as it has done for the past 20+ years). As malware shifts from disk-resident to never-resident (existing only in memory post-exploitation), the job of InfoSec responders gets immeasurably more difficult. Good luck finding evidence of compromise and formulating triage plans after the InfoSec industry has forced attackers to execute their entire purpose without relying on persistent capabilities (in fact, virtualization technologies themselves obliterate many of the artifacts needed to assess and triage an incident).
The final trend the InfoSec industry is adopting that disturbs me (probably more than the point above) is the mass adoption of "behavior-based detection" on the endpoint. This "new breed" of technologies (really no different from the many host-based detection/prevention technologies that have come and failed over the years, just repackaged with new buzzwords) takes a forensic detection approach to the malware problem by allowing applications to start running, then deciding to stop them after examining the behavior they perform. This in turn forces attackers to reverse the two-decade-old trend of descending deeper into the Operating System, and instead brings to fruition the following two predictions:
Prediction: Becoming Indistinguishable from Legitimate Technologies by using High-Level Platforms (.NET, etc.)
Most behavior-based detection relies on looking at the APIs used by running processes. Malware has always leaned heavily on APIs (or at least API patterns) that are common in malware but not as common in legitimate applications. When software is developed in .NET (and similar high-level platforms), examining its behavior becomes extremely imprecise, since all the application code is executed through mscorwks.dll. Even per-system just-in-time compilation can cause the same application to exhibit slightly different execution behaviors on different systems.
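A toy scorer illustrates the imprecision (the API lists and traces below are hypothetical examples, not a real product's logic): a native sample injecting into another process exposes the tell-tale calls directly, while a managed sample's observable trace is dominated by generic runtime activity.

```python
# Toy behavior-based scorer - illustration only, not a real detector.
# APIs historically associated with process injection (hypothetical list):
SUSPICIOUS_APIS = {"VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"}

def suspicion_score(api_trace):
    # Count how many malware-associated APIs appear directly
    # in the process's observed call trace.
    return len(SUSPICIOUS_APIS & set(api_trace))

# A native sample performing injection is easy to flag...
native_trace = ["OpenProcess", "VirtualAllocEx",
                "WriteProcessMemory", "CreateRemoteThread"]

# ...but a managed (.NET-style) sample routes execution through the
# runtime, so the trace looks like any other legitimate managed app.
managed_trace = ["LoadLibrary(mscorwks.dll)", "JIT_Compile", "GC_Collect"]

print(suspicion_score(native_trace))   # 3 -> flagged
print(suspicion_score(managed_trace))  # 0 -> indistinguishable from benign .NET apps
```

The managed sample can accomplish the same ends, but the signal the scorer depends on simply isn't visible at the layer it inspects.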
And along those lines, of course we should expect to see a continued uptick in attackers:
Prediction: Using Standard Systems Administration Tools and Technologies (not directly malware related)
This refers to attackers using the standard tools available from Microsoft for system administration, and even the administration scripting platforms installed on the OS by default. This is probably the most dangerous prediction the InfoSec industry is forcing to become a reality, as I'll describe toward the end of this article.
Make no mistake - Microsoft Windows still comprises greater than 85% of the computing market, and is showing no significant signs of losing its dominance (regardless of what the fanboys say - the numbers show Windows' dominance). Mobile devices pose threats in their own right (as we'll discuss below), but it's not because they're displacing the major computing platform - Microsoft Windows.
So it goes - Windows malware is the major threat, and will be the major threat for the foreseeable future. However, there is a more interesting change afoot here.
In the past six months alone, the market share of Windows XP has dropped by a whopping 25%.
As Microsoft [FINALLY] ends support for XP in a couple of months, organizations are [FINALLY] forcing a refresh from that 13-year-old Operating System. The numbers clearly show that Windows 7 and Windows 8 are replacing XP. Besides the improved anti-exploitation and compromise mitigation technologies built into the Win7/8 OS itself, this shift is forcing another, more relevant change.
It's forcing clients to update 3rd-party software components like Flash, Acrobat, Java, etc. to their now-standard auto-updating versions (which was not the case on many XP systems with older packages installed). This has caused a dramatic reduction in the most common exploitation surface areas available to attackers (for years, they've had a field day with Acrobat and Java). While it remains to be seen whether this is a permanent or temporary change, it does force the predictable evolution in threats described next.
Prediction: Increased Dramatic Effects of Zero-Days in Mass-Attacks
As automated patching becomes the standard, the effective window of exploitable opportunity a new vulnerability provides shrinks drastically. For attackers to take advantage of zero-days, they are forced into blitzkrieg-style attacks, since those vulnerabilities now have a much shorter shelf life. To date, there has been a considerable lag between a zero-day's discovery, its addition to platforms like exploit kits, deployment of the updated kits, and so on. That lag will no longer be acceptable, so attackers will have to focus blitzkrieg-like campaigns on newly discovered vulnerabilities. Expect lots of media attention on these events (since they'll appear more dramatic), even though they're becoming fewer and farther between.
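The shelf-life argument can be sketched with a toy model (the numbers are illustrative assumptions, not measurements): if patch times across the target population are roughly exponentially distributed, shortening the mean patch lag collapses the fraction of targets still exploitable a couple of weeks after disclosure.

```python
import math

def still_vulnerable(days_since_disclosure, mean_patch_lag_days):
    # Toy model: fraction of the population that has not yet patched,
    # assuming exponentially distributed patch times.
    return math.exp(-days_since_disclosure / mean_patch_lag_days)

# Two weeks after disclosure (illustrative lag values only):
legacy = still_vulnerable(14, mean_patch_lag_days=90)  # manual-patching era
modern = still_vulnerable(14, mean_patch_lag_days=7)   # auto-update era
print(round(legacy, 2), round(modern, 2))  # ~0.86 vs ~0.14
```

Under these assumptions, the slow-patching population leaves most targets open after two weeks, while the auto-updating population leaves almost none - exactly the pressure that pushes attackers toward fast, concentrated campaigns.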
Prediction: Makeover on Phishing Attacks (also leveraging social networks), with a decline in deployment of "traditional" exploit kits
When a programmatically exploitable surface isn't available to attackers (because of updated OS controls and automated 3rd-party patching), they need to shift from the "spray and pray" approach of traditional exploit kits to tricking users into installing the malware for them. As such, we should expect not only more impressively polished phishing campaigns on a more massive scale, but also a renewed focus on spreading malware via social networks. In the past year, Facebook reached a whopping 7% of all referrals on the Internet! Numbers like that are impossible for us to ignore - and impossible for attackers to ignore as well.
Prediction: Trojaned Packages
When forced into relying on users to install malware on the attacker's behalf, the easiest way to achieve that at massive scale is to bundle the malware with other software packages. We've seen a number of cases like this leveraged by both state-sponsored attackers and organized crime. It only makes sense for such cases (outliers so far) to become more common, since the approach is so effective and the emerging "software marketplace" models make it so much more powerful.
What discussion about malware predictions would be complete without a discussion of mobile devices? If you've been to BlackHat in the past few years, you know everyone is sounding the alarm about mobile devices (and has been for many years now). Mobile devices certainly carry their own set of risks, but what do the numbers show?
Mobile devices account for only 13% of all client traffic on the Internet, up only 1-2% in the past year. In other words, mobile devices are not displacing traditional Windows systems as primary computing platforms.
Within the mobile space, one trend is noteworthy. While Apple's iOS accounts for approximately 50% of all mobile traffic today, the near lack of innovation in usability by Apple since the release of the iPhone over seven years ago (look at the iPhone then versus now) has presumably led to the trend shown in 2013 - nearly 80% of all mobile devices sold last year were Android-based. That leads to the anticlimactic prediction that:
Prediction: Android Malware Will Continue to Boom
As "supply" becomes scarcer (because of more widely adopted automated patching and internal OS security mechanisms), demand changes in interesting ways. First, those with the deepest pockets set the evolution and MO of attackers. A blending of those market segments (organized crime and state-sponsored crime) can be expected (and has already started anyway, so I'm not sure that's much of a prediction).
As the attack surface changes, attackers are also forced to look elsewhere to achieve their goals. Changing the attack surface does not stop attackers from attacking - it only forces them to change their focus. Along those lines, there is no more obvious place to look than the cloud. Over the coming year(s), we can be certain to see more dramatic activity unfolding within the cloud. Again, this hardly counts as a prediction, though.
At the start of this article, I made a blatant statement about the worst of these changes being the fault of the InfoSec industry.
Our industry's challenge is to identify and stop malware before it can execute on systems. The industry has done an abysmal job of that, achieving approximately 15-40% accuracy across almost all categories of technologies - including 20-year-old technologies like AV. Even the most lauded network security technologies have done only marginally better. In other industries, 40-50% effectiveness would legally be considered negligence.
However, rather than new security start-ups focusing on novel ways to detect malware before it executes, and instead of organizations holding vendors accountable for their detection failures (vendors are punished for false positives, but false negatives are considered status quo - so our industry has remained at that status quo), we (as a whole industry) are taking technology in a different direction. Rather than improving detection of what we're looking for (the malware itself), we're focusing detection on the behavior of malware.
Focusing in that way has terrible consequences for the entire industry, in that it forces attackers to obfuscate their activities - something they've never actually had to do in the past with any real fervor. With such terrible detection rates, the only reliable method InfoSec responders have in their arsenal to detect compromises is the attacker's activity, which has remained largely unchanged for the past decade or two. These "new technologies" (the ones focused on attacker behaviors) will force attackers down paths of non-obvious patterns of activity. As consumers, we get this forced change in attacker behavior in exchange for technology that is arguably just as ineffective as the technology it's trying to compensate for. At the very least, it can be evaded just as easily. If this reality doesn't sit well with you, please remember:
Security technology advances focused on attacker/malware behavior have never stopped attackers/malware - they have only forced them to change their behaviors.
Rather, if our goal as an industry is to identify malware and stop it before it runs, then we need to identify it before it executes on the endpoint - ANY endpoint. Make no mistake: "sacrificial lamb" is a cunning term used to avoid acknowledging massive flaws in a detection methodology; it only takes one host (and typically the first one, anyway) for your most important data to be exfiltrated. I wish I could predict that the InfoSec industry as a whole will head in the direction of better detection of malware itself. Unfortunately, that is not a prediction I can make for 2014, or beyond...
Nonetheless, and as always - we're lucky to work in the field we do. Have fun doing it!