
True AI/ML vs. Glorified Signature-Based Solutions

NEWS / 02.08.17 / Ryan Permeh

Once again, the RSA conference is fast approaching, and that means it’s time for the latest round of security buzzword bingo. What’s in the queue this year? To be sure, artificial intelligence (AI) and machine learning (ML) will be everywhere at the show.

You can be certain that the halls at Moscone will be packed full of vendors pitching new security offerings that claim to use AI/ML, and there will be a glut of competitive messaging at all the other security shows, as some vendors seek to further confuse the marketplace.

But the fact is, what they want to sell you are tools only loosely based on the tenets of AI/ML – repackaged offerings that rely on glorified signature-based security strategies.

Revamped Expert Systems

The technology the majority of these vendors are developing is basically the same stuff that emerged in the 1970s, and that companies were clamoring about back in the 1980s – versions of ‘expert systems’ that have not proven very useful in most cases, and that helped bring on the long AI/ML winter from which we have only recently emerged.

Modern versions of the ‘expert systems’ approach may have been revitalized to an extent with some aspects of AI/ML. But the problem is that (by definition) these systems require a human to program the rules they operate by, and as such, a human – not a machine – is behind the decisioning.

One of the driving factors behind the development of pure AI/ML techniques is the fact that even expert humans lack the capabilities required to achieve the best results consistently and at scale. Humans tend to display strong bias when making judgment calls, and that bias leads to suboptimal decision making, increasing the chance that important data gets missed.

Basically, an expert system is a set of weighted rules, not unlike a set of signatures. While such a system can certainly operate faster and more consistently than manual human decisioning, it tends to be extremely rigid, whereas true AI/ML should demonstrate a high level of agility.

More specifically, true AI/ML arrives at optimal outcomes based on a dynamic yet objective set of facts at hand, rather than pre-defining the 'expert outcomes' based on some a priori knowledge outlined in a very static set of rules.
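
To make that contrast concrete, here is a minimal sketch of an expert system reduced to its essence (in Python; the patterns and weights are invented for illustration, not drawn from any real product). The weighted, human-authored rules behave exactly like signatures: anything the rule author never anticipated scores zero.

    # Hypothetical 'expert system' detector: every decision traces back to a
    # rule a human wrote ahead of time. Patterns and weights are invented.
    SUSPICIOUS_PATTERNS = {
        "powershell -enc": 0.6,
        "mimikatz": 0.9,
        "cmd.exe /c": 0.3,
    }

    def expert_system_score(command_line: str) -> float:
        """Sum the weights of any human-authored patterns that match."""
        score = 0.0
        for pattern, weight in SUSPICIOUS_PATTERNS.items():
            if pattern in command_line.lower():
                score += weight
        return min(score, 1.0)

    print(expert_system_score("powershell -enc SQBFAFgA"))    # 0.6 -> flagged
    print(expert_system_score("rundll32 payload.dll,Start"))  # 0.0 -> missed

However fast this runs, it can never flag the second command: no human wrote a rule for it.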

It's similar to those vendors who use a bunch of humans behind the scenes to do analytics on their customers' SIEM data, automating some aspects of the decisioning process, and then call it AI/ML (you know who you are).

While there certainly may be some AI/ML elements in there somewhere to some degree, the algorithms themselves are far from being the real deciders – they are simply following the rules they were pre-programmed by a human to follow. And if something was missed in the programming, it will be missed in the analysis every single time.

Expert Systems vs. AI/ML

For example, apply this line of thinking to the question of whether a self-driving car is using true AI/ML:

If a human expert driver wrote the rules that govern the car’s decisions, there is always going to be something they missed that will result in a poor ‘decision’ by the system. So, as a fail-safe, there still must be a human at the wheel to manually make any decision that was not anticipated in the pre-programmed rule set.

In this case, you are no better off (and likely far worse off) than you were without the self-driving system, because the human expert has now become far too disconnected from the original data being used to make the decisions.

And in this instance, if the car is not always able to act autonomously based on the system’s analysis of the available data, it's not really a self-driving car, is it?

This lack of autonomy makes this a weak decisioning system. It's really a system that only alerts you because something was missed in the expert programming. You will end up with alert fatigue and will likely start ignoring alerts, including the potentially important ones – this is what killed IDS solutions.

Even if the system is augmented with a lookup feature, if it takes a few moments to determine whether a crash is probable because it has to bounce telemetry up to a cloud to get the information required to make a decision, you've likely already crashed and the system has failed.

Alternatively, if the system requires you to manually hit the brakes in this situation while you wait for it to render a decision, it offers nothing more than remote diagnostics, and it isn't really a self-driving car at all.

A true AI/ML system would not be programmed to make decisions based on a series of simple if/then rules. It would instead be programmed to learn how to determine the best possible decision, based on knowledge gained from studying millions of situations with both good and bad outcomes – and it would be able to make the decision and then act upon that decision autonomously, with no human intervention required.
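
As a rough sketch of that difference (in Python with scikit-learn; the features, training data, and action are invented for illustration, not any vendor's actual model), the decision logic below is learned from labeled outcomes rather than written by hand, and the model then acts autonomously on a sample it has never seen:

    # Hypothetical learned detector: the weights come from labeled examples,
    # not from a human-authored rule set. Data is invented for illustration;
    # a real system would train on millions of labeled situations.
    from sklearn.linear_model import LogisticRegression

    # Each row: [entropy, suspicious_api_count, is_signed]; label 1 = bad outcome.
    X_train = [
        [7.8, 12, 0],
        [7.5,  9, 0],
        [6.9,  7, 0],
        [3.1,  1, 1],
        [2.8,  0, 1],
        [3.4,  2, 1],
    ]
    y_train = [1, 1, 1, 0, 0, 0]

    model = LogisticRegression().fit(X_train, y_train)

    # A sample no human wrote a rule for; the model generalizes and acts on it.
    sample = [[7.2, 10, 0]]
    action = "block" if model.predict(sample)[0] == 1 else "allow"
    print(action)  # machine-initiated action, no human in the loop

The point is that nobody programmed the rule that fires here; the decision boundary was induced from the labeled outcomes.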

Spotting a System That Is Not AI/ML

As a rule, an AI/ML solution has to be both effective and generalizable – that is, it should be able to make decisions based on data or situations it has never seen before. You can be sure it is not true AI/ML if:

  • A human wrote the rules that make all the decisions – this is just another form of signature (perhaps a fancy signature, but still just a signature), and signatures leave gaps or can be easily circumvented.
  • A human makes the ultimate decision – as with the unnamed vendors mentioned above (in these cases, the best they offer is a decision support system).
  • Machine-initiated actions are not taken automatically as a result of the automated analysis – merely generating a signal is not enough. If the efficacy of the system isn't high enough to take action without human intervention, the system has failed to achieve true AI/ML and true autonomy.
  • It requires a high-latency decision – these systems must be able to make decisions inline and in real time (if you notice it takes too long, that's a clue it’s probably not actually AI/ML doing the work; a small timing sketch follows this list).
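
On the latency point, a minimal timing sketch (in Python; the model weights and the 150 ms round-trip figure are assumptions for illustration) shows why a cloud-lookup architecture struggles to make decisions inline and in real time:

    import time

    WEIGHTS = [0.9, 0.4, -1.2]  # stand-in for a locally deployed model (invented)

    def decide_inline(features):
        """Score in-process: no network hop, microseconds of latency."""
        return sum(w * f for w, f in zip(WEIGHTS, features)) > 0.0

    def decide_via_cloud(features):
        """Same logic, but gated on a simulated 150 ms cloud round trip."""
        time.sleep(0.150)  # assumed network latency, for illustration only
        return decide_inline(features)

    features = [7.2, 10.0, 0.0]
    for decide in (decide_inline, decide_via_cloud):
        start = time.perf_counter()
        decide(features)
        print(f"{decide.__name__}: {(time.perf_counter() - start) * 1000:.1f} ms")

If the decision path looks like the second function, the 'prevention' has already lost the race.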

Hopefully, the analyst firms will come up with a model that accurately reflects a security solution’s AI/ML maturity if it claims to be applying the technology, but don’t hold your breath. In the meantime, apply the logic above to evaluate any security solutions that claim to be leveraging true AI/ML technology.

The lesson to be learned here is this: don’t get sucked in by the buzzword bingo spewed by those who only seek to confuse the marketplace – there are true AI/ML solutions emerging, and we’d be happy to show you how ours stacks up to the conditions set forth in this blog. Ping us anytime!

About Ryan Permeh

Senior Vice President and Chief Security Architect

Ryan works within the office of the CTO to define technology strategy and architecture that will help integrate technology across BlackBerry and focus it toward reducing customer risk. Ryan has been in the security industry for over 20 years and has a long history in both offensive and defensive security. He came to BlackBerry as part of the Cylance acquisition; he was co-founder and Chief Scientist of Cylance and led the architecture behind Cylance’s mathematical engine and groundbreaking approach to security. Prior to co-founding Cylance, he served as Chief Scientist at McAfee, focused on technology strategy, and as a Distinguished Engineer at eEye Digital Security, focused on building security assessment tools.

He has published numerous articles, papers, and books, and is a frequent speaker at conferences around the world on the topics of security, privacy, machine learning, and entrepreneurship. His research has led to numerous innovations in both offensive and defensive security technology, and he holds over 20 patents in the security and data science fields. He is known as the discoverer and primary analyst of the “Code Red” computer worm and contributed to many other analyses of significant threats over his career.