Once again, the RSA conference is fast approaching, and that means it’s time for the latest round of security buzzword bingo. What’s in the queue this year? To be sure, artificial intelligence (AI) and machine learning (ML) will be everywhere at the show.
You can be certain that the halls at Moscone will be packed full of vendors pitching new security offerings that claim to use AI/ML, and there will be a glut of competitive messaging at all the other security shows, as some vendors seek to further confuse the marketplace.
But the fact is, what they want to sell you are tools loosely based on the tenets of AI/ML – in reality, nothing more than repackaged offerings that rely on glorified signature-based security strategies.
The technology the majority of these vendors are developing is essentially the same approach that emerged in the 1970s, and that companies were clamoring about back in the 1980s – versions of ‘expert systems’ that proved not very useful in most cases, and that led to the long AI/ML winter from which we have only recently emerged.
Modern versions of the ‘expert systems’ approach may have been revitalized to an extent with some aspects of AI/ML. But the problem is that (by definition) these systems require a human to program the rules they operate by, and as such, a human – not a machine – is behind the decisioning.
One of the driving factors behind the development of pure AI/ML techniques is the fact that even expert humans lack the capabilities required to achieve the best results consistently and at scale. Humans tend to display strong bias when making judgment calls, and that bias leads to suboptimal decision making, increasing the chance that important data gets missed.
Basically, an expert system is a set of weighted rules not unlike a set of signatures. While it certainly can operate faster and more consistently than manual human decisioning, these systems tend to be extremely rigid, whereas true AI/ML should demonstrate a high level of agility.
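To make the comparison concrete, here is a minimal sketch of what "a set of weighted rules not unlike a set of signatures" looks like in code. The rules, weights, and threshold are invented for illustration and do not come from any real product:

```python
# Hypothetical sketch of a signature-style "expert system" detector.
# Every rule and weight below was authored by a human in advance.

RULES = [
    # (predicate over an event dict, weight)
    (lambda e: e.get("failed_logins", 0) > 5, 0.6),
    (lambda e: e.get("port") in {4444, 31337}, 0.8),
    (lambda e: "powershell -enc" in e.get("cmdline", ""), 0.9),
]

ALERT_THRESHOLD = 0.7

def score(event):
    """Sum the weights of every rule the event matches."""
    return sum(weight for predicate, weight in RULES if predicate(event))

def is_malicious(event):
    return score(event) >= ALERT_THRESHOLD

# A pattern the rule author anticipated fires...
print(is_malicious({"port": 4444}))                     # True
# ...but an attack no rule anticipated sails through, every time:
print(is_malicious({"cmdline": "certutil -urlcache"}))  # False
```

The rigidity is structural: the system can only ever flag what a human thought to encode, which is exactly the point made above.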
More specifically, true AI/ML arrives at optimal outcomes which are based on a dynamic yet objective set of facts at hand, rather than pre-defining the 'expert outcomes' based on some a priori knowledge as outlined in a very static set of rules.
It's similar to those vendors who use a bunch of humans behind the scenes to do analytics on their customers' SIEM data, automating some aspects of the decisioning process, and then call it AI/ML (you know who you are).
While there certainly may be some AI/ML elements in there somewhere to some degree, the algorithms themselves are far from being the real deciders – they are simply following the rules they were pre-programmed by a human to follow. And if something was missed in the programming, it will be missed in the analysis every single time.
For example, consider whether a self-driving car is applying AI/ML along this line of thinking:
If a human expert driver wrote the rules that govern the car’s decisions, there is always going to be something they missed which will result in a poor ‘decision’ by the system. So, as a fail-safe, there still must be a human at the wheel to manually make any decision that was not anticipated in the pre-programmed rule-set.
In this case, you are no better off (and likely far worse off) than you were without the self-driving system, because the human expert has now become far too disconnected from the original data being used to make the decisions.
And in this instance, if the car cannot always act autonomously based on the system’s analysis of the available data, it's not really a self-driving car, is it?
This lack of autonomy makes this a weak decisioning system. In practice, it is only alerting you because something was missed in the expert programming. You will end up with alert fatigue and will likely start ignoring alerts, including the potentially important ones – this is what killed IDS solutions.
Even if the system is augmented with a lookup feature, if the system takes a few moments to determine if there is a probable crash situation because it had to bounce telemetry up to a cloud to get the information required to make a decision, you've likely already crashed and the system has failed.
Alternately, if the system requires you to manually hit the brakes in this situation while you wait for it to render a decision, it offers nothing more than remote diagnostics and it isn't really a self-driving car at all.
A true AI/ML system would not be programmed to make decisions based on a series of simple if/then rules. It would instead be programmed to learn how to determine the best possible decision, based on knowledge gained from studying millions of situations with both good and bad outcomes – and it would be able to make the decision and then act upon that decision autonomously, with no human intervention required.
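As a toy illustration of that contrast, the sketch below (pure Python, no ML libraries) derives its decision boundary from labeled examples rather than hand-written if/then rules, then classifies inputs it has never seen. The feature names and data points are invented for illustration:

```python
# Hypothetical sketch: a nearest-centroid classifier learned from examples.

def centroid(points):
    """Average a list of equal-length feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(examples):
    """examples: list of (feature_vector, label). Learn one centroid per label."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Assign the label of the nearest learned centroid (squared distance)."""
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(features, model[label]))
    return min(model, key=dist2)

# (requests_per_min, megabytes_out) – illustrative telemetry, good and bad outcomes
training_data = [
    ((2, 1), "benign"), ((3, 2), "benign"), ((1, 1), "benign"),
    ((90, 40), "malicious"), ((120, 55), "malicious"), ((80, 60), "malicious"),
]

model = train(training_data)
# Inputs the system was never shown still get a sensible decision:
print(predict(model, (100, 50)))  # malicious
print(predict(model, (4, 2)))     # benign
```

No human wrote a rule for the point (100, 50); the decision falls out of what the model learned from the labeled outcomes, which is the property the paragraph above describes.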
As a rule, an AI/ML solution has to be both effective and generalizable – that is, it should be able to make decisions based on data or situations it has never seen before. You can be sure it is not true AI/ML if:

- a human expert wrote the rules that drive every decision;
- anything missed in the programming is missed in the analysis every single time;
- the system cannot make and act upon its decisions autonomously, in time, without human intervention.
Hopefully, the analyst firms will come up with a model that accurately reflects a security solution’s AI/ML maturity if it claims to be applying the technology, but don’t hold your breath. In the meantime, apply the logic above to evaluate any security solutions that claim to be leveraging true AI/ML technology.
The lesson to be learned here is this: don’t get sucked in by the buzzword bingo spewed by those who only seek to confuse the marketplace – there are true AI/ML solutions emerging, and we’d be happy to show you how ours stacks up to the conditions set forth in this blog. Ping us anytime!