What folks call “AI” these days has gone mainstream, and from the looks of it, it is here to stay – the genie is out of the bottle and the financials are making sense. As the monetary value increases, it is only natural that the security stakes get higher – there is real money on the table in a multitude of forms. Good guys AND bad guys want said money, and the arms race is on!
Comparing “AI” with previous major steps in computing (cloud computing in the 00s is a prime example) and the aftermath of their moving into the foreground, one does not have to be a doomsayer to anticipate the incoming flood of security snake-oil merchants. For those whose Schneier is rusty, snake-oil merchants peddle security solutions that promise silver bullets but, more often than not, leave you more vulnerable than before you used their security solution. In addition, once the sharks smell blood, a variety of “experts” appear out of thin air. The reason that “experts” is in quotes is that these folks usually have, at best, a baseline knowledge of the domain and, at worst, are plain old charlatans (if you need a definition of charlatan AND want a trip down memory lane, please do click me). I have seen this happen recently in the Cloud Native space, with folks claiming to be infrastructure experts without knowing a single programming language.
“So, the more things change, the more they stay the same,” I can hear you murmuring. However, the case of AI security is a peculiar one – the peculiarity lying in its, as of now, high barrier to entry. Let’s attempt to map known technical fields onto this domain. What the cool kids refer to as “Product Security” is the first step (Product Security being the union of Infrastructure Security and Software Security – in a breadth-first approach – and no, appsec is not equivalent to Software Security, mind you, and, for the love of Jove, let’s not get stuck on false dichotomies of the past). With this out of the way, you would then have to face all the problems associated with Data Security – more often than not including Privacy – and how about we sprinkle in some specific problems, based on the technologies you are using.
And all of the above is a simplified model – add legal/GRC considerations to the mix (as of now, no viable regulations exist – in essence, those entities who can do, do whatever they want, and Brussels plays ostrich, head firmly in the sand), season it with DLP, and simmer (I will leave the ethical considerations as something for the reader to ponder). As you are probably thinking, “well, that is a lot of stuff” – and it truly is. The defender’s problem of how to best defend is exacerbated, and you’d better get your Operations Research game up.
Where am I going with all this? We have established the inherent security complexity (and, by extension, the increased chances of the proverbial crap hitting the proverbial fan), and we have established the effect of a snake-oil merchant – or anyone engaged in intellectual dishonesty – and that, yes, they will appear. Combining the two, we reach our Doomsday Scenario:
If you are into AI security and you engage by purchasing goods and services from charlatans or snake-oil merchants, the most likely outcome is a catastrophic one, owing to the inherent complexity and the rapid pace of development (now that the money to sustain it is there). When I say catastrophic, I obviously do not mean hiccups or a failed project; lapses in AI security are company-killers.
I will continue expanding on my viewpoints in later articles, pointing out specific patterns AND anti-patterns.