Whenever a buzzword gets hot, you can expect lazy marketers to start pasting it on every product or service in sight. When two buzzwords get popular at the same time, expect double the hype.

So it is with the two hot high-profile trends of security and artificial intelligence (AI). Security is a big deal because, more than 50 years into the computer age, we still haven’t figured out how to protect our applications and data well enough, and the problem is only getting worse. AI is on everyone’s lips because of its potential to analyze massive amounts of data more quickly and accurately than a human could, to identify, and possibly predict, security events.

Smart Machines Fighting Bad People

The AI play in security is that algorithms can do a much better job than people at picking out potential threats from the thousands, tens of thousands or millions of clues in your network traffic and among your devices. At the very least, the pitch goes, AI can sift the most likely real signs of threats from everyday shifts in network traffic, or tell whether a repeated log-in attempt is a user who forgot their password or an automated bot trying to guess it.
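To make that sifting a little more concrete, here’s a deliberately simple sketch of the kind of signal such systems weigh. This is a toy heuristic of my own, not any vendor’s algorithm; the function name, window size and threshold are made up for illustration, and real products would combine far richer features and learned models.

```python
# Toy illustration (not any vendor's product): scoring repeated failed
# log-ins to separate a forgetful user from a password-guessing bot.
from datetime import datetime, timedelta

def classify_failed_logins(timestamps, window_minutes=5, bot_threshold=20):
    """Return 'likely bot' if the number of failures inside any
    sliding time window exceeds bot_threshold, else 'likely human'."""
    timestamps = sorted(timestamps)
    window = timedelta(minutes=window_minutes)
    start = 0
    for end, t in enumerate(timestamps):
        # Slide the window's left edge forward until the span fits.
        while t - timestamps[start] > window:
            start += 1
        if end - start + 1 > bot_threshold:
            return "likely bot"
    return "likely human"

if __name__ == "__main__":
    base = datetime(2024, 1, 1, 9, 0, 0)
    # Three failures over ten minutes: looks like a forgotten password.
    human = [base + timedelta(minutes=m) for m in (0, 4, 10)]
    # Fifty failures in under a minute: looks like automated guessing.
    bot = [base + timedelta(seconds=s) for s in range(50)]
    print(classify_failed_logins(human))  # likely human
    print(classify_failed_logins(bot))    # likely bot
```

The pitch, of course, is that AI does this across millions of events at once, learning what “normal” looks like rather than relying on hand-set thresholds like these.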

That all sounds reasonable, but I’m hearing (even from some of my security clients) that too many security vendors are pasting the “AI” label on their offerings without delivering the goods. As a marketer, if you’re knowingly complicit with this, you’re not only misleading the end consumer but also setting yourself and your client up for failure when customers realize the client can’t deliver.

I get that AI is a rapidly evolving field, that your client may still be growing its own AI expertise, and that security customers are justifiably reluctant to share their successes or, even worse, their failures. But we still owe it to our clients and their customers to push for real proof that their AI security solutions can do what they claim.

Some tough questions to answer before pumping out more AI/security fluff:

AI/Security Reality Checks

  • What algorithms is the vendor using to sift through the network or device data they are collecting? How have they been proven to be useful?
  • How accurate (numbers, please) is the AI-enabled assessment of real security events vs. false positives? How are those results improving over time? (See the sketch after this list for the kinds of numbers to ask for.)
  • How big is your client’s AI staff and how quickly is it expanding? If they are partnering with other/bigger AI experts, who are they and how strategic is the partnership?
  • What does the solution do to ensure it is being fed correct samples, to avoid the “garbage in, garbage out” syndrome?
  • How does it guard against hackers turning AI against the enterprise, such as by feeding bogus samples into the machine learning data pool, using AI to gather information about the target and identify vulnerabilities, or using AI chatbots to phish for information?
  • Does the vendor recognize the limits of AI and make it easy to bring people with their fuller understanding of context (and common sense) into the decision-making process?
  • And, as always, push for customer case studies, even if anonymous.
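When you push on that accuracy question, it helps to know which numbers to ask for. Here’s a minimal, hypothetical sketch (illustrative figures only, not any vendor’s or customer’s results) of how precision, recall and false positive rate fall out of a trial period’s alert counts:

```python
# Hypothetical back-of-the-envelope check for the "numbers, please" question:
# given confirmed alert counts from a trial period, what do precision,
# recall and false positive rate actually look like?

def alert_metrics(true_alerts, false_alerts, missed_threats, benign_ignored):
    """Compute basic detection metrics from confirmed trial counts."""
    precision = true_alerts / (true_alerts + false_alerts)
    recall = true_alerts / (true_alerts + missed_threats)
    false_positive_rate = false_alerts / (false_alerts + benign_ignored)
    return precision, recall, false_positive_rate

if __name__ == "__main__":
    # Illustrative numbers only, not from any real deployment.
    p, r, fpr = alert_metrics(true_alerts=90, false_alerts=400,
                              missed_threats=10, benign_ignored=99600)
    print(f"precision={p:.2%} recall={r:.2%} false positive rate={fpr:.4%}")
    # Even a small false positive rate can mean hundreds of wasted analyst
    # hours when the denominator is millions of benign events.
```

The point isn’t these particular numbers; it’s that a vendor who can’t fill in counts like these from a real trial probably can’t back up the accuracy claim either.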

We’ve all been to this rodeo many times before: vendors trying to jump on a new technology trend before they can really deliver. I’m curious for your thoughts on:

  • What percent of AI/security claims can your clients back up?
  • What other proof points, beyond those I’ve mentioned here, can they provide for their claims?
  • How willing are your clients to go the extra mile to tell a provable AI/security story?
Author: Bob Scheier
I'm a veteran IT trade press reporter and editor with a passion for clear writing that explains how technology can help businesses. To learn more about my content marketing services, email bob@scheierassociates.com or call me at 508 725-7258.