AI brings the good, the bad and the ugly when it comes to security

  • An exec from AppSec company Qwiet AI warned that AI models themselves could prove to be the technology's Achilles' heel when it comes to security

  • Machine learning has been used in cybersecurity for years now and broader AI adoption is expected

  • But bad actors may also cash in on AI capabilities

Artificial intelligence (AI) has been the talk of the town in tech this year, and plenty of that talk has come with raised eyebrows. Bruce Snell, cyber strategist at Qwiet AI, believes the worry is warranted. He argued that while AI is a well-suited tool for cybersecurity, the industry needs more informed guidance on what the technology really is and where overlooked vulnerabilities might be hiding.

A recent Gartner report shows data privacy and cloud security continue to be top of mind for organizations, with spending in both segments projected to hit record growth rates next year, and that includes issues related to AI.

While Snell affirms that focus, his experience at Qwiet AI, an AppSec company that uses AI for preemptive threat detection in code, has helped him identify one significantly overlooked area: the security of the AI engine itself.

“I don't think anybody's really paying enough attention to securing the AI engines and models themselves,” he said, adding that this means making sure the engine's own code doesn’t contain vulnerabilities.

“I mean, you could poison a large language model with SQL injection strings and have bad data being fed in.”
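
To make that poisoning risk concrete, here is a minimal sketch, assuming a hypothetical pipeline in which user-submitted text flows into a model's fine-tuning corpus. The patterns, names and filtering step are illustrative assumptions, not Qwiet AI's method; a production pipeline would need far more robust validation.

```python
import re

# Hypothetical screening step: filter user-submitted records before they
# enter a fine-tuning corpus. These are classic SQL-injection markers;
# real-world validation would need to go much deeper.
SUSPICIOUS = [
    re.compile(r"'\s*(or|and)\s+\d+\s*=\s*\d+", re.I),  # e.g. ' OR 1=1
    re.compile(r";\s*drop\s+table", re.I),
    re.compile(r"union\s+select", re.I),
]

def is_suspicious(record: str) -> bool:
    """Return True if the record contains a known injection marker."""
    return any(p.search(record) for p in SUSPICIOUS)

incoming = [
    "How do I reset my password?",
    "admin' OR 1=1 --",
    "Robert'); DROP TABLE students;--",
]

# Only the benign question survives the filter.
clean_corpus = [r for r in incoming if not is_suspicious(r)]
print(clean_corpus)  # ['How do I reset my password?']
```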

Many are quick to point to the data sets an AI model is fed as the source of hallucinations. “AI that's not trained on valid data, it's gonna very confidently tell you incorrect information.” But “there [aren’t] enough people out there looking at how to secure both the AI engine [and] the large language models that the AI is using,” he warned.

A force for good… and bad

While it feels like a fresh topic to many, AI has actually been used in the security sector for some time, just under the narrower label of machine learning (ML), which doesn’t carry quite the same preconceptions that AI does.

“You look at what EDR [endpoint detection and response] did to the endpoint space. A lot of that was influenced by AI, right?... [In] the early days, they were just saying, ‘Look, we've got this signatureless-based approach,’ and at the core, it was really just an ML engine running on the endpoint looking for types of security events,” Snell explained.
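
Snell doesn't describe the internals, but the core idea of an ML engine baselining normal endpoint behavior and flagging outliers can be sketched in a few lines. The features, numbers and model choice below are invented for illustration; real EDR telemetry and models are far richer.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented per-minute endpoint features:
# [processes spawned, outbound connections, files written]
baseline = np.array([
    [3, 1, 5], [4, 2, 6], [2, 1, 4], [5, 2, 7], [3, 1, 5],
])

# Learn what "normal" looks like for this endpoint.
detector = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

new_events = np.array([
    [4, 2, 6],      # typical workload
    [40, 25, 300],  # ransomware-like burst of activity
])
print(detector.predict(new_events))  # 1 = normal, -1 = flagged anomaly
```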

The popularization of AI is now pushing the technology into even more areas of security. Snell pointed to threat detection as one of the areas that stands to reap the biggest benefits.

“Cyber criminals are fairly lazy when you really get down to it, and they're not going to go and develop a completely new piece of malware when they can just take an existing one and modify it a little bit,” he explained.

An AI engine trained on the right libraries of code vulnerabilities and exploits can detect these new variations at vastly higher speed, and with greater confidence, than a human analyst. But as the tools for detection advance, so too do the weapons available to bad actors.
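
As an illustration of how an engine might flag a lightly modified sample (a generic similarity technique, not any vendor's actual method), one simple approach compares byte-level n-gram profiles against known malware. The samples below are invented stand-ins for real binaries.

```python
from collections import Counter
from math import sqrt

def ngram_profile(data: bytes, n: int = 3) -> Counter:
    """Count overlapping byte n-grams in a sample."""
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two n-gram frequency profiles."""
    dot = sum(count * b[gram] for gram, count in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented stand-ins for binaries: a known sample, a tweaked variant
# and an unrelated file.
known     = b"unpack_stub decrypt_payload beacon_to_c2 persist_registry"
variant   = b"unpack_stub decrypt_payload beacon_to_c2 persist_startup"
unrelated = b"quarterly sales figures and meeting notes for the team"

print(cosine(ngram_profile(known), ngram_profile(variant)))    # close to 1.0
print(cosine(ngram_profile(known), ngram_profile(unrelated)))  # near 0.0
```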

“It’s only a matter of time before we're seeing, you know, attacks that are basically being generated by an AI,” Snell warned.

He noted that these attacks will, of course, originate from an end user; ChatGPT isn’t thinking about hacking your Facebook. But as AI tools continue to advance, bad actors will be able to leverage them as well. Snell predicted that within three or four years, entirely AI-generated malware could become a formidable threat.

