All of the flashiest demos of generative AI feature chatbots that can be asked anything at all about the enterprise environment. Want to know all the vulnerabilities you have? How many devices are unprotected on the network? Whether you've been hit with Scattered Spider in the last six months? Well then, Security Wingman™, Caroline™, Scorpio™, or Orange™ have got you covered…or so they claim.
In a previous blog, we discussed why current security chatbots are novel but not useful in the long run – namely, because they don't fit into the analyst experience. We also covered why the autonomous security operations center (SOC) is a pipe dream…which is still true today, despite generative AI.
However, there's a deeper issue at play here that's as fundamental to security as time itself: Enterprise data consolidation and access is an absolute bear of a problem that remains unsolved. Put more simply, security tools can't ingest, store, and interpret all enterprise data. And more than that, security tools don't play nice together anyway.
Let's break this down: If we're to leverage generative AI to understand everything about the enterprise environment, it will need to get information in one of two ways:
- Continuously training on all the data in the enterprise environment.
Here's the problem: Getting all the enterprise data into one place is difficult and expensive, as we have seen with the security information and event management (SIEM) market. Further, continuous training on this data is costly and resource-intensive. These two factors make this approach nearly impossible if accuracy and timeliness are important, which in this instance they are.
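To make the cost pressure concrete, here is a back-of-envelope sketch. Every number below (ingest volume, per-gigabyte pricing, retraining cadence and cost) is an illustrative assumption, not a vendor benchmark – the point is only that ingest and retraining costs compound on each other.

```python
# Rough annual cost of "ingest everything, retrain continuously."
# All constants are illustrative assumptions, not real pricing.

DAILY_INGEST_GB = 1_000        # assumed daily log volume for a mid-size enterprise
COST_PER_GB = 1.00             # assumed SIEM ingest/storage cost per GB
RETRAIN_RUNS_PER_DAY = 4       # assumed refresh cadence needed to stay "timely"
COST_PER_TRAINING_RUN = 500.0  # assumed compute cost of one retraining run


def annual_cost() -> float:
    """Ingest cost plus continuous-retraining cost over one year."""
    ingest = DAILY_INGEST_GB * COST_PER_GB * 365
    training = RETRAIN_RUNS_PER_DAY * COST_PER_TRAINING_RUN * 365
    return ingest + training


print(f"~${annual_cost():,.0f}/year under these assumptions")
```

Even with deliberately modest assumptions, the two cost streams add up fast, and raising either the data volume or the retraining frequency scales the bill linearly.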
- Interpreting your request and using integrations with different security tools to query for the relevant information.
Here's the problem: Integrating security tools remains a nontrivial and unsolved problem that generative AI doesn't yet fix. Until we can integrate security tools more effectively, this approach won't deliver accurate and timely results. Moreover, using LLMs to assist in querying large, complex data architectures simply isn't feasible today – anomaly detection, predictive modeling, etc. are still required.
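The "interpret and query" approach can be sketched as a router that maps a request to whichever tool integration can answer it. Every tool, function, and keyword rule below is hypothetical; crude keyword matching stands in for the LLM's intent interpretation, and the stubbed-out integrations are exactly the hard, unsolved part the text describes.

```python
# Minimal sketch of request routing to security-tool integrations.
# All tool names and responses are hypothetical stubs.

from typing import Callable


def query_vuln_scanner(request: str) -> str:
    # A real integration would call the scanner's API here.
    return "vuln-scanner: 42 open critical findings (stub)"


def query_edr(request: str) -> str:
    # A real integration would query the EDR platform here.
    return "EDR: 7 unmanaged endpoints (stub)"


# Keyword routing stands in for LLM-based intent interpretation.
ROUTES: dict[str, Callable[[str], str]] = {
    "vulnerabilit": query_vuln_scanner,
    "device": query_edr,
    "endpoint": query_edr,
}


def answer(request: str) -> str:
    """Dispatch a natural-language request to the first matching integration."""
    for keyword, tool in ROUTES.items():
        if keyword in request.lower():
            return tool(request)
    return "no integration can answer this request"


print(answer("How many devices are unprotected on the network?"))
```

The sketch also shows where the approach breaks: any request that falls outside the integrations you've built gets no answer, no matter how capable the language model in front of them is.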
There's hope for frameworks like the Open Cybersecurity Schema Framework (OCSF) to address this; however, these frameworks aren't yet comprehensive and don't have the industry-wide adoption needed.
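What schema normalization in the spirit of OCSF buys you can be shown with two vendors' login events mapped to one shared shape. The field names loosely follow OCSF conventions (e.g., `class_uid` 3002 for Authentication events), but this is a simplified illustration with made-up vendor formats, not a complete or validated OCSF implementation.

```python
# Two hypothetical vendor log formats normalized to one OCSF-style shape,
# so a single query can span both tools. Simplified illustration only.


def normalize_vendor_a(raw: dict) -> dict:
    return {
        "class_uid": 3002,  # Authentication class in OCSF
        "time": raw["ts"],
        "user": {"name": raw["login_name"]},
        "status": "Success" if raw["ok"] else "Failure",
    }


def normalize_vendor_b(raw: dict) -> dict:
    return {
        "class_uid": 3002,
        "time": raw["event_time"],
        "user": {"name": raw["subject"]},
        "status": raw["outcome"].title(),
    }


a = normalize_vendor_a({"ts": 1700000000, "login_name": "alice", "ok": True})
b = normalize_vendor_b({"event_time": 1700000300, "subject": "bob", "outcome": "FAILURE"})
assert set(a) == set(b)  # identical shape across vendors
```

The catch, as noted above, is coverage: a schema only helps for the event classes it defines and the vendors that actually emit it, which is why comprehensiveness and adoption are the gating factors.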
When this goes wrong, it will go very wrong
It's easy to trust generative AI implementations because of how human they feel. However, it's important to remember that generative AI is just one piece of the puzzle and isn't magic. The development of foundation models for other tasks, such as time-series foundation models or computer vision models, can benefit security operations in other ways as well. Even so, it still hasn't solved many of the fundamental problems of security. Until we get those right, we should be wary of how and what we use generative AI for.
Forrester clients can schedule an inquiry or guidance session with me to discuss generative AI use cases in security tools further.