
What AI Can and Cannot Offload for Security Teams


Artificial intelligence has taken the security world by storm, revolutionizing how the most mundane – and extraordinary – tasks get done. But can it do it all? There are things that AI can do very well, and there are things that it cannot. Certain tasks still require a human's cognitive reasoning and an expert's critical decision-making. Even so, AI has earned its place as a valuable tool in the cybersecurity arsenal. The question is – in just what way?

The Power of AI in Security

The thing that AI is really, really good at is consistency. Unlike a human, it knows no lag, does not tire, and doesn't make fatigue-induced mistakes. It's a beast. It can perform the same function, ad nauseam, without contemplating its existence or falling prey to mild existential crises. It can just – work.

So, leveraging AI to do all those mundane security-centered tasks (like scanning petabytes of data to find behavioral anomalies) works really well. These are jobs that, arguably, no human wants – and jobs where human fatigue would invite errors anyway.

For this reason, AI-driven systems find themselves on the front lines of threat intelligence (the behavior-based XDR capability alluded to above), intrusion detection (spotting the one malicious IP among trillions), and anything else that relies on deriving learning from an impossibly large dataset.
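To make that concrete, here is a minimal sketch of behavioral anomaly detection in Python using scikit-learn's IsolationForest. The features, values, and thresholds are hypothetical illustrations, not details of any particular product:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-host features: bytes sent (KB), distinct destination IPs,
# and failed logins, aggregated per hour from network logs.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[500, 20, 1], scale=[100, 5, 1], size=(1000, 3))

# Learn what "normal" looks like; contamination is an assumed prior on how
# much of the historical traffic is anomalous.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# Score new observations: -1 flags an outlier for an analyst to review.
new_activity = np.array([
    [480, 22, 1],      # close to the learned baseline
    [9000, 400, 60],   # exfiltration-like spike in volume and fan-out
])
print(model.predict(new_activity))  # e.g., [ 1 -1]
```

The point isn't this specific model; it's that once a baseline of "normal" is learned, a machine can score millions of new events without blinking.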

However, in the information age, impossibly large datasets aren't too hard to come by. Hence, artificial intelligence has found its way into many a security stack. As Forbes notes, over three-fourths (76%) of organizations have prioritized AI and machine learning in their IT budgets. Additional data from BlackBerry reveals that "the majority (82%) of IT decision-makers plan to invest in AI-driven cybersecurity in the next two years, and almost half (48%) plan to invest before the end of 2023." It's clear that AI's influence on security is here to stay.

But just because AI is a star player doesn't mean it's good at everything. (Thank goodness).

Where AI Falls Short

When you're in the boiler room and tensions run high, there are times when you want a seasoned expert watching your six: high-stakes, close-call scenarios that require quick thinking, sound judgment, an understanding of the company's best interests, and loads of nuance. Those are not the best times to have an annoyingly logical animatronic in the co-pilot's seat. Those are times for humans to be, well, human.

There is also the consideration that, because AI is ultimately a tool shaped by whatever it is fed, it can fall into the wrong hands (or internet rabbit holes) and come out biased, bigoted, or otherwise discriminatory and offensive. And it probably wouldn't "care." This lack of morality (or social cues) is another factor that betrays the limitations of an otherwise cool tool.

Overall, there remains a huge need for human oversight and intervention in AI decision-making. Let it run amok, and it will likely come back having shoved all the women's resumes in the trash. Not cool. There's a lot of work to do to sand off the rough edges, but AI works well enough for security purposes. In fact, it's brilliant.

Best Practices for Integrating AI in Security

However, a few best practices need to be followed to appreciate the full brilliance of a machine learning model in cybersecurity. They include:

  • Incorporating a strategic approach to AI in security. This means understanding how security tools leverage AI today and using those capabilities to their best advantage, including leveraging Generative AI to detect cyber threats and protect systems from the risk of human error. The more AI handles the repetitive work, the less human involvement – and, by extension, the less chance to mess up on a repetitive task. Strategically using AI means employing it for the right tasks (like threat hunting and analysis) and keeping it out of the wrong ones (critical decision-making and judgment calls). 
  • Collaboration between security teams and AI specialists to ensure the ethical and effective use of AI. This one speaks for itself. AI is a powerful tool, and when used nefariously, that power could spread exponentially for harm. While everyone who's ever been bored of a task wants to get their hands on AI, security experts and AI specialists must strive to ensure capabilities do not extend to scraping personal information or pilfering sensitive data. That is, after all, what cybersecurity is trying to protect.
  • Continuous monitoring and evaluation of AI systems to improve accuracy and avoid bias. Constant eyes must be on the ever-learning creation to ensure it receives the right influences. As past incidents have shown, given negative or biased examples, AI (unsurprisingly) learns to be negative and biased. Efforts must be made to ensure both input and output are reliable, accurate, and objective (a minimal monitoring sketch follows this list). Without this backbone, AI-gleaned data is of no help to the security platforms that employ it. 
  • Keeping data privacy in mind when using AI. Another serious pitfall of emerging AI is that it knows no bounds: it is trained to accumulate as much information as possible, and much of that information is held under copyright lock and key. This is an open issue that has already sparked real-world legal disputes. The jury's still out, but the principle remains: if you wouldn't want a human to do it, an AI (that answers to humans) shouldn't do it either – and a security tool certainly shouldn't. But what if someone programmed it otherwise? For this reason, the Center for Cybersecurity Policy and Law typically requires data science teams to guide machine learning. 
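As promised in the monitoring bullet above, here is a minimal sketch of what continuous evaluation could look like in Python: comparing false-positive rates across categories of alerts to catch skewed model behavior. The group names and record fields are hypothetical:

```python
from collections import defaultdict

def false_positive_rate_by_group(alerts):
    """alerts: iterable of (group, predicted_malicious, actually_malicious)."""
    fp = defaultdict(int)        # false positives per group
    negatives = defaultdict(int)  # benign events per group
    for group, predicted, actual in alerts:
        if not actual:
            negatives[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives if negatives[g]}

# Example audit log of model decisions on benign (actual=False) and
# malicious (actual=True) events, tagged by traffic region.
sample = [
    ("emea", True, False), ("emea", False, False), ("emea", False, True),
    ("apac", True, False), ("apac", True, False), ("apac", False, False),
]
print(false_positive_rate_by_group(sample))
# e.g., {'emea': 0.5, 'apac': 0.67} -- a gap worth a human's review
```

A persistent gap between groups is exactly the kind of signal that should send a human back to the model's training data.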

Real-World Examples of AI in Security

Fortunately, many security technologies have managed to avoid potholes and leverage AI for all it's worth. And it turns out it’s worth a lot. Examples include:

  • XDR: The "behavioral-driven" approach works wonders for spotting signs of malicious activity that aren't yet blacklisted and don't have a signature. As traditional detection tools improved, cybercriminals did what they do best and adapted, releasing swaths of never-before-seen, fingerprint-less exploits that evaded malware detection. They may no longer have their nametags, but machine learning can spot when their behavior deviates from the norm and flag it when it becomes dangerous. 
  • Signal to Noise: High-powered AI-driven tools can ingest a lot of data and help sort signals from noise. Sometimes, well-intentioned platforms take in too many data points, which is just as fruitless as taking in none. Artificial intelligence can vet those swaths of alerts and spot the patterns that expose them as false positives (see the triage sketch after this list). 
  • Time-consuming tasks: Machines do what machines do best, and that is the same thing – over and over. AI can automate and orchestrate the little things that make cybersecurity effective (and repetitive). Automating areas of your response playbook can save time and people power going forward.
  • Training and Response Times: This is perhaps the most classic implementation of AI as we (popularly) know it: ChatGPT. The Generative AI model is getting better by the day and is especially useful for on-the-job scenarios where fixes are time-sensitive and thumbing through manuals wouldn't make sense. Having a GPT-like solution aids the learning process and provides streamlined information so that stretched cybersecurity workers can quickly get a handle on their environment. On that note, in a recent blog, we asked ChatGPT to name its own cybersecurity risks, and it was as honest as ever.
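To illustrate the "signal to noise" and automation points above, here is a minimal triage sketch in Python that suppresses one-off noise and escalates repeated or high-severity alerts. The field names (src_ip, rule, severity) and thresholds are hypothetical, not any vendor's schema:

```python
from collections import Counter

def triage(alerts, min_count=3, min_severity=7):
    """Suppress one-off noise; escalate repeated or high-severity alerts."""
    counts = Counter((a["src_ip"], a["rule"]) for a in alerts)
    escalate = []
    for a in alerts:
        repeated = counts[(a["src_ip"], a["rule"])] >= min_count
        if repeated or a["severity"] >= min_severity:
            escalate.append(a)
    return escalate

alerts = [
    {"src_ip": "10.0.0.5", "rule": "port-scan", "severity": 4},
    {"src_ip": "10.0.0.5", "rule": "port-scan", "severity": 4},
    {"src_ip": "10.0.0.5", "rule": "port-scan", "severity": 4},
    {"src_ip": "10.0.0.9", "rule": "dns-tunnel", "severity": 9},
]
print(len(triage(alerts)))  # 4: the repeated scans plus the one severe alert
```

Rules this simple would live inside a larger playbook in practice, but they show the shape of the work AI and automation can take off an analyst's plate.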

At ZeroFox, we've been developing AI-driven capabilities to stretch external cybersecurity as far as it needs to go. Our take on Generative AI, FoxGPT, acts as an XDR-style force multiplier that identifies malicious attacks across large swaths of data over time. As reviewed in a recent press release, FoxGPT capabilities will optimize intelligence analyst workflows "with the ability to analyze and contextualize malicious content online, enhancing the ability to combat the growing sophistication of cybercriminals."

This represents a significant step for us and for the external cybersecurity capabilities we provide, taking our threat detection capacity to the next level and enabling an even more powerful external cybersecurity platform.

Conclusion

It's a cat-and-mouse game – it always has been – and AI is a tool being used by both sides. Threat actors up their game, and security strikes back with superpowered AI. Threat actors volley with AI capabilities of their own (the full fruition of which we likely haven’t seen), and round two begins. And so on, and so forth into eternity. However, most AI-apocalypse predictions are purely speculative at this point, and artificial intelligence still has a lot of beneficial work to do within the realm of cybersecurity. In fact, it’s just getting started. 

Especially where external cybersecurity is concerned, AI is poised to do some heavy lifting. By searching out millions of posts and data points on social media sites, it can bring back intel on which posts look like they’ve been hacked, which accounts look impersonated, and which underground leads might be to blame. With so much data to scour online, maintaining an audited online business presence might be next to impossible for organizations without access to AI capabilities. 

ZeroFox is the first unified external cybersecurity vendor, and we are proud to rely on our team of expert security analysts and best-in-class technology to protect a company’s external assets. We’ve taken down nearly a quarter of a million compromised sites and pieces of content within the past year alone, and we continue to make external cybersecurity our passion and priority. AI is nothing new to us: We’ve been steadily integrating machine learning tactics into our platform for years. FoxGPT brings those capabilities to the next level and enables us to do even more to create an external cybersecurity platform that can stand up against AI-driven threats. 

As CISOs look at how AI is transforming the cybersecurity landscape, they will find ways to optimize their stacks by integrating AI and machine learning techniques into their everyday strategies. Cybercriminals have no qualms about using AI's powerful technology for bad; we need to know how to leverage it for good.

Tags: Artificial Intelligence, Cyber Trends
