As a security guy, I am freaking loving AI & its scope
GenAI, AI, LLMs, ML, Deep Learning, GANs, Transformers... too many terms to even mention here, but those of you reading this have probably come across these terminologies too. There is a lot of confusion, and one might even feel a bit of FOMO (like me) for not being as savvy as their peers on this topic. But over the last week or two I have been going through this extensively and DAMN... I fell in love with AI and what its scope can be in the field of cybersecurity (the defender's side). This pulled me out of my slumber and encouraged me to write this article on why I, in fact every security engineer, analyst, and director, need to understand that AI can and will help organizations solve A LOT of their security problems.
This article is outlined in 3 sections: a brief on types of AI, the arc between AI & cybersecurity, and why I am super pro AI in security operations.
(Also, apologies for not being active; I’ve been busy being lazy 😆)
Types of AI (LLM) Brief
I will not go in depth on how AI came into the picture, but I will cover its subset, the LLM. These things are trained on humongous (I mean LARGE LARGE) amounts of data, using both supervised (with human help) and unsupervised (the machine by itself) training and updating.
A few types of LLMs everyone might have come across at some point:
1. Text -> Text models; Ex: the older ChatGPT, or chatbots
2. Text -> Image; Ex: ChatGPT 4 & Gemini, etc.
3. Image -> Image; Ex: the image enhancers we have on mobile
4. Image -> Text; Ex: computer vision, or Google Lens
5. Text -> Speech; Ex: assistants reading responses back out loud
6. Speech -> Text; Ex: Alexa, Siri, Bixby, etc.
7. Text -> Video; Ex: the yet-to-be-released Sora from OpenAI
And then there is multimodal: these are the ones that combine multiple types in one, like ChatGPT-4o & above can do 1, 2, 3, 4, 5, 6, & 7 (yet to come). Till now, the ones being hyped like ChatGPT and Gemini have more or less been doing guesswork, and they are currently being upgraded to a "think mode", as in reasoning. Why is this important? To prepare for the next tech:
Auto Agents:
The GenAI platforms we have experienced till now can give answers, images, etc., but auto agents are focused more on getting things done, as in self-driving cars, stock trading bots, robots, etc. They work through a loop to get things done: reason about the next step, act using the tools at their disposal, observe the result, and repeat, as sketched below:
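Here is a minimal, purely illustrative sketch of that loop in Python. The tools and the decision stub are made up for this example; in a real auto agent the decision function would be a call to an LLM (or another model), not canned logic.

```python
# Illustrative only: the "brain" here is a canned stub standing in for an LLM call,
# since the point is the loop structure (reason -> act -> observe), not a real model.

TOOLS = {
    "get_price":   lambda symbol: 101.5,                  # toy market-data lookup
    "place_order": lambda symbol: f"bought 1x {symbol}",  # toy broker integration
}

def decide_next_step(goal, history):
    """Stand-in for an LLM call that plans the next action from the goal + history."""
    if not history:
        return {"tool": "get_price", "args": "ACME", "done": False}
    if len(history) == 1:
        return {"tool": "place_order", "args": "ACME", "done": False}
    return {"done": True}  # the stub decides the goal is met

def run_agent(goal, max_steps=10):
    history = []
    for _ in range(max_steps):
        step = decide_next_step(goal, history)      # "think": pick the next action
        if step.get("done"):
            break
        result = TOOLS[step["tool"]](step["args"])  # "act": call the chosen tool
        history.append((step["tool"], result))      # "observe": remember the result
    return history

print(run_agent("buy one share of ACME if the price looks reasonable"))
```

The key difference from a chatbot is that the model's output drives actions against tools, not just text sent back to a user.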
Now, there are different types of auto agents for different kinds of tasks, which we will save for another session. Moving on to the next section.
The Arc between AI & Cybersecurity
AI, or at least its derivative ML, has been in use by security vendors for threat detection, ASM, etc. The focus was on how it could be leveraged to detect more complex & advanced threats, i.e. to be more precise and help defenders gain an advantage over attackers. But now, with GenAI tools being available for almost all sorts of things, the attackers are leveraging them too.
Using emotional prompts, one can "force" tools like ChatGPT to draft a phishing email with near perfection (none of the grammatical or formatting mistakes that used to be the visible differentiators). I have noticed a lot of organizations receiving these, and the worst part is that security controls are missing them...
So, as of now, this is how AI and cybersecurity connect and how both good & bad actors are utilizing it. This is a fight that will intensify over time as we move from "orchestrated" GenAI usage to auto agents, but that is a topic for another article.
Why I am super pro AI in SecOps
Nowadays, we hear a lot of security vendors and SecOps platforms announcing "Next-Gen" SIEM, SOAR, and SecOps "platforms" (not products), and they are trying to solve a few key problems the industry currently faces:
- MTTI & MTTC (Mean Time To Identify & Contain) of a breach.
- Security Resources & Skill Crunch
Let me first start by addressing the MTTI & MTTC problem. Lately I have been going over the latest breach reports, from the likes of IBM, Verizon, etc. In one such report from IBM, they highlighted the average MTTI & MTTC, and let me tell you, it was an awakening moment. I am citing the infographic below.
One thing I noticed was how big the MTTI & MTTC were. If we ask why from an organization's standpoint, the simple answer is disjointed solutions: SIEM-like tools being human/rule driven and dependent has a significant impact, compounded by the lack of skill sets and the resource crunch across the industry in general. And one favorite quote:
"Finding an intrusion isn't like finding a needle in a haystack; it's more like finding a bad needle in a stack of needles."
And as a person who has been using ChatGPT "properly" over the last few days, I foresee a need to adopt GenAI or AI-derived detections to help reduce MTTI & MTTC in SecOps. The same has been observed in that IBM report, where organizations using AI & automation showed a clear reduction in MTTI & MTTC (refer below).
This shows how organizations utilizing AI & automation have been able to reduce their total MTTI & MTTC by ~32%.
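To make the metrics themselves concrete, here is a tiny, purely illustrative Python sketch of what MTTI & MTTC measure, using made-up incident records; the numbers below are not from the IBM report.

```python
# Toy illustration of what MTTI / MTTC measure, using made-up incident records.
from datetime import datetime
from statistics import mean

incidents = [  # hypothetical breach records, not real data
    {"breach": datetime(2024, 1, 1),  "identified": datetime(2024, 7, 10), "contained": datetime(2024, 9, 20)},
    {"breach": datetime(2024, 2, 15), "identified": datetime(2024, 8, 1),  "contained": datetime(2024, 10, 5)},
]

# MTTI: average days from breach to identification.
# MTTC: average days from identification to containment.
mtti = mean((i["identified"] - i["breach"]).days for i in incidents)
mttc = mean((i["contained"] - i["identified"]).days for i in incidents)

print(f"MTTI: {mtti:.0f} days, MTTC: {mttc:.0f} days")
```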
Now, this is just the tip of the iceberg of what can be achieved. I believe we are in the 2nd generation of AI adoption in security operations, where a lot of tools/platforms like XDR & NG-SIEM are adopting AI for detections, but the actions, i.e. the response & remediation, are still very much defined by "human orchestration": someone from the security analyst/engineering team has to write and configure the workflows and define the course of action the platform has to perform. This is where the human resource and skill set crunch is haunting the industry.
Now, AI certainly can't replace analysts and engineers, but it certainly can assist their work. I mean defining workflows, incident analysis, complete incident briefings, and other reporting: work that might take a Jr analyst maybe 4 days can be brought down to a few hours with AI assistance (speaking from first-hand experience with other work). Again, not chatbots/copilots where you do text-to-text communication to get answers, but assistants that do tasks without being so heavily "prompt" driven. This can at least take some load off organizations and the already overloaded security analysts, relieving them of repetitive tasks.
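As a rough illustration of the kind of assistance I mean, here is a hedged Python sketch of drafting an incident briefing with an LLM. The incident fields and the `send_to_llm` hook are hypothetical placeholders; this only shows the shape of the idea, not any vendor's implementation.

```python
# Sketch of AI-assisted incident briefing: gather the alert context into a prompt
# that a model turns into the summary an analyst would otherwise write by hand.
# Nothing here is tied to a specific vendor; `send_to_llm` is a hypothetical hook.

def build_briefing_prompt(incident):
    return (
        "You are assisting a SOC analyst. Draft a short incident briefing "
        "(summary, affected assets, observed behaviour, suggested next steps) from:\n"
        f"- Alert source: {incident['source']}\n"
        f"- Affected hosts: {', '.join(incident['hosts'])}\n"
        f"- Raw observations: {incident['observations']}\n"
    )

def send_to_llm(prompt):
    """Hypothetical hook: replace with a call to whichever model/API you actually use."""
    return "(model-drafted briefing would appear here)"

incident = {  # made-up alert for illustration
    "source": "EDR",
    "hosts": ["fin-laptop-07"],
    "observations": "PowerShell spawning encoded commands; outbound traffic to a rare domain",
}

print(send_to_llm(build_briefing_prompt(incident)))
```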
Next comes the adoption of auto agents, which can break down an incident, figure out the appropriate remediation actions on their own, and utilize the tools (in this case, integrations) at their disposal to take smart actions, again without anyone having to define the workflows or "orchestrations". Automation defined by fixed workflows is unaware of context and can run into failure scenarios; an agent that is intelligent and aware will respond quite differently from an orchestrated workflow.
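To contrast with a fixed playbook, here is an illustrative Python sketch of an agent-style remediation step that picks from the integrations it has, with a human approval hook. The integrations and the decision logic are stand-ins I made up; real agent reasoning would sit where the stub is.

```python
# Contrast with a fixed playbook: instead of a hard-coded workflow, the agent picks
# from whatever integrations it has, based on the incident it is handed.
# The decision function is a simple stub standing in for actual model reasoning.

INTEGRATIONS = {  # hypothetical response integrations
    "isolate_host":    lambda host: f"isolated {host} via EDR",
    "disable_account": lambda user: f"disabled {user} in the IdP",
    "block_domain":    lambda dom:  f"blocked {dom} at the proxy",
}

def choose_actions(incident):
    """Stub for agent reasoning: map what it 'understands' about the incident to tools."""
    actions = []
    if incident.get("compromised_host"):
        actions.append(("isolate_host", incident["compromised_host"]))
    if incident.get("suspicious_domain"):
        actions.append(("block_domain", incident["suspicious_domain"]))
    return actions

def remediate(incident, approve):
    results = []
    for tool, arg in choose_actions(incident):
        if approve(tool, arg):                       # human oversight stays in the loop
            results.append(INTEGRATIONS[tool](arg))
    return results

incident = {"compromised_host": "fin-laptop-07", "suspicious_domain": "bad.example"}
print(remediate(incident, approve=lambda tool, arg: True))  # auto-approve for the demo
```

The approval hook is where the "human overseeing its best buddy" part lives: the agent proposes, the analyst (or a policy) approves.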
Quite a lot of vendors in the market are doing the above, notably $CHKP, $CRWD, $S1, $PAN, and $CSCO, through a mix of acquisitions and in-house development. But yes, these platforms are still in the evolving stage of adopting AI: they are mostly focused on AI-based detections and still need human intervention to enforce the response & remediation actions. In a near-future world, a system that thinks and reasons like a human but works 24/7, with its best buddy the human overseeing it, would at least give the already tired CySec defenders some rest.
Summary
With the stark increase in both attackers and defenders adopting AI and GenAI in their day-to-day workflows, and with the era of auto agents becoming a reality, how both parties leverage it for defense & offense will only intensify.
A lot of vendors in the market are introducing next-generation security operations platforms that are theoretically & practically helping organizations and administrators reduce their MTTI & MTTC, but the larger problem is that these platforms still need heavy human intervention for acknowledgment, response, and remediation.
Adopting auto agents into these reborn SecOps platforms will make them truly "AI-driven" SecOps, able to detect, file the case, and "smartly" take response & remediation actions with human oversight.
For those who are still skeptical about adopting AI into their environments to secure them: that skepticism ain't helping your fight and cause.
“AI might not attack an organization but attackers using AI surely can”.
Is completely depending on AI and replacing the human factor recommended? A BIG, ABSOLUTE NO. At the end of the day, human nature & intuition are something AI can't achieve, and they play a key role in security.
PS 1: Back after a long long break, so please ignore any grammatical errors.
PS 2: As I always say, I am open to corrections, and let's discuss next-gen AI use cases in cybersecurity.
Socials to connect: Twitter @krishnasai_456, LinkedIn- Link and GitHub- Link
Stay safe and stay curious!!! Till next time ❤
Thank you ❤