Can we trust AI with our Cyber defences? (Transcript)

00:02
Dinis Cruz
Hi, welcome to the first Open Security Summit session in December 2024. And right on point for where we are on the evolution of AI and GenAI, James is going to talk about a topic that I really care about. I think there's crazy potential but ridiculous danger, right, which is: can we trust AI with our cyber defenses? Over to you, James.

00:25
James Dell
Cheers, thank you very much. So as I said, I'm going to talk about: can we trust AI with our cybersecurity defenses? The reason this really came to my mind, when we were chatting through what the end of the year looks like, is that so many companies in the cybersecurity space have started heavily putting this AI badge on all of the products they do. Whether that really means they're investing in LLM technologies, big data, whatever they're doing, they're very much putting it front and center. Which means that to people not in the cybersecurity space, your board-level executives, your people who are signing this off, AI looks like a great thing. But we're also getting asked this question quite a lot: should we trust AI with essentially defending my systems? And that's what I'm going to explore today.

01:08
James Dell
And certainly with the shift towards managed services and managed detection and response platforms, this becomes more important than ever. So as I move forward, just to give you a bit of background on me, and why I'm the person talking about this: I'm James Dell, I'm technical architecture director for Planet IT. What that really means is my job is to work with cybersecurity vendors and understand their technologies as well as they do, so that I can translate that through to our customers.

01:36
James Dell
Which means I have a unique advantage over many of you, who might be end customers for some of these cybersecurity platforms we talk about, in that I get to talk to all of them and work with all of them in a very detailed way, get to be part of their advisory boards, and understand their back-end infrastructure. So much so that I can then take it away and go: well, what have we learned and what have we understood from this? I always put my email address on these as well, because if you do feel you've got a question you want to ask, I'm always available to talk about these things afterwards. I appreciate that these kinds of events can be quite a one-way flow.

02:08
James Dell
I like to give you a lot of information and allow you to go and digest it. I'll only talk about Planet IT quickly, because they are letting me do this on company time. Ultimately we are a cybersecurity specialist with over 22 years' experience in the market now. We've been doing this a long time, so we know what we're talking about. And as I say, we've got relationships with many of these vendors, which is why it all makes sense. Let's get into the real content. Right, so let's talk about this kind of juxtaposition between human response and AI-led response, and who's best placed to do this.

02:43
James Dell
So when we’re talking about any form of cyber incidents, we have always previously relied on very intelligent, very well trained humans, people who know their systems, know their technologies, know the risk factors, know how to respond. Up until the last couple of years, that has been the way that we responded. It’s with people. People led first, you were the response. There’s been a pivot in the last few years where that shift is now starting to be more, yes, it’s people, but it’s supported by technology. And as we enter 2025, it’s looking more like it’s technology leading the people. And that’s where I start to see this rub. Right, so is kind of is outsource, outsourcing CyberSecurity to an AI supported like security product really risking us losing control of what we do?

03:33
James Dell
You know, is it going to affect our ability to understand what we're protecting, or is it also going to result in us getting less skilled, with less understanding of what our systems do and how they protect us? For me, AI is a huge subject. It's something that's been a huge subject for a number of years. And cybersecurity has been something I've been doing for the last 20 years. And when you bring the two together, I can certainly see the risks of both, and I can see the benefits of both.

04:02
James Dell
Having been in many security incidents where I've had to be at the front end, responding, making those decisions and making those human-led calls, I do know that sometimes having a bit of context, and some of the bits that you don't have, which AI is great at, can be really powerful. And in some of the bits I'll get onto later, there are certainly advantages. But if we hand too much of that control over, are we really risking losing control completely? Are we risking our ability to respond correctly? Or are we actually opening up even more of these pathways, these doorways, that put our cybersecurity at risk overall? So I wanted to start by talking through some of the really high-level risks of AI in the security space.

04:46
James Dell
And some of these you may have heard of before, and I may be walking over old ground for some people, but what I like to do with these things is start from a position where we're all on the same page, right, and you can see where I'm coming from. So the first risk is adversarial attacks: attackers exploiting the AI systems themselves and using them to deliver their input. Maliciously crafting data designed to mislead AI models, designed to make them do something the designer wasn't expecting. So, for example, an attacker could subtly alter malware to evade AI-driven detection systems, or even inject itself into the system and use the AI to its benefit.
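
As a minimal sketch of that evasion idea (a toy linear scorer with invented features, weights and threshold, nothing like a production detector), an attacker only needs to nudge the features they control until the score drops below the alert line:

```python
# Toy feature-space evasion against a linear detector. All weights,
# features and thresholds here are invented for illustration.

WEIGHTS = {"entropy": 0.9, "imports_suspicious": 1.4, "packed": 1.1}
THRESHOLD = 2.0  # score >= THRESHOLD means the sample is flagged

def score(sample: dict) -> float:
    return sum(WEIGHTS[k] * sample[k] for k in WEIGHTS)

sample = {"entropy": 1.0, "imports_suspicious": 1.0, "packed": 1.0}
print(f"original: score={score(sample):.2f}, flagged={score(sample) >= THRESHOLD}")

# The attacker can pad or repack the binary (changing entropy/packing)
# but keeps the suspicious imports the payload actually needs.
controllable = ["entropy", "packed"]
while score(sample) >= THRESHOLD:
    # knock down whichever controllable feature contributes most right now
    k = max(controllable, key=lambda f: WEIGHTS[f] * sample[f])
    if sample[k] <= 0:
        break  # nothing left to reduce; evasion failed
    sample[k] = max(0.0, sample[k] - 0.05)

print(f"evaded:   score={score(sample):.2f}, flagged={score(sample) >= THRESHOLD}")
```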

05:30
James Dell
We're seeing some more of this happening with wider systems, less so in the security space. But as more general AI modeling ends up inside security products, we're going to see more risk from that. You've then got model bias and inaccuracies; I've got my little man leaning to the left there on my screen. The truth is, we all know that LLMs and AI models do have a natural bias, given the data that they're fed. And the problem we certainly see with security modeling is that they tend to be fed a certain type of data: risk data, incident data, indicators of attack, symptoms of attack, things like that. Which means they naturally think everything is an attack.

06:14
James Dell
So you start to see that, by the nature of the way these models are trained, they develop quite a lot of false positives. And with the vendors I've been working with who have integrated AI heavily into their systems, one thing I have noticed is that they've done a lot of work to try and balance that bias, adding in a lot of countermeasures and counterweights. Which already shows that putting too much reliance on a model potentially carries a lot more risk than you would want. So, alongside this, misclassification of data can be a huge issue. And it also opens you up to the chance that unnecessary alerts are triggered, or that things slip through the net because the bias doesn't pick them up.
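
A quick back-of-the-envelope, with entirely invented numbers, shows why that bias matters in production: a detector that looks excellent on attack-heavy training data still buries analysts in false positives once attacks are rare.

```python
# Base-rate arithmetic with invented numbers: even a seemingly accurate
# detector produces mostly noise when real attacks are rare.
true_positive_rate = 0.99    # catches 99% of real attacks
false_positive_rate = 0.02   # flags 2% of benign events
attack_prevalence = 0.001    # 1 in 1,000 production events is an attack

events = 1_000_000
attacks = events * attack_prevalence
benign = events - attacks

true_alerts = attacks * true_positive_rate     # about 990
false_alerts = benign * false_positive_rate    # about 19,980
precision = true_alerts / (true_alerts + false_alerts)

print(f"alerts raised: {true_alerts + false_alerts:,.0f}")
print(f"real attacks:  {true_alerts:,.0f}")
print(f"precision:     {precision:.1%}")   # roughly 1 in 20 alerts is real
```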

07:00
James Dell
We then start to see this kind of over-reliance on automation. Organizations that turn to AI and LLMs inside their big data, inside their models, tend to use them to the detriment of human oversight. The greatest example of this is in the managed detection and response space, or the managed micro-SOC space, where you see a lot of these big organizations taking on many thousands of customers with what would be quite a small team if you thought about it in the traditional sense. And the way they're doing that is by saying: right, we're going to put a lot of this through AI and LLM automation at the front end, to try and weed out the false positives, the bad information, from the alerts we see.

07:42
James Dell
What that tends to do, though, is leave you too thin on the human layer, and it creates critical threats where human judgment may ultimately be misled, or information gets mishandled, because people aren't getting the full picture, or maybe they're getting it a bit too late because the bias has handled the data in the wrong way. We're seeing that being a very common theme: AI is seeing humans displaced from security roles, because for one human we can get 10,000 optimizations done. Yeah, but that optimization may not be the right thing to do. As I mentioned around adversarial attacks, there's also the weaponization side.

08:20
James Dell
We're seeing it constantly: bad actors and cyber criminals are using AI to develop more sophisticated attacks, but they're also able to weaponize the fact that AI is in these models for their benefit. We're seeing quite a bit of that, with AI-driven phishing campaigns as the start of it. But this dynamic adaptation of AI tools is really a problem.

08:44
James Dell
That gets onto the point I was talking about earlier on, and that we were talking about before this started, around data poisoning: the fact that, now attackers know so much of this defense work is done by AI, or driven by LLMs or big data, they could start to hide malicious code inside smaller attacks that would be used to train an AI model, code that, when pulled together in the right way, would give them a much bigger attack. And because of the way that we work, we go: oh, an attack, let's catalog it, let's store the data, let's teach the model on it. Well, maybe that's exactly what they want, and we've got to be really careful about the

09:22
James Dell
data set that we're using and how we work with it, and make sure that we're not allowing the risk of data poisoning or bad data being fed into the system. You know, AI systems will become less effective the more harmful data, and also the more AI-generated data, they're learning from. And we're seeing more and more of that, where we're starting to get diminishing returns from some of these results, because they're being fed back their own information. There are also the complexities, the complexity of explanation. Many AI systems operate with this kind of black-box approach, which makes it difficult to understand their decision-making processes. We're seeing that more and more with custom development being done by the different security vendors, putting models into their own tools and not wanting to show their working.
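
Going back to the data-poisoning point for a moment, here is a deliberately tiny sketch of the risk, using synthetic one-dimensional data and a nearest-centroid classifier (both invented for the example): mislabelled, attack-like samples slipped into the training feed drag the benign centroid toward the attack cluster, and real attacks start slipping through.

```python
import random

random.seed(7)

# Synthetic 1-D feature: benign events cluster near 0, attacks near 5.
train = [(random.gauss(0, 1), "benign") for _ in range(500)] + \
        [(random.gauss(5, 1), "attack") for _ in range(500)]
attacks_test = [random.gauss(5, 1) for _ in range(1000)]

def centroids(data):
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def miss_rate(cents):
    # fraction of real attacks classified as benign
    missed = sum(min(cents, key=lambda y: abs(x - cents[y])) == "benign"
                 for x in attacks_test)
    return missed / len(attacks_test)

print(f"clean model, attacks missed:    {miss_rate(centroids(train)):.1%}")

# Poisoning: attack-like samples labelled "benign" pull the benign
# centroid toward the attack cluster, moving the decision boundary.
poison = [(random.gauss(5, 1), "benign") for _ in range(600)]
print(f"poisoned model, attacks missed: {miss_rate(centroids(train + poison)):.1%}")
```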

10:13
James Dell
It's a bit different when you look at the OpenAI models, your Groks, your Llamas of the world: what they're doing is very much exposing what they're doing, the logic behind it, to a degree, so you can understand the process. But you'll see with a lot of the security products being developed out there that the transparency is starting to be removed and hidden. Which means that when it comes to incident response, auditing or even regulatory compliance, it can actually be quite difficult to justify how a decision was made, or not made, in many scenarios. Obviously there's also the dependency on large data sets. So when it comes to zero-day attacks or very new styles of attack, if the data's not there, the decision making isn't great. It also means there's a high cost to running these things.

10:58
James Dell
So you won't see the ability to run or develop these tools in small organizations. You're going to be reliant on the big service providers, the big vendors, to provide you with these solutions. And ultimately there are always the regulatory and ethical concerns in all of this. You know, the more we adopt AI, the more it outpaces the laws, the guidance, the ethical controls that we have in place. And potentially the more it risks the access that humans have in this space and, you know, could cost jobs to the detriment of security, to a degree. I'll just take a. Sorry, is there a question?

11:37
Dinis Cruz
Yeah, so I actually have quite a number of questions right on that one. Because I was saying, before you expand or go to a different model, can you go back just a slide?

11:48
James Dell
Yeah, of course.

11:50
Dinis Cruz
So I think one of the key points, which kind of weaves into this, is the whole idea that the more black boxes we have in the middle of this whole thing, the worse it gets. Right. And so one of the interesting questions to ask is: how do we know that our current models and our current pipeline are not already compromised?

12:14
James Dell
That's the million dollar question, isn't it? We don't know. We're assuming. Especially when you put the trust into, you know, third parties that are gathering this data and building these tools. You're assuming that they're doing that checking, right? You're assuming that they are putting some form of checks and balances in place around the data.

12:32
Dinis Cruz
Okay, yeah, but I was going to the point of: let's say you use one of the SIEMs, right? So in a way you've got two points there. You've got, okay, the SIEM could be compromised, and that'll be massive, right? But they will probably detect it, they'll deal with it. That's almost like a software platform; in a way it's reasonably deterministic. You put data in, you get data out. If you notice, very weirdly, that you put data in the SIEM and you get different data out of the SIEM, you go: wait, something is really wrong here. So that's kind of handled already. And you can have the data being compromised, but then the data is still there. You still get the same data.
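
That near-determinism suggests a concrete control worth sketching: keep a small set of canary events whose verdicts you have verified by hand, and replay them through the pipeline continuously. The `classify_event` function below is a hypothetical stand-in for whatever entry point your pipeline actually exposes.

```python
import hashlib
import json

def classify_event(event: dict) -> str:
    """Hypothetical stand-in for the real detection pipeline call."""
    return "alert" if event.get("failed_logins", 0) > 10 else "ok"

# Canary events with hand-verified verdicts. If a verdict drifts between
# runs, the pipeline, the model, or the data feeding it has changed,
# and you want to know immediately rather than during an incident.
CANARIES = [
    ({"user": "svc-backup", "failed_logins": 50}, "alert"),
    ({"user": "jsmith", "failed_logins": 1}, "ok"),
]

def fingerprint(event: dict) -> str:
    raw = json.dumps(event, sort_keys=True).encode()
    return hashlib.sha256(raw).hexdigest()[:12]

drift = [(fingerprint(e), classify_event(e), want)
         for e, want in CANARIES if classify_event(e) != want]

if drift:
    for fp, got, want in drift:
        print(f"CANARY DRIFT {fp}: got {got!r}, expected {want!r}")
else:
    print(f"{len(CANARIES)} canaries stable")
```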

13:16
Dinis Cruz
You could have bad data, you could have data that has been encrypted or data that's been manipulated, right? But that means you can follow that path; in a way, there are no black boxes there. That's kind of a bit of logic; there are not a lot of black boxes. But in this world there are models in the middle. If the model itself, or the data that the model is now consuming, is compromised, then what do you do? And how do you know?

13:45
James Dell
Yeah, that's the thing: to a degree you're not going to know, are you? Unless you can understand the decision tree or the decision logic that was used by the model, which, because it's a black box, you're not going to get. You are going to end up in this position where you're placing trust in something which is naturally untrustworthy, because you don't know how it works. The whole reason we went to black-box software development was that you remove the need to understand the system. But the need to understand the system, in the case of LLMs and AI, is half the battle.

14:20
James Dell
If you don't understand what it does, then you're never going to understand whether a bad result is because of what it's doing, or because somewhere else the data has been poisoned or the data set's been corrupted, or because it just happens to be a really poorly written model. You're not going to know, because there's this obfuscation of what the data really is and what the result is.

14:41
Dinis Cruz
But I think this is worse than that. Because I remember when I was doing some crazy exploits, right? I always say confidentiality is easy to deal with. It's a fucking pain, yes, but it's easy to deal with, because you kind of know, right? But I remember we were doing these exploits on integrity, and then I started asking the question: how bad does it need to be before the loss of trust goes exponential? And I was always amazed how little data needs to be corrupted for people to go: I cannot trust this thing anymore. Right?

15:17
Dinis Cruz
It's a bit like if you have a website, and suddenly you start to have five customers, 10 customers, 50 customers who say the data is wrong, and you don't know where it's coming from and you don't know how it's being corrupted. You only really have proof that a small quantity is wrong, right? There's a moment where you have to pull the plug on everything, because you kind of don't know where it ends. So take the scenario where you have some evidence that the model is corrupted. Because clearly you put in an input, you got an output, and you go: dude, there's a blind spot here. Literally, there's a blind spot, right? What do you do then?

15:55
James Dell
I suppose that's it, isn't it? If you control it, and this is coming onto what I'm going to talk about, if you're controlling the use of that tool, that AI or LLM, then yeah, you can pull it, right? You can make the decision to take it out of your workflow and take the additional pain that may put on you. But the core underlying problem we've got is that these tools are being baked into the traditional endpoint products, the firewalling products, the things that would traditionally just have been a product that goes: here's the data, here's the result, this is what we do. And you have no control over that. And you trying to persuade one of the big five cybersecurity vendors to take their tool out because you've detected an issue?

16:40
James Dell
It's not going to happen, because they need that marketing buzzword of having this AI tool in there. So they'll run that regardless. And when you talk to them, they're so hell-bent on this; it's so heavily integrated into their entire system. It's not just, oh, we're using it in a small bit here: we're using it at the front end, we're using it for the data pipeline, we're using it for automation. And you go, well, there are so many places there where it could do exactly what you said. It could be just a little bit wrong, but you've then got that compounded effect, and you've got no control and no ability to pull it out. Right. And this is where my underlying question is: can you trust it?

17:17
James Dell
Well, you know, if you're in control of it, you maybe can trust it. But if you're not, or you're not even in control of whether it's being used, then you can't really trust it, can you? Because you've got no visibility and you've got no control over how much sway it has over your protection.

17:34
Dinis Cruz
Yeah, but I think that's a risk that we need to start quantifying. Right? Because in a way we've come up with solutions like that before, and we have circuit breakers, right? We have solutions. You know, you have two or three, even like the CDN stuff, where if the main one blows up, you redirect the traffic to the next one, and eventually you hit, you know, the expensive one, but they can handle it, right? So yeah, I think that's a big one, man, especially.
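
A minimal sketch of that circuit-breaker idea applied to an AI triage component (the scorers, thresholds and cooldown are all invented): if the AI path keeps failing its own sanity checks, trip the breaker and fall back to a dumber but trusted rule layer until a cooldown passes.

```python
import time

class AICircuitBreaker:
    """Falls back to a rule-based scorer when the AI path misbehaves."""

    def __init__(self, ai_scorer, fallback_scorer, max_failures=3, cooldown_s=300):
        self.ai_scorer = ai_scorer
        self.fallback_scorer = fallback_scorer
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None   # breaker "open" means the AI path is disabled

    def score(self, event: dict) -> float:
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.cooldown_s:
                return self.fallback_scorer(event)
            # cooldown elapsed: half-open, give the AI path another chance
            self.opened_at, self.failures = None, 0
        try:
            result = self.ai_scorer(event)
            if not 0.0 <= result <= 1.0:   # sanity check on the AI output
                raise ValueError(f"score out of range: {result}")
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            return self.fallback_scorer(event)

# Stand-in scorers, both invented: a misbehaving model and a dumb rule.
breaker = AICircuitBreaker(
    ai_scorer=lambda e: 1.5,
    fallback_scorer=lambda e: 0.9 if e.get("failed_logins", 0) > 10 else 0.1,
)
for _ in range(4):
    breaker.score({"failed_logins": 50})
print("breaker open:", breaker.opened_at is not None)   # True after repeated failures
```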

18:00
Dinis Cruz
And I think the key part here is: if these companies build a system where you have a model that has learned, for me that is the biggest red flag now. Because the model I like is one where you use off-the-shelf models and you give them the data, do you know what I mean? So in a way, as soon as you have bad data, you can go: right, I can debug it. And I think the phrase you used there was really good: complexity of explanation. I tend to talk about provenance, but I think that's really important, right? If we cannot absolutely trace back how the decisions were made, and then tweak it, or fix it, then... if right in the middle you have a model that has just been trained. Right?
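
A small sketch of what that provenance could look like in practice (the field names and the model id are invented): every automated verdict carries enough to trace back which model, which version, which prompt and which input produced it, so a bad output can be replayed and debugged rather than shrugged at.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """Provenance for one automated triage decision (fields illustrative)."""
    event_hash: str    # what went in
    model_id: str      # which model, pinned to a specific version
    prompt_hash: str   # the exact instructions the model saw
    verdict: str       # what came out
    timestamp: float

def _digest(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()[:16]

def record_decision(event: dict, model_id: str, prompt: str, verdict: str) -> DecisionRecord:
    rec = DecisionRecord(
        event_hash=_digest(json.dumps(event, sort_keys=True)),
        model_id=model_id,
        prompt_hash=_digest(prompt),
        verdict=verdict,
        timestamp=time.time(),
    )
    # Append-only log; in practice this would go to tamper-evident storage.
    with open("decision_provenance.jsonl", "a") as f:
        f.write(json.dumps(asdict(rec)) + "\n")
    return rec

print(record_decision(
    event={"src": "10.0.0.5", "failed_logins": 42},
    model_id="llama-3.1-8b@q4_0",   # an off-the-shelf model, pinned
    prompt="Classify this auth event as ok/suspicious/alert.",
    verdict="alert",
))
```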

18:43
James Dell
Yeah.

18:43
Dinis Cruz
You know, I think that's a ridiculous risk. Right. That's a risk we never had before.

18:48
James Dell
No, it's a new thing that's started, and it's only becoming more prevalent as we push forward. That's the problem with it: it's a new risk that's been introduced. When you were doing your old risk matrices, it wasn't something you'd ever have considered before. But now it's like: hold on, that's probably more risky than a lot of other things that are happening.

19:04
Dinis Cruz
Yeah. But then take into account that the incident response team tends to be ridiculously highly privileged. Right. Has access to a ridiculous amount of information. It's actually a prime attack vector itself. And I think you kind of have this view there, and I share it with you: it's ridiculously dangerous. I think we need to quantify it. And I think we need a better way to explain how much of it is black box. And maybe we need better analogies. It's like saying: all right, you've got a magic SQL database. Maybe you should say, okay, you have a magic SQL database. You put data in, we don't know how it works, and you get data out. Right.

19:44
James Dell
Is it good data? We don’t know. Yeah.

19:46
Dinis Cruz
And you don't have stored procedures. Right. You don't have parameterized queries. So everything goes in and it's magical. Right. So what the hell, right? So I think there's definitely a lack of analogies here. And I think this world, especially because it's at the forefront for the attackers, is going to see some dramatic stuff. But go on, I just want to have some.

20:07
James Dell
No, I've got some data in a minute that will play into that as well. So, yeah, I think the point you make is perfect. What I wanted to come on and talk about is this kind of underpinning issue that we have. We're seeing more and more of this; Security Copilot is just the one that I've referenced. Obviously Microsoft have done a lot of work with OpenAI, and they've introduced this kind of wraparound that sits on top of their SIEM tool now. And the truth is, just having these tools doesn't suddenly make you, as I've put here, a threat-hunting god. You're not going to be able to just go: right, I don't need any skills whatsoever, I've got Security Copilot and I'll just bang into it, what does this detection mean?

20:48
James Dell
And, crassly, I've done this. This is actually the result that comes out, depending on what you ask, which is essentially: engage some experts. Which is just really useful. It doesn't actually tell you how to fix the problem at all. It says: this is a detection that means X. What's step one? Step one is to engage a cybersecurity expert. It almost doesn't even give you the breadcrumbs.

21:08
Dinis Cruz
I think a point you should make there, just to dispel this myth: these tools will make the security experts even more important. I think we need to be clear. Anybody who's taking the view that this is a great way to get rid of people or experts is in for a shock. What I think this allows us is to be crazily more productive. In fact, it allows us to finally play the game. Right. I think that's the difference. But you need even more expertise, because you're going to operate at much higher levels of impact, which basically means that the side effects of what you do are massive. So the idea that you don't need experts in this is just crazy. Right?

21:45
James Dell
Yeah. And that's where we're seeing a lot of that shift. We're seeing that shift come from the marketing: it tells people that you buy this tool, or you add this tool on, and suddenly you can empower people who aren't empowered. And this is what I wanted to touch on. Certainly in the space that I sit in, we work with a lot of IT professionals. And what I mean when I say that, and it's not a disservice to anyone, is that they're not cybersecurity experts. They're not people who do this all day, every day. They are people that wear two hats: they wear that blue cybersecurity hat and they wear the standard keeping-the-lights-on IT hat. Which means that they don't necessarily know what all that stuff in blue is.

22:24
James Dell
They don't necessarily know what that blue data is. They do know how to install an antivirus product and how to deal with something when it's not working. But what they don't know is what to do with that data, and their reliance on, oh well, I'll use Security Copilot or something else to tell me what the solution is, becomes really high. And once they start doing that, you end up in a position where you've got a person, or people, who don't necessarily know what they're doing or why they're asking the question, trying to prompt an LLM for a response that they think is going to be adequate.

22:57
James Dell
And then suddenly you introduce another set of risks, where you go: well, great, you're giving someone a tool, but you're essentially giving someone who's overwhelmed, who doesn't necessarily know what they're looking at, critical data and critical decision making to pump back into an LLM which you don't necessarily know how it's been trained.

23:15
James Dell
That seems like a lot of risk for my liking, and that's something I just wanted to highlight: we see this a lot. People, especially over the last 12 months, who have picked up Copilot for Security or some of the other big products out there which tap into SIEM data and use LLMs to present it in human language, and who are not cybersecurity professionals day in and day out, are finding the tool either overwhelming or massively too complex for them to use, because they just don't understand the results they get.

23:46
Dinis Cruz
Exactly. I think here it'd be cool for you to also add a human layer to this. Because I actually feel there tend to be very different personalities, by design, almost. In a way, the engineers know how to build stuff, and they tend to have a very good understanding of putting things together, of stability. What they sometimes don't have is that sense of what can go wrong, and what else it can do: the side effects. In security we think a lot about the side effects, we think about the signals. But I think there's also a crossover, because, to be honest, a lot of the people on the left don't know how to engineer good things. Right.

24:25
James Dell
So that’s the problem too. Right, that’s okay. Sure.

24:28
Dinis Cruz
And the only comment I'd also make on your statement is that, yes, it's a problem, but I also feel that we now have the ability to scale at both ends. What I mean is, it's now possible for smaller teams and smaller environments to have access to a lot of expertise that just wasn't possible before. And it's also possible for the security crowd to finally get access to proper good intelligence, because it goes both ways. The problem is you still need the experts in the room and in the loop, because if you don't have them, you're going to end up, you know, with an even worse scenario. Right?

25:01
James Dell
Yeah, that's it. So, coming onto this, this will probably tie in a bit to what we were talking about earlier. We talk about the traditional SIEM approach: you're pulling the data in, which we've got referenced on the left-hand side here, from various data sources. And where we're seeing this kind of AI, ML, LLMs, automation all pulled in is through this piece, the AI detection and data handling piece. I've put it in the back-end data lake. But you can see the amount of data that's being processed by these and the amount of decisions that are being made. When we talk about a large organization, there are millions of bits of data that come in from those data sources, and that's your big data piece.

25:48
James Dell
But then they're filtered through that threat intelligence, normally a big data set, which limits that down to thousands of detections. Those are then put through some form of ML and AI, which gives the automated response and provides some contextualization, which strips that down to hundreds. Then it's pumped through some form of automation, normally trying to work out whether something's a false positive, all those kinds of things, some form of LLM back end, some human decision making, and you end up with the tens: what's pushed to a person, to a managed detection and response team or to a managed SOC, your human piece. But what you've done there is rely on putting data into the hands of something you don't control. That's back to that black-box conversation.
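
To make that funnel tangible, here is a toy simulation (the stage filters are invented stand-ins for threat intel, ML scoring and automation): a million synthetic events collapse to tens, and every stage is a place where a mis-tuned filter can silently kill a good detection.

```python
import random

random.seed(1)

# A million synthetic events; in reality these come from the data sources.
raw = [{"id": i, "severity": random.random(),
        "known_ioc": random.random() < 0.00002} for i in range(1_000_000)]

def stage(name, events, keep):
    kept = [e for e in events if keep(e)]
    print(f"{name:<26}{len(events):>9,} -> {len(kept):,}")
    return kept

# Invented filters standing in for each box in the pipeline. Known IOCs
# always survive; everything else depends on a tunable severity cut.
events = stage("threat intelligence", raw,
               lambda e: e["known_ioc"] or e["severity"] > 0.99)
events = stage("ML/AI contextualization", events,
               lambda e: e["known_ioc"] or e["severity"] > 0.9995)
events = stage("automation / FP triage", events,
               lambda e: e["known_ioc"] or e["severity"] > 0.99996)
print(f"escalated to humans: {len(events):,}")
```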

26:34
James Dell
We've had millions of bits of information that, before they've reached someone in this pipeline, have been stripped down to tens. Now, it can do this really well, but you can see the margin of error is very large. And that's why I've drawn that little pipe in there, the detection, contextualization and correlation. If that's working perfectly, you get some really solid, good-quality, high-risk information put in front of those threat experts, who can investigate and eliminate attacks. But if something goes wrong, it could very quickly be lost in that chain. And that's why I wanted to put this in here: we talk a lot about making sure you get the right data in.

27:15
James Dell
And certainly, whenever you talk to someone about how to do security properly in 2024, 2025, it's about making sure you have as much data as possible to give you context and correlation, in a platform that can do that. Now, obviously, you can just have a SIEM tool and try and do it yourself, but you are going to struggle with that quality and that quantity of data. But then we have to trust what we're putting this data through. And that's where I'm always nervous at the moment, because, like you said, we don't necessarily have the visibility into what that big data model, that ML, that AI, that LLM or automation looks like, because those are the bits being developed by the security vendors or the SIEM tool manufacturers.

28:00
James Dell
They're being developed by them in the back end, and they don't want to give away their secret sauce. I get that. But there's a degree to which it would be safer, from a point of view of openness, to be using a model that is open source, where you can see it, because then you know, to your point, what the data is and what the model is. So if the results don't match, at least you can start to work out where things are going wrong. But there's a huge amount of this middle piece where we're seeing it, especially in managed services.

28:29
James Dell
And the reason I put managed detection and response services up here is that this is where we're seeing it more prevalently than anywhere else. These companies are putting huge investment into streamlining their pipeline so that the human element can be dramatically reduced from where it was three years ago. They can continue to cut down resources, cut down people, and grow services without having to invest in staff, because they're only really investing in the technology that makes the humans have to look at less. Yeah.

29:00
Dinis Cruz
Can I try one thing? Let me see if I can put a screen share up here, because that's a great one. Because annoyingly, Zoom doesn't allow us.

29:12
James Dell
Cool. 

29:13
Dinis Cruz
Can you see this? 

29:14
James Dell
Yes, I can see that. Yeah.

29:16
Dinis Cruz
Oh, cool. Cool. Right. So I think you can see the picture, right? Yeah, cool. See, I kind of agree with you, because you're saying that we start to have that whole yellow thing as a black box. Right. Yes, it is. Because then, how does the whole thing fit together? Where I think it's interesting is to use it here, in these bits. I think there are two elements to this. There's an element of doing a transformation here, where you kind of use AI to, let's say, process the data as you go from one layer to the other. But what I think is really interesting is to use AI actually here, in the creation of those standards, do you know what I mean?

30:07
Dinis Cruz
What I think is really powerful is almost saying, look, and this is the thing that always pained me in the SIEM world, instead of just trying to get to that end point, let's actually spend the effort here, and actually this person should be working here, you know what I mean? I think where we need a lot of the experts is here, powered by GenAI tools that help us create these data transformations. And the power of that is that the data transformations ideally should actually be code, or should be done by, like you said, an open-source LLM. Right? So you kind of. Yeah, sorry, you were saying.
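
A tiny sketch of what "transformations as code" could mean (the event shapes and field names are invented): each hop between layers is a plain, reviewable, testable function that a GenAI assistant can help draft, but that lives in source control where humans can diff and roll it back.

```python
# Each pipeline hop is an explicit, versioned, testable function rather
# than an opaque model call. A GenAI assistant can draft these, but they
# live in source control where a human can review, diff and roll back.

def normalize_auth_event(raw: dict) -> dict:
    """Layer 1 to 2: fold a vendor-specific auth log into a common shape."""
    return {
        "user": str(raw.get("UserName", raw.get("user", "unknown"))).lower(),
        "src_ip": raw.get("SourceIp", raw.get("src", "")),
        "failed": int(raw.get("FailureCount", raw.get("failed_logins", 0))),
    }

def enrich_with_context(event: dict, privileged_users: set) -> dict:
    """Layer 2 to 3: add the context a triage decision actually needs."""
    return {**event, "privileged": event["user"] in privileged_users}

# Because the transforms are plain code, they can carry plain tests:
def test_normalize():
    out = normalize_auth_event(
        {"UserName": "Admin", "SourceIp": "1.2.3.4", "FailureCount": "7"})
    assert out == {"user": "admin", "src_ip": "1.2.3.4", "failed": 7}

test_normalize()
print(enrich_with_context(
    normalize_auth_event({"UserName": "Admin", "FailureCount": "25"}),
    privileged_users={"admin"},
))
```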

30:57
James Dell
No, I was saying that's exactly it, isn't it? The reason that yellow pipe is there is that black box, that hidden unknown. And if you're stripping that much away from the human, there is always the risk of something not being right, and we are going the wrong way. But I think this conversation has always been driven by where the big five or six cybersecurity companies think they can make the most money, rather than them asking: what would actually be best for data security? And if you're running an internal team of a decent size, then yeah, you can put a SIEM tool in and start putting the people where they should be.

31:37
James Dell
But what I'm certainly seeing from the managed services side of things is that those kinds of MSSPs are spinning up, these vendors are pushing these products out, and the way they're making them cost-effective is by making that pipeline in the middle completely opaque. You cannot see into it, and you just hope the data you pump into the front end comes out with the right results at the back end.

31:59
Dinis Cruz
Yeah, yeah, I'm going to go back to your slides. But I think that's a crazy risk, man. And what I think is interesting in this world is that those companies are putting themselves in a dangerous position, because their biggest competitors probably will not be the SIEMs. Their biggest competitors are going to be some of the other data visualization platforms that are going to do this right, and they're just going to happen to do the security better than everybody else.

32:22
James Dell
Yeah, exactly. I've realized I shouldn't say bad-mouthed, but I've talked badly about AI in detection, and I always want to give you the counter side to that. The reason for this slide is just to quickly touch on some of the key benefits, which you only get when you start putting AI into this workflow. So it's just worth talking about. There are four key pillars where you get good value from AI in the cybersecurity space, and especially in threat detection: investigation, identification, reporting and research. Those are the four pillars where it works really well. At no point does it replace a human, and you'll see that's key to what I've put on my slide.

33:08
James Dell
It's about empowering, to the point you were just making. It's about empowering people to make the right decision. And the things it can do really well start with enhanced threat detection: bringing data together, data from different sources that are potentially in different languages, stored with different methodologies. That's the stuff it's great for. It's great at sifting through data, correlating data, looking at patterns. Humans are really bad at spotting patterns in big data, because you just can't consume it all. Something like an LLM is great for doing that kind of detection piece, the same as sifting through IOCs and IOAs.

33:46
James Dell
If you're looking at indicators of compromise or indicators of attack, and you're looking at thousands of them in a huge-scale event, as a human you start to lose concentration after about nine or ten that you've read. And that's just humans; we're just not very good at consuming data in that way. But using a tool that gives you the ability to do that is great. And again, the key word for me is tool. We're not talking about replacing things with a system. We're talking about a tool that can be used for that piece but doesn't make the decision for you. It's there to help you go through the information.
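
As a toy illustration of that sifting (all feeds, field names and the IOC here are invented): correlating one indicator across two differently shaped sources is mechanical for a tool, while a human loses the thread after a dozen rows.

```python
from collections import defaultdict

# Two feeds with different shapes: the kind of thing a human eyeballs
# and loses track of, but a tool correlates trivially.
firewall_log = [
    {"dst_ip": "203.0.113.9", "action": "allowed", "host": "web-01"},
    {"dst_ip": "198.51.100.7", "action": "blocked", "host": "web-02"},
]
edr_alerts = [
    {"remote_address": "203.0.113.9", "process": "powershell.exe",
     "host": "hr-laptop-3"},
]
ioc_feed = {"203.0.113.9": "known C2 infrastructure"}

sightings = defaultdict(list)
for row in firewall_log:
    sightings[row["dst_ip"]].append(("firewall", row["host"]))
for row in edr_alerts:
    sightings[row["remote_address"]].append(("edr", row["host"]))

for ioc, reason in ioc_feed.items():
    hits = sightings.get(ioc, [])
    if len({host for _, host in hits}) > 1:   # one IOC seen on multiple hosts
        print(f"correlated IOC {ioc} ({reason}): {hits}")
```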

34:17
Dinis Cruz
On those three things, by the way, I'm with you. I think GenAI and these LLMs are finally going to allow us to make this industry really work. And I think this has the potential to tip the advantage to the defenders, because we can finally have a lot of visibility into what's going on. I think there's a key pattern here in these three things you mentioned, which is that up until now, every time you wanted to make the tech better, you had to write some code. Right. And that's a massive game changer, because it means that, for example, you can now consume different data sources in the same LLM without changing anything. And that's massive, right?

34:59
Dinis Cruz
It's massive because they have an ability to process unstructured data that we never had before. And that was always the challenge. If you think about it, our pipelines were also very brittle, very fragile, and you always had to code new stuff on top of them. That's why, in a way, the innovation in this space was always so slow: everybody, even the big providers, are basically doing a good job a lot of the time, but I think the model is broken, and they never went to fix the model itself. And the model is that, up until now, you had to write integrations almost by hand for everything, and that never scaled.

35:40
Dinis Cruz
So for these bits that you have here, even those patterns, we now have a much better, more comprehensive engine to do that. The key, I think, is not to have a model that has already got the bias. The key is to even use three different models to ask the same question. Right. And if you get the same answer, you now know that you probably have a good finding.
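
A minimal sketch of that cross-model check (the model callables below are stand-ins for three genuinely different off-the-shelf models): fan the same question out, and only accept a verdict when a quorum agrees; otherwise escalate to a human.

```python
from collections import Counter

def consensus_verdict(question: str, models: dict, quorum: int = 2):
    """Ask several independent models the same question; accept the
    majority verdict only if at least `quorum` agree, else escalate."""
    answers = {name: ask(question) for name, ask in models.items()}
    verdict, votes = Counter(answers.values()).most_common(1)[0]
    return (verdict if votes >= quorum else "ESCALATE_TO_HUMAN"), answers

# Stand-in callables. In reality these would be genuinely different
# models (different vendors, different training data), so a shared bias
# or one poisoned data set is less likely to fool all of them the same way.
models = {
    "model_a": lambda q: "malicious",
    "model_b": lambda q: "malicious",
    "model_c": lambda q: "benign",
}
verdict, answers = consensus_verdict(
    "Classify: outbound beacon every 60s to 203.0.113.9", models)
print(verdict, answers)   # 'malicious': two of the three models agree
```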

35:58
James Dell
Yeah, exactly. Yeah. No, perfect. The final pieces are the things we always talk about: 24/7, 365, they don't need to sleep. That means you don't lose the context between one threat analyst going off shift and the next threat analyst coming on, because the context is held. And the benefit is there: they can be more cost-efficient than having lots and lots of threat hunters, because they can do things like pattern detection that the threat hunters can't do very well. But on the flip side, I don't want to see you replacing everyone with AI, because that's just ridiculous. It can help improve accuracy in some scenarios, but you have to be using it in the right way, back to all the points you've made.

36:36
James Dell
You can't just go: well, we've given this system an LLM, therefore it's more accurate. If the data you've given it is rubbish, the output is going to be rubbish. It's about making sure that you give it the right data. The big thing it can do is reduce that response time, the time between something going wrong and you making the right call, in the right hands. So if we go back to Copilot for Security: if you've got someone who is a trained threat hunter, and they see a certain detection and ask the right question of the LLM, say, what does this mean in this context, because of this, with this data, and they get the right result out of it, it will cut down the amount of time it takes them to respond and take the relevant actions.

37:15
James Dell
But if they don't know what the prompts are, if they don't know what the information is they need to be looking at, then they're just a child with a toy, and they're just going to end up getting the wrong answers. So again, it can be really powerful in the right hands.

37:26
Dinis Cruz
But that's ridiculously powerful. In fact, once we did an analysis, and we mapped out that how fast we could find out about an incident was the biggest correlated item determining the impact of the incident. And there was actually a guy on the team who was ex-military, and he used to say the same thing: the faster you can get somebody who has been injured to a medic or into the medical center, that literally correlates with the percentage survivability. It doesn't matter what you had; what matters is how fast you can get there. Yeah. And I think that reduced response time is massive here. Because, like you said, the more you can reduce that, where somebody with knowledge knows the difference between the false positives and the real bad signal, that's a big deal. Right.

38:10
Dinis Cruz
That's the big thing. The one I would add to the top, I think, is teach, which I think is a thing that we overlook, and then we keep losing our talent. Right. You know, I think we have the ability to create much better learning paths, and to bring people into, for example, incident response, which before was really hard to do, or very dangerous, in a way.

38:32
James Dell
Right.

38:33
Dinis Cruz
So I think there's a teaching element that you could also have, of explaining how you do things, why you do things. Taking somebody that doesn't have depth, maybe, in some areas, and saying: look, you need to learn about this, this, this. But then you're learning in context, which makes a lot more sense.

38:48
James Dell
No, yeah, it makes complete sense. I hadn't thought about it that way. That's a very valid point: teaching the people that we've already got and using it to upskill. Right? Yep. So, just to quickly recap some of the things I've said. Oh, I hope my PowerPoint hasn't stopped working. There we go. So, importantly, when we talk about these risks, there's how much reliance there is. We've been talking about that quite heavily: how much are you relying on it? There's also the piece around how much we are trusting a single algorithm or a single bit of code.

39:23
James Dell
And what I mean by that is: if we take one specific vendor, for example, and I'm not going to mention them by name, but think about the one that caused a major incident this year with outages. There were hundreds of thousands of endpoints affected by one single platform. If that one platform is using one single algorithm to provide its AI that's driven into the product, and it's compromised by a threat actor, they've suddenly compromised not just one system, they've compromised hundreds of thousands of endpoints. And beyond that, sometimes it's an integrated system, and it's these custom models that have been built in. It's a lot of reliance on a single point of failure. So you have to look at that when you're looking at the risks and ask: is that a benefit?

40:04
James Dell
You know, certainly with the vendors I've been working with, there are a lot of false positives or false negatives coming through when they're doing the correlation and contextualization piece via a black-box LLM. They're going: oh, we're getting lots of false positives, because it's just the data, and the bias is making it think that. Or things that should have been caught, that a human would have caught from a SIEM tool, are being lost, because it's going: I don't think that's important, because again, it's biased against that. So for me, it's definitely something you have to keep thinking about: what does that data look like? And ultimately, that's the piece we've been talking about for a lot of this.