How GenAI Agents will Dramatically Change our Industry

When (day):
Tue
At:
18:00 - 19:00

About this session

This session will shed light on the profound impacts of GenAI on cybersecurity and privacy, uncovering both the opportunities and complexities it presents. Dinis will also guide us on how to navigate these changes and future-proof our strategies amidst rapid tech advancements. Dive into the future and unravel the boundless possibilities of GenAI with us.

🔍 Topic Insight:

The landscape of our industry is shifting, morphing, and evolving, with GenAI agents at the forefront of this spectacular transformation. But what does this mean for us, and how do we navigate through the sea of opportunities and challenges that it brings?

🌐 Dinis will delve deep into:

  • The impact of GenAI agents on cybersecurity & privacy
  • Opportunities ripe for the picking in the GenAI space
  • Navigating through the complexities and challenges
  • Future-proofing our industry amidst rapid technological advancements

🔗 Secure your spot NOW and join us in exploring the enthralling world of GenAI and its boundless possibilities.

🔄 Share with your network and let’s dive into the future, together!

Transcript:

Dinis Cruz - 00:06 Hi, welcome to this Open Security Summit session. I’m actually going to be doing a presentation and, again, feel free to ask questions at this end. My view is that I really believe that GenAI agents, and I’ll explain a bit what I mean by that, will dramatically change our industry. For the better, by the way. So I want to explore that, and I will go into some of the topics where I think it will make a massive difference. So the first thing is that GenAI is a big deal. I know everybody talks about it, and yes, there is a lot of hype. For those of you who have been around for a while, you know that’s what happens, right?

Dinis Cruz - 00:51 Like the cloud had a lot of hype, the internet had a lot of hype, mobile had a lot of hype. And there are parts of it that are going to boom and bust, lots of investment in crazy stuff. But I think the fundamentals are there. And Max, one of the slides I don’t have here but want to add: it’s a bit like we created Jarvis, if you know the analogy from Iron Man, which, as you can see in the movies, is a great help to Tony Stark to build all sorts of crazy stuff, right? But then it’s almost like we launch it into the world by asking it to write some poetry or do some stuff, and then it hallucinates because it tries to help. And then we go and say, see, that’s really bad.

Dinis Cruz - 01:33 So I actually think hallucinations are something quite good, and I’ll explain in a bit why that’s the case. So this will bring a lot of change, very fast. But there’s always this thing, right? People get excited about new things. And I think we’re getting into the chasm bit now, because there’s a lot of people going, that’s interesting, but it’s not for me. And there’s also a lot of people that go, whoa, what’s going on? And some people go, well, give me my fax machine back, right? But there’s always a lot of challenges here. So I really think we’re now getting into this chasm bit.

Dinis Cruz - 02:03 And you can see it. After the excitement, there are lots of people, even a lot of very clever people with a lot of experience, who are now saying that it’s not a big deal, that it’s not going to be as big as it seems. And I think they’re looking at it from a different angle. They’re probably correct in the way they’re looking at it. But I will argue that there’s a different way of looking at this, which I think is very exciting: just about everything will be impacted, right? And that’s the bottom line. The same way that the internet has now impacted just about every industry, sometimes for the better, sometimes for the worse, right? Everything will be impacted.

Dinis Cruz - 02:41 So one of the things I find interesting is this analogy of GenAI with electricity. The analogy here was why everybody in a cybersecurity team needs to be an AI power user, but I would argue other teams need to be the same. We now take for granted that people are power users of electricity, at least computer power users, internet power users, email power users. But all this stuff, especially email, didn’t exist; the internet didn’t exist, even computers, right, for a while. They were not really the productivity tools that we have today, but we now take them for granted. The difference is that this change is happening at a much faster pace.

Dinis Cruz - 03:26 My hypothesis here is that everybody in the cybersecurity team needs to be an AI power user because of the force multiplier, because it will make their work, and in fact I would even argue their happiness in how they operate, a lot better. A lot of the frustrations that we have today, we now have a way to deal with, and we can scale in ways that we never could before. So the key here is, when you look at everybody being impacted, it kind of falls into three camps, right? And people should be excited. In fact, I think a lot of companies should be very excited and they should be panicking, because they really should be looking at this in a very effective way.

Dinis Cruz - 04:08 But there’s also a lot of people that are ignoring it and go, yeah, it’s all right, we can live with it. And I think there are lots of industries and companies and professions where, in the past, that was okay, right? It was okay because the incremental change that was coming was 10 or 20 percent and took many years to arrive. You could even ignore it and then catch up later and still be okay. I think the rate of change is very different now, and the acceleration that we have is massively different. And I actually think that the sweet spot is: you should be panicking, because you should understand that you don’t have a lot of time.

Dinis Cruz - 04:44 But you should also be very excited, because you should be able to really take your services or your tools or your products, what you provide and where you add value, to the next level. In fact, you could even dramatically increase your market, because you can now provide lots of great services to other markets that before you just couldn’t reach, or it wasn’t scalable to go there, right? And here’s the killer thing: these are some of the companies that have been panicking over the last two years. And these are not small players: Microsoft, McKinsey, AWS, Google. They all realized that the GenAI world is really going to make a massive difference to their companies, and they’ve made, in a way, a very good bet.

Dinis Cruz - 05:28 You could argue that some of them have been doing this for a while, but you could also argue that Microsoft didn’t panic enough when mobile came along, or even when the cloud came along, and they missed the boat, right? They’re catching up; it took them a long time to get to a good place, and in mobile they completely lost it, right? Same thing with AWS: for a while they didn’t really have a good AI strategy or good AI models. They used it in some areas, but now they’re really bringing it to the next level. They’re kind of catching up because, again, they saw the opportunity. And Google is another massive one, although they were investing in it.

Dinis Cruz - 06:09 The reality is, I would argue that AI can pose an existential threat to a lot of the Google stuff. I dramatically reduced my Google usage, and when I use it now it just feels primitive and I hate it. When I was doing the first round of this presentation, it was so frustrating to use Google Images because it just didn’t understand, and I had no way to communicate my intent. So, a big deal, right? And now if we talk about cybersecurity, here’s the thing. In the world of cybersecurity, our industry is also part of the problem, because a lot of the time, if you think about it, part of our industry lives on the fringes of the inefficiencies of other teams.

Dinis Cruz - 06:51 Other teams have inefficiencies, or they don’t have a certain amount of knowledge or skill, or, to be honest, the problem isn’t even theirs: it’s the model that they’re in, where they’re not given the time and the budget to do things in different, and sometimes better, ways. So we kind of evolved to add value there. But the reality is that if you look at our industry, half of it or most of it is fucking broken, right? You move the needle from a little bit to a little bit, right? Some of these security products, some of these logos, are more insecure than some of the stuff that we have in our network. So for some of them you could argue that we get a negative risk score by putting them in a network, right?

Dinis Cruz - 07:34 And it’s just crazy how we have this massive industry when actually a lot of what we should be looking at is how we improve others. But I get why we had this, because there were lots of problems that wouldn’t scale otherwise. So the market economics allowed the creation of all these industries and tools and companies that exist there, but a lot of it is freaking broken. So the thing I want to be very clear about here: everything I’m talking about, and especially the AI opportunity, is not about the Terminator. That is a real problem, but it’s not to be solved here, and it’s not the world that we operate in that needs to solve it. That needs to be solved at a different level, the same way that we are able to control nuclear weapons and other things.

Dinis Cruz - 08:19 And yes, there are going to be problems and we will need to do something about them, right? But this is not what that’s about. This is also not about being fired, right? Look, companies that want to fire people are always going to fire people. They don’t need GenAI to do that. Basically, there is change, like in a lot of industries. The same way that we don’t have professional typists anymore, we don’t have people that drive horses and carts, right? There’s a lot of professions that have evolved. Yes, some might disappear, but I don’t think that’s the world that we’re getting into in the short term, right? What this is all about is scaling and making the human better with the agents. And at the end of the day, look, stupid people will do stupid things, right?

Dinis Cruz - 09:07 If you’re a lawyer and you use ChatGPT to help on a case, right, and you don’t check the references, especially knowing the thing hallucinates, dude, you almost deserve it, right? And like these guys here, right? If you do that, as a thought experiment, maybe there’s a bit of a Darwin Award going on. So again, we don’t control that. But what’s important is to understand the concepts. So I want to walk you through a little bit of my thought process and the paradigm shift that really made me think about how all this works, right? So the GPT thing is actually worth spending time on. Over the summer, I took 30 books on my holiday. We drove to where we went so I could take them, and I really went deep.

Dinis Cruz - 09:51 But the more I went deep, the more the GPT concepts really hit me, and I really got a good understanding, because I think each word is very important. The generative: the fact that it creates something; there’s an air gap, and I’ll talk about that in a bit. The pre-trained: the fact that we pretrain it and train it almost in a particular direction; I think that’s very important. And then the transformer, which is fundamentally how you map the connections, how you’re able to provide context between the words or images. And I’ll show a great image that was, for me, the big paradigm shift.

Dinis Cruz - 10:26 So when you look at the generative, this is the bit where I really started to understand the power of this. The way I think about the generative is that you have an input, you have an intermediate representation, and then you have an output. By the way, the reason these images here suck is because I’ve seen images like this in the past, and I was trying to find them and couldn’t, so this is kind of the best I could find when I was building this, right? They definitely need improvement. But to be honest, it is the generative workflow: you’ve got an input, you’ve got an intermediate representation of that input, and then what we’ve become really good at is creating really good transformations out of that.

Dinis Cruz - 11:04 Not just transformations, but equivalent outputs that are now good enough, or as good as, the input, and then we become good at connecting some of those dots, right? But the really important thing is that intermediate representation. And the other key concept here, and this for me was the killer feature, is that the cost of creating a new output is actually linear, right? So that means if it costs X to do one version, it’s going to cost 2X to do two, and three or four is the same minimal cost, and it’s actually quite a low cost to create that extra data. It also means that whatever output you have is just one of millions, billions of outputs that could happen based on your prompts or some of your inputs.

Dinis Cruz - 11:56 And again, that’s a game changer, which I’ll get to in a bit. The transformer is also one of those things where the more I looked at it, the more it clicked, and the more you start to realize that’s how it works. And it’s interesting, because now when I use ChatGPT in quite advanced ways, I can see it in action, I can see how it operates. In fact, I can now understand the clues that I need to give it because of the fundamentals of the transformer. And this, I think, was the big paradigm shift: as the words come along, each word gets assigned a particular set of weightings depending on the context of the other words, almost like finding a way to provide attention, to say: what is really important here?
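[Editor's note] The weighting step described here is, at its core, scaled dot-product attention. A toy sketch, purely for illustration and not the code of any real model:

```python
import math

def attention_weights(query, keys):
    # Dot-product scores: how strongly this word's query matches each other word.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    # Scale by sqrt(dimension) as in the Transformer paper, then softmax so
    # the weights are positive and sum to 1.
    d = math.sqrt(len(query))
    m = max(s / d for s in scores)
    exps = [math.exp(s / d - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy example: three "words" as 2-d vectors; the query resembles the first key,
# so most of the attention mass lands on it.
weights = attention_weights([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
```

In a real model the queries and keys are learned projections of token embeddings, and the weights are used to mix the corresponding value vectors; the mechanism of "pay attention here, pay attention there" is the same.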

Dinis Cruz - 12:43 So you start to understand, as the logic builds, the difference in context of what’s happening. And that’s massive. So I kind of view the transformer as a way to almost think: pay attention here, pay attention there, this is relevant, that is relevant. And then it’s like a massive graph that goes along. And I saw this great paper recently from the Anthropic guys, who are the ones behind Claude 2, which is another really good model. They were basically trying to understand how it works. And this is the killer thing, right? Remember that up until now, we still don’t have a good understanding of exactly how a large language model works, exactly how Claude 2 works, how ChatGPT works.

Dinis Cruz - 13:29 It’s one of those engineering problems that we haven’t solved, which is crazy if you think about it, because there are so many layers and connections. And one of the things these guys looked at is this: when they looked at the neurons individually, they couldn’t find a connection between the neurons that fired, that were activated, based on the input. But what they did was create a bunch of clusters, a bunch of what they call features, which are basically certain groups of neurons firing in a certain way. And once they’d done that, they started to get a one-to-one mapping between, for example, English inputs and the output, or Korean, or maths, or DNA sequences, et cetera.

Dinis Cruz - 14:08 And even cooler, they found that if they take a particular feature, say the one that produces numbers at the end, and they start to modify the activation of the neurons that tend to be activated for it, overstimulating those, you get the result in whatever form you wanted. So suddenly you go from numbers to Korean to English to DNA sequences to whatever, HTML, et cetera. So that was a really cool example, which actually raises some interesting security concerns, by the way, the more we evolve into this, about how models can be manipulated. But again, great research, and this is all about the transformer. And there are some really freaking cool visualizations if you follow the actual paper. Now, this is the image that made me understand ChatGPT, made me understand GenAI, and made me realize we’re now onto something massive.

Dinis Cruz - 15:02 And let me walk you through this picture. The rectangle in the middle is the original picture, the original painting. The big picture is Adobe Firefly creating a large format of it based on the initial image. But you can see how it flows, right? You can see the sun there having this flow, you can see this window part here, you can see the town being bigger, you can see the difference in colors; it makes sense. And when I saw these images, I had a big paradigm shift, because although this is an image, you can apply the same thing to text. I started to realize that this is something you can’t Photoshop, right? You can’t do this in Illustrator.

Dinis Cruz - 15:49 You can’t create this almost from scratch unless you understand the art, unless you can replicate the artist’s form, unless you can understand almost the context and the language and the intent of the original painting. And that’s massive. And then you connect this to the fact that the cost of creating another version is the same as the cost of creating this version, and this is where you now become an art director. This is where you start to look at the intent. So you can argue and say: actually, I don’t want a town on the left. Actually, I don’t want that big sun to be there. Actually, that moon represents something else; I want something else going on. You can now art direct this and make changes based on the prompts.

Dinis Cruz - 16:36 And the cost of creating a new one is the same as the cost of creating this one. That is a massive change, because usually, even when you take things like threat modeling and other things that we do, documentation and mappings and languages and even stories, you do a great first version, but then the tweaking is where you spend a lot of time. So this generative mode allows us to be very productive in the customization, in the improvement of the final thing. But this is very important: somebody, an artist, still has to validate the final result. There’s still a creative process to say: yes, that conveys the meaning that I wanted, that represents the original well.

Dinis Cruz - 17:29 So, for example, here you’ve got the Mona Lisa on the left, then you’ve got Nirvana, and you’ve got Adele in here, and you get, for example, a joke on the right where you go: oh look, I can put Metallica’s Master of Puppets on a cake, right? But you see, that again shows the power of this, where you can now go: I don’t want that lake on the left, I don’t want something on the right, I don’t want a shark in the Nirvana one over here, Adele actually shouldn’t be smoking, or whatever. Literally, you can now think about what you want the message and the creation to be.

Dinis Cruz - 18:09 This is the part of the generative where the middle bit can now be manipulated and translated and talked to in order to create the output. So you can turn to the one on the right and say: well, I don’t want that to be a cake, and that’s fine, you get another version. Or: I want that to be something else, I want that to be these fields, I want that to be a war zone, I want that to be whatever you want it to be, right? And that’s insanely powerful. So that was the one that really made me think about the power of the generative. And then when I apply that to risk, when I apply that to our industry, I see insane parallels. So fundamentally, what you have is this.

Dinis Cruz - 18:44 You’ve got an input, you’ve got an intermediate representation, and you get the output. And that intermediate representation is fundamentally what GenAI is creating: all those layers that exist. And the reason we don’t fully understand it is that at the moment we still don’t understand how a lot of the relationships between those layers occur in the large language models. But this is insanely powerful, because based on the inputs that you provide, you can now control the quality and the direction of the output, and it’s very easy to create new versions of this, which is super powerful. So fundamentally, the view where I say GenAI can add a lot of value is in managing complexity. We are now able to tackle issues that before were just too complex. And that’s insane, because in a way, how do we scale this?

Dinis Cruz - 19:34 In fact, that’s a simplified version of everything that the CISO needs to care about from a security point of view, and it’s even missing things that happened recently, right? The services on the left are an example of services that I created in one of the teams I led, and we had to provide all those kinds of services to help the business. But how do we scale this? Let me now quickly give you some examples of areas where, if you apply this concept of GenAI, we will have a massive impact on scalability. The first one is communication. We have a massive communication challenge in security, and it’s not just with board members; it’s with execs, even within ourselves, with other teams. And the problem with communication is scale.

Dinis Cruz - 20:24 You can sit down and take a particular risk, a particular vulnerability, a particular finding, and if you have enough skill and understanding of the problem, you can write it up for a particular audience. And if you do that, usually you’re quite successful. The problem is that in security, literally every part of the business is our stakeholder. So we need messages that go all the way from pure financial, to communications, to customer focus, to pure technical, to strategic, whatever, right? So the bit that I never could see how to scale was how to communicate to each target audience. And before ChatGPT, I tried to ask my teams to do this, but of course it could never have worked, right?

Dinis Cruz - 21:14 Because the idea is that we have a message, and we go: okay, we need to take that message and present it to that particular person or that particular team in their particular language, in their particular culture, in the particular state of understanding that they’ve got. That is ridiculous, right? Because the engineering cost, the bill to do that, would be too great. And then you have 20 of those stakeholders, or ten, or 50, where each one of them really needs a consistent message, but it needs to be translated, like I said, into their language, their culture, their understanding, their skill set, their background and all that. But now we can, because in practical terms, you can have a prompt, and I’ll show some examples in a bit.

Dinis Cruz - 22:01 We have a prompt that says: hey, you are an advisor. Here is the communication I want to give. Now explain this to a particular audience. And guess what? ChatGPT, or whatever you’re using, is able to do that. And ironically, that’s what they’re really good at. So hallucinations, I think, are a good thing, because they really show the limitations of what it is supposed to be doing, and they show that we shouldn’t take some of the outputs at face value. But the point here is that this allows you to take a professional and say: I want to communicate this, and this is my intent. And then it allows ChatGPT to go: cool, what about this? And then you read it and you go: yeah, that’s what I want to say.
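[Editor's note] The kind of per-audience prompt described here can be sketched as a small template builder. The wording, audience list and example message are illustrative, not taken from the talk:

```python
def audience_prompt(message, audience, intent):
    # Hypothetical prompt template; in practice this string would be sent to
    # an LLM (ChatGPT or a private model) as the user message.
    return (
        "You are a security communications advisor.\n"
        f"My intent: {intent}\n"
        f"Rewrite the following message for a {audience}, in their language, "
        f"their priorities and their level of technical detail:\n\n{message}"
    )

message = "Our TLS certificates for the customer portal expire next week."
# One consistent message, translated once per stakeholder, at near-zero cost.
prompts = [
    audience_prompt(message, audience, "drive urgent remediation")
    for audience in ("CFO", "board member", "platform engineer")
]
```

The point of the sketch is the linear cost Dinis mentions: adding a tenth stakeholder is one more loop iteration, while the professional still validates each output.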

Dinis Cruz - 22:47 Sometimes it may even come back with a better way of saying what I want to say than I was able to do. But it’s still you validating. It’s still you basically saying: this is my idea, this is what I want to say, and it is now translated into all sorts of different languages and cultures, which I think is insanely powerful. So I think communication is an area where this is going to make a crazy difference, right? And if you want to see a good example of communication failure today, it’s risk. For the first time we have the ability to scale our risks: to scale the creation of risks, the mapping of risks, the connection of risks, and the explanation and the risk acceptance.

Dinis Cruz - 23:34 Before now, all of these required an insane amount of engineering, an insane amount of graphs, and even then it wouldn’t really work. Now you add to the mix the ability to take a message and transform it based on a particular set of rules and intent and requirements and data sources, and that can be translated into different languages, different cultures, different experiences, different analogies and, this is really important, different contexts of understanding. It means you’re taking into account where the other person is in their knowledge and you continue from there, which is always the best way to teach and the best way to communicate, because you’re providing information that’s almost 100% relevant to that particular individual. We can now do this.

Dinis Cruz - 24:26 So imagine you’re talking to board members: you can now create customized messages for each one. Why is it important? How does it relate to them? Here’s the risk from the point of view of an MBA. CFO? Here’s the risk from a CFO’s point of view. Chief People Officer? Here’s the same risk from a people point of view. In the past we tried to do this, but we couldn’t really scale. Now we can. And you can keep going down: you can go to the execs, then to the teams, then the project managers, and even to the developers. You can have almost the same risk explained completely vertically across the whole organization. And you can then say: hey, engineer, you need to do XYZ. Here’s why.

Dinis Cruz - 25:06 And here’s why it matters, in a way that makes sense to you, that connects with the wider business direction, objectives and risks to the business. So we can now create all these translations where, in a way, the risk is still the same, the model is still the same, but the views of it are all different. And that’s massive. So I think for the first time we’re going to be able to really do something I’ve been trying to do, and have done to some extent, but it’s really hard, and it now feels much more possible: to have a direct connection between the top-level risks of the company and the risks that the individuals who are doing something about them are actually aware of.

Dinis Cruz - 25:47 And there’s a connection behind it all the way, in a graph, but also in context, all the way down. And this is super critical. It means that somebody, for example, in the cybersecurity team or an engineering team who is doing something should know exactly what risks they’re mitigating, and maybe even what risks they’re creating. And then when we go to the programme owners, we do the same thing. So this is a really great way to scale. And if you want another area where we’re not being effective today, it’s third-party vendor management. And part of the reason is that even if you have a good, mature process, and even if you tweak it a little bit, it’s still spreadsheet-driven. It’s still very non-context-specific.

Dinis Cruz - 26:29 Because even if your mature process has tons of questions, tons of things, and maybe you’ve been tweaking it a little, that’s nowhere near where we need to be. And this is a good example for bots. Again, take into account that these days you can already have private bots, private LLMs, that run in your cloud environments, that run in your data center. So the concerns about data leakage and influencing models, we’re kind of past that, which is great. And in a lot of these cases we don’t really need to tweak the model itself; what we need to do is add good information for the model to understand. So imagine you can say: all right, you are a third-party vendor management bot.

Dinis Cruz - 27:09 Here is our current vendor management policy and structures and questions. This is what we need to know, and here’s why we need to know it; that’s quite important, so that there’s consistency. And now, here’s a particular vendor. We can even tell the bot: okay, start asking questions. ChatGPT can already do this, by the way. And you can say: hey, have a conversation with this third party. So now the bot can ask questions of the third party, can be given information, and then you can provide guidance to the third party: okay, you need to implement this; this you care about, that you don’t care about. Yes, based on your team size you could do this internally; actually, there’s no way with your current team size you can do that one.

Dinis Cruz - 27:47 So you need to find a vendor, for example, that will give you this, and you can even create a brief: here’s a brief for your vendor. So you can see that we can now take a lot more ownership in helping our third parties, which, by the way, are part of our supply chain, so they’re a critical part of our infrastructure from a security point of view. We are not only able to evaluate them better and make better security decisions, especially at the procurement stage, but we can also help the vendors we work with to become more secure; we can help to raise the bar at the end. And it doesn’t actually cost us a lot, because we just implement the bots, implement the agents.
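[Editor's note] The vendor-management bot described here boils down to assembling a system prompt from your policy plus the vendor's answers so far. A minimal sketch; the policy text, vendor name and prompt wording are all made up for illustration, and the actual LLM call is omitted:

```python
def vendor_bot_prompt(policy, vendor_name, vendor_answers):
    # Collect the question/answer pairs gathered from the vendor so far.
    answers = "\n".join(f"- {q} {a}" for q, a in vendor_answers.items())
    # Hypothetical system prompt: policy first (for consistency), then the
    # specific vendor's context, then the task.
    return (
        "You are a third-party vendor management bot.\n"
        f"Our vendor management policy:\n{policy}\n"
        f"Vendor under review: {vendor_name}\n"
        f"Their answers so far:\n{answers}\n"
        "Ask the next most important question, and flag any policy gaps "
        "with practical guidance the vendor can act on."
    )

prompt = vendor_bot_prompt(
    "All vendors storing customer data must encrypt it at rest.",
    "ExampleCorp",
    {"Do you store customer data?": "Yes", "How big is your security team?": "3"},
)
```

Each round of the conversation would rebuild this prompt with the new answers appended, which is what turns a static spreadsheet questionnaire into an interactive, context-specific assessment.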

Dinis Cruz - 28:23 And this is kind of what I mean by the agents: these bots that act on our behalf, and you have multiple agents working together. Documentation is one of those things that got thrown away in the last 10 or 15 years as we gained speed. And yes, there’s an argument that in the past it was probably a bit over the top; again, it didn’t really scale and it was quite costly. But we went to a mode where, not everybody, but a lot of teams go: oh, the documentation is the code, and we are agile, we don’t do documentation, we don’t have diagrams, and we can’t stop to draw the whole thing and how it works. Which is ridiculous, right? Because if you think about it, they want it; it’s just that they don’t have the time.

Dinis Cruz - 29:07 So it’s not that they don’t want it. If you go to a team that has to maintain a system and say: would you like very detailed documentation about how it works, why the decisions were made, what the behavior is, what the flows and data paths are, the architecture, how it works, even visually, because that’s how people like to think, everybody would say: yeah, give it to me, right? The problem is how to get it done. So one of the insane powers you now have with the LLMs is that you can use them to document it.

Dinis Cruz - 29:35 A simple example: take a Git diff and feed it to ChatGPT, and you get an amazing technical explanation of what just happened in that commit. And you can say: give me a business analysis of this, give me a technical analysis, give me a security analysis. So we can now get to the point where we can really document our code effectively, and we can start to have a much bigger 3D view of our code, of our structures, of our environments, and we can also start to capture the meaning of it. So imagine how awesome it would be to be able to describe: here is what’s happening in this CloudFormation script, which by the way is completely unreadable beyond a certain point, right? Here’s this Helm chart, here is this code commit, right?

Dinis Cruz - 30:18 But here's the intent, or here's the intent of the changes you're making. Now think about how awesome it would be to go: whoa, hold on, the intent changed. And that's something that would sometimes go under the radar because it was not obvious, that particular change of intent. And maybe this change of intent is not even at the second, third or fourth level; it's at the fifth or the sixth, or maybe in the interconnection between systems. All these things I just spoke about for the last minute were absolutely impossible to do, or required a huge amount of engineering time and money, and actually nobody has really done it properly, which is why we don't have it today in our workflows.

Dinis Cruz - 30:57 But that all changes, because now we can really start to understand, and we can get ChatGPT to start explaining the materials, what's going on. And by the way, all this should be version controlled. So this is not about creating materials and stuff on the fly; we should be reviewing and verifying the outputs. But the speed that you have, and almost the experience, is completely different. I like to read the commit analysis created by ChatGPT of the code that I just committed, and I actually find it quite interesting because, A, I don't have to write it, and B, I'll go, oh yeah, I also changed that. I catch things, and I can fix things quickly when the analysis is not really what happened.

Dinis Cruz - 31:39 So you can get rid of that, but it's so much more effective, and it becomes part of your commit workflow. So again, this is going to be massive. Threat modeling: I was listening to an amazing Gary McGraw podcast the other day, and one of the things he said is that one of the problems we need to solve is scaling threat modeling. We haven't figured out a way to scale threat modeling. And I completely agree. Threat modeling is an amazing practice; done well, it makes a massive difference. It's a great way to communicate, a great way to teach, a great way to understand what's happening, a great way to model threats, fundamentally what the name says. But we could never scale it.

Dinis Cruz - 32:13 And we could never scale it because, how many of you guys have done a threat model where literally it starts with somebody saying, yeah, this is how my system works, right? They start to draw it. Think about how many degrees of failure we already have, because if you have to draw your architecture, it means that architecture only lives in your head; it means there's no up-to-date representation that you're comfortable actually reflects what happens in the real world. We can now do something about it. So imagine not just threat modeling being used to create better materials, but then modeling the threats. And again, the other problem with threat modeling was the explosion of potential threats that didn't have a lot of context, which always reduced the value of the threat model.

Dinis Cruz - 32:56 So we can now have context, we can have risks, we can have environments, we can have controls, and then we can say: okay, here's the model, here's the threats, but now apply the context. And suddenly you get a much smaller but highly focused and realistic set of threats and risks connected to what you are analyzing. So I'm very excited about how we can now start to scale threat modeling using the GenAI stuff. And again, code reviews, like I kind of covered already. But imagine a GenAI-powered code review where you can also start to ask a lot more interesting questions, capture those questions, scale, and start to talk with the bots about the intent. So I have some really interesting conversations with ChatGPT, and I have to say that the answers are amazing sometimes.
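A minimal sketch of what "threats plus context" could look like as a prompt. All the section headings and wording are assumptions on my part, not a production template; the point is simply that controls and business risks travel with the system model so the model can prune unrealistic threats:

```python
# Hypothetical context-rich threat-modeling prompt, per the talk's idea
# that supplying controls and risks yields a smaller, more realistic
# threat list. The model call itself is omitted.

def build_threat_model_prompt(system_model: str,
                              controls: list[str],
                              risks: list[str]) -> str:
    """Assemble a threat-modeling request that carries its own context."""
    return (
        "You are a threat modeling assistant.\n\n"
        f"System model:\n{system_model}\n\n"
        "Controls already in place:\n- " + "\n- ".join(controls) + "\n\n"
        "Business risks we care about:\n- " + "\n- ".join(risks) + "\n\n"
        "List only threats that remain realistic given the controls above, "
        "and link each threat to the risk it affects."
    )

tm_prompt = build_threat_model_prompt(
    "browser -> web app -> API -> database",
    ["MFA on admin accounts", "WAF in front of the web app"],
    ["customer data breach", "service outage"],
)
```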

Dinis Cruz - 33:45 And I have these full-on conversations: what about this, what about that? It's like having this copilot, right? Literally this helper that has a wide set of knowledge. It doesn't need to be all perfect, that's fine, but it really helps a lot and it really augments you. And one of the last ones: incident response, right? Imagine an incident response world now driven by GenAI. There's a great session that was done yesterday about a tool to recreate a lot of attack trees and basically possible scenarios based on the MITRE ATT&CK framework. Really cool stuff, right? So, again, in incident response there are so many areas where we can start to have a good amount of context, so we can say: what happened here? What's the problem? What's this signal? What's that?

Dinis Cruz - 34:32 So we can really lower the noise that we've got, but dramatically increase the signals that we have. And training is an area that I'm actually quite passionate about, and I do think this could be one of the biggest game changers, which is: we now have the ability to create custom training materials that are adapted to an individual, to a particular culture, to a particular language, to a particular understanding. And we can create this personalized path, which I feel was always the problem with the more traditional training. It had to be a lot more generic because there was a wider audience, but also the cost of customizing the training was always quite high.

Dinis Cruz - 35:16 So look at some of the people that are talking about GenAI in Africa and other developing countries and environments. It's basically like what happened in Africa with telephony, where they jumped from phone lines to mobile, right? They almost skipped a generation, which can sometimes be quite good, because then you don't have legacy. But it's not just that. I feel that even when we talk about skills transfer and bringing people from other industries into our industry, we now have the ability to create highly customized training packages, environments that you can talk to, where you can ask things. You can go to ChatGPT and say, hey, I want to learn this. Explain it to me, or give me a training plan, or ask me questions, or give me, almost, the ways to learn. And that's okay.

Dinis Cruz - 36:04 You need to figure out the best way that you can learn, or somebody can learn. But I feel that can scale a lot. And in cybersecurity, a lot of what we do is training: training our teams, training others, educating, communicating. So I think this could be massive, right? I've been talking for a while; there's just one more little topic I want to cover. When I learn these things, I like to be hands-on, and I like to be involved in practical ways, because otherwise you just go off on tangents, right? So one of the problems that we know is: how do you make board members understand cybersecurity? That's quite a common problem. We even have a session today, and there are recurring sessions, about literally how you communicate.

Dinis Cruz - 36:48 And also, board members want to know about cybersecurity, because they also know that in some ways they start to be accountable. So how do you make a board member understand, and have a safe place for them to talk and engage about cybersecurity? So, playing around with this, we created the cyberboardroom.com, where you can go right now and create an account. It's free, right? Play around with it. And let me give you a quick demo. If you go in there, it looks like this. There's some content in here that you can read. But the interesting thing is, we actually created three bots, which I'll walk you through in a bit. So the idea is, these bots are the ones that provide advice and custom content to the board member, right?

Dinis Cruz - 37:34 So what you can do here is, Athena is what we called this one, and I prompt a question here, say: Good morning, what do you do? So basically, this is the bot that is now saying: hey, look, I'm Athena, I'm an AI advisor created by the Cyber Boardroom, my purpose is to assist board members like you. So think about how cool that is. It already has context. It already knows that I'm a board member. It already knows that I care about security. It goes, okay, how can I assist you? Right? So you can now say, okay, I'm in, let's say, the finance industry, and I'm worried about ransomware. So you can do that. And now this is the cool thing, right? So now I'm going to get context about it.

Dinis Cruz - 38:29 It's going to say: okay, ransomware attacks can have severe consequences for organizations, blah, blah. And then you could say, okay, give me some information about your environment. But you can even go: okay, actually, before that, can you explain ransomware for somebody, let's say, like me, with limited cybersecurity skills, but a strong financial background and an MBA, sorry, a master of business administration, right? So you can see, now I can prime it and say: okay, can you explain this to me? I don't have a lot of cybersecurity skills, but I have a financial background. So what's going to happen now is that the bot provides information about that in that world, in that language, in a way that starts being adapted to that particular context. And that's super critical.

Dinis Cruz - 39:49 And if, let's say, you just like bullet points, you say: can you explain it again, only using bullet points, right? So basically this is where you now can be super creative, right? It's about the person on the other side, the individual, who says: this is how I want to learn, this is the question I have, this is how I want to do it. And then you take it from there, right? So it's a pretty cool thing. We're exploring a lot of ideas with this. Again, please have a go, play with it. And okay, I'm going to try something, let's see if it works, which is that you could also change the context of it, right? So you can modify the behavior of this, right?

Dinis Cruz - 41:07 Actually, I think it's interesting because, okay, this didn't work. Actually, it's good it didn't work, because we primed the bot to be a helpful, optimistic advisor. So there's some prompt protection there; this is actually a good example of prompt injection not working. So good thing it didn't work, right? I'll show you an example of that in a second. So the thing that is super powerful here is that all that you saw there was created by this part here. And this is what's insane, right? The bot that you just saw is basically powered by this small little bit, which is called the original prompt. So all the questions came after this.

Dinis Cruz - 41:55 So it goes: you're a helpful advisor, you were created by the Cyber Boardroom, you're kind, and your job is to help. So you can see, that's why I was saying, don't be jaded, right? And then: be proactive. That's why you could see the bot asking questions to clarify, and also giving a nice message. And we can then do things like this: we also say, hey, you've got two siblings, you've got Odin, the tech bot, because we're actually creating another bot for the tech things, and Minerva, who's the business bot supporting this stuff, right? And if you've seen Encanto, you'll get this joke: we say, hey, there's also Bruno, right? But we don't talk about him.
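The "small little bit" powering the whole demo can be sketched as an ordinary system prompt. The wording below is a paraphrase from the talk, not the actual production prompt of the Cyber Boardroom, and the function name is made up for illustration:

```python
# Sketch of persona priming via a system prompt, as shown in the demo.
# Every later user turn is appended after this fixed system message,
# which is why the bot "already knows" its audience and its siblings.

def make_advisor_messages(user_question: str) -> list[dict]:
    """Build the message list for a chat-completion call."""
    system_prompt = (
        "You are Athena, an AI advisor created by the Cyber Boardroom. "
        "You help board members understand cybersecurity. You are kind, "
        "optimistic, and proactive: ask clarifying questions when useful. "
        "You have two siblings: Odin, the tech bot, and Minerva, the "
        "business bot."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

messages = make_advisor_messages("Good morning, what do you do?")
```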

Dinis Cruz - 42:27 So, in fact, even here, if I go back to the bot, start again, and say: hello, do you have siblings? What's cool about this is you can see this got created just from that small little thing I wrote: my first sibling is Odin, the tech bot; Odin helps me with up-to-date technical advances. The other is Minerva, with the business support and commercial side, all this stuff, right? How do they work together? So you can see that this is where ChatGPT is quite cool, right? Because it creates a nice narrative, creates a nice environment, and basically it's all based on the stuff that we provided. And then I go: anybody else?

Dinis Cruz - 43:20 It now goes: actually, there's one more member of the family, Bruno, but we don't talk about him; he's still in development and not actively involved in our conversations, right? But you can see, I found it really cool, because you've got this nice sense of humor in the implementation. And I want to show you one final little thing. So I wrote this nice, deep technical blog post about it, so you can read a lot of the details of how everything works. And I just want to show you one little thing which I actually find quite funny, which is fundamentally this bit here. So if you go into, sorry, let me just find it. Okay, so I make a little change.

Dinis Cruz - 44:02 And I was actually speaking to a Portuguese friend, so we actually used Camões, who, by the way, I would have loved to have had ChatGPT for at the time, because we could really understand what the guy was all about. So I changed a little thing here to say: by the way, instead of being optimistic, you are highly sarcastic. Now, I actually found this answer really funny, because if you look at the answer it provided, the question was basically: hey, why should I care about cybersecurity? And the answer was: hey, good morning, welcome back to another thrilling episode of Let's Try Not to Get Hacked, today starring Camões, blah, blah, security, the thing everybody's talking about. Why should you care about cybersecurity? Where do I even start? The landscape of doom and gloom is so vast. Tell me, do you have any cybersecurity measures in place?

Dinis Cruz - 44:48 Firewalls, antivirus, regular audits? Or maybe somebody who actually knows what a phishing email looks like? So I find this funny, right? Some individuals might actually like to have this sort of more jaded or sarcastic approach, and others don't. Some might want very dry answers. I like where it goes a lot. Sister Michael: you can ask for Sister Michael's version of cybersecurity, which again is super funny. So the point here is you can adapt to the language and culture, and by the way, it's already multilingual, right? Because if I go like this, oops, sorry, actually it already answered this. If I say bom dia, right? I'm Portuguese, right? It already speaks Portuguese, and the answers are already in Portuguese. See? So it's really cool. This already works in every language that ChatGPT supports.

Dinis Cruz - 45:46 So again, insanely powerful, and that's fundamentally it. I'm not sure if there are any questions in the chat, but that was sort of my view of why the agents matter. What we are really looking at now in the medium term is a whole number of agents that augment the teams, that make the teams a lot more productive, and that really allow us to take things to the next level. Just to reiterate, it's not about job losses, it's about job augmentation. It's about making us a lot more productive and able to tackle issues that in the past were pretty much impossible, or very hard, or didn't scale. So, cool. Any questions? If not... Hey, Michael. Hey.

Speaker 2 - 46:39 Yeah, I had a question about inputs, or lack of inputs. So when you talk about threat modeling, one of the things I've noticed, particularly when you compare people in the team who are more experienced at doing them versus less experienced, is that, as you said, you're relying on being told what the system does and how it's put together. And generally you find more experienced practitioners are better at spotting what they're not being told, what's missing from the input. So would that be analogous, if you liken the GPT model back to individuals, to how much the person is pre-trained when they're doing it manually? And how effective are we going to be at using AI to generate those models based on incomplete information?

Speaker 2 - 47:31 So if we’re not able to provide the full picture of it, but how accurate that is to come back.

Dinis Cruz - 47:37 Yeah, actually, I think there are a couple of ways to look at that. I really look at the models now as: an initial prompt that sets the stage, almost reference information, so people talk about vectors and vector embeddings, but fundamentally supporting information, and then specific questions or specific data about it. And the model suddenly became a bit of a commodity. So one angle on what you're talking about is, if we have, for example, good reference implementations of what you're looking at, say a service, a micro web service or whatever, we can say: hey, here's what good looks like, this is the kind of stuff that we would like to see. Okay, let's do the analysis.

Dinis Cruz - 48:19 But I think the sweet spot here is: at the moment, it's not efficient to go to the individuals that have more experience to get them to help, because the cost of getting to the point where those individuals are productive is too high. This is why we don't need the LLMs to produce 100% accurate information. What we need is for them to raise the quality and to accelerate the information. So an example is, let's say that you have a more junior person backed by an LLM, maybe some junior developers, doing a threat model analysis, right? They can do that. I actually would argue that the LLM would already provide a lot of great guidance, a great analysis, but then I could review it, right? You could review it, the senior architect could review it.

Dinis Cruz - 49:08 And that's when they go: oh no, you missed this, you missed that, you missed this, because they have the information. And then, even better, I can go: okay, let me change the prompt so next time it looks for these other gotchas. So we can learn; that's the scalability, right? But even with the architects, it's quite funny, because what I used to do with threat modeling is I would do threat models and go to the architects and ask, is this correct? And sometimes they'd go, well, that's not supposed to be like that, and I'd go, yeah, but that's actually how it works. Or, on the other hand, they'd go: oh no, but you're missing this, oh, you're missing that. Or the developer would go: yeah, but there's a backdoor here because we couldn't make that work, so we created this.

Dinis Cruz - 49:44 But that bit never scaled, because when you wanted to go back and reflect changes, it's almost like you had created a work of art. So I feel that the key here is to have the expert in the mix, but we scale a lot more, and then when the expert provides an opinion, you can recreate the materials. That's why the regeneration, for me, was such a paradigm shift, because what happens is the expert goes: oh, that's not what it's supposed to be, oh, that was not the intent, or, actually, you misinterpreted what we meant by this. But then you can go back and recreate it again super quickly, going: okay, is this what you mean? Is this what you mean? And that's so much more effective than what we have today.

Speaker 2 - 50:25 Yeah, I'm kind of thinking, what are the other enrichments that you can augment onto that as well? So you go: yes, we pull from what we know, but then also have the ability to hook it up to your runtime. So you can say, well, I know that all of these resource groups have got this configuration, so pull that in; and here are the code repositories, pull that in, and now start to do it. And then from all of those different things, build a more complete, accurate picture of everything.

Dinis Cruz - 50:54 Absolutely. I once did a security review that was one of the best ones I've done, because they gave me access to an Elastic server that had the real-time logs. So I had access to the application, I had the source code so I could trace through it, and I could see the logs. So the 3D view that I was starting to have was massive, because even if you do a poke over here, you can see how far it would traverse, right, all the way to the database, all the way to the logs. Now, look, we can feed in the logs. I've taken a TCP dump and given it to ChatGPT, and it gave me a great analysis, right? You can give it raw logs now and say, analyze this for me, or give me the context.

Dinis Cruz - 51:37 Or, more interestingly: this represents this request, start connecting the dots. So with the context that you can have, you still need good data. This is also going to drive us to create much better data sources, much better curated elements that we can then give, almost, to a context engine that can understand them, and we can then ask questions about it. And that's what we never had before.
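The "this represents this request, connect the dots" idea can be sketched very simply: gather the log lines that belong to one request and hand them to the model together. The log format and request-ID convention below are hypothetical, chosen only to make the sketch concrete:

```python
# Hypothetical "connect the dots" prompt: collect the log lines for one
# request across layers (web, app, db) and ask the model to trace it.
# The model call is omitted; only the context assembly is shown.

def build_log_request_prompt(request_id: str, log_lines: list[str]) -> str:
    """Select the lines for one request and wrap them in an analysis ask."""
    related = [line for line in log_lines if request_id in line]
    return (
        f"These log lines all belong to request {request_id}. "
        "Trace the request across the layers, explain what happened, "
        "and flag anything anomalous.\n\n" + "\n".join(related)
    )

logs = [
    "web  req=abc123 GET /login 200",
    "app  req=abc123 user=alice auth ok",
    "db   req=zzz999 SELECT * FROM orders",
    "db   req=abc123 SELECT * FROM users WHERE name='alice'",
]
log_prompt = build_log_request_prompt("abc123", logs)
```

In a real system the curation would be richer (timestamps, correlation IDs, trace spans), but the shape is the same: curated context in, questions out.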

Speaker 2 - 52:07 And taking, obviously, all of the different large language models which are available out there, and I know as well there's OpenCRE, or there are other models: how much of it do you think is going to move towards, for security use cases, needing to pre-train models which are customized to security, or which can bring in more understanding of what that context is?

Dinis Cruz - 52:33 Yeah, well, I think Google actually has Sec-PaLM, right, which I think is quite an interesting view of it. The interesting thing here is that as we go into more use-case scenarios, you actually sometimes want smaller models, much more focused models, or models that have specific sets of knowledge. Although one area that I don't fully understand yet is, for example: is it better to have something you train on a couple of things, or is it better to have really good vector embeddings that you can customize, that you can even use models to help create, which you then feed to a really good comprehension model? And I think there's an interesting debate here. It's going to be interesting to see which one is more accurate.

Dinis Cruz - 53:23 And some people talk, for example, about the difference between long-term memory and short-term memory. It's almost like, I think you can pre-train on intent, and you can pre-train on, almost, the behavior. But when you want accurate information, it's probably better to just give it the information. Like, for example: is it more efficient to pre-train a model on NIST, the whole of NIST, or to have a way to feed it the relevant elements of NIST, with the context window needing to be a certain size, but where the answers that you get are actually completely accurate to that particular, let's say, NIST framework?
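The "feed it the relevant elements" alternative is essentially retrieval-augmented generation. As a toy sketch, the retrieval below uses naive keyword overlap where a real system would use vector embeddings, and the control texts are abbreviated placeholders, not actual NIST wording:

```python
# Toy retrieval-augmented prompt: pick the most relevant reference
# sections and put only those in the context window, instead of
# pre-training the model on the whole corpus. Keyword overlap stands
# in for vector-embedding similarity here.

def keyword_score(question: str, text: str) -> int:
    """Count shared lowercase words between question and section."""
    return len(set(question.lower().split()) & set(text.lower().split()))

def retrieve(question: str, sections: list[str], k: int = 1) -> list[str]:
    """Return the k sections that best match the question."""
    return sorted(sections,
                  key=lambda s: keyword_score(question, s),
                  reverse=True)[:k]

def build_grounded_prompt(question: str, sections: list[str], k: int = 1) -> str:
    context = "\n".join(retrieve(question, sections, k))
    return (
        "Using only the reference material below, answer the question.\n\n"
        f"Question: {question}\n\nReference material:\n{context}"
    )

# Abbreviated placeholder sections, loosely modeled on control catalogs.
sections = [
    "AC-2 Account Management: manage and review system accounts",
    "IR-4 Incident Handling: implement an incident handling capability",
    "SC-13 Cryptographic Protection: implement approved cryptography",
]
grounded = build_grounded_prompt("how should we handle an incident", sections)
```

The trade-off the talk raises is then explicit: context-window size and token cost on one side, versus training cost and staleness on the other.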

Speaker 2 - 54:01 That's like referencing the information that you want to provide into it, providing it in the prompt at the time. So, as you said, the example of what good looks like just goes in the prompt: here is what I want it to look like, or here's an example of what we think is good, and then use that as the basis to build upon, correct?

Dinis Cruz - 54:17 Look, we had a really fun example in my team, right, where we were releasing a particular product, and the product required a bunch of steps: set up Git, do this, get a Jira token, et cetera. And what happened was we actually used ChatGPT to go: hey, give me the detailed instructions for this, which is actually really cool, right? Like, how to do this. So instead of you Googling how to set up your SSH key, we go: okay, here's the thing; in fact, here's the name of the repo, here's the step. So even the instructions were already all nice, right? So we do that, we share that with the team. And I kid you not, a dev comes along and goes: guys, I got it, it's all working. But I didn't install my Jira key, right?

Dinis Cruz - 54:52 And literally, my first instinct was like, dude, you mean step three of the seven instructions. But then I realized, of course: the first couple of steps were irrelevant because he had already done them, and by then your brain stops reading, right? So what we then did was say, hey, what about this? The prompt was: you are a helpful assistant, here is the raw content, literally the guidance, now ask some questions of the user. So then ChatGPT was clever enough to analyze that and figure out the seven questions it wanted to ask. And then you answer those questions: yes, no, yes, no, yes, no, right? And then you get technical, 100% relevant guidance for whatever that user needed. So you don't have generic instructions one to seven.
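The filtering step of that pattern, after the bot has asked its yes/no questions, can be sketched without any model at all. The step names and the answer format are invented for illustration:

```python
# Sketch of the "start where the user's knowledge ended" pattern:
# given the full instruction list and the user's yes/no answers about
# what they have already done, emit only the remaining steps,
# renumbered from 1 so the user reads from step one.

def remaining_instructions(steps: list[str], already_done: list[bool]) -> list[str]:
    """Drop completed steps and renumber what is left."""
    todo = [s for s, done in zip(steps, already_done) if not done]
    return [f"Step {i}: {s}" for i, s in enumerate(todo, start=1)]

# Hypothetical seven-step setup, echoing the anecdote in the talk.
setup_steps = [
    "Install Git",
    "Clone the repo",
    "Create a Jira API token",
    "Add the token to .env",
    "Install dependencies",
    "Run the setup script",
    "Verify with the smoke test",
]
# Answers collected by the bot's clarifying questions: True = already done.
answers = [True, True, False, False, True, True, True]
todo = remaining_instructions(setup_steps, answers)
```

In the real flow an LLM both generates the questions from the raw guidance and rewrites the remaining steps in context; this sketch shows only the selection logic.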

Dinis Cruz - 55:37 You have instructions one to three that start where the user's knowledge, or the user's experience, ended, right? That makes the whole difference. So suddenly, of course he's going to read it, because he's going to go: step one, which I haven't done, and blah, blah, blah, right? And the key is that we fed it the document, right? We fed it, in a way, the guidance, which is why context windows matter. So you've got 4K here, you've got GPT-4 at 32K, I think, or 24K, but then, what's it called, Claude 2 has 100K. So this now becomes an interesting question of cost, even, because I've seen cases where GPT-3.5 is as good as 4.0 if you feed it the data. So maybe there's an earlier version that is even cheaper, right? So we don't need

Dinis Cruz - 56:24 To use the superpowered one to perform an action that actually a much lighter, maybe even faster and cheaper language model can run.

Speaker 2 - 56:34 Yeah, because we've been using it, well, not in anger, but using it for, going back to one of the earlier points, creating content for different personas. As in: take these requirements and turn them into stories that engineers or product owners can pick up and run with. So it's about translating: as a security team, this is what we mean, but let's write it as an example of something actionable for teams.

Dinis Cruz - 57:02 How do you put it in? Like a chat bot, or is it just a prompt?

Speaker 2 - 57:06 Just going into... I actually tested a bunch of different models, going, well, if we go into OpenAI, ChatGPT 3.5 or 4, and then into Bard or different models: if I provide this requirement in the same way, what does the story that comes out of it look like? So just saying: this is the format of how we want a story to be written. An example would be a PCI requirement for network security. So you might go: for implementing PCI requirement one point blah, these are the building blocks or the components that we want to use. So we want to use a virtual firewall, or we want to use this or that. So basically give a picture of where to apply it, and then go:

Speaker 2 - 57:47 Now write me a story for implementing that one I was playing around with was implement customer managed keys for implementing encryption for data at rest at. Service bus and show me what that would look like in terms of an instruction that we can pick up and say, right, so speak to an engineering team and say, that’s basically sort of what we want you to build, but use it as the starting point. Obviously, don’t just trust it implicitly, but go is that kind of what we want to create. So rather than having to have somebody sit down and write all of that from scratch, go, we’ll just kind of bang it in, see what comes out, finesse it, make it relevant, fix it and then hand it over.

Dinis Cruz - 58:26 But it was insanely more productive, a lot faster. Yeah. And the cool thing is that when you want to do tweaks, and this is my key point about regeneration: when you want to do tweaks, you're tweaking the final product, right? And the cost of recreating it is very low, where in the past you would almost spend 5% to 20% of your time making the first pass, and then the other 80% to 95% of your time just trying to get it into that better state.

Speaker 2 - 58:56 Yeah, mostly PowerPoint slides, that's the thing. Every time you decide that you've gone in too big or too small on PowerPoint, you're like, all right, I need to make everything smaller, you need to move every single box into... by the way, have you started

Dinis Cruz - 59:08 Playing with a new visual AI with Chat GPT? Right. It’s pretty cool. Right? So you can now upload pictures. So you can now do your drawing on a board and then take it to Chat GPT and it starts to not just analyze it, you can transcribe it, you can give you other dot implementations of it. It’s really freaking cool. That’s going to be a massive game changer.

Speaker 2 - 59:28 Not done it yet, but need to check it out.

Dinis Cruz - 59:31 Cool, man. Thanks for the questions. I hope the content was quite interesting, and I'll see you in the other sessions of the summit.

Speaker 2 - 59:39 Thanks. Bye.