About this session
Enterprise vendors continue to discover new phishing scams and insert these new techniques into phishing simulations. Is this reasonable, and the most effective use of our users' time? If AI is so smart, why are we wasting users' time when there is evidence suggesting that users don't have time to be so watchful and careful? Shouldn't we stop stressing out our users?
Speaker 2 - 00:02 Hi, welcome to this Open Security Summit session in June 2023. And here we have Sarb doing another session. It's almost like a series now where he's looking at a lot of the user interactions. And I think this one is going to take an interesting twist on that with AI, where some of our users are actually smarter than it, and let's see how we can adopt the new technologies to make something work that wasn't working as well before.
Speaker 1 - 00:25 So, Sarb, over to you. Thank you very much.
Speaker 3 - 00:28 And it's an honor as always to be on these. Thanks for having me on. Just a quick starting point to let people know: this session is less about AI and more about users, about how users behave and how they don't behave. And I use myself as a typical example of a typical user, and you'll see that as we go through. So let's get started then. Okay, so the agenda: first of all, am I smarter than AI? Then after, I'll be looking at thinking fast, thinking slow; the NCSC data breach statistics; what is the number one problem or issue that they talk about; what wasn't in the statistics; what if AI had been used in those circumstances; the sort of advice we've been giving our users; and the cost of training and so on.
Speaker 3 - 01:27 And then going back to that question: is AI smarter than our users? So, first of all, am I smarter than AI, and how smart is AI? The main graphic there in that sort of montage is effectively the city and the head in the center, because basically our views of AI were based on movies and what AI could and could not do. It was computers that would be connected to everything and they'd be able to do things really fast. And then in later movies we've got things like the image on the left hand side, like a robot. And then you may be thinking we're in our infancy, and that image on the top right is maybe AI in its infancy. But the reality is, if we look at AI, and if we look at evolution, we are probably more like the spider, the arachnid. And really that is where I think we probably are.
Speaker 3 - 02:27 It is very early days, and if you were lucky enough to attend the sessions on AI that were run earlier by Dennis, that was a very good one on some of the early things that he's seen and how they've changed.
Speaker 1 - 02:45 And I think the changes are going.
Speaker 3 - 02:46 To come very quickly. So that’s really those things that AI can do. And our impressions of AI. If I move to next slide, you see I’m probably not smarter than AI. I do not have the capability to store all the sorts of books that are in the library. I do not have the capability to analyze lots and lots of information all in one go. I have not got the capability to be able to do complex mechanics and engineering calculations and certainly don’t have the sort of wider capabilities of being able to see and analyze lots and lots of different images from all over the place. And although I may not have those, I do consider myself as sort of smarter than a standard user. I’m not a standard user. I am an advanced use of Microsoft. I do use Microsoft Office in lots of different ways.
Speaker 3 - 03:46 I do touch type, I do lots and lots of things which standard users do not do. So I am slightly smarter than an ordinary standard user. But just because I'm slightly smarter than a standard user, all of that has been experience, all of that has been things that I've gained along the years; it's all my experience and what I've learned over a period of time. Effectively I've learned it because things stay the same, in some respects. What I would say is: what if everything didn't stay the same? What if things changed, and they changed quickly? Would we still be happy, like that second ball or balloon? Or would we be more like this girl, pretty disappointed and upset that things are not what we expected from them, and they are changing, and they're changing regularly? Well, if we're going to be that upset, angry, depressed, how is that going to affect us? And are we likely to move into that if things change in ways that we're not expecting them to change? What are we going to be more like? Well, I don't use that sort of language, but are we going to be more like that? And if we are tech savvy, are we going to be more like that?
Speaker 1 - 05:05 And the thing is, in terms of.
Speaker 3 - 05:08 How we do things and what we.
Speaker 1 - 05:10 Change and what we don’t change, what.
Speaker 3 - 05:12 I like to look at and how we’re used to in terms of interacting with our surroundings I’ve done.
Speaker 1 - 05:19 A lot of work, probably in the.
Speaker 3 - 05:22 Last year and a half, two years around user interfaces and looking at what makes and what doesn’t make a good user experience. Through that I came across lots of user experience, designers and sort of things that they take into consideration. They’re taking consideration the way we think, the way we behave, how people take lazy options. What sort of words, phrases help people complete whatever they’re trying to complete? What colors work, which ones don’t work. How to make things appear familiar. Where in the screen and the place should things be to make it appear far more familiar? And how can they make things consistent for us? So user design sort of experts, they actually really look at all of these sorts of things. Now if we didn’t have those user.
Speaker 1 - 06:09 Designs, designers looking at these things, some.
Speaker 3 - 06:13 Of the things that we take for.
Speaker 1 - 06:14 Granted wouldn’t be there.
Speaker 3 - 06:17 And there are certain ways that we’ve come to expect in terms of the interfaces that we use. Whether you use Windows platform, Apple platform, you’re talking about your mobile platforms, all of them have certain standards that they use and then they use them in certain way. And that enables us to learn and do what we try to do. And that ables us to take advantage of some of the things that we try to do. This is some of the research that goes back to many years. And there’s a book a couple of years back now, I think 2008 by Daniel Kahnman, which looked at fast thinking, slow thinking, and really we sort of class lots of different types of thinking into the two. I’ll start with the right hand side, the slow thinking. Slow thinking is very rarely done. It’s hard, it’s deliberate, it’s rational, and it is reliable.
Speaker 3 - 07:08 And that slow thinking, the best example I can give you off the top of my head is when you learn to drive. When you learn to drive, everything is a chore. You are looking at everything. You cannot have a conversation. You're trying to make sure that you're doing everything right. By the time you've passed your test and you've been driving for a few years, basically what you've done is you've changed your mode of thinking about how you're driving into fast thinking, because it's much more common in your driving and what you're doing. It's easy, it's automatic, it's emotional, it's error prone to an extent, but you do get through it. Basically, what Kahneman talked about in fast and slow thinking is that most of us, on an average day, tend to have these models that we've built for ourselves, where we interpret everything with fast thinking. And it's very important that we use these things, because when we're doing what we're trying to do, whether we're at work, whether we're users in front of our computers trying to do things, or anything else that we have in life, we try to do things in the fast way of thinking and try to come up with ways to reduce the burden on our brains of having to do slow thinking. Slow thinking is something that requires you to rethink what you're doing.
Speaker 1 - 08:31 And now I’m going to break there.
Speaker 3 - 08:33 And I’ll come back to this train of thought in a short while. What I want to do is just look at the NCSE Data Breach Survey from 2023. So that came out in April, so about two months ago. And one of the first few things that it covers is about the percentage of organizations that have carried out the following activities to identify cyber risks within the last twelve months. So it’s things like using specified specific tools designed for security monitoring, risk assessment tested staff, which includes mock phishing exercises. And if you look at that’s 19 and 16% there carried out cybersecurity vulnerability audits, penetration testing, invested in threat intelligence, but the one I’d stick to is the one in the middle in terms of testing staff. Next one trends over time compared to 22. The deployment of the various controls and procedures has fallen amongst businesses.
Speaker 3 - 09:33 And I’ve highlighted that one towards the bottom there. The agreed process for phishing emails is actually down from 57% to 48%. And notice that there’s 19% that use phishing exercises.
Speaker 1 - 09:51 So that’s down if you move to the next one.
Speaker 3 - 09:53 Again, I’ll highlight phishing because it is the biggest category there. So this is the percentage that have identified the following types of breaches or tax in the last twelve months among the organizations that have identified any breach or tax. So if you just take those, forget all the others. The highest and the biggest problem there.
Speaker 1 - 10:14 What’S being stated is phishing attacks.
Speaker 3 - 10:17 The next one down is others impersonating organizations in email or online. Now that’s 31%, so you’re going from 79 to 31, the second one down. So that identifies how big a problem.
Speaker 1 - 10:32 In that respect phishing attacks are.
Speaker 3 - 10:35 Next, among the organizations identified any breach or tax, approximately half of these business and 54% of charities say that they.
Speaker 1 - 10:44 Only experience phishing attacks and no other.
Speaker 3 - 10:47 Kind of breach or tax. This falls to a third amongst the largest businesses, which is 33% and a similar proportion amongst the medium size. Specifically medium and large organizations are more likely to report phishing attacks, 93% of large businesses and 84% medium businesses versus 79 of overall. That is quite a high percentage. Next, let’s go to percentage that report the foreign types of breaches or tax as the most disruptive, including the organizations have only identified phishing attacks in the last twelve months. That’s 59 and 64. Again fairly high. And the next one down is 35. And again, if you look at it’s roughly not the same, but the second one is in the 13. The point I’m making there, what constitute a crime? So this was quite interesting in the survey they define what they’re considered a crime. This survey covers multitude, so multiple forms of cybercrime.
Speaker 3 - 11:46 Towards the bottom you've got phishing attacks that individuals responded to, for example by opening an attachment, or that contained personal data about the recipient, and that did not lead to any further crimes being committed. But still, they've identified that as a crime.
Speaker 1 - 12:00 Now in the next section that they.
Speaker 3 - 12:02 Go through, and again, where fishing is.
Speaker 1 - 12:04 Mentioned, it covers all types of cybersecurity.
Speaker 3 - 12:08 Breaches or tax that resulted in a cybercrime. And it’s worth noting that most of the 11% of businesses and 8% of charities that identified cybercrimes are referring to phishing related cybercrimes where individuals responded. So again, those numbers there and the highlights there because they do talk about.
Speaker 1 - 12:26 Phishing in a big way, the nature.
Speaker 3 - 12:29 Of the cybercrimes experience will obviously from the figures I’ve shown you, we’re talking about phishing. Now the percentage of organizations that have identified the following types of cybercrime in the last twelve months amongst organizations. And again, this particular thing is slightly different in terms of attacks. So you’ve got fishing at the top, the next one down. Most of you will remember that a couple of years ago ransomware was the big thing and now we’ve got viruses, spyware or malware. And ransomware is way down as 4% compared to viruses, spyware, malware, which is 12%. And phishing attacks is still way up.
Speaker 1 - 13:12 There at the top.
Speaker 3 - 13:14 Using these results from the breach server, this is what they estimate. So 70,000 non phishing crimes in the last twelve months. UK charities that’s in UK business, sorry, and UK charity, experienced approximately 785,000 cybercrimes of all sorts of all types in the last twelve months.
Speaker 3 - 13:34 So again, it is what we're talking about: phishing is a major thing as far as the NCSC are concerned. So, some of the other data that we've got, I'm going to quickly flash through. This other data isn't about the crimes themselves; it's about what they did as a result of what they found. So this particular chart is the percentage of businesses and charities that had any of the following outcomes, among the organizations that have identified breaches. So, as a result of a breach, what happened? Basically we're talking about 8% of businesses and 7% of charities whereby the website or online services were taken down or made slower, and you go all the way down from there. So if you look at all of these, some are quite small as percentages, but keep that in mind, because I'm going to come back to it. And this here is the percentage that were impacted in any of the following ways, among the organizations that identified breaches and listed any impact. We're talking about 37% that added staff time to deal with the breach itself. New measures were needed for future attacks, to try and protect them. They stopped staff carrying out daily work, other repairs. Now, in all of that, interestingly enough, it goes from a fairly mediocre 37% down to 0%, and I will come back to these stats in a short while. So: the percentage of organizations that have done any of the following since their most disruptive breach. The highest thing there is whether they took action, and the action was staff training and communication. And we don't have the stats which say what staff training it was specifically, whether it was around phishing or whether it was normal awareness, maybe password training, maybe something else; it's difficult to say. But it is staff training that they're talking about, and they're just talking about additional staff training. The next thing is installed, changed or updated antivirus or anti-malware solutions, and that is 9% and 6%. So we're going from 19% and 28% in the charities, the top one, to slightly smaller numbers.
Speaker 3 - 15:58 Now this one here, I know it seems like a bit of an eye chart, and it is for me at my age. The percentage of organizations that have done any of the following since their most disruptive breach: the top thing there is changed or updated firewall, then updated passwords; training comes in at 10% there; installed, changed or updated anti-malware is third. But if you look at that, the fact is that training is the third one down, and it still only comes out at 10%. So they're doing lots and lots of different things, which is strange, because it doesn't necessarily fit the story. And if we look at the key points around phishing from that survey: 19% tested staff with mock phishing exercises. Businesses with agreed processes went down from 57% to 48%. 79% of businesses identified phishing attacks as the number one. And phishing attacks were 93% in large organizations and 84% in medium, although 79% overall. And phishing attacks are the number one most disruptive attack; again, I've taken that figure from the different groups of charts that I showed you. Yet they may not always know the complete outcome of an attack, because most of those showed, in terms of the way they responded, that they weren't always sure what the outcome was and why they took a particular action. So the impacts: website or online services taken down or made slower, which was 8%, and temporary loss of access to files and networks. So those are the sorts of impacts we're talking about. However, staff training was number one at 19%, and changed or updated firewall or system configuration was the next one.
Speaker 3 - 18:00 So, not in the survey: other surveys that I've picked up talk about 91% of advanced cyber attacks beginning with some sort of an email, and 81% of all malware attacks coming from phishing attempts. Email as an attack starting point continues to work, and it works because it works. Why would anyone change it? Even if AI isn't used, phishing emails still give a return, and a good return, without any issues. Phishing as a service seems to be thriving. Email addresses are one of the cheapest marketing commodities around; our email addresses are sold from one organization to another, and anyone can buy them. And phishing responses all seem to be about advice to users and simulations. Responses which block too many emails are considered to be too disruptive in many respects, and have been for quite some time. People say, oh, I was due to get an email, but it was blocked. Can you unblock it? Can you do this?
Speaker 3 - 19:03 And organizations find that quite disruptive as well.
Speaker 3 - 19:07 And the thing is, if you think about it, criminals don't need to use AI. They can succeed and carry on benefiting from what they're doing, because we've seen that it works; it absolutely works, from the NCSC statistics. But the thing is, what if the attackers started to use AI? There's an article from earlier this year, March the 8th, talking about AI taking phishing attacks to a whole new level of sophistication. So when they're getting to this, what do you think we're going to be dealing with in your own organization? We're going to be dealing with phishing as a major disruptor. In some respects it already is, but it's perhaps going to get even worse than it has been before. And then there's the sort of advice that we've been giving our users, and I was amazed. The first two I couldn't believe; I searched online, started to copy it down, and thought to myself, I'm going to include this, although it's nonsense. Avoid phishing. How is that good advice? I pulled that off the internet when I was doing searches for advice on reducing phishing. The advice for reducing phishing is: avoid phishing. The next one was: be cautious about spam emails to avoid phishing. So there's a whole bunch of things that we've said to people in the past, and most of these I know I'm guilty of.
Speaker 3 - 20:36 I have typos in my messages. Sometimes I send out messages that are urgent and have typos. Some of them have attachments, and I may be asking somebody to do something, which is great, but I have grammar issues. Sometimes the salutation may be wrong, or there is no email address.
Speaker 1 - 20:56 All of those things that we give.
Speaker 3 - 20:59 Advice to ordinary users to figure out whether something is actually a phishing message or not are the sort of things that everyone’s guilty of and we receive and we write messages with these sorts of things in them. I’m going to show something now. It’s not my work, it’s work of some security people and I haven’t got permission for it, but it is out on the internet. And because it’s out on the internet, I’m using it.
Speaker 1 - 21:27 And basically I’m using it to illustrate.
Speaker 3 - 21:31 How sophisticated the attacks are coming. And really the question that was being asked in this chat on LinkedIn was what phishing templates do you think is the most sneakiest? So really this was around phishing simulations and this is around the idea and the people who did the work on this. It’s absolutely brilliant work. Nothing wrong with the work at all.
Speaker 1 - 21:54 But I just think that it is.
Speaker 3 - 21:57 Difficult for ordinary users. I find if I had to pull these things out and spot these things in ordinary messages, I don’t know if I could do it. So this one here, it’s vague, but how financial services make their decisions is never predictable. Like most fishing, the impact of ignoring it is always dropped into the message. Yes, absolutely.
Speaker 1 - 22:22 I don’t know if I’d be able.
Speaker 3 - 22:23 To respond to that in the right way and suss out that it could be fishing. This one here, starting as it means to go on being direct and authoritative, the window to act is clearly stated as it is. In fact, it’s a requirement. Again, simple thing, I wouldn’t be able.
Speaker 1 - 22:45 To pick it out.
Speaker 3 - 22:45 This one here.
Speaker 1 - 22:48 Maybe I should.
Speaker 3 - 22:49 Look, I’m being curious, not nosy. Maybe you should, maybe you shouldn’t. Another one. These five, they are absolutely brilliant. And I can imagine that AI could actually craft these in such a way that any security person could possibly fall for these. And if not any, then at least ten to 15%. And that’s probably all these people are looking for. In the worst case, 5%.
Speaker 1 - 23:20 However, most of the phishing stats that.
Speaker 3 - 23:22 I’ve read in the past is they’re looking for returns of about 1%. Quite often they’re looking for small returns. And the thing is, AI is going to get better. And what we’re doing in our response to AI going into phishing and our response to what’s out there and what we’re expecting people to do is we’re expecting our users, first of all, to remember every phishing technique that we’ve covered in the last few years in every email they read, we want them to learn.
Speaker 2 - 23:49 Sorry, just a quick one. So those examples on the previous page, they were created by an AI engine?
Speaker 3 - 23:57 No, this was crafted not by an AI agent. This was crafted by a company that does user awareness training.
Speaker 2 - 24:04 Okay, cool. You can totally see how we're not that far off from taking that and generating it automatically; in fact, creating it custom to the user, taking into account the user and context, isn't it?
Speaker 3 - 24:16 Absolutely right. It's taking it to the next level. So the point I'm making here: we're trying to get users to spot techniques in every single email, moving from fast thinking to slow thinking, and we're asking them to do this for every single email they get. And we're causing security-related stress; that's a term I've heard, and I've looked at some research on how it's caused. All of that is causing our users to be non-compliant, and that causes a whole range of different types of fatigue. That fatigue may not hit them instantly, but these different types of fatigue end up leading to ineffective user awareness outcomes. That's not where we should be going. And the reality is that phishing isn't going to go away, because it works. As I said earlier on, if anything, as it carries on working, it's going to increase, not decrease. The types of phishing have increased and will continue to increase with the technology, and with the different types of sites and services that we get.
Speaker 3 - 25:19 There will be phishing emails related to AI as well, that will claim to be from an AI that's identified something you may have been looking for. The number of emails everyone gets isn't going to go down; it's increasing, especially because everything's going digital. Whether you want points for your shopping, or whether you want to set up an account with a shopping site, no matter what you look at, everything we do now, even communication with our council, expects you to give up an email address. The inventiveness of attackers will get better. And when it comes to targeted attacks, yes, they will be using AI. Why wouldn't they be using AI? Not that they have to, as I said earlier on. Targeted attacks are the hardest to train against. How are we going to train? I mean, we've been given five examples there, and they were put together by smart people who do anti-phishing training.
Speaker 3 - 26:18 And they are going to get smarter, and they're going to be customized. And I guess what I'm really trying to get to is: when do we reach the breaking point of relying on our users to do this sort of thing, while we're actually slowing them down in their productivity? And the cost of all these things: it might seem fairly straightforward. Training people monthly, costs of updates: train people, update them on phishing, this is the latest thing, this is what you need to look out for. That could be five to ten minutes a month. Simulations on top of that, another five to ten. So if you add the two together and say it's 1,000 users, that's 10,000 to 20,000 minutes a month, which is about 167 to 333 hours, which comes down to between 22 and 44 days. And if we average it over a month, that's about 5,000 pounds at an average salary. That doesn't seem that expensive. And the cost of training and simulations could be anything from three to 20 pounds per user per month, which would be anything from 36 up to 240 pounds per user per year. That seems okay, in some respects, to many people.
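The back-of-the-envelope numbers above can be checked in a few lines. The inputs are the talk's rough assumptions (1,000 users, 10 to 20 minutes per user per month) plus my own illustrative figures for the working day and day rate, not real payroll data:

```python
# Rough cost model for monthly phishing training plus simulations.
# Illustrative assumptions: 1,000 users, a 7.5-hour working day, and an
# assumed average salary cost of about GBP 150 per day.
USERS = 1_000
DAY_HOURS = 7.5
DAY_RATE_GBP = 150

def monthly_training_cost(minutes_per_user, users=USERS):
    """Return (total_minutes, hours, working_days, cost_gbp) for one month."""
    total_minutes = minutes_per_user * users
    hours = total_minutes / 60
    days = hours / DAY_HOURS
    return total_minutes, hours, days, days * DAY_RATE_GBP

low = monthly_training_cost(10)   # ~10,000 min, ~167 h, ~22 working days
high = monthly_training_cost(20)  # ~20,000 min, ~333 h, ~44 working days
average_cost = (low[3] + high[3]) / 2
```

Averaging the low and high cases lands on roughly 5,000 pounds a month, matching the figure quoted in the talk; the conclusion scales directly with the assumed day rate.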
Speaker 3 - 27:24 But what we're not thinking about is the hidden costs, and the impact of putting phishing responsibilities onto our users. The fact that we're putting these responsibilities on them is changing them from what's normal, fast thinking, to slow thinking. Whether they expect to have a conversation via email with someone or not is irrelevant, because it could be a new conversation or an existing one; whatever it is, it's still a cost to have to think about whether or not this is the same conversation. And there's the cost of that disruption to users: the more messages they get, the greater the total cost per day, per user, of that disruption to the way their brains are working, the way that they're thinking. User interface designers realized that even using the wrong colors has an impact on interrupting the user flow.
Speaker 3 - 28:12 And the strange thing is, if user interface designers understand this, why do we not understand that we are absolutely interrupting the flow of productivity? The extra costs of that, I really think, would be absolutely astronomical if we were to try and calculate them. I did try to do some calculations for this presentation, but I thought they might be too wide of the mark, and without some academic rigor they would be meaningless. So I will move this particular presentation on. I'm not towards the end yet, but I will try to get to a point where I can present the impacts of these things. So, regardless of all this training, phishing is still the number one crime by impact. Yes, there have been simulations going on; yes, there has been user training; and regardless of all that, we've still got phishing as the number one thing. So, since phishing is a major disruption, and since too many phishing emails get through, we can either spend more time teaching people how to interrupt their workflow by thinking slow, thinking through the simulation training, or we can spend more on technology to reduce the number getting through in the first place. And we do need to start thinking differently. We need to start figuring this out: human error is the single biggest source of breaches, is what we hear so often, yet all of our staff are also considered to be our best defense.
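The "spend more on technology" option need not mean AI at all. Even a crude rules-based score, applied before mail ever reaches a user, removes decisions from them. A minimal sketch, where every rule name, weight and threshold is invented for illustration rather than taken from any real product:

```python
import re

# Illustrative urgency phrases; a real deployment would maintain a far
# larger, regularly updated list.
URGENCY = re.compile(r"\b(urgent|immediately|act now|within 24 hours)\b",
                     re.IGNORECASE)

def phishing_score(sender_domain, reply_to_domain, body, trusted_domains):
    """Heuristic score for one message: higher means more phishing-like."""
    score = 0
    if sender_domain not in trusted_domains:
        score += 1          # sender is not a domain we already trust
    if reply_to_domain != sender_domain:
        score += 2          # Reply-To points somewhere other than the sender
    if URGENCY.search(body):
        score += 1          # manufactured time pressure
    return score

def should_quarantine(score, threshold=3):
    """Quarantine the message instead of asking the user to think slow."""
    return score >= threshold
```

The design point is the last function: the decision is taken by the filter, not handed to the user.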
Speaker 3 - 29:42 So really we should be thinking about what it is that humans do so badly, what they're bad at and what they're good at. And in terms of our defenses, and users being the largest source of breaches, users should focus on the things that AI can't do and that they can do. Because at the moment, we know for a fact that AI cannot do the work that we do.
Speaker 3 - 30:05 That's a fact right there; it's not a surprise. So, even without AI in phishing tools, we know that rules-based technologies work, if we can pick out the latest things that our users should be looking out for. And I find it amazing that there are so many anti-phishing solutions out there, and yet the volume of phishing continues. The first anti-phishing simulation tool I came across was about 2008, 2009, and there was one company, and I forget the name, and it was the only one. And I remember speaking to them several times about improvements that they could and should be making. That was 14 or 15 years ago, and that was just the simulation side. So in terms of where we've got from there to now, so many things have changed.
Speaker 3 - 30:56 And with all of those things, if our experts can pick out what it is that we need to be looking out for, why is it we can't stick those things into rules and stop it, so that we don't have to train our people to look for these sorts of things? And AI is smarter, it absolutely is smarter, than all users once it's been taught the techniques; Dennis's presentation showed that. If it's been taught, if it has a list of enterprise-specific phishing rules, whatever rules the enterprise has got, and if it's got a list of registered domains. The reason that last one is a good one is that I know of solutions that work that way, and why that sort of thing is important: it will stop users clicking on any domain that was registered in the last day, week, or month, whatever you want to set your controls to. And I know that phishing emails rose a couple of years ago when COVID started, and the NCSC did say at the time we went into lockdown that they noticed an extra couple of thousand domain names being registered, and they started to be used.
Speaker 3 - 32:07 So they were registered and then used within a very short space of time, for lots and lots of things. And again, you can use these sorts of tools, so it's not that complex to add these things in. We need to be reducing user thought, action and response, in terms of fatigue. We need to limit what people need to know to a very small set of messages, because this is a disruption from the work that they are there to do. We need to limit what they need to do about messages, other than the work they need to do to get things done. We need to keep tasks that should be performed by technology separate from those that should be performed by humans, and actively look for ways to help users become more aware by having less to do in response. In training, they always talk about three things that you need to focus on. Three things.
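The newly-registered-domain rule described above is easy to state as code. In a real gateway the creation date would come from a WHOIS or RDAP lookup; in this sketch it is passed in directly so the rule itself stays plain to see, and the 30-day window is just an example setting:

```python
from datetime import date, timedelta

def is_newly_registered(creation_date, today, max_age_days=30):
    """True if the domain was registered within the last max_age_days."""
    return (today - creation_date) <= timedelta(days=max_age_days)

def allow_click(creation_date, today, max_age_days=30):
    """Policy: block links to domains younger than the configured window."""
    return not is_newly_registered(creation_date, today, max_age_days)
```

Tightening or loosening `max_age_days` is exactly the control mentioned in the talk: a day, a week, a month, whatever the enterprise sets.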
Speaker 3 - 33:00 And really, we're talking about anti-phishing simulations covering lots of techniques, and us as users having to remember all of these things. I really don't like having to remember all these different things myself, as a user. What do users have to do? Whatever it is that they need to do, it should be clear, it should be simple, it should be quick; they shouldn't have to think about it. But at the moment, with all the different techniques, they're having to think a lot outside their comfort zone. So remember: our users are not there for us, they're not our human firewall, and they are there to add value to the business, not to cover for us because we can't be bothered to find better phishing blockers. If your border firewall broke down, would you expect your users to replace it, and would you expect them to deal with the traffic themselves? If they wouldn't, why would we expect them to deal with phishing?
Speaker 3 - 33:51 It’s similar sorts of thing, because the onslaught of, I guess, traffic that we used to get when we first started to implement Firewalls is the same as it is now when we’re talking about phishing emails that we get messages, email messages have been growing similar to the sort of responses that brought our firewalls into place. And if the users are smart, sorry.
Speaker 1 - 34:17 AI.
Speaker 3 - 34:20 Is not smart enough to do the work. Our users are smart enough to do the work. And we should make our users think slow. Sorry, making our users think slow is they’re losing battle in the long term. And I think that we will get to a point where the number of phishing messages we get, it’s not going to be very good. So, question are our users smarter than AI? Yes, they are. They’re infinitely smart than AI because AI couldn’t do their jobs by a long shot. But let’s help by reducing the total decisions that users have to make or think about so they can get on and doing the jobs that only they can do. AI should be able to do the other things.
Speaker 1 - 34:56 So AI, our users are not smart.
Speaker 3 - 34:59 In AI because they cannot analyze loads and loads of different messages and loads of things in a short space of time in a way that I said I’m not as smart as AI. AI can do loads of things that we cannot do. And if AI that you’re using isn’t good enough, we should be offloading the work that AI cannot do off to our users. Because so far many organizations, the way that they’re using phishing simulations, we’re actually doing exactly that. We should be keeping our messages focused on the most important things that our users can’t do. And if your AI can’t do it, complain to your vendor, find another vendor, don’t transfer the action of the risk response onto our users.
Speaker 1 - 35:43 That’s it.
Speaker 3 - 35:43 I’ll stop there. And that’s the end of my presentation.
Speaker 1 - 35:47 I will stop.
Speaker 2 - 35:50 Can you expand on that last point? I think it’s quite interesting. Can you expand on your last point on shifting the things to the AI, to the users? Because it’s quite an interesting point.
Speaker 3 - 36:05 Yeah.
Speaker 2 - 36:06 Can you go like some examples?
Speaker 3 - 36:10 What I’m saying is I’ve searched the number and I tried deliberately not to point to any single vendor. But there are whole bunch of claim to have really good AI. Now, a year ago I would have laughed anyone that said that to me. And it has been getting better and it will get better, but we’re not there yet. But it’s certainly better than lying on our users. The number of vendors that claim to do this has been growing, especially with AI get it getting better. So I’m just talking about any aspect. I think we need to focus on giving our users time back in any way that we can. If phishing simulations is part of it and we do it slowly, then that’s fine. If it’s not, we need to think about our own organization. Some of the things that I’ve done in the past in terms of the question you’ve asked me is we’ve started to in some places I’ve worked at.
Speaker 1 - 37:06 What we start to do was rather than use blanket simulations, we start to.
Speaker 3 - 37:11 Look at specific users that we think are going to be high risk and those that are much low risk. Those that we think are high risk. We give them specific ones sort of things that they’ll encounter, other people, other things that they’ll encounter. And I know that many people say they already are doing this. I think what we need to do is where we are still doing it is to do it as fine Tuningly as if were actually fishing ourselves with that target group. Many of the fishing exercise I’ve seen and some people complain about is that they have blanket fishing. Things really take up too much of their time and what they learn from them is not much because basically, as I’ve said, we still have the same phishing problem we’ve always had.
Speaker 2 - 37:53 Yeah, that makes sense. Actually, I guess an interesting evolution of this is that, and I don’t think we that far off is when we actually start to get the AIS to read our email, right, to basically start cleaning up. And they might actually be quite effective, even if just trimming saying, look, based on your history, based on your analysis, especially when it comes customized, this is the one that matters, the ones that don’t matter. And this one is a bit weird.
Speaker 1 - 38:20 Why is that.
Speaker 3 - 38:22 As good as that? As you say, instead of blocking them and I’m saying blocking the first instance because it just sounds easier. But what the AI does is messages come to you exactly what you said they point out, say look, this message come from this person and the CFO but it seems like it’s come from a Gmail address. Are you expecting it? So they basically highlight the things that you need that are different and you make the decision. And if you make the wrong decision, that is totally okay. But what you’re not doing is spending thinking slow and thinking fast, doing normal work to something that’s a chore, that security people expect you to do as a result of the training that they’ve given you.
Speaker 2 - 39:01 Yeah, and I guess one of the ways I will scale is because a lot of sometimes the detection is done at the edge where they don’t have context. Where the more the models understand, your inbox understand, like I said, your relationships, the graphs and maybe able to be a lot more effective. And then in a weird way maybe we can actually start to have somer tooling talking to each other where we can say hey, imagine a world where in that moment in time the crew and EDR tools that you have in the device go hey, by the way, pay attention because something bad might be happening. A lot of tools, sometimes they cannot do as much as they could because of performance. Right. It’s kind of like a lot of our security models that we could be very focused on some activity, but that brings everything down.
Speaker 2 - 39:47 But if you know that you’re only doing it for a small period of time or when it’s a bit more dangerous. Yeah, so that integration is again, I think that context, that is starting to be possible now, where even five or six months ago, I couldn’t really see how we’re going to get to that level of automation.
Speaker 3 - 40:06 To replace that or to respond to that. I guess my techniques that I’ve always.
Speaker 1 - 40:12 Used everywhere I’ve went, because you never.
Speaker 3 - 40:15 Know, and you move around a lot, you never know what techniques or what tools are going to be in place. So you techniques yourself and approaches that work for you no matter where you go. So for me, what I do is my inbox. Anything that’s important, that’s critical, where I have to pay attention, that where I might be fished. What I do is I create folders that work for me, and I automatically create a rule using my email client that will move that if it’s from that user into another folder. And if I get an email from that person that stays in the inbox.
Speaker 1 - 40:49 Itself, that means there’s something wrong with it.
Speaker 3 - 40:53 It hasn’t moved. Little things like that, which we all have techniques, and we shouldn’t have to figure out these techniques. We should be able to have a single inbox. Everything is sorted by itself, by the AI, by our AI email clients, which points to us all those things that we’ve just been talking about.
Speaker 2 - 41:15 Yeah, no, that’s a cool technique and yeah, looking forward to having I manage my email bots a little bit over the place at the moment.
Speaker 3 - 41:25 I’m going to develop this a bit further. So a couple of months time, I may do another one related to this. But the next stage of where I think it’s gone and where I think it’s going to be useful, because I think this is what I’m very.
Speaker 1 - 41:37 Interested in is user behavior and how.
Speaker 3 - 41:41 Users take on our relationship with security.
Speaker 2 - 41:45 But the one that I told you we really want to expand is on the users as human sensors. And actually, if you look at the Kneban framework, that world, there’s a lot of I think those guys I think it was Snowden, david Snowden. Dave Snowden. If you look at the work he’s done, he came to one of the summit, actually, and did some really cool stuff, but he kind of had this Knebbing framework, and a lot of it is about human sensors. So he expanded on he thinks about complex and think chaos, complex orders and something else for a while. But yeah, check it out, because I think it’s quite interesting. But I really like that human sensors concept where in a way, we should be looking at users as our best assets. One of the messages that we try to push, but it’s always a bit hard, but I try to encourage even the users is that what I would have an issue is if the user don’t let us know, right?
Speaker 2 - 42:49 Like, if the user sees something and don’t let us know, even if they do something and don’t let us know means we lose a lot of time. Like, for example, if they send the email to the wrong person, think initially the user might go, oh, let me see if I’m in trouble, where actually I almost want to turn the pivot around and say, look, you are only in trouble if you don’t tell us. If you tell us, we could do something about it. We don’t want to be catching it at the other end. Not that the user should be in trouble, but I think we need to create this culture of the human senses and they should be empowered and we should have the ability to process the data because it means that we’re going to get a lot more data, we’re going to get a lot more false positives.
Speaker 2 - 43:24 But I much prefer the users to be triggered happy than trigger unhappy, where when they do something that maybe it’s a bit weird, they don’t acknowledge it and they don’t raise suspicious. So that means we can’t calibrate the thing.
Speaker 3 - 43:39 And there has been some research done and I mentioned it and I’ve read.
Speaker 1 - 43:42 Quite a bit around users getting fatigued as a result of all the different things they have to do and what.
Speaker 3 - 43:52 Makes them respond in the right way, what makes them respond in the wrong way. So there has been some research done in the UK as well, and it’s moving forward and there is beginning to be more and more work done, because, I don’t know if you know, but the number of universities that are now doing Masters and PhDs in cyber has been growing over the last seven, eight years. And it’s fantastic. It’s really good. And some of the things that they look at are users, not just the side beside of things that you and I may be interested in. And they’re looking at stress and stress it’s causing users. So it’s all very exciting stuff.
Speaker 1 - 44:29 Yeah, cool.
Speaker 2 - 44:30 All right, man, brilliant. I just checked, there’s no questions, so this is all good. We wrap it up and I’ll see you on the next one. Brilliant.
Speaker 3 - 44:36 Take care, speak to you soon. But.