In this session we will focus on what really happens when your business suffers a cyberattack.
Get an honest understanding of:
- the reality of facing an outbreak
- the impact on your staff
- the harm to your business operations
- the struggle to recover
- the damaging long-term effects
- the real-world costs to a business of falling victim to cyber-crime
- Get an insight into where CISOs and CTOs fail, your weak spots and the hardest areas to protect.
- We will also cover where cyber insurance helps and where it can hinder.
- Perfect for any business or tech leader with responsibility for, or doubts about, their organisation's cyber security defences.
Spend this session with James Dell and have your eyes opened to the world of hurt that comes from a failure to protect your business.
Dinis Cruz - 00:03 Hi, welcome to this Open Security Summit session in April 2023. We have James, who is going to talk to us about the truth about suffering a cybersecurity attack. I'm really looking forward to this one because I've been involved in several of these and there's a lot of great stuff to learn. So, over to you, James.
James Dell - 00:23 Cheers. Thank you very much. Thank you for joining me to talk about the truth about suffering a cybersecurity attack. I think the ultimate thing that I want everyone to leave here with today is we're all human. At the end of the day, these things happen, but we need to learn from them. It's not a case of just taking a scenario and going, well, we've dealt with a problem, and putting it behind us. We should be learning and growing from it. That's really where I've approached what I'm going to talk to you about today from: a real-world point of view. Just to give you all a bit of background about me. So I'm James Dell. I'm associate director and head of technical architecture at Planet IT. The reason that I'm here and I'm able to talk about this stuff is we unfortunately get pulled in after a lot of businesses have suffered a cyberattack, where we're doing incident response or threat investigation, trying to find out how things happened, to ultimately put things right and get businesses back on track.
James Dell - 01:16 Often we get dragged in at the point where businesses have suffered and they're really looking for someone to be that helping hand. That's given us, and me personally, some great opportunity to be first hand across some very different businesses. That's what I'm going to talk to you about today: those scenarios and where that fits. Obviously I work at Planet IT, and a lot of you probably won't know who Planet IT is, but essentially the way to look at it is we are a one-stop shop for all things IT. That's why we come from a very strong cybersecurity background, 20 years in cybersecurity, but we don't just do that. We're involved with lots of businesses ahead of that. We may be talking to them about hardware procurement and other things, and then an incident happens and they go, oh, you guys are cybersecurity experts, why don't you come in as well and help us with this?
James Dell - 02:07 That's how we get involved in a lot of these scenarios. All of the scenarios I'm talking about today are real-world scenarios, so everything I'm talking about really happened. Quite obviously, we have sanitized down the data, so we're not telling you who we're talking about. We'll tell you the sector they sit in, but I'm not going to tell you exactly who they are, because obviously they've done a lot of work to rebuild their brands and I don't want to go and trash them on the internet. I have four main aims. The first is to talk about what happens when a cybersecurity attack is discovered. There's a real human element behind that I want to talk about and go into some detail around. I'm then going to go through four scenarios that happened, and I'm going to cover inside of those the attack chain, where they went wrong as a business, and then ultimately what the financial implication was to that business.
James Dell - 03:00 Once we've looked at those scenarios, I'm going to come on and start talking around the lessons you can learn from them and what we should be taking away from this as businesses moving forward, and as IT professionals in general. The final thing is the elephant in the room: cyber insurance. What I want to talk about is what it really costs, because we all know how important cyber insurance is, especially in 2023. What's the underlying principle there and what is the implication of the risk? I'm going to go into some detail around that. So, what happens when a cybersecurity attack is discovered? Well, I've unfortunately been in front of one of these firsthand, when I was an IT manager, in an incident I'm not going to talk about today. The first thing that happens is you get overwhelmed. That's what I'm trying to show here.
James Dell - 03:50 The first thing is your pressure starts mounting. There's people looking at you for the lost revenue. They're wanting to know how these mistakes happened. They're wanting you to make decisions to get the business back on its feet as quickly as possible. They want to know if you've got the resource to do it. They want to know if the attack is over. They want to know who is there to be held accountable. They want answers. They want to know who did it, how it happened, when it happened. None of this ever happens at a convenient time, either. You'll then be forced to put untested strategies into place. For a lot of businesses, their DR plans, their business continuity plans, are untested, and this is the time that they get pushed and you push the button and go, right, we're failing over to our DR scenario.
James Dell - 04:35 And does it really work? Has anyone ever used it? That loads on employee stress. You've then got senior management looking at you again: well, what about the reputational damage? And you're screaming for help, ultimately. What that really does, having been there and seen lots of IT managers go through it in what we do at Planet IT, is it causes panic and it really influences decision making. You'll see that through the scenarios that I'm going to talk about. As much as we can say that we all know what we'd do in a scenario, we've got great DR plans, we've got great cybersecurity, we've got a great SOC team, we know the pieces of the puzzle, we know that we've got offline backups, we've got immutability. The truth is, when it goes wrong, the pressure really quickly ramps up and a lot of people struggle to make clear decisions in those scenarios.
James Dell - 05:27 Ultimately, what I want people to take away from this is that you are human. You will panic when these things happen, because as much as you may think you have the answers, you don't at that point in time. You'll be getting pulled in a lot of directions, with lots of people asking you lots of strange questions that you don't really care about at that point. Because you're thinking, if I can just recover system X, or if I can just get the website back online, or just get the sales function working, then that will take the senior management pressure off me. Or, can I get the attacker out? Are they still in the system? What have they managed to take? You'll be thinking about so many other things that you really won't care about the kind of, are we going to be okay? What damage does this do to our business?
James Dell - 06:05 Take a breath, and when you get into that scenario, this is when you need to know really clearly who you're leaning on. It shouldn't just be a case of who you can lean on internally, because your staff may be on holiday, they may be sick, they may be out for another reason. It's who you can lean on externally and who you have trusted relationships with. Whether that be your cyber insurance, whether that be a SOC partner, whether that be a security software specialist, whether that be someone like Planet IT in the MSP space. Someone that you can lean on and say: right, it's all gone wrong, I need help, how can you help me get us back? It may be that it's just to have someone to sound off against, but ultimately having that in place, and having someone you feel confident about leaning on, is the key, really.
James Dell - 06:51 That's something that I personally really suffered with when I was in an organization and we were hit with a cyberattack. Suddenly it went from, yes, I've got a massive problem I need to fix because our systems are down and we've got a massive ongoing attack, to, I've got someone from senior management standing over my shoulder, breathing in my ear, saying, when's it going to be back online? And I honestly didn't have the answers. I was just looking for a way to get through the day, as it were. Now I want to talk about those scenarios. Right, so these scenarios, as I said, are all real cases. These are things that have actually happened. What I've done is made sure that we talk about the space they're in, rather than talking about the business themselves. In scenario one, we're talking about a training provider.
James Dell - 07:38 They are around 8000 users, or were around 8000 users at the time of this attack. This attack took place in September 2020. Now, the dates that I will give you, the case dates, will be the date that we got involved. They won't necessarily be when the attack started; it'll be the date that they're on our records. Some of these attacks, as you'll see as we get into the attack chain, had actually been going on for a little while. The date references when we got involved. This training provider, 8000 users, they are massively IT-heavy as an organization. All of those users are IT users and their lifeblood is IT. Therefore this attack was massively disruptive to them. So let's talk about it. First of all, let's talk about the type of attack. Well, this was actually manually triggered, and it was a targeted attack. It was aimed at them as an organization.
James Dell - 08:27 It had been specifically written and designed as malware to attack them. This wasn't someone just getting lucky with an attack or spamming every IP address. This was targeted at them as an organization, with the underlying principle of stealing information from their CRM system. The attackers had some knowledge of the organization; they ultimately knew the space it was in and what value there was in the data. The actual weak link in their system was a public-facing link to their Exchange system via Skype for Business, their UC platform. This ultimately gave them a server facing the Internet that was unpatched and open to the wide world. This attack chain found that system very simply. It started off with scanning for an open port. They found that port, and they also found that this server was publicly facing and had RDP unsecured and open on it.
James Dell - 09:26 They very quickly and very easily brute forced themselves onto the server. Now, this brute force actually happened in February, and when we got involved it was September. You can see already that those first three steps of this attack chain happened multiple months before it was detected or anything really happened. In February they scanned the IP addresses, they found the unsecured port, they brute forced into the server and they accessed this system via the Remote Desktop Protocol. What they then did, these attackers, was a bit of investigation work: they did some IP scans, they pulled some data, they did some investigation around what they could see on the network and what ports were open, and then they ran Mimikatz and extracted the passwords from the memory of that system. Now, in the naivety of the team that had deployed this server, they'd used a domain admin account to register this server with the domain itself.
James Dell - 10:21 When they'd registered it with the domain, it had actually kept that credential in memory, and they hadn't changed it since the server had been deployed. Mimikatz, a tool that scans through memory and finds the passwords, was very effective in gathering that data very quickly. Ultimately, from that, they had the passwords they could use. They sat on those passwords for multiple months before they did anything. Yes. Sorry. You've got your hand up.
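[Editor's note] The first link in the chain above was an Internet-facing server with RDP left open. As an illustration only (not a tool from the incident), here is a minimal sketch of how a defender might audit their own hosts for an exposed port such as RDP's 3389; the host addresses are hypothetical placeholders.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: audit a list of your own hosts for exposed RDP (TCP 3389).
# Only ever scan infrastructure you are authorized to test.
hosts = ["203.0.113.10", "203.0.113.11"]  # hypothetical addresses
exposed = [h for h in hosts if is_port_open(h, 3389)]
```

A scheduled check like this, compared against an allow-list of intentionally published services, would have flagged the unsecured RDP port months before the brute force.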
Dinis Cruz - 10:46 Yeah. What I really like about this is that sometimes there's this idea that the attackers have all the advantage, that they can basically find one vulnerability and that's it, it's game over, while the defenders have to defend everything all the time. I always challenge that with the idea that, look, a successful attack has plenty of steps that we could have detected. If you think about it, every one of these points was a moment that the team could have detected. They could have started raising the alarm bells; you could have started to get an understanding of what's going on at every one of these. Right. In fact, I guess the more to the left you catch it, the more mature an organization is from a defense point of view.
James Dell - 11:25 Right, exactly. Yeah. That's the thing: when I look at this as a professional, I want to be stopping the first two or three steps there, not worrying about stopping the things at the very end, because by that point, you've already lost control. That's very much where this attack led. By the time they'd got the passwords from memory and they'd got access to a system, you've actually already lost the keys to the castle. You're just waiting for something to go wrong. They still had time to spot the compromise, because that Mimikatz tool was sat on that system. If you'd logged into it, simply logged into it and looked on the desktop, you could actually see the file there, but no one did, because no one was doing any form of maintenance. They just left it there. So it then progressed on.
James Dell - 12:04 They leveraged those credentials in September to access all the systems. In February they'd done the reconnaissance, they were ready to go, but they spent the time between February and September writing a custom script that would use the information they'd already gathered to attack the SQL system that had the CRM data in it. It was all staged so that when they hit go, there'd be lots of disruption. Very carefully, they'd be extracting the data that they wanted while keeping the IT team busy. And that's what they did. They leveraged the system details and they extracted the data from the CRM. At the same time, every system that wasn't part of the CRM started being encrypted. Systems were being deleted, files deleted, backups deleted, and systems were being locked out en masse. But the distraction was there: it was affecting domain controllers, users, printing systems, so that people didn't think, oh, what about the CRM system, which actually looked okay until the attackers had finished extracting the data they wanted.
James Dell - 13:01 That happened to the CRM system as well, but ultimately it was detected too late, right? They detected it very much at the end of the attack chain. They detected it when they saw the visible signs, when all the users started saying they couldn't log in, and when printers stopped working and they had multiple other issues. They then went, hold on, we've been hit here. That's when Planet IT got called in, and they were like, oh, we're already down, we've lost this, what can you do? We were very much on the back foot then. We were able to extrapolate this information from all the log data and everything that we saw afterwards. Ultimately, let's look at where they failed. I'm being a bit honest here; this is from the back of our report, where we know that they failed. The first and ultimately painful thing is they underinvested in their IT staff.
James Dell - 13:48 None of their IT staff had been trained in any form of cybersecurity, in any form of modern system protection. They all had good base IT skills they'd come into the organization with, but many of them had been there for five years plus, and they'd had no new training in that time. They'd become very stale and out of touch with the way that things really work. There were a number of passwords unchanged for over five years, and one of those was the domain admin password that was actually used. When we eventually got round to doing the recovery work, we found there were passwords in there that had sat unchanged for eight to ten years. Ultimately, lots of them were very simple passwords with no complexity at all; they were actually lucky those hadn't been compromised before. There was no multifactor authentication on any system at all.
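[Editor's note] As a toy illustration of the complexity problem described above (not the audit method used in the engagement), a basic check like this is enough to flag the kind of simple passwords that survived for years in this environment; the threshold and rules are arbitrary example policy, not a recommendation.

```python
import string

def weak_password_reasons(password: str, min_length: int = 12) -> list:
    """Return the reasons a password fails a basic complexity policy (empty list = passes)."""
    reasons = []
    if len(password) < min_length:
        reasons.append("shorter than %d characters" % min_length)
    if not any(c.islower() for c in password):
        reasons.append("no lowercase letter")
    if not any(c.isupper() for c in password):
        reasons.append("no uppercase letter")
    if not any(c.isdigit() for c in password):
        reasons.append("no digit")
    if not any(c in string.punctuation for c in password):
        reasons.append("no symbol")
    return reasons
```

Length and character classes are only a floor, of course; rotation of privileged credentials and screening against known-breached password lists matter at least as much.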
James Dell - 14:37 Every system they had, there was no MFA. You could literally go from on-premise systems to 365 to their Azure environment without any MFA prompts. You could get into CRM without any MFA. You could even get into systems that they'd integrated, like Google Suite and other things. You could actually laterally spread really easily without any kind of barriers to stop you. There were systemic failures in the way that the IT was being managed. The IT management had no understanding of the environment itself. They were there as essentially a caretaker, keeping the lights on, keeping people happy from a customer service point of view, with no real care for the IT systems themselves. They extended that lack of understanding to the end users. They didn't train the end users with any kind of cybersecurity training or any kind of phishing training, and they did no testing.
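[Editor's note] MFA keeps coming up in these scenarios as the missing barrier. For a sense of why it blunts stolen credentials, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238 that most authenticator apps implement, using only the Python standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, timestep: int = 30, now: float = None) -> str:
    """Compute an RFC 6238 TOTP code (HMAC-SHA1) for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the authenticator app share the secret; both derive the same
# short-lived code, so a stolen password alone is not enough to log in.
```

Even this simple second factor would have stopped the lateral spread described above, because the dumped domain password would not have satisfied the MFA prompt.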
James Dell - 15:33 Had this not been the way the attackers got in, there would have always been another way, because they were so open; they essentially had all of the doors open and all the windows open in their house. They also had silly things like passwords printed out on bits of paper, stored on physical media; there were password books just left on a shelf in the IT office. It wouldn't have been hard for someone to have wandered in and taken those passwords, which ultimately weren't very secure, so you could have remembered them without taking the book as well. The things that then become more of a problem for me are when we start talking about the fact they had no information security representation at board level. They had quite a large board of governance, they had quite a large management team. There was no one anywhere in those ranks with any kind of information security hat, or even a vested interest in information security.
James Dell - 16:24 The highest person was a mid-level manager who didn't attend any of the senior management meetings, didn't go to any of the governors' meetings; they weren't involved in the ownership of the organization. Therefore information security stopped at a point where it got no backing from anyone. They then really had a lot of things like misconfiguration, they had a lack of products deployed, they had an underinvestment in their IT, but they also had a number of other breaches that had been recorded over twelve months. The attack that actually took them down happened in February, but we went further back in the logs and were able to find stuff that happened twelve months before, where people were testing the security to see if they could get in. Because their firewall was misconfigured and they didn't have any kind of basic sanity check, and they had no pen testing, nothing like that, they were just open.
James Dell - 17:18 That's where it comes to in the end. They had no third parties, they had no one to lean on. We got called in on the basis of who's local to us, who's available that can help? It wasn't through a kind of, oh well, we've got a cybersecurity partner we trust, like I talked about; it was, we need someone who's got a cybersecurity background who can drop in and help us. That lack of investment really hurt them. Because even if some of the technologies they'd been using had been configured to the utmost standard, they had known published vulnerabilities that were in existence but hadn't been patched, because the manufacturer no longer existed or the product they were using had gone out of support, end of life. The manufacturer didn't need to patch it anymore. You could have done all the work in the world and their firewall would still have had holes in it; those kinds of things just can't be fixed. So let's look at the realism.
James Dell - 18:07 Right, what did it cost them? These figures are actually from their insurance documents, so these are correct, and if anything the true totals are higher than this. The incident recovery and repair cost them 200,000 pounds. They had to find that money from their own pockets, so they had to dig into the budget and dish that money out. That money was spent in about a six-week window on everything from rental of hardware through to professional services, through to everything they needed to get themselves back on the road, back up and working. That was 200,000 pounds spent. They then lost around three and a half thousand pounds a day in system outages. Because they were a training provider, whilst they had no IT, they couldn't provide training, so it was costing them three and a half thousand pounds a day for these system outages.
James Dell - 18:57 The outages in most parts were resolved in around three weeks, but there were six weeks of complete disruption before they were back to what you'd say was normal working. There was a huge amount of disruption there. That cost obviously tailed off towards the end. Ultimately they got a massive fine from the ICO: a 150,000-pound fine for the breaches. The data they lost was around 80,000 records from their CRM, going from first names and dates of birth through to medical information, through to academic history, stuff like that. There was a lot of data that was lost. The ICO didn't take that particularly well. The initial cost of the attack was 450,000 pounds. The truth about the actual total cost of the attack, by the time all was said and done and all the bills were settled, was just over 2 million pounds. Now, the big thing for me here is that the cyber insurance did not cover this attack.
James Dell - 19:50 They spent that money in good faith. They spent that 450,000 pounds thinking, right, well, we're going to get this back when we claim on our insurance. Because of the systemic failures that I highlighted in the previous slide, there was no coverage. The cyber insurance was invalidated. They said, you haven't done enough, and therefore they had to carry the can. Now, ultimately, what happened to this organization, and this is why I always use it as a case study, is that their entire board-level and senior management team were let go a few weeks after this attack was resolved, once it became clear that they weren't going to get the coverage of the cyber insurance. The organization was then taken over by another organization and absorbed as part of a merger, essentially as a supported loss leader, because it had learners tied to the organization who needed to finish courses.
James Dell - 20:44 There was a bit of government help there to make sure that merger took place, but ultimately it was a massive financial impact and it affected a lot of lives. The people who thought they were safe, the people who were top-level management, who had nothing to worry about because they didn't care about IT, were the ones that got burnt. The IT team actually stayed around; they weren't hit by that massive swathe of sackings, they were saved, but they ultimately all left in the long term. They're not the only people that's happened to. I'm now coming on to a very fresh case. This case actually happened in March this year, and this was a 2000-user global electronics manufacturer, someone who we've known for a long time and unfortunately a victim of their own success to a degree. They were hit by an automated attack targeting vulnerabilities in web servers.
James Dell - 21:37 More specifically, they were hit by a vulnerability in an open source plugin for their website, like maybe you'd use Shopify or other plugins like that. Essentially, they'd put a plugin into their website many years ago that had a vulnerability that had been around for around 18 months, and they'd not spotted it. They'd not patched that particular plugin; they'd patched the website and patched the engine underneath it, but they hadn't patched that plugin. That plugin allowed a great deal of access. The attack chain starts in the same way: this was done by scanning multiple IP addresses, and it just got lucky. It found a system and it found that vulnerability in the system; it found the unpatched web server. What it then did was a SQL injection attack using that vulnerability to essentially take over the underlying server. It gained root access to the server, and they used that leverage on that web plugin to gain credentials.
James Dell - 22:41 They pulled credentials back out of the system that they could then use to reinject back into the system, to gain a much higher level of authority on some other systems that were open. Think your VPN access, web gateways, RDP, all those things that you would normally have protection in place on; because they leveraged these credentials, they had more access than they should have done. Now, this attack was purely done to cause disruption. There was no financial motivation as far as anyone's aware; there was no ransom note. What they did was they just went in and used an automated script to delete all of the systems inside their VMware environment, to delete all of their backups, and then delete all of the access to their tape libraries. They were left with a system that in minutes became unusable and inaccessible. It was detected, again, too late; it was detected at the point the stuff started getting deleted.
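[Editor's note] The entry point in this scenario was SQL injection through an unpatched plugin. The actual plugin and queries are not public, so as a generic illustration, here is the classic mistake and its fix, sketched with Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# VULNERABLE: the input is spliced straight into the SQL text, so the
# injected OR clause becomes part of the query and matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % user_input
).fetchall()

# SAFE: a parameterized query treats the input as a literal value only.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # the injected OR matched the row: [('alice',)]
print(safe)        # no user is literally named "alice' OR '1'='1": []
```

The same parameterization principle applies in any web stack; an unmaintained plugin that builds queries by string concatenation stays exploitable no matter how well the platform underneath it is patched.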
James Dell - 23:37 Through their own naivety, they'd left things like tape libraries and immutable backups plugged into the network. This attack was able to delete information that should have been otherwise protected, putting them in a much more vulnerable place than they should have been. All of this was gained from a simple plugin that was just adding a very small feature to their public website, and it's a feature that, had they uninstalled it, probably nine out of ten customers wouldn't have noticed. It was purely because they'd had it there for so long; they just left it in place and they never thought, hold on, do I need to check if these plugins are patched? Because they were manually added. They failed in a few different places. The first place they failed was a lack of staff. Their IT team, for a 2000-user business, was very small.
James Dell - 24:28 They had systemic failures in their IT management. Their IT management's failure was they hadn't asked for more staff. They hadn't said, we need to spend more money on security. There was no investment in security, and again, no MFA. This becomes a common story; you'll see this through a lot of these. MFA is a great tool, just to be a barrier if nothing else to stopping these attacks. The credentials were stored in plain text on the server that was attacked. The reason the attackers were able to extract them so easily was that once they were on that web server, there was a lot of information in a Word file that shouldn't have been there. There was no air gap in the way that this company is set up. They have a lot of servers, about 2000 public-facing servers, to be honest, in the way that they're set up, and there's no real air gapping.
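[Editor's note] Plain-text credentials on the compromised server made the escalation trivial. A minimal sketch, using only the Python standard library, of how stored passwords can instead be salted and hashed so that a stolen file does not yield reusable credentials (the iteration count here is an illustrative choice, not a stated policy):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None, iterations: int = 200_000):
    """Return (salt, PBKDF2-HMAC-SHA256 digest) for storage instead of the plain password."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes, iterations: int = 200_000) -> bool:
    """Recompute the digest for the candidate password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored)

salt, digest = hash_password("correct horse battery staple")
```

An attacker who exfiltrates salt-and-digest pairs still has to brute-force each one offline, rather than reading working passwords out of a Word file.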
James Dell - 25:14 The servers that are public facing, the ones that run internal systems, and the ones that run the devices that make their components are all in one flat network. That meant once you were in, you could see everything. They had external sharing in place as well, which meant that it was really easy for these attackers to find a lot of ways to inject additional attacks and additional vectors, and to start deleting data and speeding that process up. There was no information security representative at board level, again. The IT director for this organization is a lovely person and they've done a really outstanding job, but without being in that room, being at board level and saying, I need more money, I need more investment, they were just shouting into a void. They knew they needed more stuff, they needed better security, they needed to do better testing.
James Dell - 26:04 They were just unable to tell anyone. There was no DMZ in place on their firewall. Everything, again, was a flat network, easy for anything to sneak into, and they even had systems hosted inside of Azure that were accessible from the same flat network. That made it very easy for the attack to span from on-premise to the cloud and start really affecting their systems. They also have loads of legacy IT equipment. What I mean by legacy is devices running 2000 and 2003 servers, which should have been decommissioned a long time ago, or, if they physically can't be decommissioned because they're attached to something that only runs that version, should be completely air gapped from the internet and from the wider network. They had loads of those in place that you could simply dial onto, but the organization essentially hadn't scaled with the business.
James Dell - 26:59 That was shown very much in my final piece, which is that they were still using tapes. They still had tape-based backups, so they had to rely on older tapes that would take a long time to recover from. They took a long time to get back up and running from backups that had been taken nearly a quarter before this attack happened. They lost the first three months of the year, essentially, in recovery. Again, with this one, they're a smaller organization; we're only talking 2000 end users. They spent 50,000 pounds on the first day on an incident recovery service. That was a rapid response service from the company Sophos, who offer that for a kind of set fee. So they paid 50,000 pounds for that. They were losing 5000 pounds an hour in outages because of the turnover and what this business does, with the semiconductors and the chips and all that stuff that it sells.
James Dell - 27:52 They were losing so much money per hour that it was a case of throw money at recovery, let's do it quickly and let's get going. They then had to spend 90,000 pounds on a new system, on bits and pieces they should have upgraded anyway. All of that money was spent in a month. So, we are, what, mid-April now; this attack happened in March, and all of this money has already been spent. Within a month they've spent a huge amount of money. They've calculated that one month of disruption cost them around 450,000 pounds, including the cost of recovery and the new system. That's a huge impact to their bottom line. They anticipate the final cost will be around the 540,000-pound mark. They are lucky in the fact their cyber insurance will cover some of this, because they had the right level of cover in place.
James Dell - 28:47 It's not going to cover the full 540,000 pounds' worth. It's not going to recover them from the loss of reputation, the damage that's been done to their brand because they were offline. Companies that had been using them for 20 years to buy parts couldn't buy parts that day, so they had to go somewhere else. It's never going to recover them from that, and they will see that damage for years to come in their bottom line. For scenario three, I'm going the other way: we're going to talk about a huge organization. This is a multi-site regional education provider. They cover a little over 18,000 users across 16 sites, all in the UK. They're an education provider, and IT is critical in modern education. And they were hit really badly. The reason they were hit was they had misconfigured firewalls. It's as simple as that. Their firewalls were the weak link.
James Dell - 29:44 They weren't configured correctly; they weren't configured to best practice. An automated attack got through them and was able to essentially encrypt all of their systems and present them with a very large ransom, equivalent, I think it worked out, to a couple of billion pounds in bitcoin, without any intervention from a human. They were able to get all the way through, get onto a system and get it deployed, just like that. It started in very much the same way: it scanned for IP addresses and found direct access to a system. This system was literally completely open and published to the net. It was a server you could have reached on any port, and very simply, using the IP address, you could have found some file shares on it. There was no proper protection in place.
James Dell - 30:34 It was something that everyone believed had been removed, but no one had actually checked. The tools that were run on there were tools you would normally run manually to hack a system, but they were actually run by an automated script. Essentially, once they found this box, they dumped all of that on there. The script ran, pulled some passwords, and worked out that it could get onto every system because the box was on the domain. It was very much a legacy system. They silently deployed a payload across all the sites and systems. This is the key one for me: that payload was deployed everywhere for around five weeks. The antivirus that was in place on the endpoints didn't notice it, because the antivirus was misconfigured as well. The antivirus had been told not to scan those systems because they were older and they didn't want to impact performance, so they cut corners.
James Dell - 31:27 They said, we don't want to scan these systems. Once the package had been deployed onto them, it was very easy for it to sit there under the hood. The attackers triggered the attack at a time that was coordinated across all sites, when they knew no one was going to be there. They knew education, and they knew there was a half term coming up. They waited till 3:30 on the Friday the half term started and pressed the button. The attack then started: data was extracted and systems were encrypted, including copies of their systems that were online backups with a third-party provider. The system was in complete disarray, unusable. They had to rebuild from the ground up. I'm talking about rebuilding every laptop, every desktop, every network appliance. They had to start from scratch. They are pretty much a new organization now versus where they were in 2021 when this happened.
James Dell - 32:26 They detected all of this well too late. They actually came back in on the Monday to find none of the systems working. Everything was broken and they were scratching their heads as to why nothing was working. No one had noticed it; there was no alerting, no monitoring, and the whole attack happened over the weekend. When they came in on the Monday they were essentially dead in the water. We came in a few months afterwards, after they'd been trying to recover it themselves and then ultimately admitted they couldn't do the recovery and needed some expert advice to guide them through how to rebuild a system from scratch. When we look at where they failed, you'll see there are some very common themes appearing here. The first one: there was no one from information security represented at board level. There was no one who had that vested interest. Again, the IT director would seem very high up in the business, but he had no voice at the table.
James Dell - 33:16 The people that he reported into were not IT people. They weren't interested in looking at what could be done; they were interested in other areas of the business. There was a huge lack of staff for them. They had 16 sites, but only one person actually trained to cover those sites from a cybersecurity standpoint at all. They had systemic failings across IT management, as the other organizations did. They had misconfigured firewalls, and breaches had been recorded on that system for over twelve months. Nothing had been done about those breaches. The big one for me that makes them different is that they ignored the early signs of an attack. They just went, look, we're seeing some strange behavior here. We're seeing that a couple of these systems we've rebuilt have got random files on them; it's probably just students or something. They ignored it. They had the opportunity there to stop it and to prevent this, but no one took the action.
James Dell - 34:13 The one thing on this slide that, to me, previously only really applied in the education space, but now applies to all businesses as we look at this kind of cost saving, is this: the organization was massively focused on cutting costs and saving budgets. What they were doing was taking money away from the IT team and saying, well, we're not going to invest in cybersecurity, we're not going to invest in these areas because we don't need to. This is a shocking one: the amount this recovery cost. They spent a million pounds on incident recovery. That was hiring in specialists, hiring in equipment, replacing equipment, buying software. They literally went to town with a shopping list. They had consultants in there coming in to build systems. They had loads of extra bodies running around. They spent a million pounds that came out of their pocket straight away.
James Dell - 35:06 They had to find that money. They then got a fine from the Department for Education, and they had inspections across all the sites because of the disruption caused to teaching and learning; that cost them nearly 100,000 pounds in the actions the DfE expected them to put in place afterwards. They then spent 700,000 pounds on replacing core infrastructure. They had recovered onto the old equipment and kind of got systems working, but when it was looked at by the DfE, and by ourselves as an independent assessment, we made it very clear that equipment wasn't suitable moving forward. They had to spend 700,000 pounds replacing that equipment, and then they spent around another 250,000 pounds procuring new third-party suppliers. What I mean by that is they had an IT support provider in place, a third party, and they couldn't use them anymore because they couldn't trust them.
James Dell - 35:59 They had to go out and tender for a new contract, and the money kept spiraling: 1.1 million pounds spent on essentially the initial attack, but overall they spent just over 2 million pounds. They are never going to recover from the damage that's been done to their reputation. If you do a bit of searching, you'll be able to find them. They were in national papers, they were all over social media; they were very high profile when they were hit. The continuation of that effect is that parents are not sending their students to these schools. Their numbers are down, so their budgets are being squeezed. They've got a long road to recovery and they will continue to see the damage this has caused for years to come. Lots of people at very senior levels were forced out of their roles. It's a big change. It's not just a case of, oh, it doesn't matter.
James Dell - 36:55 The final one I wanted to add is scenario four: a small business in stark comparison to the other ones, 500 users, but they're a law firm. The key here is I want to show this can happen to anyone, and you're not above it just because of the sector you sit in. We've had education providers, we've had people that make microchips, and now we're looking at a law firm. This happened late last year and it was a stupid mistake. Again, as we've mentioned a couple of times, this is something they should have known about. Everyone saw the news, saw the LinkedIn posts about the vulnerability around Exchange. Exchange had a problem on premise. Microsoft very quickly released a patch and said you either deploy this update to your systems or you take these systems off public-facing connections.
James Dell - 37:48 This organization did neither. They had scheduled to upgrade the server to the new version of Exchange at a time that would suit the law firm. That was scheduled for eight weeks down the road from the day this attack happened. The attack chain follows the exact vulnerability: it used Exchange OWA to gain direct access to the system, leveraging the vulnerability in the unpatched server to deploy command and control software. In this case, it was the equivalent of your LogMeIn or TeamViewer; it deploys a version of that onto the desktop. Once they'd done that, they then used that system to deliver command and control software to multiple servers, and then they started harvesting data. Now, the big thing here is this was a law firm, and they had lots of data that was very interesting and of high value.
James Dell - 38:40 This is the first scenario I've been involved in where the insurance company said, I think you probably should pay the ransom, before any actions had been taken, because of the type of business they were and the data the attackers had access to. The attack started harvesting data and then it started encrypting the systems. The ongoing attack was only spotted because of routine server maintenance. The engineer who went to do some basic maintenance on that server, who was going to install an update on another bit of software, logged into the system via RDP, saw a random icon on the desktop, and called us and said, this doesn't look right. At that point, we were like, this is not right. You need to shut down access to your systems, cut your Internet connection; you've got something in your system. Once the investigation was started, it was very apparent it was too late.
James Dell - 39:34 By that point, it was a case of: we could have got lucky and found this just before it happened. Unfortunately, we didn't; it had already done the damage it intended to do. In fact, as soon as they started scrubbing it, the C2 software the attackers had deployed started reinfecting lots of systems and respreading as they tried to do the cleanup. Their failures are a little bit different. The first one is the lack of critical patching. If a patch is critical, you should be deploying it that day. You shouldn't be waiting for an appropriate time: the business should move to IT, not IT to the business. They also had a constant churn of IT managers, three IT managers in two years, while the rest of the team had been there for 20 years. And that's not collectively; four people had each been there 20 years.
James Dell - 40:26 So they were stale in their roles. They lacked the training, they lacked the updates, to be ready for these modern kinds of attacks. They also assumed that the third parties they were paying were doing a lot of this work for them, even though they hadn't checked the contract. They didn't know if they actually were; it was just assumed. They had no testing in place, no pen testing, no Cyber Essentials, nothing. They just assumed their system was good because they felt it was. They had no monitoring either, so none of these systems were being watched; it was assumed they were up and all was good. The issue was only spotted by chance. Had they logged into that server three weeks later, or three weeks earlier, the outcome would have been very different. The threat actors had been monitoring this vulnerability for months.
James Dell - 41:12 They had been able to deploy C2 software. They'd been able to get into the system and really get a good understanding of who the organization was, and build a very strong attack that was going to give them, as attackers, the most value. The entry point was ultimately a documented and known vulnerability that everyone was very much aware of. Their costs, for an organization of their size, are pretty scary. They spent 75,000 pounds just getting off the ground with incident recovery. They were losing around 10,000 pounds per hour in unbillable services. They had twelve days of outage, ten of which were working days they weren't able to bill for. They had a huge loss of customers. You're a law firm, you've lost our data, and now you're telling us you can't help us? There was a huge swathe of customers who left them straight away.
James Dell - 42:08 They have anticipated they're going to lose about 100,000 pounds in profitability over the next five years because of the damage that's been done. The initial attack cost them around 875,000 pounds, and they reckon about 1.4 million for the final overall cost. The truth of this one is they will forever live with the reputational damage of being the law firm that was hit by a cyberattack. They won't ever get away from that because, again, it was in the national press. It was in the local press for three or four days, because it was very high profile that they were offline, and they serviced some very high profile clients who couldn't use them. It became quite a nice little local news story. Cyber insurance did cover some of this cost, but not all of it. We've talked about what happens, but then there's something I always want to lean on, which is: just because it's over doesn't really mean it's over.
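For what it's worth, the law firm's numbers add up almost exactly if you assume an eight-hour billable day; a quick sketch, where the hours-per-day figure is my assumption rather than something stated in the talk:

```python
# Sketch of how the law firm's quoted figures fit together (GBP).
# incident_recovery, loss_per_hour and working_days_lost are from the talk;
# billable_hours_per_day is an assumed figure used here for illustration.
incident_recovery = 75_000   # initial incident-recovery spend
loss_per_hour = 10_000       # unbillable services while offline
working_days_lost = 10       # ten of the twelve outage days were working days
billable_hours_per_day = 8   # assumed

initial_cost = (incident_recovery
                + loss_per_hour * working_days_lost * billable_hours_per_day)
print(f"Initial attack cost: £{initial_cost:,}")  # prints £875,000, as quoted
```

The remaining gap up to the ~1.4 million final figure is the longer tail: lost customers and the anticipated hit to profitability.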
James Dell - 43:07 Once an attack is finished and you go, right, we've cleaned it all up, I've had my five days of solid work, we're all good, all the systems are back up: that doesn't mean you're finished. You've got things you still need to do. You still need to do a post-attack cleanup. There will still be systems and data that need scanning and cleaning, making sure you've got no trace of the attack left in your system. Once you've been attacked once, you will always get attacked again. The community online is really keen on sharing data about places where threat actors have been successful. It gets shared very quickly on the dark web, so you will get attempted reattacks. Whether they're successful or not depends on the steps you've taken. If data was stolen, there's a high chance it's going to be used after the attack, whether it gets sold online or whether they come back to you for ransom.
James Dell - 43:55 There is always then the chance that legal challenges will be brought against you if you've mishandled customers' data or mishandled any information. There's always the ability for an individual or an organization to challenge you, take you to court and try to get damages from you. The ICO in the UK will always get involved if the case is big enough, and in nearly every one of the cases I've mentioned, the ICO has been involved in some way. The level of action they've taken has depended on how they were feeling at the time, but they're always going to be there. As I mentioned, threat actors love to share success. You will lose business and clients to attacks as soon as people know that's what has happened. Would you trust your bank with your money knowing they've managed to lose a load of it?
James Dell - 44:39 No. You're not going to trust a business with your business if you think they're not going to be good at it. You're then going to have to rebuild your reputation, you're going to have to manage the press and any publicity, and there will be job losses. It doesn't always hit the IT team or your information security teams, your SOC teams; it may hit upper management, but that still affects you. I always use this slide when I'm talking to business owners and say, look, even if your IT team are doing their job, it may be that it falls on you as a business owner or a director, that you're responsible for these failings, and therefore you could be the one losing your job at the end of this. Here's the big bit: what should you be doing differently? Where should you be changing?
James Dell - 45:23 Well, the first thing is everyone should be investing in their staff. Both your IT staff and your wider staff should be trained to a decent level of cybersecurity. All staff should have some form of testing done against them. All staff should have yearly training, but your IT staff should have more targeted training, more often. As an IT professional, always have a paper trail of everything you're asking for and everything you're being told you can't have. There are always going to be those scenarios where you say, I need 100,000 pounds to buy a managed threat service, and your boss will go, not a chance. Have that paper trail, because when this all goes wrong, you want to be able to go, sorry, I told you this was going to happen. But always fight for those investments. Having been an IT manager for a number of years before I did this job, I know it is a fight.
James Dell - 46:14 You have to be the one that's willing to stand in that room and say, if you don't give us this money, something bad is going to happen, and it's on your head, not on mine. Also, don't settle for risks. When a business wants to make changes, wants to implement a system, wants to do anything, don't settle for the fact that it carries risk. Be the one that flags it; be the one that says we need to change this. You should be testing three things: your staff, your systems and your approach. Your approach is normally best tested with a third party, someone who can come in and say, is that the right way to do things? Maybe do it through certification like Cyber Essentials. You should be physically testing your systems with penetration testing.
James Dell - 46:54 You should be doing vulnerability scanning, and you should be doing phishing simulation training with your staff and testing them on a regular basis. Your staff are your weakest link, but they can also be your best protection. You need to make sure you're spending that time with them. Now, I mentioned at the very start that in an attack people just kind of try strategies and get on with things; you shouldn't have to do that. You should have dry-run and tested all of your policies and procedures and your approach to DR before things happen. I suggest you do that via what we would call a roundtable event, where you pull in your cyber insurance specialist, you pull in your IT team, you pull in business ownership, leadership, marketing. You bring everyone into a room and you give them a scenario.
James Dell - 47:43 I normally do it with a brown envelope; no one else knows what it says. You open it up and say, right, this is what's just happened. Now let's work through it end to end, how we get things sorted. Yes, you've got your hand up.
Dinis Cruz - 47:55 Can I make a little addition here? Those are amazing examples, and I'll definitely use some of the data, because you provide lots of really good ones. One of the things I like to do is use existing incidents almost as dry runs. Basically, you take a P3 and you run it as a P1. The justification, if you go back to your sequence of events, is that you might have a P3 on business impact at the end, but I can justify running it as a P1 because it was detected there, not detected here.
James Dell - 48:29 Right.
Dinis Cruz - 48:30 What's really interesting is that as you do that, in a way you're not doing dry runs, you're actually running for real. The first ones tend to be super painful: even just getting the team together, getting the stuff, getting the machine started, starting the Slack channel super fast, creating playbooks, all that. But because it's a P3, you have more margin for error.
James Dell - 48:51 Yes.
Dinis Cruz - 48:53 I think that works really well once everybody’s on board with the idea that, yes, this is not a P one for the business, but it’s a P one for security because we elevated that P three.
James Dell - 49:05 That's the way I often have to do it. I often get brought into businesses to run these and be the kind of outsider leading the scenario. I have to do that kind of forcing everyone into a room and going, right, you are going to talk about this and you are going to see how bad it can be. Because when it's a real scenario and they're trying to ring you at three in the morning, they're going to want to know where you are and why you're not answering your phone.
Dinis Cruz - 49:25 The advantage of doing it on top of a real scenario is that you can then do what-ifs that are very realistic. My problem with a lot of corporate disaster recovery, the DRs, the tabletops, is that everybody is far too relaxed. There's far too much optimism around the table, whereas when you have a real example, the whole question of "is this possible?" just went out the window, because it just happened. Now, the attackers might have turned right instead of left, which is why it's a P3 instead of a P1, but they happened to be there, right?
James Dell - 50:01 Yeah, definitely.
Dinis Cruz - 50:02 Matthew, you got your hand up.
Speaker 3 - 50:08 Hi, James and Dinis, longtime listener, first time caller. What I wanted to ask is: you mentioned fighting for investment, which is obviously very difficult. People hate spending money; they just hate it. I was wondering, is it actually a good case to make that by putting that upfront investment into things like prevention, you save money on things like cyber insurance? Will it bring the premium of your cyber insurance down?
James Dell - 50:41 Yeah, definitely. I've got some bits on cyber insurance coming up, but you're right. The way I normally do it when I'm speaking to someone, because obviously I'm an outsider to a lot of organizations, someone they're bringing in to provide advice, is I'll say, look, I can show you these real-world scenarios and show you what the cost is if you get this wrong. You could spend 100,000 pounds now or spend a million pounds when it goes wrong. Do you want to take that risk? Have you got the million pounds in your back pocket? Often it comes down to weighing that up and saying, look, I'm happy for you to say we haven't got the money, but that's a call for the business; I'm not making that call. A lot of the time, certainly in IT management roles, they try to put it onto your sec ops, your IT managers, and say, no, it's your problem.
James Dell - 51:24 In which case: if you're not going to give us the money, it becomes your risk and you're swallowing it. That's the way I always try to take it.
Dinis Cruz - 51:31 You touched on one of my pet peeves with the cybersecurity industry. In fact, we did a couple of sessions in the Summit that talked about the correlation between your security posture and the cost of insurance. I think the cyber insurance companies are sometimes still pushing big insurance blocks, but the new ones, I think, are going in the right direction, where they, for example, ask you for data feeds. They ask you for a lot of data, digital data that they can consume on your posture, and then they can adjust the premiums on it. And I think that's the future. Because then you can go to a division or a group and say, look, if you guys don't invest in this, a, you have to own the risk, but also you have to pay higher premiums, because the insurance for your part of the business is going to be way higher than the insurance for that part of the business that actually has these measures in place.
Dinis Cruz - 52:14 So here's the ask for the cyber insurance companies, or for an entrepreneur: I still feel that market is not well served at the moment.
James Dell - 52:26 No, at the moment it's still very much an old-fashioned insurance business, in terms of going, oh well, you're going to pay this much and that's a risk we're taking. Whereas you could have spent a great deal making your system really secure and they don't really care; they're still going to give you the same premium, because it's the price they've set.
Dinis Cruz - 52:40 And it backfires, right? They get to the point where companies start to be uninsurable, where they won't insure them because they just don't have the data. I think we're facing a bit of a backlash from all this rush to sell insurance, where the companies didn't really know what they were doing. They just sold lots of insurance, then they got hit by lots of payouts, and now I think they're modernizing, but I don't think we're there yet.
James Dell - 53:10 The final thing from this slide I want you to take away is that security must be priority number one. We say it all the time, but when people are talking about launching new web systems, new services, security has to be the trump card, and your business needs to understand that if you go, this isn't secure, then the business needs to stop. I've seen so many organizations fail because they've gone, well, we've got this lovely new app that we're going to launch now, but it's not secure, the back end is not ready, and they just do it anyway. That's something we need to learn to change as an industry. We need to impress on people that security is the gatekeeper: if security says it's not ready, it doesn't go, no matter the marketing deadline or anything else.
James Dell - 53:52 It's a big, hard thing to change in a business, but it's something that, certainly with what we do at Planet IT, we're trying to embed into our customers: you can't get away with hiding from it. On the matter of cyber insurance, just to wrap up, there are a few things I want to leave people with. We've seen a lot of this recently, so it's a must: you must have cyber insurance in 2023. There's no excuse for a business not to have cyber insurance, but you need to read the small print. There are a lot of expectations in that small print on you as an organization. There are also a lot of caveats on how they can get out of paying, and you need to be very much aware of what you signed up to, not just, oh, cool, X company said they'll insure us, so we've signed up for it.
James Dell - 54:34 Because when it all goes wrong, you may not be able to lean on that insurance. You also need to understand their approach to an attack. A lot of organizations get attacked, call their cyber insurer, and the insurer says either "we will bring in X team to handle it" or "we expect you to do Y." You need to know that, because if you start making decisions and making changes, but your cyber insurance expects you to call them straight away so their team can take over, you may have invalidated your insurance immediately just by touching anything that's been attacked. We've seen two instances, which I haven't talked about today, where they didn't get a cyber insurance payout because they started fixing the problem. Their insurance specifically said: if you find a cyber incident, you have to ring this number and our team will take over.
James Dell - 55:20 Again, understanding your coverage is critical. You need to know to what value you're protected and what they will cover. I would also argue that you should be bringing someone from your cyber insurer into those roundtable exercises. In the last three I've run, we've managed to get someone from the cyber insurer there, and it's been really useful. They come in and really open the conversation up, because they go, well, we wouldn't warrant that, or we wouldn't support that, and the business then has to adapt its approach. You have to be aware they will not pay in some scenarios, and you have to be ready for that. You also need to keep the details of that cyber insurance somewhere other than your emails. The number of scenarios I've had where someone goes, oh, I have the login details for the insurer, it's in my emails, or it's a PDF we have saved on our system.
James Dell - 56:07 That's useless. If you don't know who insures you and you haven't got a paper copy of the records, it's going to be difficult. It sounds stupid, but I've hit that a couple of times in the last three years, where I've gone, cool, who insures you? No idea. Ensure the value is correct, and don't be afraid to shop around. You don't have to just buy any old cyber insurance. Loads of companies are offering really interesting, innovative solutions that, like you were talking about, leverage what protection you actually have and go: great, you have a managed threat service, we'll lower your premiums. Great, you have industry-recognized firewalling in place, we'll drop your premiums. You do pen testing every month? Great, we're going to lower your premiums. That's what you want to be looking for. You don't want an insurer who's just going to throw out a solution because it ticks their boxes.
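The posture-based pricing described here can be sketched as a toy calculation. Everything in this sketch, the base premium, the control names and the discount rates, is invented for illustration; no real insurer's pricing model is implied.

```python
# Toy illustration of posture-based cyber-insurance pricing.
# All figures and discount rates are hypothetical examples.
BASE_PREMIUM = 20_000  # GBP per year, assumed starting price

# Controls an insurer might reward, with illustrative discount rates.
DISCOUNTS = {
    "managed_threat_service": 0.15,
    "recognised_firewall": 0.10,
    "monthly_pen_testing": 0.10,
}

def premium(controls: set[str]) -> float:
    """Apply a multiplicative discount for each control the business has."""
    price = BASE_PREMIUM
    for control, discount in DISCOUNTS.items():
        if control in controls:
            price *= 1 - discount
    return round(price, 2)

print(premium(set()))  # no controls in place: full base premium
print(premium({"managed_threat_service", "monthly_pen_testing"}))
```

The point of the sketch is the shape of the incentive: each demonstrable control directly lowers the price, which is exactly the feedback loop the "data feeds" model discussed above would enable.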
Dinis Cruz - 56:52 By the way, when you said that they come in and take over, a lot of the time they come in and take over at a crazy extra cost.
James Dell - 57:00 Oh, yeah, right. Or they come in, take over and walk away.
Dinis Cruz - 57:05 They also bring in lots of other companies. In fact, your company might be one of the ones that gets called in by the insurance, right? But that gets charged at a crazy premium. Right?
James Dell - 57:14 Unfortunately, yeah, that’s the way it works. Whatever we charge, they’ll put their premium on top of that.
Dinis Cruz - 57:19 Absolutely. Someone's just asked a really good question.
Speaker 4 - 57:26 Yes, that's correct. Hello, James, thank you very much for the presentation. In my PhD thesis, I'm investigating cybersecurity decision-making dynamics, but I'm from an international relations department, so my field is different. Basically, I'm trying to understand how cognitive biases or decision flows impact these kinds of awareness trainings, and how they impact the quality of decisions during an incident, and so on.
James Dell - 57:55 Yeah. From my experience, the quality of decision making very much falls apart if people haven't got some form of bias, in the sense that if they've not been exposed to the technology or the way to approach things, they crumble under the pressure. Where a lot of these attacks get extended is that the IT manager or the security ops manager hasn't got actual exposure to real-world scenarios. They've read about it in their textbook and they go, yeah, this is what we do, but they don't know what that pressure feels like. Some of the most successful people, some of the people we have working for us who deal with these things all day, every day, have that cool, calm head because they've been there, they've done that, and they have that bias of: well, this is the way I'm going to approach it.
James Dell - 58:39 Although it can be a bad thing to say, well, I've already made a decision about how I'm going to start fixing this, ultimately it gives them a grounding: I know how I'm going to approach this. Even if the initial call of, oh, it looks like this kind of attack, or it looks a bit like it might be an Emotet attack, is wrong, they will at least have some kind of strong belief that they will follow. Especially in a high-pressure environment, that can be really beneficial. We see a lot of that where we get pulled in: a lot of the time it's people who are unsure and don't have anything to lean on. That's where they start making knee-jerk decisions, things no one else would warrant. You'd never make that decision if you were given it in a cool, calm scenario.
James Dell - 59:26 Under pressure, they go, right, I’ve deleted everything, that was easy. And you’re like, why did you just delete everything? That’s just given us nowhere to work from. I’ve seen it happen where their initial response was just to delete all the systems, because that was a way to fix it. So, yeah, it’s a very good point. I’ll be interested to read what you write, because there are definitely a lot of people under a lot of pressure out there.
Dinis Cruz - 59:48 Even in terms of budgets, right, and negotiations, and when you’re presenting before incidents or after incidents, having individuals who have experienced it in past organizations, or even an organization that has that experience, makes a big difference, right? Unfortunately, it’s a bit like that. The ones who understand the side effects tend to relate; they have a much more personal connection to the situation, so they can anchor the requests much better.
James Dell - 01:00:14 Yeah, that’s it. You have to have something to fall back on. For me, my backing is the fact that I’ve unfortunately had to go through this before, and I get to go through it with customers. It’s never a nice experience, but I always learn from it. Every one of these that I’ve been brought into, when it’s happening, you just feel dreadful, but then afterwards you look back, you do the thought exercises, and you go, actually, we learned quite a lot about how to deal with things. As a business, we continue to get better as we get involved with these. We’ve learned from the 20 or 30 that we’ve been involved in over the last three years, and we’ve learned a quicker way to respond, so unfortunately the person who gets attacked tomorrow will get a better response than the person who was attacked three years ago.
James Dell - 01:00:56 Because of the decision making that we’ve already built into our processes, we’re better equipped to deal with them.
Dinis Cruz - 01:01:05 I use incidents as a way to also get a lot of stuff done, especially on the P3s that get elevated, because it’s a great way to drive change. It’s a really amazing way to drive lots and lots of change. Actually, James, I’m going to make you a host because I need to go to the other side.
James Dell - 01:01:24 Yeah, of course.
Dinis Cruz - 01:01:26 Feel free to continue. That’s fine, you guys can continue if there are any other questions. Matt — so what you can do is make the ones who ask a question co-hosts. For example, Matthew has his hand up; just make him a co-host and then he can ask the question, and just keep the recording going. I was going to mute myself, but you guys keep going.
James Dell - 01:01:47 Matthew, you should be able to unmute now in theory.
Speaker 3 - 01:01:52 There we go.
James Dell - 01:01:54 I can hear you. It’s not too bad.
Speaker 3 - 01:01:58 Yeah. So you piqued my interest just there. You said you’ve been involved in 20 or 30 of these, kind of coming in and fixing the problem. How many do you see where you’re running a managed service into somebody and you stop an attack before it gets anywhere?
James Dell - 01:02:14 Yeah, for a lot of our managed service clients we’ll be seeing the indicators of attack, and we’ve got a large SOC team who are dealing with that kind of thing early doors, right? So those guys are picking stuff up in the first two stages and dealing with it day in, day out. Those ones we don’t class as incidents, because they normally get resolved. To be honest with you, most of the 20 or 30 I’ve been involved in are from the other side. They’re people who aren’t our customers, or they might have bought a tin from us, bought a server from us ten years ago, and they just know our name. We get pulled back in as that kind of trusted advisor, and a lot of the time those customers end up being managed service customers by the end of it, but we get pulled in very much from the outside.
James Dell - 01:02:54 As in, okay, well, tell us what you can do to help here. With our managed service clients, we try to be all over it, and I think that’s one of the things we strive for, to be honest: we are investing in our people to make sure that we are stopping the attack chain for managed service clients. Unfortunately, being involved in some of these attacks, you see that where there are MSPs involved, they’re not always doing everything they can do. They know that maybe the monitoring is not quite up to scratch, or they’re not pushing new technologies. One of the things that I and my team do for our managed service clients is that fighting bit we talked about earlier. We’ll be the ones who go and fight with the MD, or whoever the purse-string holder is, about these security products and go, look, you’ve got us doing your IT.
James Dell - 01:03:41 We really need to be investing in these technologies and we’re kicking Scream until it gets done because we know that without those tools in place, we’re the ones that get hurt.
Speaker 3 - 01:03:51 Thank you.
James Dell - 01:03:52 No worries. Are there any other questions at all? I think… perfect. Right, well, I am well aware that I’m over time, but I just want to say to everyone, thank you for listening. If you want to either have a further conversation or just reach out and have a bit more of an in-depth discussion around some of the things we talked about, my email address is on the screen, and you can get me on Microsoft Teams. I never bother putting a phone number on there because Teams comes through to my phone and everyone does it that way, but I’m always available. It’s not something that we ever charge for. If you want to have a conversation around how we can help better protect your systems, that’s what I’m here for. If you want to learn from other people’s mistakes, that’s ultimately why I’m here today.
James Dell - 01:04:39 Everything I do for our customers is about helping us learn from mistakes. It’s not necessarily about spending money; it’s about going, well, where can we do better? So do feel free to reach out, and I’m always available for further discussion. That brings me to the end of this session on the truth about suffering a cybersecurity attack. Thank you very much for your time, and I look forward to seeing you all again soon.
Speaker 3 - 01:05:20 Hi, James. Just before I pop off.
James Dell - 01:05:25 Yes.
Speaker 3 - 01:05:27 So, that was actually really interesting stuff. Sorry, lots of words — end of a Monday. No, it was really interesting hearing you talk about all the case studies you’ve been through, but did you say — would you think you’ve covered, like…
James Dell - 01:05:50 The worst of the worst?
Speaker 3 - 01:05:51 You mentioned the huge one that you came in and dealt with.
James Dell - 01:05:53 Yeah, I think some of these give a good flavour across the space. There were quite a few that happened around 2021 to 2022 which very much targeted one sector. We saw a lot of attacks in education, and that was across the board. I think everyone saw that; it was in the press a lot. There was a window where we did about seven or eight back to back, where it was just education providers being hit. You see it, and there’s a very similar vein to their attacks. The ones that I shared there are kind of the very worst, the 16,000-user ones, but then some cover the nicer user sizes. We do see the kind of 25-user businesses get hit with attacks, but they’re the ones you can recover from with an afternoon of hard thinking.
James Dell - 01:06:36 Whereas someone that’s been running for 20 years, all that data becomes very difficult to cover. Yeah.
Speaker 3 - 01:06:43 Cool. Well, thanks for the session.
James Dell - 01:06:47 No worries. Enjoy the rest of your evening. Thank you very much for your time.
Speaker 3 - 01:06:50 Likewise. Bye.