All businesses rely on computer services and other assets, and all are at least somewhat susceptible to losing them. Continuity Management examines those risks and makes judgements about which are acceptable.
![multi-stage flow chart](../images/pexels-photo-533189.jpeg)
Imagine there is a 1% chance of a specific disastrous event in the next twelve months: if you could protect your systems from that risk by spending £2 or €2 today, you would take it. If there were a one-in-a-million chance of that event happening next year, would you spend that money?
Probably yes: the cost is about the same as a lottery ticket, and the odds of the event happening are shorter than those of winning the jackpot. If the protection had a one-off cost of £10,000 or €10,000, you would not spend the money against the one-in-a-million risk, but somebody would be thinking hard at the 1% chance, and it would probably be a "no-brainer" if the risk were assessed at 10%.
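The comparison above is a simple expected-value calculation. A minimal sketch, assuming a hypothetical disaster cost of 1,000,000 (the article does not state one):

```python
def worth_mitigating(p_event, event_cost, mitigation_cost):
    """Return True if the expected loss exceeds the one-off mitigation cost."""
    expected_loss = p_event * event_cost
    return expected_loss > mitigation_cost

# 1% chance of a 1,000,000 disaster vs a 2 one-off spend: clearly worth it.
print(worth_mitigating(0.01, 1_000_000, 2))        # True
# One-in-a-million chance vs a 10,000 spend: expected loss is only 1.
print(worth_mitigating(1e-6, 1_000_000, 10_000))   # False
```

In practice the judgement is rarely this clean, since both the probability and the cost of a disaster are estimates, but the arithmetic frames the decision.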
A regular Disaster Recovery / Business Continuity exercise has several benefits, including preparing staff for something you hope never happens. The trite adage "failure to plan is planning to fail" comes to mind. A well-crafted scenario can also shake individuals out of complacency. A longer article, "Continuity Management", is available on LinkedIn.
Examples:
2023 CitrixBleed (or Citrix Bleed, CVE-2023-4966) was known to be exploited in the summer of 2023. Details were published in October 2023, with a severity rating of critical. It later became apparent that several cybersecurity threat actors:
- probably monitored the publication of the vulnerability
- were able to reverse engineer the vulnerability
- used a directory of Citrix servers to hunt for targets
- assessed whether each server was accessible
- checked whether the patch had been applied
From there they could infiltrate their victim's network, encrypt critical data and extort a ransom.
One simple low-cost option would have prevented infiltration, even if the patch had not been applied.
2024 A defective configuration file was deployed to Windows machines running CrowdStrike Falcon between 04:09 and 05:27 UTC on Friday 19 July 2024, causing affected machines to crash into an error state. Since CrowdStrike Falcon was used mainly by large businesses, the effects on the public were quickly noticeable: bank ATMs did not work, medical histories were not available to doctors and hospitals, and some airlines and airports fell back on paper systems. Remediation was a slow and convoluted manual process: refer to this knowledgeable article by Amber DaSilva. For a skilled and experienced engineer, a quick fix took perhaps 20 minutes, but a difficult case could take five hours. A small system with fewer than 20 servers might take a day to fix; a modestly sized enterprise might have hundreds of machines to remediate, and a large enterprise thousands.
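Those per-machine figures make a back-of-envelope effort estimate possible. A hedged sketch, assuming (the article does not say) that 80% of machines are quick fixes and the rest are difficult cases:

```python
def remediation_hours(machines, quick_frac=0.8, quick_min=20, hard_min=300):
    """Estimated engineer-hours to remediate a fleet by hand.

    quick_frac is an assumed split between quick fixes (~20 min each)
    and difficult cases (~5 hours each); both per-machine times come
    from the figures quoted above.
    """
    quick = machines * quick_frac * quick_min          # minutes on easy fixes
    hard = machines * (1 - quick_frac) * hard_min      # minutes on hard cases
    return (quick + hard) / 60

print(round(remediation_hours(20)))    # small system: roughly a long day's work
print(round(remediation_hours(500)))   # modest enterprise: hundreds of hours
```

Even with generous assumptions, the totals show why a manual, machine-by-machine fix scaled so badly for large fleets.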
Like Covid-19, neither of these examples is a Black Swan: both had predecessors, so they are more properly described as Grey Rhinos. As with Covid-19, contingency planning had not been up to the job, and options could have been in place to mitigate many of the effects.
We will carry out a risk assessment and provide an analysis prioritising the potential risks to the organization and its operations.