30 May 2014
Crisis communications and the CEO - By Jim Preen.
These days, CEOs have to be visible in an emergency. If the media feels they’re hiding, questions will be asked. Why’s she not taking responsibility? What’s he got to hide? Part of the problem for...


23 May 2014
Businesses ‘slow’ to address cyber risk exposures
by CRI | 22 May 2014 Businesses are still “slow” to understand the mounting need to address their exposures to cyber risk, research by Aon has found. The cost of cybercrime in Australia now...


16 May 2014
Crisis Leadership Lessons From the Great Antarctic Explorer – Sir Ernest Shackleton
Sir Ernest has been called "the greatest leader that ever came on God’s Earth, bar none" for saving the lives of 27 men stranded with him on an Antarctic ice floe for almost two years. The...





Each month ContinuityCoach.com brings you a case study showing how Business Continuity Management has helped businesses to recover from a major disaster.

The aim is always to show you the practical applications of Business Continuity Management and how important it is for your business to have a continuity plan. All case studies are PDF downloads.

 
Case Study – Volcanic ash crisis in the UK April 2010
A question to start: prior to April 2010, how would you have rated the probability and impact of a volcanic eruption crisis in the United Kingdom? Answer: so low as not to be taken seriously, given that the UK has no volcanoes. Major disruption happened to me when visiting the UK for a planned two days that turned into an enforced ten-day stay. I witnessed first hand the consequences, which gave valuable lessons on how Business Continuity Management provides answers for both predictable and unexpected incidents that escalate.
• Speed of events can overwhelm – one day things were normal, the next, all air travel was closed down and individuals and organisations hit a brick wall in getting from A to B.
• You're on your own – government agencies were slow to respond and in some cases advisory only.
• Costs escalate quickly – hotel accommodation, without other options, is expensive and availability of extra finance is essential. Insurance claims may be declined and, even if accepted, settlement is not instantaneous.
• Somewhere to go is vital – you need a roof over your head for basic survival: shelter, food, sleep and a place to communicate from.
• Communication needs to be quick and easy – those with a need to know your status are families, government embassies, support organisations and businesses, all of whom need to be pre-listed for quick, easy reference. Mobile phones are essential and must be able to be recharged.
• Computer access – best achieved with a notebook computer that has wireless internet capability. A portable computer will also give access to files, email and the internet to quickly continue essential communication as well as essential work responsibilities or projects. Internet cafes are useful but have limitations. Alternative office space may not be available.
• Power adaptors are essential – have one with you for the country being visited.
• Risk management has failed and your immediate focus is survival, communicating and recovering to a near-normal state as quickly as possible.
• Risk assessment – don't try to be too specific on possible threats. Instead use broad categories such as "climate", "financial" and "natural" to cover the possibilities while avoiding huge detail and possibly missed threats. Regularly create "what if" scenarios, at least once a year, to make sure you are ready for unplanned incidents (see the sketch below).
• That old adage "don't panic" holds good. It takes a conscious effort and determination to stay calm and not to see the gloomy worst of the situation.
• De-brief after the event to establish lessons learnt and update your Business Continuity Plan.
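To make the broad-category risk assessment above concrete, here is a minimal sketch of a risk register in Python. It assumes simple 1-5 ratings for probability and impact and ranks "what if" scenarios by their product; the categories, scenarios and scores are illustrative only, not ContinuityCoach.com's own method.

# Minimal sketch of a broad-category risk register, assuming simple 1-5
# ratings for probability and impact (categories and figures are illustrative).

from dataclasses import dataclass

@dataclass
class Risk:
    category: str      # broad category, e.g. "natural", "climate", "financial"
    scenario: str      # short "what if" description
    probability: int   # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (minor) .. 5 (catastrophic)

    @property
    def rating(self) -> int:
        # Simple probability x impact score used to rank the register.
        return self.probability * self.impact

register = [
    Risk("natural", "Airspace closed by volcanic ash for a week", 1, 5),
    Risk("climate", "Flooding blocks access to premises", 3, 4),
    Risk("financial", "Key customer becomes insolvent", 2, 4),
]

# Review the register at least annually, highest-rated scenarios first.
for risk in sorted(register, key=lambda r: r.rating, reverse=True):
    print(f"{risk.rating:>2}  [{risk.category}] {risk.scenario}")

In practice the same scoring can live in a spreadsheet; the point is the broad categories and the annual "what if" review, not the tooling.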
 
 
Collection of business continuity management case studies published
The Business Continuity Institute has published a collection of case studies submitted by its members that “articulate their experiences of the value of business continuity management to their organization”. Contact the BCI or us for more information.
 
 
Power outages and back-up generators
Thousands of travellers in New Zealand were grounded for several hours on Sunday 11 October 2009 as airports around the country were thrown into chaos when Air New Zealand's computer system crashed. Planes were delayed for up to two hours as the airline's electronic check-in system failed, forcing flights to be painstakingly processed one by one. The system crash, which happened about 10am, meant some flights were cancelled. It also affected online bookings and call-centre activities.

Bruce Parton, Air New Zealand's group general manager of short-haul airlines, said more than 10,000 people were affected by the breakdown. The airline had called in extra staff and handed out food to apologise to waiting travellers, he said. "It was the end of the school holidays, so you couldn't ask for a better day for this to go wrong," he said. With all the airline's computers down, the "chaos" meant staff resorted to using pen and paper to check flights in, Mr Parton said. The airline would be meeting computer manufacturer IBM the following morning to "express our concern", Mr Parton said.

Air New Zealand chief executive Rob Fyfe lashed out at IBM in an internal email about the mainframe crash that crippled services and disrupted thousands of passengers. "In my 30-year working career, I am struggling to recall a time where I have seen a supplier so slow to react to a catastrophic system failure such as this and so unwilling to accept responsibility and apologise to its client and its client's customers," he said. "We were left high and dry and this is simply unacceptable. My expectations of IBM were far higher than the amateur results that were delivered yesterday, and I have been left with no option but to ask the IT team to review the full range of options available to us to ensure we have an IT supplier whom we have confidence in and one who understands and is fully committed to our business and the needs of our customers."

Air New Zealand was to meet IBM over the crash, which took down airport check-in systems as well as online bookings and call centre systems at about 9.30am, affecting more than 10,000 passengers and throwing airports into chaos. The airline said most systems were restored around 1.30pm, but the passenger backlog did not start to clear until self check-in kiosks were up and running again about 3.30pm.

Air NZ's short-haul airlines group general manager, Bruce Parton, told Radio New Zealand the fault appeared to have been caused by a power failure, followed by a delay in a back-up generator starting. "Ten thousand-plus customers affected on the last day of holidays, and millions of dollars of revenue not going through our online site, you can be assured we'll be having some very serious discussions with IBM today." But most passengers delayed by the outage were unlikely to get compensation. "We'll go through that today. Most people moved within an hour and so it doesn't hit the threshold (for compensation)," Parton said.

Air New Zealand outsourced its mainframe to IBM in 1997. Four years later, it also outsourced its mid-range systems to IBM.
 
 
A disaster doesn't have to be a catastrophe if your business is well prepared, writes Julia Talevski.
In recent months we've witnessed the full range of natural disasters in Australia, from flash flooding to bushfires. Man-made disasters such as virus attacks, accidentally wiped data and power outages can also affect businesses. Having a disaster recovery plan in place is one thing small business owners should consider. What would happen if everything that relied on IT suddenly vanished? Would you have the ability to continue running the business? How long could you do it without IT before it began to affect performance? It is almost impossible to prepare for the worst, but planning is critical to ensure your business can get through a worst-case scenario.

A Telstra-commissioned survey revealed more than half of all Australian small businesses don't have a disaster recovery plan in place: about 52 per cent of businesses have not thought ahead or given consideration to a disaster recovery plan.

When a storm struck the call centre of national delivery company Couriers Please in Homebush, it had no communication links for up to eight days. The storm struck during the Christmas period, one of the busiest times of the year for most businesses. Without any solid indication of when its systems would be back in full swing, the company had to think quickly about how it was going to keep its call centre operations running without affecting customers. "The downpour flooded the exchange pit that holds all of our telecoms," says the chief information officer of Couriers Please, Alistair Alderson. "At the time we thought it was going to be a one- or two-hour outage, nothing to the point of what we were going to be out for. It was hard to make calls on how we would deal with it."

The company has other call centres in Perth, Brisbane and Melbourne, and for the first few hours it was able to flick a switch to divert calls to those centres so they could still be answered, Alderson says. Couriers Please uses a hosted contact centre application called Genesys. "It is all well and good for a short period but if you're talking two to eight days, the customer service kind of gets degraded in those areas as well because those staff can't take on that call volume for a sustained period," Alderson says. "We had to make a call on how we would deal with the NSW area and luckily we had a network connection in our head office and we were able to move hardware and staff there. It kind of saved our bacon a bit. It's hard to gauge the damage on the business but overall it was a successful disaster recovery plan."

Alderson says Couriers Please has about 70 office staff and about 500 contracted couriers, who were left without a data connection but still managed to take bookings and dispatch jobs in NSW. The flooding experience gave the company the ability to deal with further disasters as they occur. And the flooding hasn't been the last disaster, either: Alderson says head office later burnt down, and there have been other communication outages since. "We knew exactly how to deal with it," Alderson says. "What you think won't happen, will."

It can often be difficult for a small business to justify funding a disaster recovery plan versus other areas of investment. "For SMBs [small to medium businesses], the issues they face in terms of resilience to disaster recovery, they're no different to what enterprises face," says IBM's business continuity and disaster recovery services executive, Andrew Fry. "It's fair to say that SMBs may have a greater impact from a disaster."

A disaster recovery plan isn't something most business owners consider until disaster strikes or they have a close call with losing their most precious assets, says the chief technologist at Hitachi Data Systems, Simon Elisha. "Solutions can be as simple as having back-ups that work and replications in place, along with a whole raft of technology solutions. The first step is the strategic decision to ensure the longevity of the company," Elisha says.

Testing the data backup plan is also crucial. "You hear the horror stories of where companies had a fully fledged backup system in place and they've gone to do a restoration and all the tapes are blank because no one ever tested whether or not they could restore the information or whether it was actually working in the first place. Doing it in a really organised, testable fashion is really important," Elisha says. "If you fail to plan, you plan to fail."
 
 
Case Study – results, good & bad, from a major fire in Queensland, Australia on 16 June 2009
This was a large-scale fire that started late at night, creating a disaster zone involving a major shopping complex with multiple tenants.

What happened
• The cause was unknown but the fire spread rapidly through the ceiling to envelop the entire block
• Nearby premises were extensively damaged, not by fire but by smoke and water
• Electrical and air-conditioning systems were also destroyed
• The whole area was shut down by Fire Services
• Security guards were called in by property owners to control access
• Staff of affected businesses who tried to visit the site next morning were distraught at what they saw
• Media were quickly on the scene to write their stories for the next day's papers and were looking for any comments

What went wrong?
1. The building sprinkler system was activated but did little to stop the fire
2. Most media comment was negative for the businesses involved. There was no mention of crisis and continuity plans to quickly restore business activity
3. One tenant, who wanted to remain anonymous, said "we don't know what will happen and this will affect our business massively"
4. No "business as usual" statements appeared in the press in subsequent days

What should have happened
1. The possibility of such an event should have been foreseen through a simple risk management process answering questions on a) probability, b) severity and c) controls
2. Each business should have had a crisis and business continuity plan that had been tested before the event and could then be relied upon to deal immediately with the crisis
3. Staff should have been asked to stay at home and not be allowed to visit the site; distraught staff will only hinder recovery and may make emotional statements to the press
4. A competent spokesperson should have issued a brief and positive message to the media; speculation should not be commented on
5. The existence of adequate insurance should have been commented on
6. Alternative premises with adequate resources, e.g. computer systems and data back-up, should have been considered and arranged in advance of the fire
 
 
Lessons learned from Mumbai terrorist attack
On Thursday 8 January, Homeland Security and Governmental Affairs Committee Chairman Joe Lieberman, ID-Conn., and Ranking Member Susan Collins, R-Me., heard testimony from top intelligence and law enforcement officials about lessons learned from the terrorist attacks on Mumbai in November 2008.

Department of Homeland Security Under Secretary for Intelligence and Analysis Charlie Allen discussed the Department's tactical lessons from the attack, including the fact that disrupted plots may resurface, noting how Indian officials had apprehended a suspect months earlier with what appeared to be plans to attack the Taj Hotel. He also discussed the challenges associated with responding to a similar attack in a major US city. The FBI's Donald Van Duyn noted the leading-edge technologies used by the Mumbai terrorists to communicate with each other and their superiors back in Pakistan during the attack. And New York City Police Commissioner Ray Kelly said law enforcement needs a renewed focus on educating the private sector on potential security threats to soft targets, and highlighted the NYPD's outreach efforts. All three witnesses discussed the threat posed by Lashkar-e-Taiba on a global scale and to the US homeland.

"We need to understand the implications of some of the tactics used successfully in these attacks," Lieberman said. "For example, we know that the attackers traveled undetected from Karachi to Mumbai by boat. What are the implications of this attack from the water for our own maritime security? We need to look at the targets of this attack and determine whether we are doing as much as we should be doing to appropriately protect our own 'soft' targets, including shopping malls, hotels, and sporting venues. We need to better understand the threat to the United States from Lashkar E-Taiba. And we need to examine how we can strengthen our homeland security cooperation with the Government of India and other allied governments in the wake of this attack."

Said Collins: "The murderous assault on Mumbai deserves our attention because it raises important questions about our own plans to prevent, prepare for, and respond to terror attacks in the United States. Careful analysis of the tactics used, the targets chosen, and the effectiveness of the Indian security forces' response provide valuable insight into the strengths and weaknesses of our own nation's defenses."

Allen and Van Duyn also discussed how Somali-Americans are being recruited by Al-Shabaab, an Al-Qaeda-affiliated terrorist group, to travel to Somalia where they are trained to wage jihad, noting the implications for radicalization of American citizens to carry out attacks at home or abroad. Last October, a naturalized US citizen carried out a suicide attack in northern Somalia on behalf of Al-Shabaab. Source: Senate Homeland Security and Governmental Affairs Committee.
 
 
Queensland Australia, Communication meltdown - 15 July 2008
When a contractor severed a fibre optic cable, the system was re-routed through an inland back-up network that collapsed after a system card failed at Stanthorpe, causing an internet and mobile phone blackout on the Optus network. It is believed Optus had a back-up card flown up from Sydney and then by helicopter from Archerfield to Stanthorpe. The Queensland government now wants to know why a backup system did not prevent the four-hour shutdown, which also affected ATM and EFTPOS services.

Services were partially restored by early afternoon, but there were still major problems. Hundreds of domestic and international flights were delayed, and businesses were left without internet or mobile connections. An Optus spokesman said mobile, landline and internet services to and from Optus customers throughout Queensland and northern NSW were down from about 8am and restored about 12.30pm. Emergency numbers were still available, although Queensland Health said some of its services had been affected. "The following services were impacted for Optus customers: fixed line voice calls to and from Queensland, mobile services for customers located in Queensland, internet browsing for Queensland customers to services outside of Queensland," the spokesman said. He said technicians were sent to repair the cable, and apologised to Optus customers for the inconvenience.

Frustrated south-east Queensland customers alerted Optus to the major communications breakdown. Callers attempting to reach affected areas got a message stating: "We regret that all lines to the area you have dialled are busy. Please try again later." Optus learned of the drama when phones started ringing constantly in its Sydney offices.

A number of services throughout Queensland were affected, including Brisbane's airport and other public transport. Brisbane Airport Corporation spokeswoman Rebecca McConochie said the domestic and international terminals were "heavily affected". "The electronic check-in and baggage handling systems were unable to operate, airlines were having to use manual systems to maintain services," she said. "All our phone lines and internet across the airport were also affected." Ms McConochie said all airlines experienced flight delays, but the maximum delay was one hour. "No one missed any flights or had reports of cancellations." Ms McConochie said every piece of luggage had to be manually tagged and placed on each flight. She said passengers on afternoon flights should expect a flow-on effect from the delays, with all flights hoped to be back on schedule by the evening. "Passengers should check with their airlines for adjusted times on their flight schedule," she said.

An angry Premier Anna Bligh said Queensland deserved a better telecommunications network, one that did not fail when a single cable was cut. Ms Bligh said she would seek urgent talks with the Federal Government to ensure networks had backup systems in place. "As Premier of Queensland, I am not happy that our entire telecommunications system has been disrupted this morning because of one cut in a cable," she said. "We've seen chaos at airports, we've seen people unable to communicate with each other. Modern Australia needs a telecommunications system that can withstand one cut." Ms Bligh said businesses affected by the problem were entitled to be angry but would not say if she thought they deserved compensation.

The TransLink Transit Authority and its 24-hour call centre were unable to receive calls for a few hours because of the Optus outage, and directed customers to try logging on to its website for information instead. Others, such as Yellow Cabs and Black and White Cabs, were not affected as they were with Telstra. Other mobile phone companies were also affected. Calls to 3 Mobile were met with an automatic recording saying the line was too congested and to ring back later. Calls to Virgin Mobile had not been returned.

Telstra spokeswoman Elouise Campion said Telstra customers were not affected, although they were not able to get through to those on Optus phones. She said triple-0 calls were still operational for Telstra customers but she was not sure about Optus customers. "Telstra operators take triple 0 calls but if you don't have an operational phone it doesn't matter who you're with," she said. "But I won't assume, so check with Optus." The Optus spokesman had not returned calls, and the Optus website did not appear to have any reference to the problem.
 
 
Reputation & Brand
Reputation and brand were on the line, and lost, in each of these actual events:
1. Undisclosed market agreements
2. Contaminated water
3. Financial misdoings by bank traders

Background
In each of these case studies the organisations involved suffered huge damage to their reputations and brands from allegations made against them over incidents that were entirely unpredicted. Each case study organisation was a market leader and a listed company. The organisations claimed to have risk management procedures in place.

Outcomes
• Loss of customer confidence in the brand
• Large drop in share price
• Loss of market share
• Substantial drop in earnings
• Loss of directorships and jobs (2,500 in one organisation)
• Takeover (new owners for one organisation)

Lessons learned
1. Risk Management and Business Continuity Management (BCM) that are inactive are useless.
2. The allegations were slow to be answered, which suggested there was no planned way to handle critical comment.

Had BCM been in place it would have been reasonable to expect the following:
• Business Impact Analysis and Risk Assessment would have identified and treated the risks that caused the incidents, with follow-up action documented, including Business Continuity Plans that were exercised (a minimal sketch of such an analysis follows below).
• Strategies would have been thought through.
• Incident/crisis plans would be in place with a focus on communication, including positive media releases and responses.
• Business Continuity Plans would be in place.
• Awareness and training would have taken place.
• Maintenance of plans would ensure they were up to date.
• Exercising/testing of plans would have taken place, with any gaps identified and remedied.
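As a concrete illustration of the Business Impact Analysis step mentioned above, here is a minimal sketch in Python of a BIA record that ranks processes by their recovery time objective (RTO). The process names, figures and strategies are invented for illustration and are not drawn from the case study organisations.

# Minimal sketch of a Business Impact Analysis record, assuming simple
# hour-based recovery targets (names and figures are illustrative).

from dataclasses import dataclass

@dataclass
class BusinessProcess:
    name: str
    rto_hours: int          # recovery time objective: tolerable downtime
    impact_per_day: float   # estimated financial impact of one day's outage
    continuity_strategy: str

processes = [
    BusinessProcess("Customer order processing", 4, 250_000.0,
                    "Fail over to alternate site"),
    BusinessProcess("Payroll", 72, 20_000.0,
                    "Run manually from backed-up records"),
    BusinessProcess("Media and stakeholder response", 1, 0.0,
                    "Pre-approved holding statements, named spokesperson"),
]

# The most urgent processes (shortest tolerable downtime) drive the plan.
for p in sorted(processes, key=lambda p: p.rto_hours):
    print(f"RTO {p.rto_hours:>3}h  {p.name}: {p.continuity_strategy}")

Ordering by tolerable downtime is what turns a risk list into priorities a continuity plan can act on.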
 
 
Case Study – Earthquake disruption & six important lessons learnt.
Sometimes the popular image of an earthquake is one of destroyed buildings and razed infrastructure. In real life this is not always what causes businesses to suffer and possibly fail. This month we highlight an almost fatal disruption to a small business that happened without any damage whatsoever to the building or the equipment within, following an earthquake.

This true story begins on 29 December 2007, when a major earthquake hit a city in New Zealand at 9.00 p.m. The violence of the shake was frightening; fears for personal safety were paramount, followed by apprehension concerning houses and apartments that were fully occupied after the end of a normal business day. What then happened was not at all normal.

Essential services fail
Power supply failed and darkness prevailed. So where was the back-up lighting? Torches in most households provided some light in the pitch darkness of the gathering night.

Household survival comes before business concerns
The proprietor of our case study business was preoccupied with his family's concerns and no thought was given to his leased commercial premises other than to briefly ponder what might have happened there. Emergency services were getting into visible and audible action, with messages broadcast over radio urging the community at large to beware of a possible tsunami triggered by the undersea quake. The populace were urged to seek high ground, and tension was heightened by fear of a massive tsunami that could sweep all before it, similar to graphic images from recent Asian tsunamis. No such tsunami happened, but it was well into the early morning before that became clear.

CBD the next focus
The CBD could not escape unscathed, and as households recovered some semblance of confidence in their safety, our case study professional became concerned that the CBD building from which he worked was damaged. It was early morning when he ventured into the city and apprehensively peered at the unfolding picture of buildings with faltering structures. On arrival at his business site he breathed an initial sigh of real relief that the building looked undamaged. However, this was premature: surrounding the building were portable police barriers blocking every entrance. Being a well-known professional helped, as the police officers on duty at the site relented and gave him brief access to his professional suite.

No damage to the business – but!
Amazingly the interior was unscathed and all the specialist professional equipment was intact and undamaged in any way – hallelujah indeed – but the relief was short-lived.

Access denied by police
On emerging from his brief incursion, the police were very blunt that no one was going back into the building until an engineering clearance was available – but how long would that take? Our intrepid businessman made his own investigations and found that a section of a higher level in his building was suspect and needed remedial work before anyone would be allowed back in to open for business.

Unexpected delays – where is the building owner?
The problem then worsened considerably. The building owner was overseas and couldn't be found.

Building closure indefinitely
The local authorities merely maintained the building closure notice and, alarmingly to our business owner, it now seemed he could be shut out of business for an indeterminable time.
No insurance for business interruption had ever been obtainable by our businessman, for various unchangeable reasons, and even if it had been, it would not have protected against loss of brand image and long-term loss of customers. Where was the business's cash flow to come from? Hardly comforting, when the business had patient appointments booked well ahead on a daily basis – the lifeblood of the business was now in jeopardy. No effort was made to contact patients, as there was too much confusion elsewhere for our businessman for that to receive attention – not a great strategy. Our business owner was now desperate for something to happen – still no sign of the building owner!

Find an alternative premise – urgent!
Desperate messages went out to real estate agents to find alternative premises, a possibility that had never been considered before. Premises were easily found, but unlike other businesses that needed IT systems only, our businessman needed specialist equipment not available "off the shelf".

Luck intervenes
At this point the only saving grace for our businessman was that the timing of the earthquake had fortuitously coincided with his annual closedown of three weeks. However, forward bookings were in place for the recommencement of the professional business and time was rapidly running out. Our businessman redoubled his efforts to find out what was needed to re-open the building – the prospect of alternative premises was not looking a practical possibility.

Facing disaster
One other possibility then reared its unattractive head – move out of the district altogether and take up a locum professional position to tide the family over. However, this would mean that existing patients would go elsewhere and be lost forever, thus ending the business. It was then discovered that the remedial work required to allow the building to re-open was in a remote part of the building and would take only an hour or so of work – frustration again.

More good luck
The building owner had now been found, in Asia. He was willing to authorise remedial work and things lurched back in favour of our businessman staying on site and hanging on to his patient base without too much loss of income. Local authorities now moved to authorise building repairs and, in no time it seemed, our businessman could see light at the end of the tunnel – his business, patients and income could be saved, but only just.

Did the business survive?
Today for our businessman it's business as usual – albeit chastened greatly by a frightening natural near-disaster that caused no physical damage to his business but effectively shut him down literally overnight without warning. Our businessman was extremely fortunate to see only a slight drop in cash flow and some patients gone forever elsewhere. The business had teetered on the brink of extinction and survived by sheer luck – not a good way to be resilient.

Six lessons to be learnt from this case study
1. Risks need to be considered regularly and "what if" scenarios thought through.
2. Strategies to deal with any disruption need pre-thinking, again based on "what ifs".
3. Documenting a plan of action is next, starting with managing the initial crisis – not least of all, whom do you communicate with? Customers and patients should be part of that early communication as much as all other stakeholders (a pre-listed contact register, sketched after this case study, makes this quick). It could be a radio or newspaper ad and a simple notice posted in the window at the premises.
4. Once the crisis is over, recovery must be addressed. Where you would go, with all its implications, is an imperative to be decided in the planning process. The initial place to identify is an emergency command centre. That sounds daunting but could be as simple as someone's house or a hotel.
5. Addressing the basics above, and more, once is not enough. At least annually, the plan should be updated and tested by exercising, which can be fun as well as instructive. Lessons are always learnt from exercising, and that's a lot better than finding out when the earthquake actually hits.
6. Insurance may be unobtainable, and even if it is available, more than just insurance is needed.

Footnote
ContinuityCoach.com was designed with this very scenario in mind and is an effective management practice that we recommend for all organisations, large, medium or small.
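Lesson 3 above depends on having stakeholder contacts pre-listed and reachable when the premises are not. The sketch below, in Python, shows one way to keep such a register and export it to a file that can be opened from any laptop or phone; the groups, channels and file name are illustrative assumptions only.

# Minimal sketch of a pre-listed stakeholder contact register kept off-site
# (entries are illustrative).

import csv
from pathlib import Path

CONTACTS = [
    # group, who, channel, detail
    ("patients",  "Booked appointments",    "phone/SMS",   "see booking system export"),
    ("staff",     "All employees",          "phone tree",  "home numbers held off-site"),
    ("suppliers", "Equipment maintenance",  "phone",       "+64 4 000 0000"),
    ("media",     "Local radio station",    "phone",       "notice of temporary closure"),
    ("building",  "Building owner / agent", "phone/email", "authorise urgent repairs"),
]

def export_register(path: Path) -> None:
    # Write the register somewhere reachable when the office is inaccessible.
    with path.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["group", "who", "channel", "detail"])
        writer.writerows(CONTACTS)

if __name__ == "__main__":
    export_register(Path("contact_register.csv"))
    print("Contact register exported; review it with each annual plan update.")

A printed copy kept at home does the same job; what matters is that the list exists before the incident, not after.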
 
 
Caught without a DR plan, fire destroys retailer's IT centre
After a fire completely destroyed its IT centre earlier this year, a major fresh produce retailer decided to deploy a disaster recovery solution. The company's chief financial officer said the need for a BC/DR plan had been identified prior to the fire, but when the fire did happen there was no formal plan in place to fall back on.
 
 
Crisis Management at Virginia Tech, USA, 30 Aug. 2007 report.
A day after a scathing state report criticised Virginia Tech's handling of the shooting massacre on 16 April 2007, Charles W. Steger, the university president, defended his administration's actions Thursday and said he had no plans to resign. "Based on what we knew at the time, I believe we did the right things," Dr. Steger said at a news conference on campus, in Blacksburg. "You have to understand how fast things were occurring."

The report, prepared by a panel appointed by Gov. Tim Kaine, said lives might have been saved had university officials warned the campus earlier that a killer was on the loose. But Dr. Steger said officials did not do so to avoid panic, and to ensure that only accurate information was disseminated. He rejected the report's depiction of a two-hour gap in police activities between the first shootings, when 2 people were killed in a dormitory, and the second set, when 30 people were killed in a classroom building. The police worked continuously through that period, and their actions improved the response to the second shootings, Dr. Steger said. He said the police had learned little from the first crime scene that would have warned them that more killings were about to occur.

Dr. Steger agreed with the report's conclusions that mental health tracking on campus was deficient and that communication among agencies had broken down. But he added that the university had received limited information about the mental health problems of the gunman, Seung-Hui Cho, when he arrived on campus. "In Cho's case, no one at this university had any foreknowledge of his mental health problems that seemed dominant throughout his life before college," he said. In the end, he added, Mr. Cho, 23, was solely to blame for a crime Dr. Steger called "unprecedented in its cunning and murderous result." Dr. Steger said he had not considered resigning, based partly on the support he had received from alumni, students and faculty members.

Larry Hincker, a spokesman for the university, said the university had to think very carefully before it adopted some of the report's recommendations, like requiring key cards for most campus buildings, which he said would greatly restrict how students, faculty members and the public interacted.

The report drew mixed reactions from families of victims, many of whom have waited for months for a full accounting of the shooting. "I think the report is excellent," said Andrew Goddard of Richmond, whose son, Colin, was shot four times and survived. "I feared a whitewash, but it wasn't." Mr. Goddard said he did not think it was necessary for anyone to resign or take blame. But Vincent J. Bove, a security consultant and spokesman for the families of six victims, said the relatives he represents were "infuriated" that the report did not place clear blame. "Money and lawsuits aren't the issue for them — accountability is," Mr. Bove said. Families were especially angry to learn that the university had waited so long to send campuswide warnings, he said.

At an appearance with panel members Thursday morning in Richmond, Mr. Kaine said he was satisfied with the report's conclusions, calling them "comprehensive and thorough, objective and in many instances hard-hitting." He said he saw no point in demanding firings, believing that campus officials had suffered enough. "I want to fix this problem so I can reduce the chance of anything like this ever happening again," Mr. Kaine said. "If I thought firings would be the way to do that, then that would be what I would focus on."
 
 
Electricity bungle leaves Johannesburg, South Africa, hospital hobbled.
The following case study highlights the disasters that can occur if cost cutting is undertaken in isolation, regardless of the possible business continuity consequences. Coronation hospital was forced to shut its doors and turn patients away after it was hit by a power failure. The hospital, in Johannesburg, was completely shut down yesterday, with only the intensive care unit and neo-natal unit operating with the assistance of a generator. Zanele Mngadi, Gauteng health spokeswoman, said the power failure was caused by a technical problem: "The nature of the electric fault made it impossible for the generator to work throughout the hospital." The spokesman for health in Gauteng, Jack Bloom, said that eight circuit breakers had failed and the hospital's electricity board had problems. He said: "My information is that the maintenance contracts for hospital generators were recently cancelled by the department of public transport, roads and works." He described the power failure as an appalling situation and said Johannesburg hospital's casualty had also had to close.
 
August 2007  
Mercury Energy (NZ) - 29 May 2007: BCM case study of an incident that escalates to crisis status
Business Continuity Management Case Study – the Mercury Energy (New Zealand) incident on 29 May 2007 escalated to a crisis involving intense media interest, outsourcing, reputation, corporate social responsibility, financial liabilities and accountability.

Our comment so far
This incident has a long way to run yet. The ramifications for Mercury Energy are substantial and we shall report on events as they unfold. It seems the practice of cutting power is outsourced by Mercury, and attempts have been made to lay the blame at the door of the contractor. This defence has already been rebutted. More certain are the inadequate procedures in place at Mercury when engaging and training contractors. Corporate Social Responsibility (CSR) has also been raised. It seems certain that the charter under which Mercury operates clearly states its responsibilities for CSR. However, little of this was evident in Mercury's practice of CSR; instead it was a glossy image that was more image-making than management practice. Had Business Continuity Management been followed properly at Mercury, this crisis situation should not have arisen.

The incident
Folole Muliaga, a 44-year-old mother of four, died in her home on 29 May 2007 just two hours after the New Zealand electricity supplier Mercury Energy disconnected the household because of an unpaid account. "Yet again we see a so-called New Zealand State Owned Enterprise put profit before people in New Zealand," said the Solidarity Union's protest organiser, Joe Carolan. "Mercury Energy . . . are corporate bully boys who prey on the weak, old and vulnerable, and are no better than the corrupt money lenders who plague our communities." Mercury Energy said the power was cut off because of increasing debt on the power account, despite two payments having been made in the past month. The outstanding balance on the account was $168.40. General Manager James Moulder said that company procedure was followed, with notices warning of impending disconnection couriered to the house over a six-week period. Mr Moulder said the contractor who went to the house to disconnect the power had no idea Mrs Muliaga relied on power to operate her portable oxygen machine.

Family hires lawyer to tackle company – "Someone needs to be held accountable"
The family of the woman who died after Mercury Energy cut power to her oxygen machine will consult a lawyer today after the company absolved itself of any blame. Family spokesman Brenden Sheehan said someone needed to be held accountable for the death of mother of four Folole Muliaga, 44, and her children needed to be provided for in the future. Mr Sheehan said the family asked him yesterday to engage legal representation and had hired Auckland lawyer Olinda Woodroffe. "The family is very angry," he said. "We'll be taking this as far as we can." Mercury Energy said it had no knowledge that power was needed to keep the machine going. Doug Heffernan, chief executive of Mighty River Power, the State Owned Enterprise which owns Mercury Energy, said yesterday they deeply regretted the death of Mrs Muliaga but they had done nothing wrong. Protesters then converged outside Mercury Energy's headquarters in Auckland in response to the power company's treatment of Mrs Muliaga. The group, made up mostly of union members, held placards reading "People before profit" and "Contract killers".
Prime Minister enters the scene
Helen Clark, Prime Minister of New Zealand, said it was "unbelievable" that a Mercury Energy contractor cut the power of a sick south Auckland woman when medical tubes were clearly visible coming out of her nose. Miss Clark said the regulations around disconnecting power supplies might need to be strengthened as a result of the death, but today she also said the actions of the individual contractor who cut off the Muliagas' power were alarming. Mercury needed to stop concentrating on defending its actions and be more open. "I think there is a very confusing situation here from Mercury Energy and my advice to them would be to stop digging right now," she said. "The public is entitled to full accountability on this."

Miss Clark said that since last year there had been Electricity Commission guidelines for retailers on how to assist low-income consumers with payment. There was also a protocol between power companies and social service agencies on the subject. She said the case, which has been reported around the world, also conveyed a bad image of New Zealand. "This is intolerable. We all feel not just embarrassed but devastated that this incident of heartlessness by a company and a contractor has gone around the world conveying an image of New Zealand that we don't like of ourselves. We are not a heartless people. People do care, as can be seen in the outpouring of aroha and love for this family." Miss Clark said that even if Mrs Muliaga's death was not directly related to the power cut, Mercury's decision to cut the family's power in the circumstances was wrong.
 
 
 
 
Power disruption is fastest growing downtime threat to UK organisations
SunGard Availability Services has revealed the major causes of business disruption in 2006, according to its invocation statistics. Power-related disruptions - the fastest growing category - increased by over 350 percent between 2005 and 2006, accounting for over a quarter (26 percent) of customer disaster declarations (invocations), up from 7 percent in 2005. Hardware failure remains the leading cause of business disruption, covering half (48 percent in 2006 compared to 49 percent in 2005) of SunGard's customer invocations. Flooding and infrastructure-related invocations, such as air conditioning faults and uninterruptible power supply (UPS) loss, were the third largest cause of business disruption, representing 5 percent of invocations each in 2006.

Keith Tilley, managing director UK and senior vice-president Europe, SunGard Availability Services, said, "These figures contain no surprises, yet no matter how trivial the cause, an outage can have potentially serious consequences for the business - particularly if the system in question is supporting a customer-facing website or a contact centre. It is critical that organisations consider the impact of any potential incident, so the most important information remains available - come what may. With IT equipment drawing more power than ever, it is imperative that businesses plan around possible interruptions to their power supply."

SunGard's most unusual invocations:
- Unexploded bombs from World War II were discovered in a nearby building site, causing evacuations;
- A cleaner unplugged the main server to use the vacuum cleaner;
- A sewage blockage rendered toilets inoperable, forcing employees to move to a SunGard Availability Services facility;
- Local youths set fire to a wheelie bin, denying access for employees;
- Theft of PCs and servers from a customer site.

www.sungard.co.uk
 
 
Business continuity success story
The Manager of the ANZ Bank in Tonga is well trained in business continuity. He is based out of Melbourne and, when the worst happened (a fire at the Tonga branch on Thursday, 16 November 2006), he realised the bank was facing a difficult situation, closed the branch and immediately secured its information by backing up all systems through to Melbourne. Bank staff were also told to go home, potentially saving lives, as the fire continued to spread and completely destroyed the bank building. The Manager had already put into place the Business Continuity Plan that he had been well trained on in the past. He instigated a rebuild at an alternative site, with complete joinery made up by tradespersons in Tonga and a fit-out carried out at the alternative site. IT systems and supporting infrastructure were installed, and the branch was fully operating by the following Tuesday, 21 November 2006. Not bad when you lose your premises and can quickly carry on business as usual at an alternative site.
 
December 2006  
Gas supply disruption – the aftermath
Council refuses responsibility (Dominion NZ)

Wellington (NZ) City Council is refusing to accept responsibility for a multi-million dollar gas outage that crippled the city's hospitality industry – despite a report confirming its ruptured water pipe was to blame. In a bid to deflect liability, the Council says gas distributor Powerco should have properly assessed the risks to its gas network and taken steps to ensure continuity of supply. About 1,000 central city customers lost gas when a Council-owned water pipe ruptured on 30 August 2006, slicing through a neighbouring gas pipe beneath Bowen Street and flooding the gas network with water. Some customers were without gas for three weeks as teams of contractors battled to drain the pipes. Restaurants were forced to scale back menus, while hotels lost hot water and central heating. The crisis cost the hospitality industry an estimated $5 million in lost business. Most importantly, those businesses with Business Continuity Plans that had been correctly put in place, kept up to date and exercised were able to weather the crisis and carry on business as usual.
 
September 2006  
Unexpected disruption to business following failure of gas supply
Buildings evacuated after major gas leak

Wellington, NZ (the beginning) – A major gas leak has led to office workers being evacuated and traffic disrupted in the Wellington central business district.

Civil servants were evacuated from Bowen State Building after the smell of gas was detected.

Parts of Bowen Street and The Terrace were closed to traffic forcing commuters to find other routes to work.

The problem stemmed from an interruption to the gas supply in central Wellington due to water entering Powerco's gas distribution network. The problem had spread progressively throughout the day.

Police were working with Powerco, diverting traffic and ensuring the affected streets were securely blocked off.

Parts of major streets were closed to traffic, affecting thousands of commuters trying to get to work.

Following days:

Gas users could face two days without hot water.

About 50 staff in the Bowen State building – next to the Beehive – were evacuated at 7 am after workers smelt gas, and rush hour commuter traffic slowed to a crawl outside.

Gas utility Powerco said 1,000 properties were affected.

Wellington Brewery Company was one of the properties affected. The boutique brewer was halfway through a brew when its gas boiler failed, spoiling the beer. It is estimating $10,000 of costs so far.

Some businesses opted to close.

Ibis Hotel manager Olivier Lacoua said 100 Ibis guests had been sent to stay at its sister hotels the Novotel and Mercure, which were not affected. Remaining guests were being served food cooked on hired electric fryers and hot plates, and some were showering in portable showers in the hotel basement. Others would be shuttled to the Novotel and Mercure hotels for showers.

A team of 70 people, including contractors, was working with Wellington City Council on the problem but it appeared more water than first thought had been pumped into the gas system and it would take at least another 24 to 48 hours to restore the network fully.

“We will stage the restoration, area by area, as the water is removed from the system and we can confirm that it is safe and secure.” There was no threat to public safety, a spokesperson said.

Restaurant Association president Mike Egan said his eatery, Monsoon Poon, had been closed for five nights and he estimated he had lost NZ$45,000.

Some small restaurants would not recover from the five-day loss of trade, and the loss industry-wide would be between NZ$4 million and NZ$5 million.

Many restaurateurs were small business owners who could not afford a legal battle for compensation, and insurance to cover infrastructure faults was not widely held.

Hotel chain Accor estimated their loss to be NZ$ 100,000 in revenue.

Powerco said the issue of compensation was complicated. Powerco’s priorities were to ensure the network was safe and secure and restore the gas supply to all the affected areas as soon as possible.
 
August 2006  
Dell batteries end up too hot to handle – Business Continuity on display
Computer company Dell’s battery recall highlights how vulnerable technology production has become.

There is the nail test, in which a team of engineers drives a large metal nail through a battery cell to see if it explodes. In another trial, laboratory technicians bake the batteries in an oven to simulate the effects of a digital device left in a closed car on a sweltering day – to check the reaction of the chemicals inside. Random batches of batteries are tested for temperature, efficiency, energy density and output.

But the rigorous processes that go into making sophisticated, rechargeable batteries – the heart of billions of electronic gadgets around the world – were not enough. Last week Dell, a computer company, said it would replace 4.1 million lithium-ion batteries made by Sony, a consumer-electronics firm, in laptop computers sold between 2004 and last month.

A handful of customers had reported the batteries overheating, catching fire and even exploding – including one celebrated case at a conference this year in Japan, which was captured on film and passed around the internet. The cost to the two companies is expected to be US$200-$400 million ($313-$626 million).

Squeezing suppliers to the last penny, using economies of scale by placing huge orders, and running efficient supply chains with little room for error created a volatile environment in which mistakes had grave effects.

Compared with other product crises, from contaminated Coca-Cola in 1999 to Firestone’s faulty tyres in 2000, Dell can be complimented for quickly taking charge of a hot situation. The firm says there were only six incidents of laptops overheating in America since December 2005 – but the internet created a conflagration.
 
August 2006  
Major Disruption by Fire at Tip Top Bread on 1 June 2002
(Tip Top is a division of George Weston Food Limited)

You’ve just lost your main production plant – what happens next?
Business as usual has always been the simple Tip Top objective and message from senior management, despite the ongoing minor crises that are part of the everyday cycle of baking and delivering bread. Tip Top supplies around 50% of the bread for Sydney and NSW from five bakeries across the state.

The Disaster
Tip Top were put to the ultimate test when disaster struck at their largest bakery, Fairfield, which went up in flames in the early hours of Sunday 2 June 2002 and burnt to the ground.

Rodney Molla, the NSW State General Manager, was awoken at 1.30 a.m., and the Tip Top Division Chief Executive, Brian Robinson, an hour or so later, with news of the apparent disaster. Bread was due out on Sunday, and daily thereafter, on time to customers.

Safety of people the first priority
The immediate concern was for the safety of staff, and this necessitated effective and prompt communication with Fire Services, which had taken control of the site to fight the fire. The fire was only brought under control after the bakery had been all but destroyed, including the entire stockpile for that day's deliveries. No fire sprinklers were installed; they had been considered in previous years and decided against on the grounds of a cost/benefit analysis. The plant was over 20 years old and overly costly protection could not be justified.

Communication becomes critical
Communication became the first real crisis issue. Management had to convene urgently, and the first real crisis and planning meeting between five senior managers took place at 8.00am on Sunday morning. All previous communication protocols were temporarily suspended, with one designated senior manager taking overall leadership and control.

No major injuries or casualties were encountered among a staff count of over 200, much of this due to prompt action to evacuate to a safe distance from the fire.

Other factors
Of additional value, in the potentially disastrous situation developing, were several interesting issues:

• Bakeries thrive on crises; they are part of everyday baking – not so much major fires, but small incidents that need urgent treatment. So the background was in place

• Bakeries operate in virtual JIT (just-in-time) conditions, meaning there is no reliance on stockpiled bread-making supplies

• Ongoing management training had created a common set of principles, the Four Quadrants, for organisational leadership. This accrued common knowledge was of considerable assistance in handling the many decisions needed to bridge the gap of a major crisis

Where to go to make bread
Alternate owned production sites already existed: four bakeries were still operating in reasonable proximity. It was the logistics of shifting people and boosting production that mattered. In all, 150 people were relocated to alternate owned bakeries. Meanwhile, competition from rival bakeries was intensifying, and any thought of calling on competitors for assistance proved disappointingly impossible.

People are the key
The response from staff to help out was all but overwhelming, proving the benefit of a trained and dedicated workforce. No praise is high enough, according to Operations Director Robert Polycarpou, who witnessed many acts of staff contribution above the normal.

Data was lost
Many records were destroyed but some were retrievable from undamaged computers. Back-up computer hardware and software was available at Chatswood, and manual invoicing was also employed for 7,000 customers. Payroll records were also lost, and staff were paid on a historic average basis from old records.

Communication with all staff was easy with home numbers held off site. In the event many staff turned up at the burning site and were informed on the spot of their options

Media handling and reputation management
Saatchi were given the task of media handling. Innovative and positive messages began appearing in the daily newspapers.

Meanwhile, communication with stakeholders was swinging into gear. From owned alternate locations, calls went out to major customers, directed by David Kadir, National Sales Director.

Startling BCP findings
Our research had some startling findings:

No formal business continuity plan was followed; however, the principles that should be part of all good business continuity plans were in place. In any event there was no time to study a business continuity plan, so no plan was consulted during the crisis or the immediate recovery period. What existed, though, was the very basis of theoretical business continuity planning. We have shown this in table form to relate the Tip Top experience to ContinuityCoach.com best practice for business continuity management.

Business Continuity practice (ContinuityCoach.com discipline) | In place at Tip Top? | Did it work?
1. Project management, senior management support | Yes | Yes
2. Risk analysis & business impact analysis | Yes | Yes
3. Strategy setting | Yes | Yes
4. Emergency planning | Yes | Yes
5. Alternative sites ready to go | Yes | Yes
6. Staff awareness of what to do | Yes | Yes
7. PR and media control | Yes | Yes
8. Liaison with government agencies e.g. Fire Services | Yes | Yes

Outcomes
So how did Tip Top come out of this major fire?
• Bread delivery was maintained daily, with 95% of normal revenue retained
• Customer retention was maintained despite raids by competitors
• Staff morale was high & no jobs were lost

Key reasons for business as usual at Tip Top
What contributed most to this successful outcome?
• Staff attitudes were always positive and entrenched loyalties paid off
• Management leadership was based on organisation wide “Four Quadrant” training, so all managers were working from the same management knowledge base
• Alternative site bakeries existed to take up the extra production needed
• Close communication with stakeholders & customers
• Media handling was proactive using external advisors
 






