BFM2013_3_00_Introduction – Business continuity governance

This issue of BFM focuses on “Business Continuity Governance”: how to ensure that a business can cope with unforeseen events with minimal disruption and minimal additional costs, via a continuous, knowledge-based reassessment of business needs.

Adopting a perspective focused on cultural and organizational change allows you to create a set of guidelines flexible enough to evolve as your own business environment changes, while enabling the long-term structural sustainability of your business.

BFM2013_3_01_A knowledge-based definition

Our approach is that Business Continuity should be considered part of a common framework of processes focused on ensuring the long-term viability of your business, and not just an add-on rulebook.

The remaining sections of this chapter are just an introduction to our suggested approach to Business Continuity: future issues will deliver a more detailed description of the subjects outlined here.

Some Business Continuity initiatives result in manuals based on the assumption that everybody will behave as planned when required, and that all the details will be magically remembered by everybody involved.

A knowledge-based approach should start from a clear identification of the existing behavioural constraints, i.e. what is considered “normal” within your own specific environment.

Then, besides defining your own “business continuity model”, you should also identify a “convergence roadmap”, focused on adapting your existing behavioural constraints, your business continuity blueprint, or both.

Aim: to obtain a behavioural change that ensures the required level of readiness is in place.

After a first implementation, a continuous improvement approach will monitor and reinforce the level of readiness achieved, to ensure that your Business Continuity assumptions are realistic.

BFM2013_3_02_Business Continuity as a project

The most common approach to business continuity in private companies derives from typical IT systems project activities: you select the requirements to be considered inside the system (within “scope”), and plan the deliverables accordingly.

What do you get? A continuity project carried out by external resources with minimal involvement of internal resources, when what is actually required is a continuity service whose key actors should be internal resources.

Software can be designed to deliver a certain set of results, based on constraints it receives from a carefully designed environment.

Unfortunately, as discussed in the previous issue of BFM (Issue 02, Strategic Outsourcing), almost no business can expect to achieve the same level of control over its business and human environment.

The risk inherent in adopting a typical project approach?

That, in order to ensure compliance with the design, complexity will be reduced by ignoring elements that are “outside scope”.

A more appropriate approach?

Business continuity as a programme that creates a set of services, services whose “delivery agents” will be their users.

BFM2013_3_03_Business Continuity vs. crisis management

Another Business Continuity approach focuses more on disaster recovery: reducing the impact of any unforeseen event and shortening the time required to return to the pre-crisis level.

Crisis management stems from the need to ensure that the fabric of society is kept in place after unforeseen events whose consequences, if not managed properly, could generate damage greater than the original disturbance.

A typical example is managing the aftermath of an earthquake, or trying to activate an evacuation plan.

Eventually, the private sector also started adopting a crisis management approach, extending disaster recovery from the use of redundant facilities kept available “just in case” to the building of less-than-optimal supply chains, more resilient than a global just-in-time approach that ignores geo-political realities.

The main problem with this approach is that it relies mainly on special rules to be applied in special cases: this implies that significant additional costs may be needed to maintain the required level of readiness.

Another pitfall is the perception that “crisis management” amounts to a choice to surrender.

In reality, crisis management is a side-effect of accepting that some risks must be managed, because neither prevention nor avoidance is a viable choice.

BFM2013_3_04_Coping with uncertainty

Business Continuity is perceived as a challenge because, since the 18th century, we have constantly prized (the illusion of) absolute knowledge.

Since the advent of “scientific management”, we have tried to “bean count” every event, often adopting the unscientific approach of excluding information that did not fit our carefully designed models.

As our technology improved, adding more and more layers between the everyday, intuitive activities we can carry out and the workings of instruments and processes in our complex societies, we developed a defence mechanism to avoid admitting our inability to cope with a gazillion details: we “layered” our approach to reality, assuming that the layers we do not deal with are managed elsewhere.

While increased fragmentation and specialization improved efficiency, they weakened our governance: we were unable to retain a comprehensive view of reality, and nobody had real operational responsibility.

The result is an excessive focus on individual trees, with almost nobody caring even for his or her own forest: for an urbanized population, it is normal to assume that experts are readily available for any need.

Our companies extended supply chains and increased complexity by outsourcing to third parties- often forgetting that our suppliers might apply the same approach, and that a chain (including a supply chain) is only as strong as its weakest link.

By using a spreadsheet, we de facto outsource our computational skills to the hardware and software supplier: how many people are still able to carry out basic business computations in their head?

Most people trying to cope with Business Continuity quite often focus on something akin to an asset-logging system.

What they try to do is not to capture the purpose and identify alternative processes, but to maintain the current level of support and activity- crystallizing the “status quo”.

Our suggestion is to recover a grasp of the overall picture: partition the organization like a puzzle, and focus on the knowledge interfaces between the parts.

Adopting this knowledge-partitioning idea leads to the ability to define alternative paths that produce the same results, while stating the minimal level of activity required to cope with the unforeseen loss of a piece of the puzzle, as sketched below.
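
A minimal sketch in Python, with hypothetical team and process names (all invented for illustration): the organization is modelled as a small directed graph whose nodes are the “pieces of the puzzle” and whose edges are the knowledge interfaces between them, so you can check whether an alternative path to the same result survives the loss of any single piece.

    from collections import deque

    # Hypothetical "puzzle pieces" and the knowledge interfaces between them;
    # "backup_3pl" is an assumed alternative route kept at a minimal level of activity.
    interfaces = {
        "order_intake": ["planning"],
        "planning": ["warehouse", "backup_3pl"],
        "warehouse": ["delivery"],
        "backup_3pl": ["delivery"],
        "delivery": [],
    }

    def reachable(graph, start, goal, lost=frozenset()):
        """True if 'goal' is still reachable from 'start' after losing the 'lost' pieces."""
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            if node == goal:
                return True
            for nxt in graph.get(node, []):
                if nxt not in seen and nxt not in lost:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    # Which single losses still leave an alternative path from intake to delivery?
    for piece in interfaces:
        if piece not in ("order_intake", "delivery"):
            ok = reachable(interfaces, "order_intake", "delivery", lost={piece})
            print(piece, "->", "alternative path exists" if ok else "continuity gap")

The point of such a model is not the tooling: it is making explicit which interfaces have no alternative, so that continuity resources can be focused there.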

Except for the military and organizations required by law to add redundant resources to ensure business continuity (e.g. banks, utilities), few organizations can afford the luxury of more than minimal disaster recovery facilities.

A technique we used in various “knowledge and organizational mapping” assignments is to first recover the capability to visualize information, before asking participants to start collecting and charting data.

As an example, for organizational design and database design in the early 1990s we applied some simple tests to see whether managers and others were still able to think visually (nowadays, white-collar staff mainly exploit logical and verbal capabilities).

If not, we asked them to bring a pair of scissors, a pencil, and a notepad; after identifying the relevant ideas, we asked them to write each idea on a piece of paper, cut the pieces out, and try to rearrange them physically.

Once the optimal positioning was found, the first draft was drawn on paper or with software tools for organizational design and mind-mapping.

Alternatively, in more recent times, a whiteboard and a phone camera replaced paper-and-scissors.

Without these kindergarten-level exercises, endless time would have been spent drafting and re-drafting, due to the inability to think visually.

Eventually, the people involved recovered the ability to visualize knowledge and connections, and therefore to see each part of the corporate puzzle.

A visual approach makes it possible to spot discrepancies faster than the typical bean-counter approach does.

But even while coping with uncertainty, there are times when some number crunching (e.g. radar charts to compare “organizational maturity/compliance profiles”, scatterplots to identify “behavioural clusters”) makes it possible to “visualize” the interactions of dozens of entities.
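
A minimal sketch in Python with matplotlib, using invented maturity scores (the axis labels and values are assumptions, not a standard): a radar chart comparing the “maturity profiles” of two hypothetical business units.

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical 0-5 maturity scores along five assessment axes.
    labels = ["Readiness", "Documentation", "Training", "Suppliers", "Recovery"]
    unit_a = [3, 4, 2, 3, 4]
    unit_b = [4, 2, 3, 2, 3]

    # One angle per axis, then repeat the first point to close each polygon.
    angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
    angles += angles[:1]

    fig, ax = plt.subplots(subplot_kw={"polar": True})
    for scores, name in ((unit_a, "Unit A"), (unit_b, "Unit B")):
        values = scores + scores[:1]
        ax.plot(angles, values, label=name)
        ax.fill(angles, values, alpha=0.1)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(labels)
    ax.legend()
    plt.show()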

BFM2013_3_05_Managing expectations

In modern physics, quantum theory required a paradigm shift, as the old deterministic cause-and-effect model wasn’t enough.

Interestingly, while in the normal, physical world of our everyday experience this does not apply (i.e. observing does not interfere with the observed phenomenon, in normal conditions), human relations result in what is called the Hawthorne effect: observing humans influences their behaviour.

As stated in previous issues, like any other activity involving communication and the management of knowledge, Business Continuity requires not only planning to achieve the intended results, but also managing the perception of the intended public and of any other potential stakeholder, to ensure that the message is delivered correctly.

This is a “degrees of freedom” issue: as both the human observer and the human observed have their own value systems and multiple “community memberships”, the sheer act of observation can result in unpredictable side-effects. That is why the “double blinding” methodology is used in clinical tests- to prevent even those leading the experiment from influencing the subjects involved.

Therefore, introducing a Business Continuity approach by adopting models derived from environments like the military is counterproductive: those models assume that all the “human resources” have been moulded (i.e. “brainwashed”), and even in those “controlled human environments” all that behavioural training does not necessarily generate the desired responses (otherwise, we would not need trials about violations of the Geneva Conventions).

BFM2013_3_06_Perception and reality

Managing expectations to generate the intended behaviour requires understanding the relationship between messages and their perception.

The “human resource” concept is fine as a definitional element, provided that the assimilation of personnel to other assets is not stretched too far.

An example of a stretched analogy: if human resources were indeed assets or liabilities that could be managed like any other asset, then, given a certain set of inputs, they would always produce a defined set of outputs.

As discussed before, we have nowhere close to 100% control over either the environment or the value system of our “public”, and therefore a certain stimulus could result in unexpected feedback.

In the past, this was limited to people-to-people circles, but since the mid-2000s (i.e. after the original version of this material was published), online social networks have brought to the fore a completely different social dynamic- something that, in complex, regimented, organized societies, we had forgotten.

You can read some articles on political and social advocacy and marketing posted online, or you can simply head for the nearest bookshop and… pick up old, pre-Internet books studying cultural anthropology and behaviour in tribal societies.

Yes, all the technological development since the mid-2000s has made most commentators forget lessons that anthropologists kept repeating.

If you have some spare time, Stanford University released on YouTube a course on “Human Behavioural Biology” that is worth watching if you work in HR or in cultural and organizational change: a faster way into the subject than dozens of books; see an alternative, book-based approach to the same concepts on LibraryThing.com.

A business example: the yearly salary increase and related emoluments will produce diminishing returns in motivation, as they will be taken for granted and become a “floor”.

Obviously, unless you can build an inflationary system where each year's increase is greater than the one delivered the previous year.

Varying the stimuli, e.g. tailoring the prize or reward to the specific performance issue, will instead produce a constant, conscious re-assessment of the environment.

Management approaches focused on building the right mix of perception and reality have to be carefully monitored, to avoid manipulative practices that could easily backfire, or over-investment relative to the specific needs of your organization (e.g. the optimal level of turnover, if your industry has seasonal staffing levels).

BFM2013_3_07_Self-protection and threshold levels

Our brain is tailored to ensure our survival: while our sensory system allows a fine degree of perception, the brain “shuts off” perceptions that are irrelevant in our environment, to avoid overload.

Consider the following example: the basic elements inside our eyes are able to discern a single lighted match kilometres away in pitch darkness (or so they say).

Side-effect: our brain and sensory system cooperate to avoid overloading during the day.

The main concept: a stimulus that increases and then stays at the increased level for a considerable length of time results in reduced sensitivity to its continued presence- i.e. at that level, it is considered a safe part of the environment, almost background noise.

This observation holds true not only for physical stimuli, but also for other parts of our interaction with the environment: why should our brain build different schemas?

It would simply be too inefficient (and we already have enough inefficient left-overs from evolution in our brain and body).

At the same time, repetition reduces the need to make a conscious effort (e.g. remembering a process) to produce a specific reaction to a stimulus.

The risk? If you have just a hammer in your toolbox, every problem becomes a nail.

BFM2013_3_08_Raising awareness and “controlled crises”

The basic suggestion is: avoid building conditioned reflexes, unless you mean it and are confident that you can manage the consequent reduced flexibility.

The “fire drill” is the typical reflex-building schema, as the purpose is to ensure that a stimulus (the fire alarm, or the presence of fire/smoke) will generate a predictable behavioural pattern (reach the fire exit, etc.).

Why it works:

  • you build the stimulus-based conditioned reflex
  • you do not overuse the stimulus to the point where it becomes part of the environment (i.e. it is still unusual)
  • the stimulus and the associated actions are simple and intuitive, and pictorial aids help to ensure the result.

The less control you have over your “public”, the more you have to rely on controlled crises to raise awareness, as this may be the only way to avoid endless discussions on the actual need to solve the problem you want to highlight.

As with any other activity, try to avoid building a routine around awareness-raising activities, such as scheduling drills at regular intervals, and focus on specific areas of intervention.

Example: in Italy, during the early 1970s oil crisis, the Prime Minister had one street light out of two switched off- years later, he admitted that the measure itself was pointless, but was really aimed at raising awareness of the need to cut down on electricity consumption, as the individualist inclination of Italians ruled out a traditional appeal to the “national interest”.

BFM2013_3_09_Systemic approach to Business Continuity and Risk

Outsourcing, Business Continuity and Risk Assessment/Management all require a perspective that considers the impact on the organization as a whole, not as a sum of parts: a so-called systemic approach.

Adopting a systemic approach could start with a simple change of perspective: it is a journey of discovery and mapping according to priorities, not just a traditional analysis.

If you were to explore an archaeological site, you would have to identify and design a set of priorities:

  • how to proceed with the discovery and “digging”, with a modicum of disturbance of the site
  • how to document whatever you find, on the basis of your priorities and on the initial findings
  • how to report and maintain the results, as the “site” could be subject to other disturbances
  • how and when to involve experts who could improve your understanding of specific artefacts.

If this seems far-fetched, just consider your current Business Continuity activities and the relationship with SLAs agreed with your suppliers.

Do you have a “mapping” of the coverage provided not only by each supplier/SLA, but of your Business Continuity needs overall?
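
A minimal sketch in Python, with hypothetical supplier names, needs, and SLA scopes (all invented for illustration): cross-check the overall list of continuity needs against what each supplier's SLA actually covers, to surface the needs left uncovered by everyone.

    # Hypothetical Business Continuity needs and per-supplier SLA scopes.
    continuity_needs = {"payroll", "order entry", "logistics", "e-mail", "ERP"}

    sla_coverage = {
        "Supplier A": {"e-mail", "ERP"},
        "Supplier B": {"logistics"},
    }

    # Union of everything any SLA covers, then the overall gap.
    covered = set().union(*sla_coverage.values())
    gaps = continuity_needs - covered

    print("covered overall:", sorted(covered))
    print("continuity gaps:", sorted(gaps))  # here: order entry and payroll

Even a toy cross-check like this answers a question that per-supplier SLA reviews never ask: which needs are covered by nobody.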

As shown in the previous chapters, we consider insurance a risk-management tool whose efficacy lies in covering the financial risk of an activity- not a replacement for the sound priority-setting that should identify which activities could create a “domino effect”, and therefore where continuity resources should be focused.