The First Rule of Human Risk is…

I’m often asked for my top tips for managing Human Risk.

Over the next five weeks, I’m going to reveal the Five Rules of Human Risk, beginning, appropriately enough, with the first:

Rule 1: Human Risk can be managed but not eliminated

On the face of it, this is a statement of the blindingly obvious. Yet it is fundamentally important: if we really want to manage Human Risk, then we need to accept that we can’t control every aspect of human decision-making. No matter how hard we try.

In part, this is because people are fallible and everyone makes mistakes. But it is also because 21st-century organisations need to innovate in order to remain relevant.

By its very nature, innovation requires trial and, importantly, error. If organisations need to operate on this basis, then the individuals within them will require some latitude to make mistakes.

In the Knowledge Economy, the very skills that people are hired for are major potential drivers of Human Risk. Whilst technology will reduce some instances of it, notably by automating repetitive processes where human error can be prevalent, it will increase others.

This is because people are spending more time doing the things that machines can’t: tasks that involve judgement, nuance and emotional intelligence, where the risks of getting it wrong can be material.

Technology is also democratising Human Risk. Historically, some forms of it were largely the preserve of the C-Suite. Nowadays, powered by smartphones and social media, even the most junior employees have the ability to cause reputational damage to their organisation.

In an uncertain world, organisations will find themselves dealing with unpredictable situations, where there may not be a precedent or playbook answer. In some cases, the way things have been handled in the past may not be a good guide as to how they should be handled in the future.

Given the rapid pace of social, cultural and technological change, people in senior positions may understandably struggle to manage issues involving 21st-century norms. In nuanced situations, the nature of the response may well overshadow the underlying issue. For better or for worse.

I’m not suggesting that we simply tolerate mistakes. Clearly, we don’t want the people who maintain aircraft, run nuclear power stations, perform medical operations or prepare our food, to be cavalier in their approach to errors.

Equally, we don’t want to encourage North Korean-style cultures of fear, where mistakes result in punishment and issues are covered up. Human Risk events can become valuable learning opportunities if they’re handled in the right way.

We need to think smartly about how to navigate this challenge. Influencing human decision-making is not the same as programming an algorithm: people are autonomous, with opinions and feelings. Getting them to “do the right thing”, particularly where “the right thing” might heavily depend on the circumstances, requires their engagement, not simply their subservience.

This is why I’m a passionate advocate for the deployment of Behavioural Science (BeSci); “Bringing Science to Compliance” as I like to refer to it.

Because if we want to influence people, we need to work with, rather than against, the grain of human thinking.

This means designing systems that operate with an understanding of the way people actually make decisions, rather than the way we would like them to.


By accepting that Human Risk will occur, we can build control frameworks that make organisations more resilient. Recognising that it cannot be eliminated in its entirety permits a more honest and sophisticated approach to risk management.

How we can go about this will become clear from the next four Rules. For now, I’ll leave you with the first: Human Risk can be managed but not eliminated.

The author is the founder of Human Risk, a Behavioural Science Consulting and Training Firm specialising in the fields of Risk and Compliance.


One thought on “The First Rule of Human Risk is…”

  1. I’m reminded of the Zanussi catchphrase for their white goods ads from the 1970s – “the appliance of science”.

    I came up with “Contextual Compliance” to describe how I build training programs around behavioural/conduct risks – providing an emotional engagement with the need to observe behavioural rules, rather than just a knowledge-based synthesis of facts.

    Cheers
    David
