The Fifth and Final Rule of Human Risk is…

Following the first four Rules of Human Risk, here’s the fifth and final one:

“Just because you can, doesn’t mean you should”

I’ve left this Rule until last because it is arguably the most complicated one to explain. This is because it has a dual meaning: one relating to the behaviour of the target audience, and the other to how we seek to control the risks associated with that behaviour.

I’ll begin with the behaviour of the target audience. Logically, we might want to control all aspects of their decision-making, but as we saw in Rule 1 (“you cannot eliminate Human Risk, but you can mitigate it”), that’s impossible, and as we saw in Rule 3 (“The Human Algorithm is complex and often illogical”), it’s not always desirable. This is particularly true in the Knowledge Economy, where we often need people to have a degree of freedom in how they respond to situations.

In this context, “just because you can, doesn’t mean you should” is a mantra we need to instil in the target audience’s thinking.

What we’re trying to avoid is “unthinking Compliance”, where people slavishly follow rules and do no thinking whatsoever for themselves. It’s the same dynamic we see when people do precisely what their GPS unit tells them to and end up driving into rivers, or, in one memorable case, into a Munich underground station.

There’s a balance to be struck here. Some situations will need people to do precisely what they are told, without any deviation. Yet, if a control framework sends a signal that the organisation has taken away the individual’s right to think for themselves, then we shouldn’t be surprised if people adopt an approach of presuming that if something isn’t forbidden, then it is permitted. 

Of course, we don’t want nuclear power workers suddenly getting creative when it comes to managing the contents of the reactor. But we do want them to be alert to things that the control framework might not have contemplated: if they see a fire, we don’t want them concluding that there isn’t one simply because the fire alarm system hasn’t reported it.

In a rapidly changing world, past experience is not necessarily a guide to future risks. Social and technological developments can rapidly render rules obsolete, meaning that heavily rule-bound organisations are ironically at greater risk than those that are less prescriptive.

So the first aspect of the Rule is to ensure that the target audience is aware that there is an element of self-policing to the management of Human Risk; what we might once have referred to as “common sense”. Sadly, as we know, common sense isn’t all that common.

Enabling this requires the second aspect of the Rule to be respected, which applies to the approach we need to adopt towards managing Human Risk. What differentiates Human Risk from most other forms of Risk is that its source is sentient and reacts to the control environment. Put simply, people have feelings.

We know this from our interactions with others; if someone is nice to us, then there’s a greater likelihood of us being nice to them. Equally, if someone is nasty to us, then we’ll probably respond by being nasty to them. Even when we might not be able to respond to their face, we’ll find other ways. We’ve all heard the stories of waiters spitting into the food of customers who have been rude to them.

What this means in the context of Human Risk Management is that people will respond to the control environment according to how reasonable they think it is. Note the word “think”; this isn’t about whether the control environment is reasonable. It is about how the target audience perceives it to be.

People tend not to respect speed limits they think are ridiculously low; it’s why you’ll often find explanations for very low limits outside schools. Without that knowledge, we’re likely to think the person setting the limit is restricting us unnecessarily.

Perception matters enormously. So if I feel that you don’t trust me, then I’m very likely to return the favour. It is critically important that those tasked with managing Human Risk do so in a manner that considers how the target audience will react.

Yet, many organisations introduce processes that fail to do this. I’m thinking particularly of “backside covering” exercises where there is a potential conflict between the interests of the employee and employer.

Often signalled by the phrase “sign here” or its digital equivalent, these are processes where employees are required to do things they instinctively know exist so that, if things go wrong, the employer can use them against the employee.

We might sign, but we won’t feel good about it, and if there’s a qualitative element to what we’re signing, we might not put as much effort into it as the employer might expect.

My point isn’t that organisations shouldn’t protect themselves or that they shouldn’t have the ability to tell their employees what to do. But if they want those employees to take responsibility and be accountable, then they will need to give them a sense of agency; particularly where the role they’re being asked to undertake requires them to respond intelligently to changing circumstances. If you treat people like small children, then you shouldn’t be surprised when they behave like them!

For these reasons, my message to those managing Human Risk is that you should think before deploying techniques that might not land well with the target audience. At least consider how they might react. You may choose to go ahead with something that they won’t like or engage with; that may be your only option. But you should investigate whether a BeSci-infused approach might achieve the same objective by other, more effective means; ones that achieve positive cooperation rather than enforced compliance.

All too often, the approach taken is to default to traditional techniques that show no consideration for the human component. This Rule, therefore, has two parts: if we’re expecting our target audience to be thoughtful in the way they behave, then we owe it to them to do the same when we’re looking to influence their behaviours.

The author is the founder of Human Risk, a Behavioural Science Consulting and Training Firm specialising in the fields of Risk, Compliance, Conduct and Culture.
