The Third Rule of Human Risk is…

We’re at the mid-point of the Five Rules of Human Risk, meaning it is time to reveal Rule Number Three:

Rule 3: The Human Algorithm is complex and often irrational

Or, as I prefer to think of it, the “Why we need to bring Behavioural Science (BeSci) to compliance” Rule.

Before we begin, an explanation for those less familiar with BeSci. The word “irrational” isn’t meant negatively. 

I use it in the sense that economists do. If we were rational, we would be “Homo Economicus”: Mr Spock-like utility maximisers who consistently take economically optimal decisions. We don’t.

We buy things we don’t need, often paying over the odds for them. And we do things that make no sense from a purely rational perspective, literally engaging in activities that can reduce our life expectancy, like smoking.

Ultimately, these attributes are what make us human. Without them, we’d be pretty dull, have no friends and be creatively unimaginative. UK-based readers of a certain age may remember this Viz magazine cartoon character:

(c) Viz Magazine. Source: http://viz.co.uk/2015/05/23/mr-logic/

All too often, those who seek to manage Human Risk treat their target audience as if it consisted solely of Mr and Mrs Logics. In doing so, they make flawed presumptions about the drivers of human behaviour.

We’ve seen in previous Rules that we will only effectively influence human decision-making if we work with, rather than against, the grain of human thinking; by designing for the way people actually behave, rather than the way we would like them to.

I’ve used the word “algorithm” in the Rule in full knowledge that most of us tend not to use that term to refer to what is going on in our heads. Yet our brains are complex processing machines which take inputs and produce outputs in the form of decision-making and behaviours. By “complex”, I don’t mean “sophisticated”. We are just as capable of making extremely poor decisions as we are of making highly intelligent ones. The complexity I’m referring to is in how the algorithm operates.


Who’s going to drive you home?

To see that complexity in action, consider the challenge of programming self-driving vehicles. If the algorithm wasn’t complex, then it would be very easy to program a machine to drive a car. It really isn’t. 

Programming the rules of the road, such as how to respond to signs and respect speed limits, is relatively easy. 

But teaching machines to respond to changing circumstances is much harder, as this NY Times article explains. The reason for that is largely the unpredictability of human road-users:

It’s much more difficult to prepare self-driving cars for unusual circumstances — pedestrians crossing the road when cars have the green light, cars making illegal turns. Researchers call these “corner cases,” although in city traffic they occur often.

Equally challenging is teaching self-driving cars the finer points of driving, sometimes known as “micro manoeuvres.” If a vehicle ahead is moving slowly looking for a parking space, it is best to not follow too closely so the car has room to back into an open spot. Or to know that if a car is edging out into an intersection, it can be a sign the driver may dart out even if he doesn’t have the right of way.

This dynamic is important when we think about managing Human Risk. Because, at least in the short term, what we’ll be handing over to machines are basic repetitive tasks that they are better at than we are. In the meantime, we’ll be spending more time doing the tasks that we are better at; those involving judgement, nuance and emotional intelligence. The very same skills that can bring out the worst in us.

What we need in the Knowledge Economy is a workforce that is empowered to respond intelligently to situations that won’t always be predictable; things you can’t program machine algorithms for, but for which the Human Algorithm is ideally suited.

Like customer service, where delivering the right outcome often requires adaptive thinking. For one customer, the right answer might be to offer a refund she isn’t legally entitled to but which will make her happy.

Another customer might simply want to feel as though he is being listened to.

And a third might just want an innovative solution to whatever problem they are trying to solve.

A good example at the high end of customer service (and wallet) is the Ritz-Carlton hotel chain, where employees are empowered to spend up to $2,000 on “rescuing a guest experience” without manager approval. Humans know how to deal with this. Machines would need to be taught how to come up with the right answer and might still get it wrong.

Whatever we hire people to do, we won’t empower them in the right way if we don’t find a means of engaging with them in a manner that reflects the way they think. There is no point in hiring smart people if we always treat them like idiots.

There may be times when we need to carefully control what people do; giving airline pilots pre-flight checklists is a good way to ensure they don’t forget something important.

Equally, I don’t want people working in nuclear power stations or who maintain public transport to feel empowered to be creative in their work.

But even in those roles, we don’t want mindless compliance. We need people to think, be engaged and spot things that might be out of the ordinary.

To get this balance right, we need an understanding of the factors that drive the decision-making we want to influence. There are too many for me to list here; I’d need an entire book for that. Until that book arrives (and yes, it’s coming very soon), here are a few key pointers:

Reality is in the eye of the beholder

Many aspects of human behaviour are driven by how people perceive the world to be, rather than the way it actually “is”. That’s because what we see as reality is merely a perception of it. Confused? Bear with me.

Watch any professional sport with fans from two opposing teams and you’ll find them criticising the other team and the officials for decisions made against their own team.

They are often quite literally blind to the failings in their own team’s behaviour. It’s almost like they’re watching two different games.

This also plays out in 1:1 interaction. Ever found yourself having to say something along the lines of “I didn’t mean to offend you”?

Assuming you and the person you’ve offended are both being honest, then the difference between the two interpretations of the same thing is a matter of perception.  They thought it was offensive and you didn’t.

The fact that you didn’t mean to offend them changes nothing about the fact that they were offended. Our perception of the world becomes our reality.

In a risk management context, this is critical. If we try to influence people in a manner they find inappropriate, then they will react against it. An organisation imposing a rule will find it harder to do so if its authority is not respected, either generally or in the specific field in which it is seeking to regulate.

Equally, risk itself is a matter of perception. Each of us has our own level of risk appetite. You only have to look at how people drive to recognise this dynamic in action: some will feel comfortable driving fast, others will prefer to drive more cautiously.

Exposure to risk also changes our perception of it. A Formula One driver will have a different perspective on speed from someone who rarely drives. Equally, an environment that feels safe can make us underestimate risk.

One of the challenges of modern cars is that they feel incredibly safe. Which they are, relative to older models. But as we sit in air-conditioned calm, surrounded by safety devices, we can get a false sense of how fast we’re driving and the inherent danger of the activity.

What this means is that if we want to manage Human Risk, we need to be mindful of how our target audience perceives the way we go about it.


We’re all under the Influence

One of the challenges we face when managing Human Risk is that rules aren’t the only influence on our behaviour. We also take our cues about what is acceptable from the things around us, particularly the behaviour of other people.

We all know this intuitively. Ever travelled to a country you’ve never visited before, unsure of how things work? A simple rule of thumb, or heuristic, that we often deploy is to copy what the locals do, whether that’s in choosing a restaurant or in crossing the road.

If I’m in Germany, I find myself diligently waiting for a green light before crossing a road; something I usually don’t do in the UK. That’s not just because the rules about jaywalking are stricter in Germany, though I believe they are, or because the inherent risk of the activity differs.

Being hit by a car driving on the right-hand side of the road is probably much the same experience as being hit by one driving on the left. Rather, it’s because I want to fit in. I’m unlikely to get caught doing it by a police officer. But I’m very likely to be berated by a member of the public. It’s just not the done thing.

Nightclubs use the dynamic of “Social Proof” (we copy others) to their advantage, by artificially creating lines to send a signal of the popularity of the venue. Restaurants do the same thing by filling the tables nearest the window first.

There’s a reason celebrities are asked to endorse products and “social media influencers” are seen to have value. “Tone from the top” matters, but it isn’t just senior people who can influence behaviour.


Things are no different when it comes to respecting rules. A sign might tell us that a speed limit is in force, but if we see lots of other drivers breaking that limit, then we’re more likely to do so ourselves.

We’re so attuned to looking at what others do that we don’t even need to see other people doing it to copy them. Merely being told that this is what other people do, or seeing evidence that they’ve done something, can be enough of a hint that we ought to do the same.

Something readers should bear in mind if they’re ever minded to deal with excessive breaches of a policy by emailing a reminder of the rule to the target population with an exhortation to comply with it. The mere act of sending that email highlights widespread non-compliance. After all, you wouldn’t send the email unless lots of people were doing it…

It’s a hard habit to break…

Humans are creatures of habit. Next time you’re having a shower, try changing the order in which you wash. It’s highly likely that you’ll default to a particular order (in my case shampoo first) and trying to do it in another sequence will feel awkward.

Yet this point seems to be lost on many traditional compliance and risk management methods, which work on an implicit presumption that learned cognitive dynamics can simply be overridden or ignored in favour of “logical” instructions.

Just telling people to behave differently generally isn’t that effective. There’s a reason we don’t just tell people to stop smoking: they need help to be able to do so. Willpower alone is often insufficient, and even where it would be enough, telling people what to do doesn’t give them that willpower.

Spitting in the wind…

In conclusion, if we really want to mitigate Human Risk, then we need to understand the algorithms that drive human decision-making. The things I’ve highlighted are just three of the many influences on our decisions and therefore our behaviours. The good news is that although the algorithms are complex, many of our behaviours are predictable, and small changes in how we seek to influence them can have a big impact.

Of course, there may be situations where a “traditional” approach is genuinely the best answer. But even if that is the case, I’d argue that you can only reach that conclusion if you’ve understood what BeSci dynamics are in play.

Not deploying BeSci techniques to manage Human Risk is like spitting in the wind. You might get away with it, but all too often you end up wearing a more embarrassing version of the very problem you’re trying to solve. 

The author is the founder of Human Risk, a Behavioural Science Consulting and Training Firm specialising in the fields of Risk, Compliance, Conduct and Culture.
