The Information Laundromat

There’s been a lot in the news recently about suspected Russian interference in elections, so-called “Fake News” and misinformation in general. It’s a topic that fascinates me, not least because I like to think that I’m smart and able to make up my own mind about things.

Of course that’s a very naive view. We’re all susceptible to manipulation; otherwise the PR, marketing, investor relations and lobbying industries wouldn’t exist. But the nature of how we can be and are influenced is changing radically, in particular through social media and the internet.

Thanks to social media platforms and technology that puts broadcasting capability in everyone’s hands, anyone can share their content and opinions with a worldwide audience. Influencers now have the same reach as traditional media, if not greater.

One of the reasons social media works so well is that, on the face of it, it is self-selecting: we choose who we share content with and whose content we see, whether that’s getting an update from Great Aunt Maude on the other side of the world or watching a video of a celebrity we find entertaining.

At heart we’re social animals; we naturally seek validation from others and like sharing stories and experiences. We also trust our friends and acquaintances. Social media amplifies and facilitates these tendencies. That can be good (charitable causes, connecting families living in different continents) or, more relevantly for this blog, bad.

The latter comes in many forms. Like “astroturfing”, where influential journalists or politicians have their Twitter feeds taken over by huge numbers of fake accounts (some bots, some not) that seek to persuade them that a particular line of thinking is more widely accepted than it actually is, with a view to influencing their coverage or policy-making. Like this Tweet from a well-known UK journalist:

Or the creation of “clickbait” stories that are simply untrue, but which play to people’s fears and which they then, for well-intentioned reasons, share with their friends. More on that in this excellent Wired magazine article:

I’m reminded of money laundering, the process by which criminals take the proceeds of crime and “clean” them by mixing them with legitimate sources of income through the banking system.

Social media is like an Information Laundromat, whereby things that aren’t true are pushed through channels that give them legitimacy. So a story you see in your Facebook feed gets your attention whether you know it or not. Obviously, things shared by your Friends have the most impact, but even those that aren’t will work on your subconscious. Sometimes things that begin as “promoted content”, which the social media network weaves into your feed between the things you’ve asked to see, are re-shared by users, turning an advertisement into an endorsement.

The mainstream media also gets in on the act. Have a look at how many stories are now sourced from social media. It’s a natural place for them to look, especially given the underinvestment in proper journalism these days. But it also gives further legitimacy to social media as a reliable information source.

It probably shouldn’t surprise us that people are seeking to manipulate opinions via social media. It’s low cost, easily scalable and, sadly, it works.

Because we’re predictable in how we behave and we’re not properly wired for the internet age, we’ve got a huge case of Human Risk in action.

We’re bombarded with ever increasing volumes of information that require us to take more decisions than we’ve ever had to before. To navigate that we rely on shortcuts that are instinctive to us. We trust the familiar and we become creatures of habit; just try switching the location of App icons on someone’s phone and see how it frustrates them! We can get information on everything and anything. Not all of it accurate.

Unfortunately, the tools we use to navigate today’s world aren’t necessarily going to serve us as well as they might have done in days of old.

Obviously this is likely to be more of an issue for digital immigrants than digital natives, although arguably the cynicism that older digital immigrants demonstrate towards online banking and new perceptions of what constitutes privacy might serve them better than those who are more trusting.

So what can we do?

The good news is that there’s lots. It starts with being better informed ourselves and sharing that understanding with others. I recommend the work of Mike Hind, who has a fascinating podcast called The Disinformation Age, and the research done by a not-for-profit organisation called First Draft News.

Then there’s the fabulously titled Calling Bullshit, an excellent series of lectures that studies the proliferation of BS in the modern world and promotes some ideas for dealing with it.

We can also stop taking everything at face value and question things more. Mike has some simple tips on his website about how to stop bots from polluting your Twitter feed.

Given it’s a technological problem, there are ways that tech can help solve it. There’s a wonderful initiative called Re:Scam, which uses AI to power an email bot that wastes scammers’ time. If you redirect scam emails to them, they’ll put their bot onto it. The more time the scammers spend engaging with the bots, the less time they have to focus on exploiting Human Risk in others.


Human Coding

Having launched this site a week ago, I was worried that I might not have anything to blog about.

Maybe Human Risk wasn’t really a thing.

Fortunately (at least for the blog), Human Risk is alive and well in the UK where this week has seen a number of ministers resign for what The Economist summarised as Unparliamentary behaviour: sex scandals and ministerial mistakes.

It’s Human Risk in full effect: people doing things they shouldn’t.
Albeit, people who absolutely should know better.

Of course, this isn’t the first time Ministers have had to resign for behavioural reasons; there are plenty of examples of that in the past.
In fact, it’s been a feature of British government for some time that people in positions of power don’t behave appropriately.

So much so, that there’s something called The Ministerial Code, a set of principles that outlines what is expected of Ministers.

Note the word “Code” rather than rules. It’s a peculiarly British way of managing Human Risk. We love Codes.

Whether it’s The Highway Code, a set of principles to guide road users, or The Takeover Code, which governs behaviour during (surprise, surprise) takeovers.

Codes are a form of principles-based regulation; rather than containing a detailed list of rules, they outline principles that people in a given situation need to abide by. By not being overly prescriptive, they allow for flexibility of interpretation, which in theory removes loopholes and means the regulations don’t need to be constantly updated.

As a means of managing Human Risk, Codes are an interesting idea. Because they don’t have to list every potential scenario they want to cover, they can be short and easy to read. Those subject to them have to think about their behaviour; you can’t just point to a (potentially badly worded) rule to justify your actions. The spirit of the law takes precedence over the letter of it.

Of course that only works well if the body enforcing the Code has some teeth. What’s amazing about some of the Ministerial resignations in the UK is that the Ministers were allowed to resign rather than being fired. A Code won’t control Human Risk if the humans it is attempting to control don’t fear the consequences of non-compliance and are instead prepared to run the risk.


Chief Behavioural Officer

Deloitte has just published a report entitled The Future Of Risk, in which they highlight ten trends impacting the risk landscape for companies. I’m delighted to see that the third of these is the importance of behavioural science. I particularly like the idea of a Chief Behavioural Officer…

“Behavioral science is the study of human behavior through systematic research and scientific methods, drawing from psychology, neuroscience, cognitive science, and the social sciences. There is increasing demand for these skills in the business world—including risk organizations. What drives risky behavior? How do cognitive biases lead people to wrongly assess risk? How can risky behaviors be detected and modified? These are the types of questions leading organizations are looking to answer with behavioral science. In fact, some Fortune 500 companies today even have a Chief Behavioral Officer at the C-suite level”.

The entire report is available via the link above and is well worth a read.


An Introduction to Human Risk

We’re hearing lots about how Robotics and Artificial Intelligence (AI) are going to transform risk functions, with machines taking over many of the tasks currently done by people. That doesn’t mean we’re all going to be replaced by Risk Robots, because no matter how far technology develops there will still be a need for humans; in part because stakeholders, especially regulators, are never going to allow all responsibility and decision-making to be abdicated to AI.

We are, however, going to need to change what we do, and the Risk Officers Of The Future will need to develop new skills. It’s why I support the idea that we should all learn the basics of coding. We don’t need to be experts, but understanding the risks in a technologically advanced world requires an understanding of what is going on inside the machines.

It also means that we’ll need to be more “human” and focus on doing those things that the machines can’t. Whilst they can analyse, process and spot patterns better than we can, AIs can’t (yet!) inspire, challenge, persuade or use intuition.  Even if they come with a friendly voice like Amazon’s Alexa or Apple’s Siri, they have no Emotional Intelligence (EI).   To succeed in this world, we’ll need more EI than ever before.  It’s one of the reasons I’m a keen student of behavioural science.

To err is human…

Another is the increasing importance of something I’m calling Human Risk:

The risk of people doing something they shouldn’t, or not doing something they should

You only have to look at the number of times organisations explain things that went wrong by reference to “human error”. Even in situations where the human element might not initially be obvious, such as an IT outage like this:

British Airways Flight Outage: Engineer Pulled Wrong Plug

British Airways pointed to human error as the cause for mass flight cancellations that grounded at least 75,000 passengers last month and led the carrier’s passenger traffic to decline 1.8 percent.

An engineer had disconnected a power supply at a data center near London’s Heathrow airport, causing a surge that resulted in major damage when it was reconnected, Willie Walsh, chief executive officer of parent IAG SA, told reporters in Mexico. The incident led BA’s information technology systems to crash, causing hundreds of flights to be scrapped over three days as the airline re-established its communications.

Source: Bloomberg

Even things like natural disasters, which we can’t (yet) prevent, can be made that much worse by human action or inaction.  You might not be able to stop a hurricane, but you can substantially worsen its impact by not having appropriate disaster recovery planning in place.

To properly reduce operational risk, we need to have a better understanding of why people behave in the way they do, so that we can appropriately influence it.  This isn’t straightforward. As we all know from our own behaviours, human beings aren’t always rational.  So we need to incentivise them to do the right thing.

Senior Risk

One of the challenges of managing Human Risk is that it isn’t simply mitigated by experience. Received wisdom tells us that “practice makes perfect”; the more we do something, the better we are at it. Of course that’s true, up to a point. Play more tennis and you’ll get better at it, regardless of your natural talent.

But that logic doesn’t always apply, especially in organisations with a strong hierarchy. We’ve seen plenty of recent examples of senior leaders getting things wrong.

Take PwC partner Brian Cullinan, who had responsibility for handing out envelopes to presenters at this year’s Oscars ceremony. Seemingly distracted, when it came to the Best Picture award, Cullinan mistakenly handed the wrong envelope to the presenters. It is hard to imagine that, outside the glamour of the Oscars, someone of Cullinan’s status would ever have opted to hand out envelopes. You only have to watch the footage of the event to see that the original mistake was then compounded by a delayed response. It’s the kind of error you might expect from someone with no experience, rather than a senior partner. As the Academy’s CEO Cheryl Boone Isaacs put it:

They have one job to do. One job to do!

Then there’s the case of Barclays CEO Jes Staley, who was found to have twice attempted to unmask the author of letters to the Firm’s board that raised concerns about someone he had hired. It’s not difficult to see why having the integrity of the Firm’s whistleblowing process undermined by the CEO is a bad thing. And yet his actions did just that. Unsurprisingly the Firm’s regulators are unimpressed.

As we know, Human Risk within organisations is heavily influenced by the “tone from the top”. But it will also become more critical at all levels of organisations with the onset of automation. As the roles that humans perform become more cognitive and less repetitive, so the inherent risk of the activities they’re performing substantially increases.

Robot Risk

Which brings me back to the robots: they won’t make mistakes; they will just do as they’re told. Without a good understanding of behavioural science, we’re running the risk of deploying AI that mimics our bad habits. We’ve all heard of Unconscious Bias, but we also need to understand concepts like Narrative or Confirmation Bias (once we’ve decided something, we look for data to confirm we’re correct and ignore data that doesn’t) and Moral Licence (using the fact we’ve done something good to justify then doing something bad). We’re all susceptible to these, whether we know it or not. It’s bad in humans, but it’s really bad in machines.
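To make that risk concrete, here is a minimal sketch, in Python, of how Confirmation Bias can get coded into a machine without anyone intending it. Everything here is invented for illustration: the CaseRecord fields, the “late filers are fraudsters” prior and the 10-day threshold are assumptions, not any real system.

```python
# Illustrative only: a hypothetical fraud-screening dataset where the person
# preparing the training data already "knows" that late filers are fraudsters.
from dataclasses import dataclass


@dataclass
class CaseRecord:
    days_late: int        # how late the filing was (hypothetical field)
    was_fraudulent: bool  # the known outcome (hypothetical field)


def biased_training_set(records: list[CaseRecord]) -> list[CaseRecord]:
    # Confirmation Bias in code: keep the cases that support the prior
    # ("late filers are fraudsters") and silently drop the counter-examples.
    return [r for r in records if (r.days_late > 10) == r.was_fraudulent]


def full_training_set(records: list[CaseRecord]) -> list[CaseRecord]:
    # The boring but important alternative: train on everything you have,
    # including the awkward cases that contradict the prior.
    return list(records)


if __name__ == "__main__":
    records = [
        CaseRecord(days_late=15, was_fraudulent=True),
        CaseRecord(days_late=20, was_fraudulent=False),  # disconfirming, dropped
        CaseRecord(days_late=2, was_fraudulent=True),    # disconfirming, dropped
        CaseRecord(days_late=1, was_fraudulent=False),
    ]
    kept = biased_training_set(records)
    print(f"{len(kept)} of {len(records)} records survive the biased filter")
```

Any model trained on the filtered data will simply learn the analyst’s prior back and present it as an objective finding.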

Even when we program machines “correctly” to undertake logical processes, there can be unintended consequences. Take Uber’s “Surge” algorithm which hikes the price of rides when demand is high. It’s a legitimate business practice (airlines do it all the time) and it works seamlessly. But it’s not so good from a reputational risk perspective when natural disasters or terrorist incidents increase demand and leave the company open to legitimate accusations of profiting from emergency situations. The machine does what it’s programmed to do, but on a human level it produces totally the wrong outcome.
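For illustration, here is what a crude version of that logic might look like. The formula, the cap and the emergency override are my assumptions, not Uber’s actual algorithm; the point is that the bad outcome isn’t a bug in the code, it’s a human judgement that someone has to remember to build in.

```python
# Toy demand-based pricing: the numbers and the emergency override are
# illustrative assumptions, not any real company's algorithm.
def surge_multiplier(ride_requests: int, available_drivers: int,
                     emergency_declared: bool = False,
                     cap: float = 3.0) -> float:
    """Return a price multiplier based on the demand/supply ratio."""
    ratio = ride_requests / available_drivers if available_drivers else cap
    multiplier = min(max(ratio, 1.0), cap)
    # The human-level safeguard: on a normal busy Saturday night this logic is
    # fine; during a declared emergency it needs to be switched off.
    if emergency_declared:
        return 1.0
    return multiplier


if __name__ == "__main__":
    print(surge_multiplier(500, 100))                           # busy night -> 3.0
    print(surge_multiplier(500, 100, emergency_declared=True))  # emergency -> 1.0
```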

Amazon’s “Frequently Bought Together” feature is good for customers in that it recommends products that go well together; so if you buy a printer, it recommends the right cartridges to go with it. It’s also good for Amazon as it increases sales. It’s less good when the same algorithm that powers that feature ends up generating news headlines like this:

Potentially deadly bomb ingredients are ‘frequently bought together’ on Amazon:

A Channel 4 News investigation can reveal how Amazon’s algorithm can guide users to the chemical combinations for producing explosives.

Source: Channel Four News
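To see how that can happen, here is a toy co-occurrence recommender. The item names and the restricted-pairs list are invented for illustration and this is not Amazon’s actual system; the point is that the algorithm only sees what sells together, so stopping a harmful pairing needs an explicit, human-maintained rule that the statistics alone will never supply.

```python
# Toy "frequently bought together" logic based on co-occurrence counts.
# Item names and RESTRICTED_PAIRS are hypothetical, for illustration only.
from collections import Counter

RESTRICTED_PAIRS = {frozenset({"chemical_a", "chemical_b"})}  # human-maintained rule


def frequently_bought_together(orders: list[set[str]], item: str,
                               top_n: int = 3) -> list[str]:
    """Recommend the items that most often appear in orders containing `item`."""
    counts = Counter()
    for order in orders:
        if item in order:
            for other in order - {item}:
                counts[other] += 1

    recommendations = []
    for other, _ in counts.most_common():
        # The statistics only see co-occurrence; suppressing a harmful pairing
        # requires this explicit check, which someone has to think to add.
        if frozenset({item, other}) in RESTRICTED_PAIRS:
            continue
        recommendations.append(other)
        if len(recommendations) == top_n:
            break
    return recommendations


if __name__ == "__main__":
    orders = [
        {"printer", "ink_cartridge"},
        {"printer", "ink_cartridge", "paper"},
        {"chemical_a", "chemical_b"},
    ]
    print(frequently_bought_together(orders, "printer"))     # ['ink_cartridge', 'paper']
    print(frequently_bought_together(orders, "chemical_a"))  # [] - pairing suppressed
```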

When I think about what technology can do for us, I’m really excited. But I’m also absolutely convinced that whilst we need to learn to code the machines, we also need to de-code the humans.
