I’ve just come across an interesting Wired magazine article that highlights why increasing use of AI means an increasing need to be Human. Which, of course, leads to more concentrated Human Risk: as we automate more, the tasks humans are asked to do will require precisely those skills that are most vulnerable to Human Risk. That’s because these tasks will, by default, involve more judgement and nuance. It’s what we’re good at and machines aren’t. But it’s also where we most easily display biases.