One of the things that really annoys AI researchers is how supposedly “intelligent” machines are judged by much higher standards than are humans. Take self-driving cars, they say. So far they’ve driven millions of miles with very few accidents, a tiny number of them fatal. Yet whenever an autonomous vehicle kills someone there’s a huge hoo-ha, while every year in the US nearly 40,000 people die in crashes involving conventional vehicles.

Likewise, the AI evangelists complain, everybody and his dog (this columnist included) is up in arms about algorithmic bias: the way in which automated decision-making systems embody the racial, gender and other prejudices implicit in the data sets on which they were trained. And yet society is apparently content to endure the astonishing irrationality and capriciousness of much human decision-making.

If you are a prisoner applying for parole in some jurisdictions, for example, you had better hope that the (human) judge has just eaten when your case comes up. A fascinating empirical study, conducted in 2011 and peer-reviewed by the Nobel laureate Daniel Kahneman, found that “the percentage of favourable rulings drops gradually from about 65% to nearly zero within each decision session and returns abruptly to about 65% after a break. Our findings suggest that judicial rulings can be swayed by extraneous variables that should have no bearing on legal decisions.” Since an AI doesn’t need lunch, might it be more consistent in making decisions about granting parole?

In judging the debate about whether human intelligence (HI) is always superior to the artificial variety (AI), are we humans just demonstrating how capricious and irrational we can be? Er, yes, says Jason Collins, a behavioural and data scientist who now works for PwC Australia. In a wickedly satirical article in the online journal Behavioral Scientist, he turns the question we routinely ask about AI on its head: “Before humans become the standard way in which we make decisions,” he writes, “we need to consider the risks and ensure implementation of human decision-making systems does not cause widespread harm.”

Collins outlines four basic principles that we should apply before allowing humans to make critical decisions. The first is to avoid bias. This is difficult for humans because we are subject to a wide range of cognitive biases.

Second, human decisions should be transparent, explicable and accountable. Indeed, but guess what? Humans are often inscrutable, and while they can “create the impression of transparency through the verbal and written explanations that they offer, there is strong evidence that these explanations cannot be trusted to provide the true basis for the decision”. We might accurately think of people as black boxes, but with a better bedside manner than their purely algorithmic counterparts. Who can explain, for example, what goes on in what might loosely be called Boris Johnson’s mind?

Third, human decision-making should be at least as good as AI or machine-learning alternatives. Sometimes, it turns out, it’s not.

And, finally, human decisions should be consistent. This, too, we struggle with, however hard human judges may try. Two different humans confronted with the same decision will often come to different conclusions, says Collins. The same human confronted with the same decision on different occasions will also often decide inconsistently. Machines, in comparison, would be relentlessly consistent, at least in principle.

And the implications of all this? For Collins, the conclusion is the deadpan warning quoted above: until the risks have been properly weighed, humans should not become the standard way in which we make decisions.

This is all good knockabout stuff, and a reliable source of belly laughs for AI enthusiasts, but there’s a serious edge to it. Although bias is intrinsic to machine-learning systems, just as it is to human decision-making, biased algorithms may be easier to fix than biased people.

That, at any rate, is the conclusion of a couple of empirical studies of racial bias in recruitment and healthcare published in the American Economic Review and Science. It turned out that uncovering algorithmic bias was relatively easy – it’s basically a statistical exercise. “The work was technical and rote, requiring neither stealth nor resourcefulness,” a researcher wrote. The humans in the system, on the other hand, were a different story. The researchers found them “inscrutable”, and discovered that “changing people’s hearts and minds is no simple matter”. Changing biased algorithms was “easier than changing people: software on computers can be updated; the ‘wetware’ in our brains has so far proven much less pliable”.
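To see why auditing an algorithm is “basically a statistical exercise”, consider a minimal, hypothetical sketch of a demographic-parity check in Python. The groups, scores and threshold below are invented for illustration and are not taken from either study; the point is only that the audit amounts to counting and comparing rates.

```python
# A minimal, hypothetical sketch of an algorithmic bias audit:
# compare a model's rate of favourable decisions across two groups.
# The data and threshold here are invented for illustration.

decisions = [
    # (group, model_score) — imagine scores from a hiring or triage model
    ("A", 0.81), ("A", 0.62), ("A", 0.55), ("A", 0.90),
    ("B", 0.48), ("B", 0.70), ("B", 0.52), ("B", 0.39),
]

THRESHOLD = 0.6  # score at or above which the decision is "favourable"

def favourable_rate(group: str) -> float:
    """Fraction of a group's cases that receive a favourable decision."""
    scores = [score for g, score in decisions if g == group]
    return sum(score >= THRESHOLD for score in scores) / len(scores)

rate_a = favourable_rate("A")
rate_b = favourable_rate("B")

# Demographic-parity check: how far apart are the two rates?
print(f"Group A favourable rate: {rate_a:.2f}")
print(f"Group B favourable rate: {rate_b:.2f}")
print(f"Disparity: {abs(rate_a - rate_b):.2f}")
```

The fix the researchers describe is similarly mechanical: recalibrate the threshold or retrain on reweighted data, then simply re-run the audit. There is no equivalent re-run button for the “wetware” in our heads.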

None of this should come as a surprise to anyone who knows anything about human nature. Our politics tell us that some people would rather die than change their minds. There’s something distinctively human about inconsistency, cognitive dissonance and sheer cussedness. And maybe that’s really why we fear AI: because it would be all the things that we are not.

What I’m reading

Down with democracy
The Punishment of Democracy by Will Davies is a remarkable, insightful reflection on the election campaign we have just lived through. Find it on the site of Goldsmiths’ Political Economy Research Centre.

When Larry met Sergey
Nick Carr’s blogpost Larry and Sergey: A Valediction is a living obituary of Google’s co-founders, Larry Page and Sergey Brin, who have stepped down from managing the monster they created.

Go slow, wunderkinds
Guess what? Young people don’t make the best entrepreneurs. A heartening article, for oldies anyway, by Jeffrey Tucker on the website of the American Institute for Economic Research.
