So does the transfer into practice work through very concrete application examples?
Behavioural research builds on some seventy years of scientific knowledge, and the Max Planck Institute has made decisive contributions here. This is not detached from reality: so far, we have done a good job of managing the transfer into practice and taking the relevant findings from science with us.
So it’s about improving decision-making processes in a clear and measurable way?
That is exactly it. We look at the decision-making processes in companies with the aim of improving them systematically, on the condition that the improvement can also be measured. In the field of debt collection, we compare existing systems with new ones that we set up, in which our cognitive algorithms process data to find a debtor-centred approach. Then we do classic A/B testing. One of our reference customers in debt collection normally achieves a repayment rate of 49 per cent; with our support, it ended up at 63 per cent. From a scientific perspective, that is exactly what we want to achieve: measurable success that can be expressed in euros and cents.
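The classic A/B test mentioned above can be sketched in a few lines. Only the 49 and 63 per cent repayment rates come from the interview; the group sizes (1,000 debtors per group) and the use of a two-proportion z-test are illustrative assumptions, not details of the actual study.

```python
# Minimal sketch of an A/B comparison between an existing collection
# process (group A) and a new, debtor-centred one (group B).
# Group sizes are hypothetical; only the rates come from the interview.
from math import sqrt, erfc


def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)           # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                         # two-sided p
    return p_a, p_b, z, p_value


# 490 of 1,000 debtors repaid under the old system, 630 of 1,000 under the new one.
p_a, p_b, z, p = two_proportion_z_test(success_a=490, n_a=1000,
                                       success_b=630, n_b=1000)
print(f"repayment A: {p_a:.0%}, B: {p_b:.0%}, z = {z:.2f}, p = {p:.2g}")
```

With samples of this (assumed) size, a 14-percentage-point difference is far too large to be chance, which is the sense in which the interview calls the success "measurable".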
At first, that sounds too good to be true.
Yes, that’s right. In a dynamic, complex environment, however, it is sometimes not so easy to pin down the hard facts and derive actions from them. What you often find in companies is a strong hedging culture; in English there is a fitting term for it: CYA, Cover Your Ass. Let’s take a closer look: option A would be the right one, but it carries the risk of failure; option B carries a lower risk. What does the decision-maker do? Many choose option B, because it covers them better and is easier to document.
In a survey conducted in companies, 20 to 50 per cent of respondents admitted that their decisions were made primarily to protect themselves. The consequences were shown in a study of a large German organisation with 120,000 employees: the company loses about 3.5 billion euros per year because of such decisions. On the basis of our field studies, we showed which measures can counteract this and reduce the frequency of defensive decisions by up to fifty per cent. This is a classic example of how we proceed in order to measurably improve decision-making processes.
Are machine decisions always better?
Let me give you an example: in a project by colleagues from Stanford in the US, the question was whether a defendant should be released with or without bail. To support this decision, the US uses very modern methods: a computer programme based on more than sixty variables and machine learning makes a prediction for each individual defendant. Based on certain parameters, the machine decides whether this person can be released without bail, the decisive question being whether he will show up at the court date at all.
In 1,000 cases, the ‘human’ judges released 69 per cent of the accused without bail and 31 per cent under bail conditions. The complex machine-learning algorithm, on the other hand, released 79 per cent without bail, ten percentage points more. Under the judges’ decisions, 13 per cent of the accused did not appear in court; under the algorithm, 12.4 per cent. The failure-to-appear rate is almost identical, but the algorithm releases many more people without bail.
And when do you come into play?
In the project, statistical methods and methods from cognitive psychology were used to create a very simple algorithm that maps how people make decisions and is also very transparent and intuitively understandable. The variables are, on the one hand, the age of the accused and, on the other hand, how often this person has failed to appear in court in the past. The astonishing result: the simple algorithm performs just as well as the complex machine learning algorithm that is normally used.
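The kind of simple, transparent rule described above can be sketched as a small point score. The two inputs, the defendant's age and the number of prior failures to appear, come from the interview; the weights, age cut-offs, and threshold below are purely illustrative assumptions, not the study's actual values.

```python
# Hedged sketch of a simple, transparent two-variable decision rule.
# All weights and thresholds are illustrative, not from the study.

def flight_risk_score(age: int, prior_failures: int) -> int:
    """Linear point score: more prior no-shows and younger age mean
    a higher risk of not appearing in court (illustrative weights)."""
    score = prior_failures * 2      # each prior failure to appear adds 2 points
    if age < 25:                    # hypothetical age cut-offs
        score += 2
    elif age < 40:
        score += 1
    return score


def release_without_bail(age: int, prior_failures: int,
                         threshold: int = 3) -> bool:
    """Release without bail when the risk score stays below the threshold."""
    return flight_risk_score(age, prior_failures) < threshold


print(release_without_bail(age=50, prior_failures=0))  # low risk: True
print(release_without_bail(age=22, prior_failures=2))  # high risk: False
```

The point of such a rule is exactly what the interview stresses: anyone can check by hand why a given defendant was classified as they were, which is not the case for a sixty-variable machine-learning model.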
In the second part of the interview, read how the combination of atriga’s expertise and Simply Rational’s research results in a convincing win-win situation.