6 Aug

Bias in algorithms and how it could be impacting you

Could bias in algorithms be impacting you? Artificial intelligence (AI) and machine learning play a much bigger part in our day-to-day lives than most people realise. An algorithm could be deciding whether you get a bank loan or credit card, whether or not you'll be hired for a job, and even whether or not you'll be granted parole.

Essentially, these algorithms process huge amounts of data, which enables them to make predictions about what might happen in the future. On the surface, this might seem inherently objective. The problem is, humans aren't. As much as we'd like to think we're infallible, all of us have our own, often unconscious, prejudices, and these prejudices can – and do – creep in through interactions with the algorithm, from both developers and users. In other words, the algorithms are learning our biases.

The scary results of bias in algorithms

The results of these biases can be alarming. In 2015, it was reported that Google's search algorithm was displaying far fewer ads for high-paying executive jobs when a woman was performing the search, compared to when a man was. And in 2016, an analysis by ProPublica of risk assessment software known as COMPAS, which is widely used in the US to forecast which criminals are most likely to reoffend, found evidence of racial bias in the system. They found that "blacks are almost twice as likely as whites to be labelled a higher risk but not actually reoffend". The analysis found the opposite for white criminals: "They are much more likely than blacks to be labelled lower risk but go on to commit other crimes."
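The disparity ProPublica measured is, at its core, a difference in false positive rates between groups: among people who did not reoffend, what share were labelled high risk? A minimal sketch of that comparison is below. The records and group labels are invented for illustration; they are not ProPublica's data.

```python
# Hypothetical records for illustration only: (group, predicted_high_risk, reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("A", True, False), ("B", False, True), ("B", True, True),
    ("B", False, False), ("B", False, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were labelled high risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    if not non_reoffenders:
        return 0.0
    return sum(1 for r in non_reoffenders if r[1]) / len(non_reoffenders)

for group in ("A", "B"):
    group_rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(group_rows), 2))
```

With this toy data, group A's false positive rate (0.67) is far higher than group B's (0.0) even though the labelling rule never mentions group membership – which is exactly why bias has to be measured on outcomes, not inferred from the code.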

What’s even more frightening is such discriminatory decisions can become self-perpetuating. As the blurb for Cathy O’Neil’s book Weapons of Math Destruction reads, “If a poor student can’t get a loan because a lending model deems him too risky (by virtue of his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues.”

Related post: The dark side of technology: The World Economic Forum’s 2017 report on AI

How do we identify bias in algorithms?

While there's no denying that algorithmic biases exist, working out how to weed them out is a complex problem. If you suspected an algorithm had discriminated against you, it would be very difficult to prove. Companies are so secretive about how their algorithms work, in order to prevent this knowledge falling into the hands of their competitors, that finding detailed information on how decisions have been made is nearly impossible.

Even being granted access to the algorithms might not help much. Because the algorithms are constantly learning, it can be very difficult to pinpoint the exact logic behind a decision – even for the people who developed the algorithm in the first place. As Nathan Srebro, a computer scientist at the Toyota Technological Institute at Chicago, says, “Even if we do have access to the innards of the algorithm, they are getting so complicated it’s almost futile to get inside them. The whole point of machine learning is to build magical black boxes.”

What can be done about bias in algorithms?

In order to try to combat this, organisations such as Algorithm Watch and The Algorithmic Justice League have been set up to try to educate people on the implications of algorithmic decision-making (ADM), and to lobby for more transparency into algorithms. Some have even argued that an AI watchdog is needed to help regulate ADM.

In the meantime, businesses must remain highly vigilant about algorithmic biases, and in effect self-police their development and implementation. For example, Joy Buolamwini, founder of The Algorithmic Justice League, argues that algorithmic bias can be reduced by "employing more inclusive coding practices", such as having diverse programming teams to help ensure unconscious biases aren't creeping into the code; having more inclusive training sets, so the algorithms are exposed to a much wider variety of scenarios; and thinking more conscientiously about the social impact of the technology being developed.
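Self-policing can start with something very simple: routinely comparing a model's decision rates across demographic groups and flagging large gaps. Below is a minimal sketch of such an audit check. The group names, decisions and the 10% threshold are all illustrative assumptions, not a standard.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def audit(decisions, max_gap=0.1):
    """Flag the model if approval rates for any two groups differ by more
    than max_gap (a simple demographic-parity style check)."""
    rates = approval_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    return gap <= max_gap, rates

# Hypothetical loan decisions: group X is approved twice as often as group Y.
decisions = [("X", True), ("X", True), ("X", False),
             ("Y", True), ("Y", False), ("Y", False)]
passed, rates = audit(decisions)
print(passed, rates)  # the audit fails: the gap between X and Y exceeds 10%
```

A check like this won't explain *why* a gap exists – a black-box model stays a black box – but run regularly, it catches drifting outcomes early, which is the practical point of the auditing Buolamwini and others call for.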

Algorithms can be tools used to improve fairness and promote equality – if they are programmed mindfully and regularly audited. As our reliance on algorithms increases, so too must our awareness of the value judgements we impose on them.

If you’re in the midst of deciding what software is the right choice for your business, download our 7-step checklist: How to choose the right software for your business.
