Addressing Algorithmic Bias in AI
Technical Paper on Algorithmic Bias
Artificial intelligence (AI) promises better, more intelligent decision-making.
Governments are using AI to make decisions in welfare, policing and many other areas. Meanwhile, the private sector has readily adopted AI into its business models. However, using AI carries the risk of algorithmic bias. Unless we fully understand and address this risk, the promise of AI will be hollow.
You can download a copy of the Technical Paper from our Project website under Previous Project Materials.
Algorithmic bias is a kind of error associated with using AI, often resulting in unfairness. Algorithmic bias can arise in many ways. Sometimes the problem is with the design of the AI product itself. Sometimes the problem lies with the data set used to train the AI.
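The second source of bias described above — a skewed training data set — can be illustrated with a minimal sketch. This example is hypothetical and not from the Technical Paper: it assumes a loan-approval setting where "postcode" acts as a proxy for a protected attribute, and a naive model simply reproduces historical approval rates.

```python
# Hypothetical sketch: how bias in training data can carry into a model's
# decisions. All names and figures are illustrative, not from the paper.
from collections import defaultdict

# Historical loan decisions. "postcode" acts as a proxy for a protected
# attribute: applicants from postcode "B" were historically approved far
# less often, even at comparable income levels.
history = [
    {"postcode": "A", "income": 60, "approved": True},
    {"postcode": "A", "income": 40, "approved": True},
    {"postcode": "A", "income": 30, "approved": False},
    {"postcode": "B", "income": 60, "approved": False},
    {"postcode": "B", "income": 40, "approved": False},
    {"postcode": "B", "income": 80, "approved": True},
]

def train(records):
    """'Learn' the historical approval rate per postcode."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["postcode"]] += 1
        approvals[r["postcode"]] += r["approved"]
    return {pc: approvals[pc] / totals[pc] for pc in totals}

def predict(model, applicant):
    """Approve if the learned rate for the applicant's postcode exceeds 0.5."""
    return model[applicant["postcode"]] > 0.5

model = train(history)

# Two applicants identical except for postcode receive different outcomes:
# the model has learned the historical bias, not creditworthiness.
result_a = predict(model, {"postcode": "A", "income": 50})
result_b = predict(model, {"postcode": "B", "income": 50})
print(result_a, result_b)
```

Nothing in the model's code mentions a protected attribute, yet the outcome differs by group — which is why auditing training data, not just model logic, matters.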
Algorithmic bias can cause actual harm. It can lead to a person being unfairly treated or even suffering unlawful discrimination based on characteristics such as race, age, sex or disability.
This project simulated a typical decision-making process to explore how algorithmic bias can ‘creep into’ AI systems and, most importantly, how this problem can be addressed.