How do AI systems become biased?
Looking at the previous examples of racial bias and gender bias, the issue is not always caused by the AI model itself. COMPAS did not use race as a feature, and it was hardly Amazon's explicit intention to recruit only men. What is important to understand is that the main cause of this bias is often the data.
Amazon used ten years of historical data that consisted of more male than female résumés, and COMPAS did not only use features like criminal history, age, and gender: defendants were also asked questions like "If you lived with both parents and they later separated, how old were you at the time?" or "Have some of your friends or family been crime victims?"
Essentially, an AI system reflects the values of the humans who created it. The model uses this data to make predictions and find correlations, but it can simultaneously build and augment inequality. So if you use poor-quality data that, for example, doesn't reflect society, or the data you have is insufficient, it will show in your outcomes. Garbage in, garbage out.
Besides data, there are other factors that can deepen unfair bias, such as the choice of algorithm, which could favor one group or individual over another, or human input shaped by our cognitive biases.
Which elements should you consider to eliminate bias?
The decision-making cycle of an AI model can be roughly divided into four components: data, algorithms, human oversight, and outcome. We will look at each of these phases and discuss how they could contribute to eliminating bias in AI, along with some key questions.
1. Data

Where does your data come from? Is your data diverse enough? What type of (training) data has been used to train the system?
As mentioned before, having good-quality, sufficient, and diverse datasets is key to building an AI model that can make fair decisions, because the model trains on this data. This also means having a good understanding of how to label this data, which labels will be decisive for the outcome, and how this will impact the decision-making process. All things considered, it's best to work on a robust data strategy or architecture that helps remove unwanted biases beforehand.
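As a loose illustration of such a check, the sketch below counts records per group for a sensitive attribute, which is one simple way to see whether a dataset reflects the population it will be used on. The résumé records and the "gender" field are hypothetical, not taken from the cases above.

```python
from collections import Counter

def group_counts(records, attribute):
    """Count records per value of a given attribute (e.g. gender)."""
    return Counter(r[attribute] for r in records)

# Hypothetical résumé dataset with a 'gender' field (illustrative only).
resumes = [
    {"gender": "male"}, {"gender": "male"}, {"gender": "male"},
    {"gender": "female"},
]

counts = group_counts(resumes, "gender")
total = sum(counts.values())
for group, n in counts.items():
    # A heavily skewed split like this one is a warning sign
    # before any training happens.
    print(f"{group}: {n / total:.0%}")
```

A skew like the one printed here would not prove the model will be biased, but it is exactly the kind of imbalance that made the historical hiring data problematic.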
2. Algorithms

How explainable is your AI model? What type of business model have you chosen for your AI system? Have you considered fairness solutions for your algorithms?
There are algorithmic fairness solutions out there that can help make a model more neutral and unbiased. These solutions address specific problems such as overfitting, selection bias, or a model that is too flexible, which can lead to unreliable predictions. But incorporating ethics into algorithms is still a complicated task, and it doesn't solve the whole issue of unfair bias. Applying as many solutions as possible does not equal a better model, so it's recommended to use these solutions to target specific problems in your model. Beyond that, it is better to create a model that is more explainable. Understanding your algorithms and the decision-making process on a deeper level will improve transparency and awareness and will ultimately help reduce unfair bias.
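One common fairness check that such solutions build on is demographic parity: comparing how often each group receives a positive outcome. A minimal sketch, with entirely made-up decisions and group labels, might look like this:

```python
def selection_rate(outcomes, groups, group):
    """Fraction of positive outcomes (1s) received by one group."""
    group_outcomes = [o for o, g in zip(outcomes, groups) if g == group]
    return sum(group_outcomes) / len(group_outcomes)

def demographic_parity_gap(outcomes, groups):
    """Largest difference in selection rates between any two groups.
    A gap near 0 suggests groups are treated similarly on this metric."""
    rates = {g: selection_rate(outcomes, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Hypothetical hiring decisions (1 = invited to interview).
outcomes = [1, 1, 0, 1, 0, 0, 0, 1]
groups   = ["m", "m", "m", "m", "f", "f", "f", "f"]
print(demographic_parity_gap(outcomes, groups))  # 0.75 - 0.25 = 0.5
```

A single metric like this cannot certify a model as fair, which is why the text recommends applying targeted solutions rather than as many as possible.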
3. Human oversight
Who in your organization is responsible for ethical AI? Do you have a mechanism that allows people to flag an issue when one arises? Do you have measures of transparency and explainability in place?
Another essential element in eliminating unfair bias is the role of human oversight. This includes an audit process, impact assessments, raising awareness of unfair bias, and having either a human-in-the-loop, a human-on-the-loop, or a human-in-command to provide oversight and human judgement throughout the AI life cycle. Human oversight also means having expertise on bias and a sufficient understanding of how human bias can seep into the AI system. Despite the possibility of cognitive bias, humans remain responsible for the process and its outcomes, not the machine.
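One simple way a human-in-the-loop pattern is often realized is routing: predictions the model is unsure about are deferred to a human reviewer instead of being acted on automatically. The threshold and function name below are assumptions for illustration, not part of any specific system discussed here.

```python
def route_decision(score, threshold=0.9):
    """Send a model's prediction score to a human reviewer unless the
    model is confident in either direction (human-in-the-loop sketch).

    score: predicted probability of the positive outcome, in [0, 1].
    """
    if score >= threshold or score <= 1 - threshold:
        return "automated"
    return "human_review"

print(route_decision(0.95))  # confident positive -> automated
print(route_decision(0.55))  # uncertain -> human_review
```

The choice of threshold is itself a judgement call that deserves oversight: set too low, the human disappears from the loop in practice.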
4. Outcome

What do you do with an outcome or decision that contains unfair bias? Are you able to explain the cause of this bias for future improvements? Is this decision merely explorative, or does it immediately impact a person's life?
Transparency and explainability go hand in hand in this last phase of the decision-making process. Striving for an explainable model will help create explainable outcomes, which could help not only the data subjects affected by these decisions but also staff, organizations, the general public, and the community working towards fair AI. As the cycle continues, a periodic review of these decisions will greatly benefit the process of eliminating unfair bias.
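The periodic review mentioned here presupposes that decisions are logged in a way that lets each batch be re-examined later. A minimal sketch, with hypothetical field names, could group a decision log by review period:

```python
from collections import defaultdict

def periodic_review(decision_log, period_key):
    """Group logged decisions by review period so each batch can be
    re-examined for unfair bias. Field names here are assumptions."""
    batches = defaultdict(list)
    for entry in decision_log:
        batches[entry[period_key]].append(entry)
    return batches

# Hypothetical decision log with a 'quarter' field.
log = [
    {"quarter": "Q1", "outcome": 1},
    {"quarter": "Q1", "outcome": 0},
    {"quarter": "Q2", "outcome": 1},
]
for quarter, entries in sorted(periodic_review(log, "quarter").items()):
    print(quarter, len(entries), "decisions to review")
```

Whatever the actual tooling, the point is that review only works if outcomes are recorded with enough context to explain them afterwards.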
We are still in the midst of figuring out how to eliminate unfair bias in algorithmic processes, and incorporating ethics into mathematics remains complicated. There isn't a silver bullet for the issue of unfair bias in AI yet. But while the community actively works on solutions, it helps to consider all four elements of the AI life cycle together and to work towards a multidisciplinary approach for fair AI.