Four elements you should consider to eliminate unfair bias in AI

Schuberg Philis
Apr 29, 2021 · 4 min read

Photo by Markus Spiske on Unsplash

The applications of artificial intelligence seem to be endless, and AI is changing the way we work and live. But with the widespread use of AI come concerns about its ethical implications. "Unfair" or "biased" AI, also referred to as unfair bias or algorithmic bias, means that an AI system makes decisions that are unfair to individuals or groups of people. You may have heard of some well-known examples, such as the Amazon recruiting model that showed bias against women, or the COMPAS tool used in U.S. courts to score defendants on their risk of recidivism, which inaccurately assigned high risk scores to African American defendants.

In this article, I will share the four elements you should consider to eliminate bias.

How do AI systems become biased?

Looking at the previous examples of racial and gender bias, the issue is not always caused by the AI model itself. COMPAS did not use race as a feature, and it was surely not Amazon's explicit intention to recruit only men. What's important to understand is that the main cause of this bias is often the data.

Amazon trained its model on ten years of historical data that contained more male than female résumés, and COMPAS didn't only use features like criminal history, age, and gender; defendants were also asked questions like: "If you lived with both parents and they later separated, how old were you at the time?" or "Have some of your friends or family been crime victims?"[1]

Essentially, an AI system reflects the values of the humans who created it. The model uses this data to make predictions and find correlations, but in doing so it can reproduce and amplify inequality. So if you use poor-quality data that, for example, doesn't reflect society, or if the data you have is insufficient, it will show in your outcomes. Garbage in, garbage out.

Besides data, other factors can deepen unfair bias, such as the choice of algorithm, which can favor one group or individual over another, or human input shaped by our own cognitive biases.[1]

Which elements should you consider to eliminate bias?

The decision-making cycle of an AI model can be roughly divided into four components: Data, Algorithms, Human oversight, and Outcome. We will look at each of these components and discuss how it can contribute to eliminating bias in AI, along with some key questions.

1. Data

Where does your data come from? Is your data diverse enough? What training data has been used to build the system?

As mentioned before, having good-quality, sufficient, and diverse data is key to building an AI model that can make fair decisions, because the model learns from this data. This also means having a good understanding of how to label this data, which labels will be decisive for the outcome, and how this will impact the decision-making process. All things considered, it's best to work on a robust data strategy or architecture that helps remove unwanted biases beforehand.
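To make this concrete, here is a minimal sketch of what a first data audit could look like, assuming a hypothetical hiring dataset; the column names and the 30% threshold are illustrative assumptions, not a fixed standard.

```python
import pandas as pd

# Hypothetical hiring data; in practice, load your real training set.
df = pd.DataFrame({
    "gender": ["M", "M", "M", "M", "M", "M", "F", "F"],
    "hired":  [1,   1,   0,   1,   1,   0,   0,   1],
})

# How is each group represented in the training data?
representation = df["gender"].value_counts(normalize=True)
print("Group representation:\n", representation)

# Does the positive label occur at a similar rate for each group?
print("Hiring rate per group:\n", df.groupby("gender")["hired"].mean())

# Flag underrepresented groups, using an illustrative 30% threshold.
underrepresented = representation[representation < 0.30]
if not underrepresented.empty:
    print("Warning: underrepresented groups:", list(underrepresented.index))
```

Such a check won't catch every bias, but it makes the conversation about the data explicit before any model is trained.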

2. Algorithms

How explainable is your AI model? What type of business model have you chosen for your AI system? Have you considered fairness solutions for your algorithms?

There are algorithmic fairness solutions out there that can help make a model more neutral and unbiased. These solutions address specific problems such as overfitting, selection bias, or a model that is too flexible and therefore produces unreliable predictions. But incorporating ethics into algorithms is still a complicated task, and it doesn't solve the whole issue of unfair bias. Applying as many solutions as possible does not make a better model, so it's recommended to use these solutions to target specific problems in your model. Beyond that, it is better to create a model that is more explainable. Understanding your algorithms and the decision-making process on a deeper level will improve transparency and awareness and will ultimately help reduce unfair bias.
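As an illustration of one such fairness check, the sketch below computes the selection rate per group and the disparate impact ratio by hand; the data and the 0.8 cutoff (the informal "four-fifths rule") are illustrative assumptions. Libraries such as Fairlearn and AIF360 bundle this kind of metric together with mitigation techniques.

```python
import pandas as pd

# Hypothetical model output: a protected attribute and a binary decision.
results = pd.DataFrame({
    "gender":     ["F", "F", "F", "F", "M", "M", "M", "M"],
    "prediction": [0,   1,   0,   1,   1,   1,   0,   1],
})

# Selection rate per group: the share of positive predictions.
selection_rates = results.groupby("gender")["prediction"].mean()
print("Selection rates:\n", selection_rates)

# Disparate impact ratio: lowest rate divided by highest rate.
ratio = selection_rates.min() / selection_rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")

# The informal "four-fifths rule" treats a ratio below 0.8 as a red flag.
if ratio < 0.8:
    print("Potential disparate impact; investigate before deployment.")
```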

3. Human oversight

Who in your organization is responsible for ethical AI? Do you have a mechanism that allows people to flag an issue when one arises? Do you have measures for transparency and explainability in place?

Another essential element in eliminating unfair bias is the role of human oversight. This includes an audit process, impact assessments, raising awareness of unfair bias, and having either a human-in-the-loop, a human-on-the-loop, or a human-in-command to provide oversight and human judgement throughout the AI life cycle. Human oversight also means having expertise on bias: a sufficient understanding of human bias and how it can seep into an AI system. Despite the possibility of cognitive bias, humans remain responsible for the process and its outcomes, not the machine.
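Here is a minimal sketch of the human-in-the-loop idea, assuming the model exposes a confidence score: decisions below an illustrative threshold are routed to a reviewer instead of being acted on automatically.

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.85  # illustrative; tune to the risk of the use case

@dataclass
class Decision:
    case_id: str
    label: int               # the model's proposed decision
    confidence: float        # the model's confidence in that decision
    needs_human_review: bool

def route(case_id: str, label: int, confidence: float) -> Decision:
    """Auto-approve only high-confidence decisions; flag the rest."""
    return Decision(case_id, label, confidence,
                    needs_human_review=confidence < REVIEW_THRESHOLD)

decisions = [route("A-101", 1, 0.97), route("A-102", 0, 0.62)]
review_queue = [d for d in decisions if d.needs_human_review]
print(f"{len(review_queue)} decision(s) waiting for human judgement")
```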

4. Outcome

What do you do with an outcome or decision that contains unfair bias? Are you able to explain the cause of this bias for future improvements? Is the decision merely exploratory, or does it immediately impact a person's life?

Transparency and explainability go hand in hand in this last phase of the decision-making process. Striving for an explainable model helps create explainable outcomes, which benefits not only the data subjects affected by these decisions but also staff, organizations, the general public, and the community working towards fair AI. As the cycle continues, a periodic review of these decisions will greatly benefit the process of eliminating unfair bias.
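One way to support that periodic review is an append-only decision log that records each outcome together with the signals that drove it. The sketch below is illustrative; the field names and file format are assumptions, not a prescribed standard.

```python
import json
from datetime import datetime, timezone

def log_decision(case_id: str, outcome: str, top_features: dict,
                 flagged_as_biased: bool = False) -> None:
    """Append one decision record so reviewers can later explain,
    contest, and periodically re-examine it."""
    record = {
        "case_id": case_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "outcome": outcome,
        "top_features": top_features,  # e.g. from a feature-importance tool
        "flagged_as_biased": flagged_as_biased,
    }
    with open("decision_log.jsonl", "a") as f:  # append-only audit trail
        f.write(json.dumps(record) + "\n")

log_decision("A-102", "rejected",
             {"years_experience": 0.41, "education": 0.22})
```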

Concluding remarks

We are still in the midst of figuring out how to eliminate unfair bias in algorithmic processes. Incorporating ethics into mathematics remains complicated, and there isn't a silver bullet for unfair bias in AI yet. But while the community is actively working on solutions, it helps to consider these four elements of the AI life cycle together and to work towards a multidisciplinary approach to fair AI.