
Biases in Artificial Intelligence

Artificial intelligence (AI) is the ability of a computer, or a computer-controlled robot, to perform tasks usually done by humans because they require human intelligence and discernment.

Machine learning (ML) training tasks seek to replicate human judgments, and those judgments may be based on existing conscious or unconscious biases. “Bias in AI” describes situations where ML-based data analytics systems discriminate against certain groups of people. It has long been a critical area of research and concern in machine learning circles, and awareness among general consumer audiences has grown over the past couple of years as knowledge of AI has spread. These biases usually reflect widespread societal biases about race, gender, biological sex, age, and culture.

There are two types of bias in AI. One is algorithmic AI bias, or data bias, where algorithms are trained using biased data. The other is societal AI bias, where our assumptions and norms as a society give us blind spots or skewed expectations in our thinking. Societal bias feeds algorithmic bias, and as algorithmic systems spread, their biased output in turn reinforces societal bias, bringing things full circle.
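To make data bias concrete, here is a minimal sketch in Python, assuming a toy dataset with a hypothetical “group” column and binary labels, that measures how positive outcomes are distributed across groups before any model is trained:

```python
# Minimal sketch: auditing a training set for data bias before fitting a model.
# The dataset, the column names ("group", "label"), and the values are
# hypothetical illustrations, not a real corpus.
from collections import defaultdict

training_data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

# Count total examples and positive labels per group.
totals, positives = defaultdict(int), defaultdict(int)
for row in training_data:
    totals[row["group"]] += 1
    positives[row["group"]] += row["label"]

# A large gap in positive-label rates suggests the labels encode a skew
# that a model trained on this data is likely to reproduce.
for group in sorted(totals):
    rate = positives[group] / totals[group]
    print(f"group {group}: positive-label rate = {rate:.2f}")
```

If the gap between groups is large, a model fitted to this data will tend to reproduce it, no matter how impartial the algorithm itself appears.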

We often hear the argument that computers are impartial. Unfortunately, that’s not the case. Upbringing, experiences, and culture shape people, who internalize certain assumptions about the world around them accordingly. AI is the same: it doesn’t exist in a vacuum but is built out of algorithms devised and tweaked by those same people, and it tends to “think” the way it has been taught.

Societal AI bias occurs when an AI behaves in ways that reflect social intolerance or institutional discrimination. It is difficult to identify and trace but exists everywhere. At first glance, the algorithms and data themselves may appear unbiased, but their output reinforces societal biases.

In 2020, Twitter apologized for a “racist” image-cropping algorithm after users discovered the feature automatically focused on white faces over Black ones. AIs trained on news articles show a bias against women. Those trained on law enforcement data show a bias against Black men. AI products show a bias against women and against applicants with foreign names. AI facial analysis technologies have higher error rates for minorities. Google’s search engine has also been implicated in plenty of societal bias.

How to Address AI Bias

‘Your AI is only as woke as you are’

Artificial intelligence has the potential to do good in the world. But when it’s built on biased data and assumptions, it can harm how people live, work, and progress through their lives. We can fight back against these biases by staying attuned to the biases of the world we live in and by challenging the assumptions that underpin the datasets we work with and the outcomes they produce.

Humans and algorithms are both fallible. No matter how much effort is put in, machine learning should be viewed with some suspicion; any human process deserves that same suspicion. Debiasing should be treated as an ongoing commitment to excellence, not a single step. That means actively looking for signs of bias, building in review processes for outlier cases, and staying up to date with advances in the machine learning field. In the current state of the art, hybrid systems that combine human judgment with artificial intelligence tend to outperform either on its own.
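As one illustration of what “actively looking for signs of bias” and a review process for outlier cases might look like in code, here is a minimal sketch; the group names, the parity threshold, and the confidence cutoff are illustrative assumptions rather than established standards:

```python
# Minimal sketch of an ongoing bias check plus a human-review hook.
# The 0.1 parity threshold and 0.6 confidence cutoff are assumptions
# chosen for illustration, not recognized standards.

def positive_rate(predictions, groups, target_group):
    """Share of positive predictions for one group."""
    hits = [p for p, g in zip(predictions, groups) if g == target_group]
    return sum(hits) / len(hits) if hits else 0.0

def parity_gap(predictions, groups):
    """Largest gap in positive-prediction rates between any two groups."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

def needs_human_review(confidence, threshold=0.6):
    """Hybrid-system hook: low-confidence cases go to a person."""
    return confidence < threshold

# Toy run with hypothetical model outputs.
preds = [1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = parity_gap(preds, groups)
if gap > 0.1:  # assumed audit threshold
    print(f"Warning: positive-rate gap of {gap:.2f} across groups; investigate.")

# Route an uncertain individual decision to a person.
print("send to human review:", needs_human_review(confidence=0.55))
```

The point of a sketch like this is that the check runs continuously alongside the model, not once at launch, reflecting debiasing as an ongoing commitment rather than a single step.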

We can start by reading widely, engaging with progressive ideas, and sharing helpful articles and research that can be used to educate others.