Bias and ethics in AI systems.

Djegnene Penyel
Dec 29, 2020

Artificial Intelligence, usually abbreviated as AI, is everywhere. And as much as I like the good it brings to my life, I cannot stop myself from thinking about the many ways AI can be misused, and about the ethical concerns its implementation can bring to our day-to-day lives.

You might think that there is no way AI can be bad. Well, there is a multitude of ways AI can be used that are harmful, and it does not need to be an over-the-top scenario like Netflix's content recommendation system plotting to overthrow the government. My goal in this article is to explain how AI can be harmful. But before doing so, we will explore the field of AI ethics.

AI Ethics.

Ethics: a primer.
Ethics is a subfield of philosophy that examines what is right and wrong. Its main goal is to create a system of moral principles that guides how individuals make decisions and lead their lives. The field of ethics is also concerned with what is good for people and for society in general.

Bias, what is it?

A bias can be described as an anomaly in the output of an AI or machine learning algorithm, one that can cause the system to discriminate against a particular group of people. It can be the result of prejudice in the training data or of assumptions made during the development of the algorithm.

There are mainly two types of AI bias: algorithmic (or data) bias and societal bias.

Algorithmic bias and data bias.

Algorithmic bias, or data bias, occurs when an AI system is trained on biased data or when its algorithm is itself biased.

In the same way that a person is the product of their past experiences and education, an artificial intelligence system is the product of its algorithms and its data. So if we train an AI system on biased data, or build an algorithm that is biased, the output it gives us will be biased, as the sketch below illustrates.
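To make that mechanism concrete, here is a minimal sketch in Python using synthetic data and scikit-learn. Everything in it is hypothetical: we bake a prejudice into the historical labels, and the trained model faithfully reproduces it.

```python
# A minimal sketch (not any real recruiting system): synthetic "hiring"
# data where historical labels favor group A, so a model trained on it
# reproduces that bias. Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(0, 1, n)            # true qualification signal
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B

# Biased historical labels: group B candidates were hired less often
# even at the same skill level (the prejudice we bake into the data).
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

print("coefficient on skill:", model.coef_[0][0])   # positive, as expected
print("coefficient on group:", model.coef_[0][1])   # negative: learned bias

# Two identical candidates who differ only in group membership
# get different hiring scores from the trained model.
same_skill = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(same_skill)[:, 1])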

Amazon’s recruiting tool as an example of algorithmic bias.

In 2014, Amazon started an AI project to automate its recruiting process. Amazon later realized that its new AI recruiting system was not rating candidates fairly: it showed bias against women. The problem came from the data used. Amazon had trained its model on historical data from the previous ten years, and because men dominated that data, the system incorrectly learned that male candidates were preferable and penalized the resumes of women candidates.

Societal bias.

Compared to algorithmic bias, societal bias is harder to spot. It occurs when an AI behaves in ways that reflect deep-rooted social discrimination or intolerance. In these cases, the algorithm and the data being used may seem unbiased, but the output the system gives will still reinforce discriminatory societal biases and practices. One common mechanism is sketched below: even when a protected attribute is removed from the data, a correlated "proxy" feature can smuggle it back in.
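Here is a minimal hypothetical sketch of that proxy effect, again in Python with synthetic data and scikit-learn. It models no real system; the proxy feature stands in for something like a zip code or a browsing-history signal.

```python
# A minimal sketch of a proxy effect (hypothetical data, no real system):
# even after dropping the protected attribute, a correlated feature lets
# the model reproduce the same disparity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, n)
# A seemingly neutral feature strongly correlated with group membership.
proxy = group + rng.normal(0, 0.3, n)
skill = rng.normal(0, 1, n)

# Historically biased outcomes, as in the earlier sketch.
outcome = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train WITHOUT the group column: the model looks "unbiased" on paper...
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, outcome)

# ...but scores still differ between groups, because the proxy carries
# the group information the model needs to reproduce the disparity.
scores = model.predict_proba(X)[:, 1]
print("mean score, group A:", scores[group == 0].mean())
print("mean score, group B:", scores[group == 1].mean())
```

The model never sees the group column, so on paper it looks fair, yet its scores still differ between groups because the proxy feature carries the same information.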

Facebook ads as an example of societal bias.

In 2019, Facebook was letting its advertisers target advertisements according to gender, race, and religion. This created societal bias: women were mainly shown job adverts for roles in nursing or secretarial work, whereas job ads for janitors and taxi drivers were mostly shown to men, in particular men from minority backgrounds.
