Bias in machine learning and artificial intelligence

Djegnene Penyel
2 min read · Aug 5, 2020

Bias can be described as a situation where an AI or machine-learning-based system discriminates in favor of or against a group of people. This discrimination usually follows our own social biases around race, gender, nationality, age, and many other factors. These biases can be the result of many things; I will write about two of them in this text.

Algorithmic bias: in the same way that humans are the product of their environment, experience, and education, AI and machine-learning systems are the product of the algorithms and data they learn from. If those datasets are biased, the output will be biased, as the small sketch below illustrates.
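
The following is a minimal sketch of my own (not from the article, and the numbers are invented) showing how a model trained on historically skewed decisions simply learns to reproduce them: in the synthetic data, "group B" was historically held to a higher bar, and the classifier picks that rule up.

```python
# Minimal illustration: a classifier trained on biased historical decisions
# reproduces the bias in its own predictions (synthetic, made-up data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)          # skill is distributed identically in both groups

# Historical decisions: group B needed a much higher skill score to be selected.
selected = (skill > np.where(group == 0, 0.0, 1.0)).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, selected)
pred = model.predict(X)

for g, name in [(0, "group A"), (1, "group B")]:
    print(name, "selection rate:", round(pred[group == g].mean(), 2))
```

Even though both groups have the same skill distribution, the model's selection rate for group B comes out far lower, because the only "ground truth" it ever saw was the biased historical decision.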

Societal AI bias: this can be seen when an AI system works in a way that reflects firmly established social intolerance or institutional discrimination. In these cases, the algorithm and the data themselves may not appear to carry any bias, but the output or the usage of the system reinforces the societal biases and discriminatory practices our society already has. Solving this type of bias can be difficult, because social biases in AI or machine-learning systems are hard to identify and trace: they are based on our own societal biases, so from our own perspective the system does not seem biased at all.

Example: in 2018, Amazon had an AI recruiting system designed to streamline the recruitment process by going through resumes and selecting the best-qualified candidates. Unfortunately, the system seemed to have a serious problem with female candidates: it replicated existing hiring practices, which meant it also replicated their biases. The AI picked up on uses of the word 'women', such as 'women's chess club captain', and marked those resumes down in its scoring. In other words, Amazon's system taught itself that male candidates were preferable; rather than helping to solve the biases present in the recruitment process, the algorithm simply automated them. Amazon confirmed that it had decided to scrap the system, which had been developed by one of its teams in 2014.

How we can try to solve the bias: to solve the bias we have in machine learning, we first have to be aware of the problem so that we can diversify the datasets we use (a small reweighting sketch follows below). We also have to become more aware of our own social biases, and finally we have to stay vigilant and spread awareness around us.
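
As one concrete illustration of "diversifying the dataset", here is a small sketch of my own (not the article's method): reweighting training samples so that an under-represented group is not drowned out by the majority group. The group labels and counts below are hypothetical.

```python
# Hypothetical sketch: give each sample a weight inversely proportional to
# its group's frequency, so minority groups count equally during training.
import numpy as np

def balancing_weights(groups: np.ndarray) -> np.ndarray:
    """Return per-sample weights inversely proportional to group frequency."""
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values, counts / len(groups)))
    return np.array([1.0 / freq[g] for g in groups])

groups = np.array(["A"] * 900 + ["B"] * 100)   # imaginary skewed dataset
weights = balancing_weights(groups)
print(round(weights[0], 2), round(weights[-1], 2))  # group A ~1.11, group B ~10.0
# These weights can then be passed as sample_weight to most scikit-learn
# estimators' .fit() method.
```

Reweighting (or resampling) only addresses imbalance in the data; it does not remove bias baked into the labels themselves, which is why awareness and vigilance remain necessary.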

Source and other articles to see:

www.logically.ai
