Machine Bias: A Survey of Issues
Abstract
Some recent applications of Artificial Intelligence, particularly machine learning, have been strongly criticised in the general media and professional literature. Applications in the domains of justice, employment and banking are often mentioned in this respect. The main criticism is that these applications are biased with respect to so-called protected attributes, such as race, gender and age. The most notorious example is the COMPAS system, which is still in use in the American justice system despite severe criticism. The aim of our paper is to analyse the trends of discussion about bias in machine learning algorithms, using COMPAS as an example. The main problem we observed is that even in the field of AI there is no generally agreed-upon definition of bias that would enable its operational use in preventing bias. Our conclusions are that (1) improved general education concerning AI is needed to enable better understanding of AI methods in everyday applications, and (2) better technical methods must be developed for reliably implementing generally accepted societal values, such as equality and fairness, in AI systems.
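
To make the definitional problem concrete, consider the following minimal sketch in Python (the data, group sizes and predictions are synthetic illustrations, not taken from COMPAS or from the paper). It shows how two widely used formalizations of fairness, demographic parity and equality of error rates (Hardt et al., 2016), can pass different verdicts on the very same classifier output:

# A minimal sketch (synthetic, hypothetical data) of how two
# formalizations of fairness can disagree on the same classifier
# output when base rates differ across groups.

import numpy as np

# Group A: 6 of 10 individuals reoffend; group B: 4 of 12.
# y_true = actual outcome (1 = reoffended), y_pred = "high risk" flag.
y_true_a = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
y_pred_a = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0])
y_true_b = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
y_pred_b = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0])

def flag_rate(y_pred):
    # Demographic parity compares the overall share flagged as high risk.
    return y_pred.mean()

def error_rates(y_true, y_pred):
    # Equalized odds (Hardt et al., 2016) instead compares mistakes
    # within the actual positives and actual negatives separately.
    tpr = y_pred[y_true == 1].mean()  # true positive rate
    fpr = y_pred[y_true == 0].mean()  # false positive rate
    return tpr, fpr

print(flag_rate(y_pred_a), flag_rate(y_pred_b))  # 0.7 vs 0.5
print(error_rates(y_true_a, y_pred_a))           # (1.0, 0.25)
print(error_rates(y_true_b, y_pred_b))           # (1.0, 0.25)

By the error-rate criterion the classifier treats both groups identically (equal true and false positive rates), yet it violates demographic parity (70% of group A is flagged versus 50% of group B) simply because the base rates differ; Kleinberg et al. (2016) show that such conflicts between seemingly reasonable fairness criteria are, in general, unavoidable.
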
References
Alelyani, S. (2021). Detection and Evaluation of Machine Learning Bias. Applied Sciences, 11(14). https://doi.org/10.3390/app11146271.
Angwin, J., Larson, J., Mattu, S. & Kirchner, L. (2016). Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks. ProPublica.
Artificial Intelligence Act, European Parliament, 14 June 2023; current updated unofficial version January 2024.
Berk, R., Heidari, H., Jabbari, S., Kearns, M. & Roth, A. (2021). Fairness in Criminal Justice Risk Assessments: The State of the Art. Sociological Methods & Research, 50(1), 3-44. https://doi.org/10.1177/0049124118782533.
Blanzeisky, W. & Cunningham, P. (2021). Algorithmic Factors Influencing Bias in Machine Learning. arXiv preprint. https://doi.org/10.48550/arXiv.2104.14014.
Cestnik, B., Kononenko, I. & Bratko, I. (1987). ASSISTANT 86: A knowledge-elicitation tool for sophisticated users. In: Bratko, I. & Lavrač, N. (eds.), Progress in Machine Learning: Proc. of the European Working Session on Learning EWSL 87. Sigma Press, 31-45.
Chakraborty, J., Majumder, S. & Menzies, T. (2021). Bias in Machine Learning Software: Why? How? What to do? arXiv preprint. https://doi.org/10.48550/arXiv.2105.12195.
Corbett-Davies, S., Gaebler, J. D., Nilforoshan, H., Shroff, R. & Goel, S. (2018). The Measure and Mismeasure of Fairness. arXiv preprint. https://doi.org/10.48550/arXiv.1808.00023.
Courtland, R. (2018). Bias detectives: the researchers striving to make algorithms fair. Nature, 558, 357-360. https://doi.org/10.1038/d41586-018-05469-3.
Dastin, J. (11.10.2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.
Dressel, J. & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1). https://doi.org/10.1126/sciadv.aao5580.
Flores, A. W., Bechtel, K. & Lowenkamp, C. T. (2016). False Positives, False Negatives, and False Analyses: A Rejoinder to “Machine Bias: There’s Software Used Across the Country to Predict Future Criminals And It’s Biased Against Blacks.” Federal Probation Journal, 80(2).
Gordon, D. F. & Desjardins, M. (1995). Evaluation and Selection of Biases in Machine Learning. Machine Learning, 20, 5-22. https://doi.org/10.1023/A:1022630017346.
Hardt, M., Price, E. & Srebro, N. (2016). Equality of Opportunity in Supervised Learning. arXiv preprint. https://doi.org/10.48550/arXiv.1610.02413.
Hellström, T., Dignum, V. & Bensch, S. (2020). Bias in Machine Learning – What is it Good for? arXiv preprint. https://doi.org/10.48550/arXiv.2004.00686.
Holsinger, A. M., Lowenkamp, C. T., Latessa, E. J., Serin, R., Cohen, T. H., Robinson, C. R., Flores, A. W. & VanBenschoten, S. W. (2018). A Rejoinder to Dressel and Farid: New Study Finds Computer Algorithm Is More Accurate Than Humans at Predicting Arrest and as Good as a Group of 20 Lay Experts. Federal Probation Journal, 82(2), 51-56.
Hüllermeier, E., Fober, T. & Mernberger, M. (2013). Inductive Bias. Encyclopedia of Systems Biology. https://doi.org/10.1007/978-1-4419-9863-7_927.
Kleinberg, J., Mullainathan, S. & Raghavan, M. (2016). Inherent Trade-Offs in the Fair Determination of Risk Scores. arXiv preprint. https://doi.org/10.48550/arXiv.1609.05807.
Levin, S. (8.9.2016). A beauty contest was judged by AI and the robots didn’t like dark skin. The Guardian. https://www.theguardian.com/technology/2016/sep/08/artificial-intelligence-beauty-contest-doesnt-like-black-people.
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K. & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 54(6), 1-35. https://doi.org/10.1145/3457607.
Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M. E. … Staab, S. (2020). Bias in data-driven artificial intelligence systems – An introductory survey. WIREs Data Mining and Knowledge Discovery, 10(3). https://doi.org/10.1002/widm.1356.
Rudin, C. (2019). Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence, 1(5), 206-215. https://doi.org/10.1038/s42256-019-0048-x.
Spielkamp, M. (2017). Inspecting Algorithms for Bias. MIT Technology Review, July 2017.
Reuters Staff (7.12.2016). New Zealand passport robot tells applicant of Asian descent to open eyes. Reuters. https://www.reuters.com/article/us-newzealand-passport-error-idUSKBN13W0RL.
Sun, W., Nasraoui, O. & Shafto, P. (2020). Evolution and impact of bias in human and machine learning algorithm interaction. PLoS ONE, 15(8). https://doi.org/10.1371/journal.pone.0235502.
UNESCO Recommendation on the Ethics of Artificial Intelligence, 2021. https://unesdoc.unesco.org/ark:/48223/pf0000381137.
Yu, H., Shen, Z., Miao, C., Lesser, V. R. & Yang, Q. (2018). Building Ethics into Artificial Intelligence. arXiv preprint. https://doi.org/10.48550/arXiv.1812.02953.
DOI: https://doi.org/10.31449/inf.v48i2.5971
License
Authors retain copyright in their work. By submitting to and publishing with Informatica, authors grant the publisher (Slovene Society Informatika) the non-exclusive right to publish, reproduce, and distribute the article and to identify itself as the original publisher.
All articles are published under the Creative Commons Attribution license CC BY 3.0. Under this license, others may share and adapt the work for any purpose, provided appropriate credit is given and changes (if any) are indicated.
Authors may deposit and share the submitted version, accepted manuscript, and published version, provided the original publication in Informatica is properly cited.