Algorithmic Discrimination: Problem Areas, Mitigation, and Legal Discussion

Author: Öznur Uğuz, MSc in European Economy and Business Law at the Tor Vergata University of Rome, 2021-2023

Legal Editor: Bader Kabbani, LLM International Commercial and Economic Law, SOAS, University of London, 2020-2021

Abstract

Recent advancements in technology and mass-scale digitalisation have made the use of algorithms in decision-making possible. Algorithms have already been adopted by private organisations and public bodies to make decisions in recruiting, advertising, policing, and sentencing. Given that algorithms do not have personal preferences or prejudices, algorithmic decision-making is often assumed to be free from bias. However, algorithmic systems can be biased by virtue of who develops them and how they are used, and can thereby give rise to discrimination. This article first explains how and in what contexts algorithmic discrimination occurs, and then discusses potential ways to mitigate it, with a particular focus on non-discrimination law.

What Is Algorithmic Discrimination and How Does It Occur?

While machines are often assumed to make smarter and more objective decisions, algorithmic decision-making can be impaired by biases, resulting in discrimination against certain groups of people. Algorithmic bias can arise from human bias in the design process of an algorithm or from discriminatory data fed to it. Since it is humans who design an algorithm and determine the interpretations, prioritisations and exclusions it contains, their personal biases may well trickle into the algorithm, whether consciously or not. It is also possible for a dataset to under-represent a particular group or class, which can cause an algorithmic system trained on that dataset to disregard or mistreat that group in its decision-making. Once biases enter an algorithmic system, the system will reflect them in its decisions, resulting in the discrimination of certain groups, classes or ethnicities.
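
To make this mechanism concrete, the following simplified sketch (with entirely hypothetical data and a deliberately naive "model") shows how a dataset that under-represents one group can skew an automated decision rule against that group, even when members of both groups are equally qualified.

```python
# Minimal sketch (hypothetical data and model): how under-representation in
# training data can skew an automated decision rule against the minority group.
import random

random.seed(0)

def applicant(group):
    # Two groups of equally qualified applicants, whose measured "signal"
    # (e.g. CV wording, schools attended) happens to cluster differently.
    centre = 0.0 if group == "A" else 1.0
    return {"group": group, "signal": random.gauss(centre, 0.5), "qualified": True}

# Historical data: group A heavily over-represented among past accepted applicants.
training = [applicant("A") for _ in range(950)] + [applicant("B") for _ in range(50)]

# A naive "model": learn the typical profile of past accepted applicants and
# accept new applicants whose signal is close to it.
typical = sum(p["signal"] for p in training) / len(training)
accept = lambda p: abs(p["signal"] - typical) < 0.6

# Evaluate on a balanced set of equally qualified new applicants.
test = [applicant("A") for _ in range(1000)] + [applicant("B") for _ in range(1000)]
for g in ("A", "B"):
    rate = sum(accept(p) for p in test if p["group"] == g) / 1000
    print(f"acceptance rate, group {g}: {rate:.2f}")
# Group B is accepted far less often despite equal qualification, purely
# because the training data under-represented it.
```

The specific numbers are irrelevant; the point is that the rule simply learned what past, unrepresentative data looked like.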

Problem Areas

As the application areas of algorithms expand, the negative effects of algorithmic bias extend to new spheres and environments we interact with in our daily lives. Algorithms are already being used in the criminal justice systems of some countries to make decisions about risk assessment, detention, and sentencing. While algorithms are marketed in the criminal justice context as more likely to make objective decisions, they, too, can reproduce class-based and race-based inequalities. This is because algorithms usually base their predictions about a criminal defendant or offender on past offences and ex-offenders that share similar features and characteristics with the current case and its subject. A further example of algorithmic discrimination in the criminal justice context comes from facial recognition technology. Facial recognition systems are often trained on datasets that are disproportionately white and male, which results in the systems learning to treat these traits as the norm. The use of these technologies in law enforcement therefore creates a risk of injustice for women and minorities, who are far more likely than white men to be misidentified as a threat.
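
This disparity only becomes visible when error rates are broken down by demographic group. The short sketch below, using entirely hypothetical evaluation counts (not drawn from any real study), shows how disaggregated false-match rates expose what a single aggregate accuracy figure would hide.

```python
# Illustrative sketch with made-up evaluation counts: aggregate accuracy can look
# acceptable while the false-match rate (being wrongly "matched" to a suspect)
# differs sharply between demographic groups.
evaluation = {
    # group: wrongly matched faces out of the non-matching faces tested
    "group 1": {"false_matches": 8,   "faces_tested": 1000},
    "group 2": {"false_matches": 31,  "faces_tested": 1000},
    "group 3": {"false_matches": 104, "faces_tested": 1000},
}

for group, counts in evaluation.items():
    rate = counts["false_matches"] / counts["faces_tested"]
    print(f"{group:8s} false-match rate: {rate:.1%}")

# Reporting only the overall rate ((8 + 31 + 104) / 3000 ≈ 4.8%) would mask the
# fact that group 3 is misidentified roughly thirteen times as often as group 1.
```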

Algorithms are also used in policing to predict where, when, and by whom crimes might be committed in the future. This practice, known as predictive policing, uses algorithmic analysis of data from past arrests to determine where police should patrol in the future. The practice, however, may lead to an unfair focus on particular groups that are overrepresented in the criminal justice system due to the over-policing of their communities. Since the data fed to the algorithm will mostly relate to those over-policed minorities, the algorithm will tell the police to keep focusing on those groups, perpetuating discrimination and inequality. The ambiguity of how these algorithms work makes it difficult for individuals who are negatively affected by algorithmic decisions to challenge them.
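
The feedback loop at the heart of this problem can be sketched in a few lines of code. The example below uses invented numbers and a deliberately crude allocation rule: patrols are sent mainly to the district with the most recorded incidents, and only crime that patrols encounter is recorded, so an initial skew in the data is locked in and amplified.

```python
# Minimal feedback-loop sketch (hypothetical numbers): patrols follow recorded
# incidents, but incidents are only recorded where patrols are present.
true_rate = {"district_1": 0.05, "district_2": 0.05}   # identical underlying crime rates
recorded  = {"district_1": 60,   "district_2": 40}     # historical skew from past over-policing
population = 10_000

for year in range(1, 6):
    top = max(recorded, key=recorded.get)
    # "Predictive" allocation: most patrols go to the district with the most records.
    patrol_share = {d: (0.8 if d == top else 0.2) for d in recorded}
    for d in recorded:
        # Only crime that patrols actually encounter enters next year's data.
        recorded[d] += true_rate[d] * population * patrol_share[d]
    total = sum(recorded.values())
    print(year, {d: f"{recorded[d] / total:.0%}" for d in recorded})
# District 1's share of recorded crime climbs from 60% toward 80% even though the
# true rates are identical, so the algorithm keeps directing patrols there.
```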

In commercial and civil contexts, algorithmic discrimination is even more prominent, as there is wider discretion and less transparency in the use of algorithms for decision-making. In employment, algorithms are increasingly being used to select job applicants and to determine the salaries and working conditions of employees. The criteria algorithms use when making employment decisions are usually set based on the features of current and past employees. As a result, sensitive information about past and current employees, such as race and gender, is embedded in the datasets used to train algorithms, even if it is not intentionally collected. This means that if a particular class or group is overrepresented in a database, an algorithm fed that data is likely to decide that the ideal candidate is a person who belongs to that category. Thus, unless an algorithm is explicitly designed to avoid discrimination, it can reproduce bias in the employment decisions it makes, reflecting existing inequalities in society.
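
The following sketch (hypothetical data; the "club attendance" feature is a stand-in for any innocuous-looking proxy such as a postcode or a phrase on a CV) illustrates how a screening rule can reproduce a historical gender skew even though gender itself is never an input.

```python
# Hypothetical sketch: even with the protected attribute removed from the data,
# a correlated proxy feature lets a screening rule reproduce the historical skew.
import random
random.seed(1)

def candidate(gender):
    # "attended_club_x" correlates with gender in this invented population.
    return {"attended_club_x": random.random() < (0.7 if gender == "m" else 0.2),
            "gender": gender}

# Past hires reflect a historically male-dominated workforce.
past_hires = [candidate("m") for _ in range(90)] + [candidate("f") for _ in range(10)]

# The screening rule never sees gender: it simply requires the feature value
# that was most common among past hires.
club_rate_among_hires = sum(c["attended_club_x"] for c in past_hires) / len(past_hires)
majority_value = club_rate_among_hires > 0.5
shortlist = lambda c: c["attended_club_x"] == majority_value

applicants = [candidate("m") for _ in range(500)] + [candidate("f") for _ in range(500)]
for g in ("m", "f"):
    rate = sum(shortlist(c) for c in applicants if c["gender"] == g) / 500
    print(f"shortlisting rate ({g}): {rate:.0%}")
# Men are shortlisted roughly 70% of the time and women roughly 20%,
# even though gender was never an input to the rule.
```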

In the digital sphere, automated tools have increasingly been adopted in the digital advertising industry for targeting audiences and delivering advertisements. Although this has made it easier for advertisers to reach customers interested in their products and services, it has exacerbated discriminatory and harmful outcomes and reinforced societal biases. Researchers and civil rights groups have found that some audiences, including black people and women, are excluded from seeing specific ads because of potentially unlawful choices made by advertisers or automated systems they use.

Mitigation of Algorithmic Discrimination and Legal Discussion

Algorithmic bias can be mitigated in a number of ways, one of which is increasing transparency. Algorithmic systems are often black boxes, protected by trade secrets, intellectual property rights, or firms’ terms and conditions. Such opacity makes it difficult to detect discriminatory decisions and their causes, mitigate their negative consequences, and prevent future instances of discrimination. Mitigating algorithmic bias requires greater transparency from bodies using algorithmic decision-making about their data selection and decision-making processes, so that algorithmic systems can be monitored and instances of unjust bias called out. Another consideration for mitigating algorithmic bias is enhancing diversity in the technology industry, which lacks sufficient numbers of women and people of colour. Since human bias in algorithms is rooted in the internal biases of the individuals who create and train them, a more diverse and representative set of individuals might help eliminate human bias.
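
As an illustration of what such transparency could enable in practice, the sketch below computes group-level selection rates from a hypothetical decision log and flags large gaps. The 80% threshold is borrowed from the "four-fifths rule" used in US employment-selection guidance and serves here only as an example benchmark, not as a legal standard applicable in every context.

```python
# Illustrative audit sketch: a per-group selection-rate report with a simple
# disparity flag, computable from a disclosed decision log.
from collections import defaultdict

def audit(decisions, threshold=0.8):
    """decisions: iterable of (group, selected: bool) pairs."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        selected[group] += ok
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    # Flag any group whose selection rate is below 80% of the best-treated group's.
    return {g: {"rate": round(r, 2), "flag": r / best < threshold} for g, r in rates.items()}

# Hypothetical decision log an operator could be required to disclose.
log = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
    + [("group_b", True)] * 25 + [("group_b", False)] * 75
print(audit(log))
# {'group_a': {'rate': 0.6, 'flag': False}, 'group_b': {'rate': 0.25, 'flag': True}}
```

Requiring operators to disclose decision logs, or at least group-level statistics of this kind, would make disparities like the one flagged above visible to regulators and affected individuals.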

Legal instruments are also crucial mechanisms for fighting algorithmic bias. The main legal instrument that can be applied in the context of algorithmic decision-making is non-discrimination law. Discrimination is prohibited by many constitutions and human rights treaties. In the European context, while there is still no EU non-discrimination law that specifically addresses algorithmic discrimination, the general principle of non-discrimination is protected under EU law. Under Article 2 of the Treaty on European Union, respect for human dignity, equality, and respect for human rights are founding values of the European Union. The European Court of Justice confirmed in Mangold that non-discrimination constitutes a general principle of EU law.

Article 21 of the Charter of Fundamental Rights of the European Union prohibits any discrimination on grounds such as sex, race, colour, ethnic or social origin, disability, religion or sexual orientation. Discrimination on such grounds, including sex, race, colour, language, religion, opinion, origin, and status, is also banned under Article 14 of the European Convention on Human Rights. In addition, various pieces of secondary EU legislation address discrimination on individual grounds, such as the Racial Equality Directive (2000/43/EC), the Gender Equality Directive (Recast) (2006/54/EC), and the Employment Equality Directive (2000/78/EC). However, none of these directly addresses algorithmic discrimination.

In the context of US law, the Civil Rights Act of 1964 (CRA) contains a general prohibition against all types of discrimination. Title VI of the CRA prohibits discrimination on the basis of race, colour, or national origin in programs or activities that receive federal funds or other federal financial assistance. This provision might apply to state-funded hospitals that use algorithms which perform in a discriminatory manner due to algorithmic bias, given that such hospitals are likely to fall under the category of “federally funded programs or activities.”

In the employment sphere, Title VII of the CRA forbids employers from failing or refusing to hire a job candidate because of their race, colour, religion, sex, or national origin, and provides compensation for those who suffer such “disparate treatment.” In terms of algorithmic decision-making, the provision might be interpreted to mean that algorithmic systems must not treat protected characteristics or stereotypical patterns as deciding factors and must make impartial decisions uninfluenced by such factors.

A more specific example of non-discrimination law in the US legal system is the Algorithmic Accountability Act, introduced as companion Senate and House bills on April 10, 2019. Under the bill, public and private bodies using algorithmic decision-making would be required to conduct impact assessments on automated decision-making systems considered “high-risk” in order to verify whether they comply with the general principles of accuracy, fairness, privacy and security. The bill would also require organisations to work with independent third parties and to record any bias or data security threat identified through impact assessments. The Algorithmic Accountability Act would operate at the federal level, meaning that private operators would also have to comply with any applicable state law on the subject.

At the state level, there is a growing trend toward regulating the use of algorithms. A notable example is the Stop Discrimination by Algorithms Act of 2021 (SDAA), introduced in December 2021 in the Council of the District of Columbia. The bill proposes to prohibit organisations from using algorithms that generate discriminatory results and thereby prevent access to critical opportunities such as employment and insurance. The SDAA also aims to enhance transparency around personal data collection and algorithmic decision-making processes. Other states that have introduced laws concerning algorithmic decision-making include New Jersey, which introduced the New Jersey Algorithmic Accountability Act, and the State of Washington, which proposed two laws providing protection against algorithmic discrimination by government entities.

Conclusion

In today’s digital age, algorithms are a significant part of our lives, and their decision-making powers are increasing as technology develops. Algorithms can compound existing discrimination. However, with the right mindset and adequate regulation, these effects could be mitigated, and algorithms could even be used as a means to fight discrimination. Legal instruments have an important role in both the prevention and detection of algorithmic discrimination. While there are currently few legal frameworks that specifically deal with algorithmic discrimination, significant progress has been made in that regard, particularly in the context of US law. Nevertheless, increasing transparency around algorithmic decision-making processes and working towards a more diverse technology industry are imperative for a more permanent solution to the issue.

References

Bacchi, U, ‘AI bias: How do algorithms perpetuate discrimination?’ (Thomson Reuters Foundation, 18 June 2021) < https://news.trust.org/item/20210618133831-21l6r > date accessed 19 February 2023

Borgesius, F, ‘Strengthening legal protection against discrimination by algorithms and artificial intelligence’ (The International Journal of Human Rights, 25 March 2020) <https://www.tandfonline.com/doi/full/10.1080/13642987.2020.1743976> date accessed 19 February 2023

Capuzzo, G, ‘A Comparative Study on Algorithmic Discrimination between Europe and North-America’ (Italian Equality Network, 1 January 2022) < https://www.italianequalitynetwork.it/a-comparative-study-on-algorithmic-discrimination-between-europe-and-north-america/ > date accessed 19 February 2023

Dave, P, ‘IBM explores AI tools to spot, cut bias in online ad targeting’ (Reuters, 24 June 2021) date accessed 19 February 2023

Freeman, T; McKain, A, ‘The Legal Implications of Algorithmic Decision-Making’ (The Nebraska Lawyer, May 2020) < https://www.researchgate.net/publication/341921002_The_Legal_Implications_of_Algorithmic_Decision-Making > date accessed 19 February 2023

Gerards, J; Xenidis, R, ‘Algorithmic discrimination in Europe: Challenges and Opportunities for EU equality law’ (European Futures, 3 December 2020) < https://www.europeanfutures.ed.ac.uk/algorithmic-discrimination-in-europe-challenges-and-opportunities-for-eu-equality-law/ > date accessed 19 February 2023

Jackson, M, ‘Artificial Intelligence & Algorithmic Bias: The Issues With Technology Reflecting History & Humans’ (umaryland.edu, 2021) date accessed 19 February 2023

Kossow, N; Windwehr, S; Jenkins, M, ‘Algorithmic transparency and accountability’ (Transparency International, 5 February 2021) < https://knowledgehub.transparency.org/assets/uploads/kproducts/Algorithmic-Transparency_2021.pdf > date accessed 19 February 2023

LibertiesEU, ‘Algorithmic Bias: Why and How Do Computers Make Unfair Decisions?’ (liberties.eu, 18 May 2021) date accessed 19 February 2023

Najibi, A, ‘Racial Discrimination in Face Recognition Technology’ (Science in the News, harvard.edu, 24 October 2020) date accessed 19 February 2023

Orwat, C, ‘Risks of Discrimination through the Use of Algorithms’ (Federal Anti-Discrimination Agency, September 2019) < https://www.antidiskriminierungsstelle.de/EN/homepage/_documents/download_diskr_risiken_verwendung_von_algorithmen.pdf?__blob=publicationFile&v=1 > date accessed 19 February 2023

Photopoulos, J, ‘Fighting algorithmic bias in artificial intelligence’ (physicsworld, 4 May 2021)  <https://physicsworld.com/a/fighting-algorithmic-bias-in-artificial-intelligence/> date accessed 19 February 2023

Singh, S, ‘In the Absence of Federal Regulation, State and Local Movements are Pushing for Algorithmic Accountability’ (New America, 8 June 2022) < https://www.newamerica.org/oti/blog/in-the-absence-of-federal-regulation-state-and-local-movements-are-pushing-for-algorithmic-accountability/ > date accessed 19 February 2023

Turner Lee, N; Resnick, P; Barton, G, ‘Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms’ (Brookings, 22 May 2019) < https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/ > date accessed 19 February 2023

Williams, B; Brooks, C; Shmargad, Y, ‘How Algorithms Discriminate Based on Data They Lack: Challenges, Solutions, and Policy Implications’ (Journal of Information Policy, 2018) < https://www.jstor.org/stable/pdf/10.5325/jinfopoli.8.2018.0078.pdf > date accessed 19 February 2023

 

This article is written within the Academic Essay Project (AEP) organised by LAWELS. AEP aims to increase the number of quality academic writings on legal topics, encourage young lawyers to participate in academic writing, and lay the foundation of an online database on legal science. The team of legal editors and legal writers share their knowledge through high-end essays that we are publishing on our website and social media accounts for the world to read and learn from.

The articles on the LAWELS platform are not, nor are they intended to be legal advice. You should consult a lawyer for individual advice or assessment regarding your own situation. The article only reflects the views of the author.