How to Prevent AI Systems from Perpetuating Existing Biases

Bias can be introduced into algorithms in a variety of ways. AI systems learn to make decisions from training data, which may encode biased human decisions or reflect historical and social inequalities, even when sensitive variables such as gender, race or sexual orientation are removed. Amazon, for example, stopped using a hiring algorithm after discovering that it favored candidates who used words such as “executed” or “captured”, which appeared more often on men's resumes. Another source of bias is faulty data sampling, in which some groups are over- or underrepresented in the training data.
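One way to catch the sampling problem early is to compare each group's share of the training sample against its share of the population the model will actually serve. The sketch below is a minimal illustration of that check, assuming a hypothetical pandas DataFrame with made-up "group" and "label" columns and an assumed reference distribution; the 10% threshold is arbitrary.

```python
import pandas as pd

# Hypothetical training sample; the column names and values are
# illustrative assumptions, not data from any real system.
train = pd.DataFrame({
    "group": ["A", "A", "A", "A", "A", "A", "A", "A", "B", "B"],
    "label": [1,   0,   1,   1,   0,   1,   0,   1,   0,   0],
})

# Share of each group in the training sample...
sample_share = train["group"].value_counts(normalize=True)

# ...compared with the share expected in the population the model will serve.
population_share = pd.Series({"A": 0.5, "B": 0.5})

representation_gap = (sample_share - population_share).abs()
print(representation_gap)

# Flag badly under- or over-represented groups before any training happens.
if (representation_gap > 0.10).any():
    print("Warning: sampling imbalance; consider re-sampling or re-weighting.")
```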

For example, Joy Buolamwini of MIT, in collaboration with Timnit Gebru, found that facial analysis technologies had higher error rates for minorities, and in particular for minority women, likely because the training data was not representative. The data industry can begin to mitigate bias by viewing AI systems through the lens of a manufacturing process. Machine learning systems take in data (raw materials), process it (work in progress), make decisions or predictions, and generate analysis (finished products). We call this process flow a “data factory”, and like any other manufacturing process it must be subject to quality controls.
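One way to make that “finished product” inspection concrete is to break the model's error rate down by demographic group, in the spirit of the Buolamwini and Gebru audit. This is only a rough sketch under assumed column names and made-up evaluation data, not a full quality-control suite.

```python
import pandas as pd

# Hypothetical evaluation set: true labels, model predictions, and a
# demographic group column (names and values are illustrative assumptions).
results = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "B", "B", "B"],
    "y_true": [1,   0,   1,   1,   1,   0,   1,   0],
    "y_pred": [1,   0,   1,   0,   0,   0,   1,   1],
})

# "Finished product" quality check: error rate broken down by group.
error_by_group = (
    results.assign(error=lambda d: (d["y_true"] != d["y_pred"]).astype(int))
           .groupby("group")["error"]
           .mean()
)
print(error_by_group)

# A large gap between groups is a defect signal, much like a failed
# tolerance check on a factory line.
print("max gap:", error_by_group.max() - error_by_group.min())
```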

The data industry must treat AI bias as a quality problem. Even if none of an algorithm's authors hold biased views, bias can still slip through when they fail to evaluate the historical data set for problems and, where problems exist, correct them. We also describe a set of self-regulatory best practices, such as developing a bias impact statement, applying inclusive design principles, and building cross-functional work teams. Algorithm operators should therefore not rule out the possibility or prevalence of bias: they should seek a diverse workforce to develop the algorithm, integrate inclusive design into their products, and employ “diversity in design”, taking deliberate and transparent measures to ensure that cultural biases and stereotypes are directly and adequately addressed.

Finally, using AI to improve decision-making can benefit traditionally disadvantaged groups, delivering what researchers Jon Kleinberg, Sendhil Mullainathan and others call the “disparate benefits” of better prediction. Formal, regular auditing of algorithms is another good practice for detecting and mitigating bias. Organizations will need to stay up to date on how and where AI can improve equity, and where AI systems have run into problems. Algorithm operators and policymakers constitute the audience for the mitigation proposals presented in this document, because they create, license, and distribute algorithms, or are tasked with regulating or legislating algorithmic decision-making to reduce discriminatory intent or effects.
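As a sketch of what one audit metric might look like in practice, the snippet below computes a disparate impact ratio: the selection rate of the least-favored group divided by that of the most-favored group. The counts are made up, and the 0.8 cutoff simply follows the informal “four-fifths rule”; a real audit would examine many more metrics than this.

```python
# Hypothetical approval counts per group; all numbers are illustrative.
selection_rate = {
    "group_a": 45 / 100,  # 45 of 100 group-A applicants approved
    "group_b": 27 / 100,  # 27 of 100 group-B applicants approved
}

# Disparate impact ratio: lowest selection rate over highest selection rate.
ratio = min(selection_rate.values()) / max(selection_rate.values())
print(f"Disparate impact ratio: {ratio:.2f}")

if ratio < 0.8:
    print("Potential adverse impact; schedule a deeper review of the model.")
```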

Left unchecked, biased algorithms can lead to decisions with a collective, disparate impact on certain groups of people, even when the programmer has no intention to discriminate. Because of historical racism, disparities in policing practices, and other inequalities within the criminal justice system, those realities are reflected in training data and then used to suggest whether an accused person should be detained. In the Amazon case, the software ended up penalizing any resume that included the word “women's” and downgrading the resumes of women who attended women's colleges, producing gender bias. In recent years, society has begun to ask to what extent these human biases can seep into artificial intelligence systems with harmful results.

Natural language processing (NLP), the branch of AI that helps computers understand and interpret human language, has been found to exhibit racial, gender and disability biases. When deciding to build and commercialize algorithms, the ethics of possible outcomes must be taken into account, especially in areas where governments, civil society or policymakers see the potential for harm, and where there is a risk of perpetuating existing biases or making protected groups more vulnerable to existing social inequalities. To ensure that AI systems do not perpetuate existing biases or create new ones in decision-making processes, it is essential that organizations take proactive steps to mitigate bias in their algorithms. This includes viewing AI systems from a manufacturing perspective and treating AI bias as a quality problem; developing self-regulatory best practices such as bias impact statements; employing inclusive design principles; creating cross-functional work teams; conducting formal audits; staying up to date on how AI can improve equity; and weighing ethical considerations when creating algorithms.
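One simple way such NLP biases are probed in practice is by measuring how strongly occupation words line up with a gendered direction in a word-embedding space. The sketch below uses tiny made-up vectors purely for illustration; a real check would load trained embeddings and a much larger word list.

```python
import numpy as np

# Toy word vectors; the numbers are invented solely to illustrate the idea.
vectors = {
    "he":       np.array([ 1.0, 0.1, 0.0]),
    "she":      np.array([-1.0, 0.1, 0.0]),
    "engineer": np.array([ 0.7, 0.6, 0.2]),
    "nurse":    np.array([-0.6, 0.7, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Project occupation words onto a he-she "gender direction"; a score far
# from zero suggests the embedding associates the job with one gender.
gender_direction = vectors["he"] - vectors["she"]
for word in ("engineer", "nurse"):
    print(f"{word}: {cosine(vectors[word], gender_direction):+.2f}")
```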

John Dee

John Dee is the man behind The Ai Buzz, your one-stop online resource for all things related to artificial intelligence. He's been fascinated by AI since its early days and has made it his life's mission to educate people about this incredible technology. John is a witty guy with a sharp sense of humor. He's also an encyclopedia of knowledge when it comes to AI, and he loves nothing more than sharing his insights with others. He's passionate about helping people understand AI and its potential impact on the world.