The Intersection of Human Bias and Artificial Intelligence

Understanding how and why AI reflects human bias like a mirror helps shape balanced AI regulatory governance

SNU AI
SNU AIIS Blog


by Sue Hyun Park

AI is changing many of our decision-making processes. It is involved in hiring and promotion practices, credit scoring, and life-critical actions in autonomous driving. But if a decision-making AI causes harm, some form of normative intervention becomes necessary.

A study of a risk assessment software used in criminal sentencing alarmed society about the perils of machine bias. Developed by Northpointe (now Equivant), COMPAS is used in many jurisdictions across the U.S. to assess recidivism risk, the likelihood that a convicted criminal will reoffend. COMPAS assigns each defendant one of 10 risk levels based on 137 factors, including the defendant’s age, gender, and criminal history; the court then weighs the predicted risk score in drawing the final sentence. In 2016, ProPublica, a nonprofit organization that produces investigative journalism, claimed that COMPAS was biased against black defendants. Their report was unsettling because it revealed how algorithms can conflict with our standards of justice, aggravating the negative outcomes of unfairness. According to their analysis of data on Broward County’s criminal defendants, COMPAS’s risk scores were distributed differently for black and white defendants.

Scores for white defendants (Right) were skewed toward lower-risk categories while scores for black defendants (Left) were not. (Source: ProPublica)

The risk score distribution gives the impression that structural racism has been woven into the COMPAS system. Northpointe retorted that ProPublica had not taken into account the different base rates of recidivism for black and white defendants; in fact, the scores of black defendants were well aligned with the norm group used in Broward County. To put it another way, the algorithm was not unfair, since it only reflected biased outcomes that already exist in reality.
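
To see the shape of Northpointe’s argument, consider a toy calculation (the numbers below are illustrative, not Broward County data): a score that is perfectly calibrated, meaning it tracks each group’s true reoffense rate, will still produce a higher-skewed score distribution for the group with the higher base rate.

```python
# Toy illustration of the base-rate argument (hypothetical numbers).
# A calibrated score with no group-specific unfairness still yields
# different score distributions when the groups' base rates differ.
import numpy as np

rng = np.random.default_rng(0)

def calibrated_scores(base_rate, n=10_000):
    # Outcomes follow the group's base rate; scores are noisy
    # estimates centered on the true outcome.
    reoffends = rng.random(n) < base_rate
    return np.clip(reoffends + rng.normal(0, 0.35, n), 0, 1)

# Assumed base rates of 0.3 and 0.5 (illustrative only).
low, high = calibrated_scores(0.3), calibrated_scores(0.5)
print("mean score at base rate 0.3:", round(float(low.mean()), 2))
print("mean score at base rate 0.5:", round(float(high.mean()), 2))
```

The higher-base-rate group ends up with visibly higher scores even though the score treats every individual identically, which is exactly the disparity-versus-calibration tension at the heart of the dispute.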

“Even if AI decision-making is biased, the algorithm is merely a representation of reality as it is, so it is not normatively problematic.”

This statement highlights the “bias mirror problem”, a view that AI decision-making reflects human bias like a mirror.

We are in a transition to a world with advanced AI systems, and it is imperative to delve into the bias mirror problem. In our paper published in the Seoul Law Journal, we explore this research question:

If a decision-making AI reflects human bias like a mirror, do we need a new regulatory framework concerning AI bias?

To ground an in-depth discussion of AI regulatory governance, we will start by demystifying the truths and lies of the bias mirror problem:

  1. Does AI only capture and mimic human bias, or do its decisions produce additional harm?
  2. Beyond reproducing bias, can AI suppress bias or even offer some kind of benefit to us?

How the Bias Mirror Problem Happens

In her book Weapons of Math Destruction, Cathy O’Neil observes that big data and AI seem neutral and fair but in reality repeat the bias and discrimination widespread in society. AI mirrors human bias in two ways.

1. AI is a projection of human intentions

Today’s AI has no self-consciousness, but it can still “consciously” portray biases injected into its design, logic, and knowledge by its creator or user. Financial incentives to maximize profit are the primary driver. A 2019 UNESCO report criticized how leading voice assistants “overwhelmingly speak with female voices and are projected as women”. Gender bias surfaces on a commercial basis: companies try to ramp up sales by attracting and pleasing customers, and consumers respond most favorably when voice assistants match their gender stereotypes and expected behavioral patterns.

Similarly, AI engineers have an incentive to scale back debiasing efforts because of the enormous additional costs. Since developing, training, and running large AI models is expensive, budget constraints can limit the application of bias filters. AI would then have no choice but to mirror human intentions.

2. AI reflects implicit bias

Even when conscious bias is not embedded in the objective function of AI, unconscious bias can emerge indirectly during optimization. A typical cause is biased historical data in the training dataset. Suppose a company uses AI to predict the best-performing executive candidates, and female executives were rare in the past. AI trained on the company’s records would then give female employees low scores for promotion to an executive position. The output reproduces the bias latent in the historical examples even though the objective function is neutral.
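
A minimal sketch of this mechanism follows, using entirely synthetic data: a logistic regression trained on promotion records that encode a historical penalty against women learns a negative gender coefficient, even though its loss function never mentions gender or fairness.

```python
# Sketch: a "neutral" objective reproduces bias latent in
# historical labels (all data here is synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, size=n)   # 0 = male, 1 = female
merit = rng.normal(size=n)            # same distribution for both groups

# Historical promotions rewarded merit but penalized women
# (the -1.5 term encodes the past bias in the records).
p_promoted = 1 / (1 + np.exp(-(1.2 * merit - 1.5 * gender)))
promoted = rng.random(n) < p_promoted

model = LogisticRegression().fit(np.column_stack([gender, merit]), promoted)

# The model absorbs the historical penalty against women:
print("gender coefficient:", model.coef_[0][0])  # strongly negative
print("P(promote | male, merit=1):  ", model.predict_proba([[0, 1.0]])[0, 1])
print("P(promote | female, merit=1):", model.predict_proba([[1, 1.0]])[0, 1])
```

Two candidates with identical merit receive different predicted chances purely because of the pattern in the past records the model was asked to imitate.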

Besides this, implicit biases can reside in multiple other sources. Training data collected from a biased sample of the population may lead AI to systematically disadvantage those who are under- or overrepresented in the dataset. Subjectivity in the selection and definition of target variables can also introduce bias. There is also technical bias, such as unknowingly leaving left-handers out of hardware design considerations.

The Need to Regulate AI

As the bias mirror problem of AI is prevalent and unstoppable, regulatory intervention to address it seems natural. Or is it? Before discussing the normative bases for regulating biased AI, we want to make two issues clear:

The bias mirror problem is different from the black box problem of AI.

AI decision-making is becoming ever more complex, like a black box. But the resulting inexplicability only exacerbates, not produces, the biases of AI. The black box problem is thus tied to the transparency and explainability of AI. We will instead focus on the bias mirror problem and investigate methods for monitoring, detecting, and correcting bias reflected in AI.

To establish a new, distinct regulatory system for biased AI, we should determine whether biased AI increases the total amount of harm in society.

If humans are the root of the bias mirror problem, we should likely concentrate on human bias and the existing regulatory system for it. However, if biased AI poses significant risks of harm beyond the scope of existing anti-discrimination laws and ethical standards, we should consider an independent regulatory framework. In this section, we demonstrate that AI can expand and reproduce human bias rather than merely capture it.

1. Omitting the human preference to avoid bias

Human beings have a moral sense that leads them to correct, rather than blatantly expose, biases established in the depths of their minds. We call this the “bias aversion preference”. The French “Anonymous Résumé” law was abandoned because removing personal information from resumes made firms less likely to interview and hire minority candidates: employers had judged a candidate less harshly when they could identify that the candidate belonged to a minority.

Moreover, the bias aversion preference can be interpreted as part of rational choice. Humans are biased toward choices that worked in the past, but at the same time they try to break the status quo and make adventurous choices, because doing so has enabled them to better adapt to a continuously changing environment.

Unfortunately, AI lacks the human free will to choose to avoid biased decisions. Instead, AI may blindly reveal human bias or distort humans’ intention to correct it. This creates a kind of statistical bias that ultimately overestimates human bias.

2. Emergent bias and the feedback loop

Just as biases are reinforced through people’s interactions in online communities, AI also produces positive feedback in biased decision-making, but at a far greater scale and frequency. AI can process more information in the same time and make more coherent decisions than humans can. We are concerned that positive feedback in AI decision-making can generate additional harm (“emergent harm”) that exceeds the sum of the initial harm caused by humans.

Specifically, machine learning algorithms often operate in a feedback loop, where the output of the algorithm becomes part of its input. Once human bias enters the training data or the algorithm, the AI leans into the assumed correctness of its own initial determinations, accelerating the formation of emergent bias.
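
The runaway character of such a loop can be seen in a toy simulation (hypothetical numbers, loosely in the spirit of predictive-policing critiques, not a model from our paper): two areas have identical true incident rates, but a slightly biased initial record makes the system keep observing only the “riskier” area, so the recorded gap can only grow.

```python
# Toy feedback loop: the model's own output selects the data
# it will learn from next (all numbers are hypothetical).
import numpy as np

rng = np.random.default_rng(0)
true_rate = [0.10, 0.10]    # both areas are identical in reality
counts = np.array([5, 4])   # slightly biased initial records

for day in range(365):
    # The single patrol goes wherever records show the most incidents,
    # i.e., the system trusts the correctness of its own history.
    area = int(np.argmax(counts))
    # Incidents are only observed where the patrol actually is.
    counts[area] += rng.random() < true_rate[area]

print("final counts:", counts)  # the gap widens; area 1 is never revisited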

Furthermore, AI narrows the variation in human bias. AI recommendation systems have been criticized for limiting users’ exposure to opposing viewpoints (the filter bubble) and for discrediting voices from the other side (the echo chamber). Being trapped in intellectual isolation reinforces biases, which interferes with humans’ long-term adaptability to changes in the environment.

Understanding the Ethical Upsides of AI

That AI can be a source of new biases is a necessary but not a sufficient condition for regulatory governance. What if the absolute magnitude of the harm or risk of AI bias is insignificant from a normative perspective? We should be careful not to stress the need for regulation based on only a few biased AI decision-making cases, as such cases might fall within acceptable risk. To introduce new regulatory governance, we should weigh the risks and benefits of the subject. Through an ethical lens, we here evaluate the good that AI can bring us.

1. Promoting benefits

The flood of AI-driven services entering the market shows how much we appreciate the benefits and promises of AI. It should be emphasized, however, that AI does not necessarily make more accurate decisions or produce benefits in place of humans in every specific context. A recent study demonstrated that COMPAS is no more accurate or fair than nonexperts in criminal justice or a standard linear predictor with only two features.

2. Enhancing morality

While human beings have free will, their decisions are not necessarily more moral than AI’s. Just as heuristics lead to systematic errors in general, people tend to rely on moral heuristics that lead to erroneous moral judgments. A real-world study of judicial rulings suggests that even experienced judges increasingly tend to rule in favor of the status quo as they make repeated rulings. The extraneous factor in these decisions is the human tendency to conserve cognitive effort, which, as the experiment shows, can be overcome by taking a break to eat a meal. Individuals have limited mental resources for decision-making and feel the urge to act in their own best interests or to free-ride in group settings.

Overall, there is room for AI to hold a comparative advantage over humans in terms of morality. One possibility is that bias is easier to detect in AI. We can inspect the parameters an AI uses in decision-making and the values of their weights, while there is no way to determine these in human decision-making. People, in particular, tend toward the confirmation bias of believing their past decisions were fair, and they may even deliberately hide their biases. Beyond finding biased parameters and weights, AI can improve biased results through a much more objective and transparent mechanism. By attending to the quantitative character of AI, we could advance the objective understanding of ethical values like fairness.

The starting point for developing moral AI is to define the qualitative value of “fairness” as a quantitative indicator. If we mathematically define notions such as equality of opportunity or equality of outcome, AI could prevent individuals from distorting the concept of fairness in their own favor. The discourse on fairness could also be carried out more rigorously and less ambiguously. Researchers are striving to develop fairness metrics that are effective in practice, so that AI can positively contribute to the moral enhancement of humans.
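
As a minimal sketch of what “fairness as a quantitative indicator” looks like, the snippet below computes two gaps that are standard in the fairness literature over a model’s predictions (the data is synthetic and purely illustrative):

```python
# Two common fairness metrics as concrete numbers (synthetic data).
import numpy as np

def demographic_parity_gap(y_pred, group):
    # "Equality of outcome" flavor: difference in positive-
    # prediction rates between the two groups.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    # "Equality of opportunity" flavor: difference in true-positive
    # rates between the two groups.
    tpr0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr0 - tpr1)

rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)    # protected attribute
y_true = rng.integers(0, 2, 1000)   # true outcomes
y_pred = rng.integers(0, 2, 1000)   # a model's binary decisions

print("demographic parity gap:", demographic_parity_gap(y_pred, group))
print("equal opportunity gap: ", equal_opportunity_gap(y_true, y_pred, group))
```

Once fairness is a number like this, a regulator or auditor could, for instance, require the gap to stay below a threshold, and attempts to redefine fairness “favorably to oneself” become easier to expose.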

Getting the Right Balance between the Good and Bad

It is unhealthy to buy into the hype of omnipotent AI and belittle human free will. On the other hand, overestimating human morality can provoke a backlash from a public that has been subject to unfair decision-making. There seems to be a stereotype that AI is better at factual judgments and humans are better at value judgments, and our preceding discussion does acknowledge such a tendency. However, the association can be reversed, or the difference can be minimal, depending on the specific context.

  1. If AI expands and reproduces human bias, additional regulation will be needed.
  2. If AI merely reflects human bias, existing regulation of humans will suffice.
  3. If AI acts as a deterrent to human bias, the mechanism should be encouraged.

All three options are possible in reality. Therefore, instead of applying one option to every AI and human decision-making context, mixing the options in light of the specific context will lead to the optimal solution. The EU Artificial Intelligence Act, proposed in April 2021, likewise adopts a risk-based approach that encompasses all three options.

Conclusion

“AI merely reflects human biases as they are and adds no new harm, so we don’t need a separate regulatory framework for AI.”

The bias mirror problem of AI was first raised to hold AI accountable for spreading bias already far too common in society. Conversely, from a normative perspective like the statement above, AI regulation does not seem necessary in the first place. Yet previous studies on AI bias have remained largely silent on the link between AI bias and AI regulation.

To validate the need for some kind of normative approach to AI, we must consider what harm AI adds to existing human behavior. With this awareness in mind, we outlined cases of AI reflecting human bias and reasoned why AI can expand and reproduce human bias. At the same time, the added harm may fall within acceptable risk, and AI can be beneficial in the ethical dimension. In closing, we suggest that a balanced regulatory system and governance be built in consideration of these two-sided effects.

Still, many practical questions remain unanswered. What should the details of the regulation be? Some support rules differentiated by the expected level of AI risk, while others may deem uniform rules desirable. And as AI moves toward a higher degree of autonomy, some discuss the possibility of granting legal personality to AI.

Multilateral efforts are needed to shape the AI regulatory system and governance. We hope our exploration of the intersection of human bias and AI provides key questions and answers to propel the journey.

Acknowledgement

This blog post is based on the following paper:

  • Park, D. H. (2022). The Intersection of Human Bias and Artificial Intelligence. Seoul Law Journal, 63(1), 139–175. (paper)

Thanks to Do Hyun Park for helpful comments on this blog post.


AIIS is an intercollegiate institution of Seoul National University, committed to integrating and supporting AI-related research at Seoul National University.