The Bias Against Bias: Using Bias Intentionally in Artificial Intelligence


In the last few hours alone, you’ve likely encountered AI, probably in some positive experiences as well as frustrating ones. From streaming services recommending your next favorite show to product suggestions on retail sites, AI now sits behind almost every digital interaction, making those interactions faster. And if you run into a problem or have a question, the customer service representative you chat with is often an AI-based chatbot. Because AI integration is relatively seamless and inexpensive for businesses, many people don’t even realize that software is predicting their wants and needs.

Although AI can be beneficial to business, concerns about bias are continually raised because of its association with fairness and non-discrimination. These concerns stem from the assumption that bias is always negative. It is not. In fact, using bias intentionally is a key element of effective data-driven AI systems.

Take the retail sector, for example. Many retailers invite you to apply for a store credit card when you make a purchase and meet certain credit-scoring criteria. The retailer wants to make sure customers can repay their debt, so the algorithmic decision making turns down customers who are unlikely to be able to pay off the credit.

“We expect that the algorithm is biased against people who cannot pay, which most people would not consider a problem. But if the bias were against males or females, then that would be unwanted bias,” says Christopher Clifton, Vice Chair of the Algorithmic Bias Working Group. “However, getting rid of all bias completely means making random decisions, similar to flipping a coin, which isn’t how we want these systems to operate.”

However, AI readily perpetuates human biases. If a company has predominantly hired men for certain positions and then uses AI trained on those past hires to evaluate applicants, the data fed into the algorithm will be biased toward men. Because the hiring managers are likely unaware of the bias in the algorithm, they unintentionally pass over qualified female candidates, unfairly discriminating against women.
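To make that mechanism concrete, here is a minimal, hypothetical sketch in Python: a classifier trained on synthetic “past hires” that favored men learns to reproduce that preference. The data, feature names, and thresholds are invented for illustration and are not drawn from any real hiring system or from the P7003 standard.

```python
# Illustrative sketch only: synthetic data, assumed features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: one task-relevant feature (skill score) and
# one protected attribute (gender: 0 = female, 1 = male).
gender = rng.integers(0, 2, n)
skill = rng.normal(60, 10, n)

# Historical hiring decisions favored men: same skill, different bar.
hired = (skill + 8 * gender + rng.normal(0, 5, n)) > 70

# Train on the past decisions, with gender included as a feature.
X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# The learned model now recommends men at a higher rate than women.
pred = model.predict(X)
for g, label in [(0, "female"), (1, "male")]:
    print(f"selection rate ({label}): {pred[gender == g].mean():.2f}")
```

Disaggregating the model’s selection rate by gender, as in the last lines, is often the first signal that historical bias has been learned.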

As the IEEE P7003 Working Group began addressing the impact of bias in technology, the team quickly realized that their goal was not eliminating bias in AI systems but rather addressing the bias against bias to then help organizations use bias intentionally.

Defining Bias in AI

Under the umbrella of bias in AI, the working group identified three basic sources of bias:

  • Bias by the algorithm developers: This arises from the choice of an optimization target, for example when a worker-management algorithm optimizes processes for maximum worker output but not for worker health.
  • Bias within the system itself: The system exhibits different performance levels for certain categories, such as higher facial recognition failure rates for certain races and genders.
  • Bias by users of the system: Users can interpret and act on the algorithm’s output in a biased manner. Confirmation bias, for example, happens when the output of ChatGPT says something you already believe, or want to believe, and you accept it without checking whether it is true.

Additionally, the context of a biased decision makes a big difference; a decision may be fair in one situation but unfair in another. While a bias against people who cannot pay is critical to the success of the business, a bias based on gender or race, whether intentional or unintentional, would be a serious issue.

Ansgar Koene, former Chair of the Working Group, says the key is ensuring that any bias in the system serves the function the system is designed to perform.

“It’s important that the differentiation for whether it decides on option one or option two be based on factors in the input data that are actually relevant to the task,” says Koene. “If there is a difference in the outputs for each decision that cannot be traced back to task-relevant differences in the input, it means it’s likely unwanted bias.”
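One way to turn Koene’s criterion into a concrete check, sketched below with invented data and thresholds, is to compare decision rates between two groups within bands of a task-relevant score: if the per-band gaps are near zero, the outcome differences are explained by the task-relevant input; large per-band gaps point to unwanted bias.

```python
# Hedged sketch of a conditional-rate check; bands and data are assumptions.
import numpy as np

def conditional_rate_gap(score, group, decision, bands):
    """Per-band difference in positive-decision rates between two groups."""
    gaps = {}
    for lo, hi in bands:
        in_band = (score >= lo) & (score < hi)
        rate_a = decision[in_band & (group == 0)].mean()
        rate_b = decision[in_band & (group == 1)].mean()
        gaps[(lo, hi)] = rate_b - rate_a
    return gaps

rng = np.random.default_rng(1)
score = rng.uniform(300, 850, 2000)      # task-relevant input, e.g. a credit score
group = rng.integers(0, 2, 2000)         # protected attribute (0 or 1)
decision = (score > 620).astype(float)   # a policy that looks only at the score

# Gaps near zero in every band: outcome differences trace back to the
# task-relevant score. Large per-band gaps would point to unwanted bias.
print(conditional_rate_gap(score, group, decision,
                           [(300, 580), (580, 670), (670, 850)]))
```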

Evaluating Intentionality of Bias

The goal of IEEE P7003 is to generate awareness about AI systems and bias so that users and creators can evaluate whether a system is actually performing the tasks they want it to perform. To reduce unintentional bias, organizations need to fully understand the context in which the system is being created and how stakeholders are impacted by it. They should also know the actual task they are asking the system to do and make sure the results fit that task.

“It’s not the algorithm’s fault that there are problems with bias. Humans are biased, and bias can be a good thing or a bad thing—not because bias is good or bad in itself, but because of our intention. For example, an app designed to help women manage their health should be biased towards women,” says Working Group member Clare James.

Understanding Bias Risk

The P7003 Working Group introduced the concept of a Bias Profile as part of the draft standard to help stakeholders evaluate their processes and determine the impact and risk of bias. The profile helps clarify the system’s intention and identify all the stakeholders who either use the system or would be impacted by it. Based on the resulting bias profile, an organization can assess whether it needs to fix the algorithm or stop using the system entirely.
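As a rough illustration only, a Bias Profile might be kept as a structured record like the hypothetical sketch below. The fields, names, and example values are assumptions made for this article; the actual content and process are defined by the P7003 draft.

```python
# Hypothetical record shape for a Bias Profile; not the P7003 definition.
from dataclasses import dataclass, field

@dataclass
class BiasProfile:
    system_name: str
    intended_task: str                 # what the system is designed to decide
    stakeholders: list[str]            # users and people affected by the system
    intended_bias: list[str]           # differentiation the task actually requires
    protected_attributes: list[str]    # attributes that must not drive outcomes
    findings: dict[str, str] = field(default_factory=dict)
    action: str = "undecided"          # e.g. "keep", "fix algorithm", "retire system"

profile = BiasProfile(
    system_name="store-card approval",
    intended_task="estimate likelihood of repayment",
    stakeholders=["applicants", "retailer", "credit bureau"],
    intended_bias=["decline applicants unlikely to repay"],
    protected_attributes=["gender", "race"],
)
profile.findings["gender"] = "no per-band approval gap detected"
profile.action = "keep; re-assess in 12 months"
print(profile)
```

Keeping the intended task and the intended bias written down next to the protected attributes is what lets an organization decide between fixing the algorithm and retiring the system.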

“Some companies may initially find the process of the P7003 standard challenging to their self-image because it requires them to look closely at the impact their systems are having. Are they really serving people with their product or service?” says James.

Many organizations create an algorithm during the initial development of a system and evaluate it for bias, but never go back to revisit it. However, the P7003 process should be repeated at regular intervals throughout the system life cycle, especially during the operation and maintenance stage, because how people use the system, as well as the population involved, naturally evolves over time. Organizations should also revisit the profile if they deploy the system into a new context. A system trained on 10-year-old data is probably not going to be accurate or reflect the current situation.
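A simple way to picture why the re-assessment needs to recur is a drift check on the input population, sketched below with invented features and an arbitrary threshold: when the current population has shifted away from the data the system was built on, the bias profile is due for another pass.

```python
# Illustrative drift check; feature names and threshold are assumptions.
import numpy as np

def drift_flags(train_sample, current_sample, threshold=0.25):
    """Flag features whose mean shifted by more than `threshold` training std devs."""
    flags = {}
    for name, train_values in train_sample.items():
        t = np.asarray(train_values)
        c = np.asarray(current_sample[name])
        shift = abs(c.mean() - t.mean()) / (t.std() + 1e-9)
        flags[name] = bool(shift > threshold)
    return flags

rng = np.random.default_rng(2)
train = {"applicant_age": rng.normal(35, 8, 1000)}      # population at launch
current = {"applicant_age": rng.normal(42, 8, 1000)}    # population today
print(drift_flags(train, current))   # {'applicant_age': True} -> revisit the bias profile
```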

Diverse Teams Reduce Unwanted Bias

As part of ensuring that systems do not have unwanted bias, organizations need diverse teams with multiple perspectives. Harvard University found that the accuracy of facial recognition software varies significantly by race, with the lowest accuracy for people who are female, Black, and 18-30 years old. Systems are often trained on photos of white males, which makes them less likely to recognize people of color. Errors in facial recognition systems can have significant consequences, especially with law enforcement heavily using the technology.
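The kind of check that surfaces such gaps is straightforward: disaggregate accuracy by demographic group rather than reporting a single overall number. The sketch below uses made-up group labels and records purely for illustration.

```python
# Hedged sketch: per-group accuracy on invented recognition results.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group_label, predicted_id, true_id) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {g: correct[g] / total[g] for g in total}

records = [
    ("white male", "id_1", "id_1"), ("white male", "id_2", "id_2"),
    ("Black female", "id_3", "id_4"), ("Black female", "id_5", "id_5"),
]
# Overall accuracy is 75%, but disaggregation shows a 100% vs. 50% gap.
print(accuracy_by_group(records))
```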

Gerlinde Weger, Chair of the Working Group, says that the bias in facial recognition systems was likely not intentional. Still, the result was a system unable to recognize all types of stakeholders. She says a more diverse development team would likely have discovered this bias before launch.

“Systems need to recognize attribute intersectionality, meaning how attributes work together, to create a true stakeholder overview, not just a collection of bits and bytes,” says Weger. “Humans will always be impacted by and influence bias. With P7003 we aim to help minimize harms arising from unwanted bias.”

As AI becomes even more integrated into our systems and processes, algorithmic bias will have critical and widespread effects. By educating developers and creating a standard that increases understanding of the intentionality of bias, IEEE SA can help turn the challenges of AI into positive assets.

While developing the standard, the Working Group sought input from a wider audience and community by raising awareness of the project at conferences, working with legal experts, and running multiple surveys of the working group’s composition. The team established a balance not just with regard to geography and gender, but also across academia, industry, and civil society. Additionally, they ensured that the working group included participants from different disciplines, including computer science, the social sciences, and law.

The Working Group continues to need professionals in all industries and roles to focus on this issue. Learn about how to join the IEEE Algorithmic Bias Working Group.
