
How boards can help to debias AI

An interview with Miriam Vogel, president and CEO of EqualAI, on the challenges of debiasing artificial intelligence.

The threefold challenge of the COVID-19 outbreak, sudden economic fragility, and an increased focus on social justice in the US has placed a spotlight on the use of artificial intelligence (AI) as an enabler of decision-making.

As AI moves to the frontlines of how businesses and government interact with customers and communities, understanding how AI is developed and deployed becomes even more critical for boards. In particular, to what extent is bias—conscious or unconscious—built into the strategy, development, deployment, and outcomes of AI-enabled processes? What assurance does the board have that the company’s AI process effectively evaluates the potential for bias in the data set, on the programming team, or even in the task that the AI is designed to enhance?

For insight on the challenges related to debiasing AI, the KPMG Board Leadership Center (BLC) interviewed Miriam Vogel, president and CEO of EqualAI, a nonprofit focused on reducing unconscious bias in the development and use of AI. Vogel is also an adjunct professor at Georgetown Law, where she teaches technology law and policy. She spent several years in the US government, including as associate deputy attorney general and as a senior policy advisor to the White House.

BLC: Bias in AI exists across the value chain—from strategy to talent to data to outcomes. From your perspective, where is bias most pernicious or hardest to observe?

Vogel: An algorithm is like an opinion—bias enters from the moment a person frames the question that the AI program is intended to solve, and each step thereafter. Bias enters through each of the human touchpoints. 

It can enter through the product design, the training data, the work of the development team, and the testing. Unfortunately, bias can be equally pernicious and hard to detect at each of these touchpoints. Perhaps the hardest challenge is that it can emerge—and be sustained—even with good intentions. We’ve seen this in healthcare, for example, where AI trained on biased data has led to race-based inequities in patient care. Products designed for the boomer generation have often failed, according to the MIT AgeLab, because they were targeted to the ‘elderly,’ while according to the Pew Research Center, only 35 percent of people age 75 or older consider themselves ‘old.’ These are mistakes that harmed patients and companies’ bottom lines, respectively, and that could have been avoided with alternate perspectives in the design and testing stages.

On the bright side, bias can also be tested for and detected at each of these touchpoints. Doing so requires the right mindset and a diverse set of perspectives to widen the lens.

 


BLC: From your experience, how can corporate boards challenge or assess how the company is tackling bias in its use of AI across the enterprise?

Vogel: Board members are in the best position to help ensure that their company avoids the harms of biased AI. I would argue that it is embedded in their fiduciary duty as part of their responsibility to help ensure that the company avoids harms and liabilities. In this realm, harms could range from the physical harms of an AI-driven machine—for example, an unmanned vehicle that cannot detect persons of color with reliable precision—to harms that could affect the brand, employee morale, or legal liability. Boards should understand how the company’s leadership is ensuring that AI has been tested—and tested repeatedly, which is necessary given that AI learns new patterns, and thus new biases, over time. The board’s support—or motivation—for leadership’s attention to this issue is also critical because we have seen that this process only works when the top levels of leadership are committed to rooting out bias in AI.

Board members can start by asking questions: What are we doing to assess the bias in our AI systems? How are we ensuring diversity of perspectives in the design, development, and testing of these programs? How often do we test for bias? What is our strategy for identifying and addressing bias in our AI programs? You can also find third parties, like EqualAI, to help your company take on these challenges. 
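For directors who want a concrete sense of what "testing for bias" can look like in practice, below is a minimal, illustrative sketch in Python of one widely used screen: comparing a model's selection rates across demographic groups against the four-fifths (80 percent) rule of thumb used in US employment contexts. The data, group labels, and threshold here are hypothetical, and a real testing program would involve far more than this single metric.

```python
# Illustrative only: a minimal disparate-impact screen on a model's
# hiring recommendations. Groups and records are hypothetical.

from collections import defaultdict

def selection_rates(records):
    """Compute the share of positive outcomes per group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratios(records, reference_group):
    """Ratio of each group's selection rate to the reference group's.

    A ratio below 0.8 fails the four-fifths-rule screen and warrants
    closer review; passing it does not prove the model is fair.
    """
    rates = selection_rates(records)
    ref_rate = rates[reference_group]
    return {g: rate / ref_rate for g, rate in rates.items()}

if __name__ == "__main__":
    # (group, model recommended hire?) pairs from a hypothetical audit set
    audit = [("A", True)] * 60 + [("A", False)] * 40 \
          + [("B", True)] * 35 + [("B", False)] * 65
    for group, ratio in disparate_impact_ratios(audit, "A").items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"group {group}: impact ratio {ratio:.2f} [{flag}]")
```

Because, as Vogel notes, AI systems learn new patterns over time, a screen like this belongs in a recurring monitoring cadence rather than a one-time review.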

BLC: AI and related technologies, such as machine learning and robotic process automation, are often based on proprietary algorithms and data sets. How would you build greater transparency into AI processes while protecting intellectual property?

Vogel: Ideally, we would have a standard set of requirements, for instance, in the form of a nutrition label, so that users of the data and/or of the AI program built on it would know the gaps in representation and be able to account for them. This is also an area where government or nonprofits can play a role in creating guidelines so that boards and corporate leadership are clear on expectations and protections.
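To make the idea concrete, here is a purely hypothetical sketch in Python of what a machine-readable "nutrition label" for a training data set might record. The field names, data set, and threshold are invented for illustration and do not reflect any existing standard, though efforts such as the Data Nutrition Project and "datasheets for datasets" explore similar ideas.

```python
# Hypothetical sketch of a machine-readable "nutrition label" for a
# training data set. Field names and values are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class DataNutritionLabel:
    name: str
    collection_period: str
    intended_use: str
    # Share of records per demographic group, so downstream users can
    # see representation gaps before building on the data.
    group_representation: dict = field(default_factory=dict)
    known_gaps: list = field(default_factory=list)

label = DataNutritionLabel(
    name="loan-applications-2019",          # hypothetical data set
    collection_period="2015-2019",
    intended_use="credit risk modeling",
    group_representation={"women": 0.31, "age 65+": 0.06},
    known_gaps=["rural applicants underrepresented",
                "no data on applicants under 21"],
)

# A consumer of the data can check for representation gaps up front.
for group, share in label.group_representation.items():
    if share < 0.2:  # illustrative threshold only
        print(f"warning: {group} make up only {share:.0%} of records")
```

The design point is less the specific fields than the disclosure itself: a standardized label lets buyers and builders of AI see, before deployment, who is missing from the data.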

BLC: What industries or sectors do you see as most challenged by bias in AI? What are some discussions or levers that boards can pull to help move these companies forward? 

Vogel: AI has become omnipresent in our personal and work lives, and as a result, I think most industries are now facing the challenge of bias in their AI. 

This is a clear, known challenge for companies creating AI programs, but it is likewise a problem for those consuming them. Bias in AI is a well-documented concern in the credit and mortgage lending space, but it can equally inhibit efforts to build diverse teams at companies using AI for hiring and employee evaluation. Boards can continue to demand diverse candidate slates for hires, and especially for promotions, to help ensure there are broader perspectives at each of the human touchpoints of the AI lifecycle.

A 2016 McKinsey report found that women held 37 percent of entry-level technology roles but only 25 percent of senior management roles and 15 percent of executive-level roles. And these percentages are even smaller for racial and ethnic minorities. So focusing on diversity in retention and promotion is even more important. AI in the hiring space could be compounding this problem.

Including candidates of different genders, ages, geographic origins, and so forth helps ensure that a broader set of questions is asked at each of the human touchpoints of AI creation. The board can help ensure that AI is built by and for a broader cross-section of our population: to reduce liability and, equally important, to ensure that products are as popular in rural and middle states as on the coasts, and that women, who by some estimates drive 70–80 percent of all consumer purchasing through a combination of buying power and influence, are considered in the design and marketing of products.

BLC: The Federal Trade Commission (FTC) recently published a notice regarding the use of AI in consumer-facing business decisions. What do you see as the role of government and other standard setters regarding the use of AI?

Vogel: Government should be industry’s partner on this. The FTC guidance was on point. It is critical that the government outline expectations and clarify liabilities now, while AI is being built. Five years down the road, when AI has been more deeply integrated into systems and organizations, this will be much more complicated. We should advise companies as they build and buy AI programs now, because it will be a much larger burden to expect them to unravel the systems they’ve built years down the line.

BLC: Final thoughts?

Vogel: Bias in AI stems from bias in humanity. Bias is rooted in our survival, helping us decide when to cross a busy street or whether to trust a potential partner, but unconscious biases need to be checked—in both humans and machines—to help ensure they serve our needs and values rather than undermine them. I have found that the best way to address unconscious bias is to bring diversity of thought to the table. In the future, AI programs will help us identify and address bias in AI and related challenges, but those programs will also require diversity of thought and perspectives to be built effectively. And, at the end of the day, they will not eliminate the need for humans to routinely check that the new patterns an AI program learns are in line with expectations and legal compliance. We are all constrained by the limits of our imagination, so the more perspectives we include at each stage of the process, the better our AI and related products will be.

For more from KPMG LLP, read:

AI transforming the enterprise

Controlling AI

The views and opinions expressed herein are those of the interviewee and do not necessarily represent the views and opinions of KPMG LLP.

