By Patrick A. Lee and Jonathan Dambrot
The era of AI has begun with startling speed. AI and machine learning are increasingly driving business decisions and activities, and pressure from customers, regulators, and other stakeholders continues to mount for greater transparency into how these data-driven technologies and algorithms are used, monitored, and managed. In particular, stakeholders want to understand how companies are addressing the risks associated with AI systems: algorithmic bias in healthcare scoring and access to healthcare services, in job application vetting and recruiting and hiring practices, and in loan and credit decisions; privacy violations; cybersecurity; disinformation and deepfakes; worker monitoring; and, more recently, the risks posed by generative AI.
Despite the explosive growth in the use of AI systems and increasing concerns about the risks these systems pose, many organizations have yet to implement robust AI governance processes. In a recent global survey of more than 1,000 executives by BCG and MIT Sloan Management Review, an overwhelming majority—84 percent—said that responsible AI should be a top management priority. Yet, just 16 percent of their companies have mature programs for achieving that goal.1 Notably, a recent KPMG survey found that relatively few C-suite executives are directly involved in, or responsible for, strategies to manage AI risk and data/model governance, including establishing new processes or procedures (44 percent), reviewing AI risks (23 percent), and developing and/or implementing governance to mitigate AI risk (33 percent).2
Given the legal and reputational risks posed by AI, many companies may need to take a more rigorous approach to AI governance, including (i) monitoring and complying with the patchwork of rapidly evolving AI legislation; (ii) implementing emerging AI risk management frameworks; (iii) securing AI pipelines against adversarial threats; and (iv) assessing their AI governance structure and practices to embed the guardrails, culture, and compliance mechanisms that will help drive trust and transparency in tandem with the transformational benefits of AI. The goal is often referred to as “ethical” or “responsible” AI: making AI systems transparent, fair, secure, and inclusive. Below, we offer comments on these four areas of board focus.
In addition to general data privacy laws and regulations, we are now seeing the emergence of AI-specific laws, regulations, and frameworks globally. For example, the EU’s Artificial Intelligence Act appears to be on the path to becoming law, perhaps by the end of 2023. The act may set a precedent for future risk-based regulatory approaches, as it would rank AI systems according to their risk levels, and ban or regulate AI systems based on those risk levels. There is no similar legislative framework in the U.S.; however, in October, the White House released the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, which could be the basis for future AI legislation. While nonbinding, the Blueprint identifies five principles “to guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” A number of other countries are developing similar nonbinding principles or frameworks. Finally, various federal regulators have proposed AI-specific regulations, and there is a growing patchwork of AI-specific state and local laws. Monitoring and complying with evolving AI legislation and regulation will be a key priority for companies over the next year.
Pillars of trusted, responsible AI include the following:
1. Fairness: Ensure models are free from bias and are equitable.
2. Explainability: Ensure AI can be understood, documented, and open for review.
3. Accountability: Ensure mechanisms are in place to drive responsibility across the AI lifecycle.
4. Security: Safeguard AI systems against unauthorized access, corruption, or attacks.
5. Privacy: Ensure compliance with data privacy regulations and appropriate use of consumer data.
6. Safety: Ensure AI does not negatively impact humans, property, or the environment.
7. Data integrity: Ensure data quality, governance, and enrichment steps embed trust.
8. Reliability: Ensure AI systems perform at the desired level of precision and consistency.
AI risk management has been a particular challenge for many companies, and the potential use of generative AI has now created a sense of urgency. While there are various standards and best practices to help organizations manage the risks of traditional software or information-based systems, the risks posed by AI systems present new challenges. To help companies address these challenges, in January the National Institute of Standards and Technology (NIST) published its AI Risk Management Framework, which is intended for voluntary use to help organizations manage risks in the design, development, deployment, use, and evaluation of AI systems and to increase the trustworthiness of those systems. Given the critical importance of AI risk management, boards should have their management teams assess whether the AI Framework can provide helpful guidance in building or enhancing the company’s AI risk management structure and processes.
Given the current AI arms race, companies need processes in place for securing and hardening AI pipelines against adversarial threats. In addition to the ethical and bias issues that can arise inadvertently in developing AI systems, companies should consider the threats and impacts of adversarial attacks, including data poisoning, model poisoning, backdoors, insider threats, and other ways attackers might compromise the company’s decision-making systems. Indications are that adversaries are arming themselves with tools to attack AI systems and to profit from the lack of humans in the loop. Frameworks such as MITRE ATLAS identify threats and mitigations that can be leveraged to better prepare the organization for these attacks.
Delivering on the promises of AI while managing the risks requires robust AI governance structures and processes, aligned with the company’s broader risk management, data governance, and cybersecurity governance processes. To that end, in addition to the topics discussed above, we recommend that boards discuss with management the following issues:
Patrick A. Lee is a senior advisor with the KPMG Board Leadership Center. Jonathan Dambrot is CEO of Cranium and a former principal of KPMG LLP.
A version of this article appears in NACD Directorship magazine.