Board Responsibility for Artificial Intelligence Oversight

Robert G. Eccles is Visiting Professor of Management Practice at Saïd Business School, University of Oxford, and Miriam Vogel is President and CEO of EqualAI and Adjunct Professor at Georgetown University Law Center.

Artificial Intelligence (AI) is quickly taking over. But not in the robot-coup type of scenario that inspires multimillion-dollar box office hits. Rather, AI protects our credit cards from fraudulent activity, helps employers hire and maintain a remote workforce during a global pandemic, and enables doctors to deliver care to patients thousands of miles away. AI is and will be a powerful tool to advance our lives, economy, and opportunities to thrive, but only if it does not perpetuate and mass-produce discrimination and physical harm to individuals, as well as massive liability to corporations. Board members are in an optimal position to ensure that the companies under their purview are prepared to avoid the harms and litigation risks that AI could invite.

In particular, environmental, social, and governance (ESG) considerations, which safeguard against risk and ensure good corporate stewardship, provide a natural home and framework for addressing these harms.

As the growing focus on climate change through COP26 has made clear, the environment, or “E,” will be a future headline involving AI, as the outsized carbon footprint of AI inflicts increasing damage on our environment. In the meantime, corporate leadership must immediately turn to the “S,” or societal, implications of AI, where harms are pervasive and liability is imminent. On the upside, if we get this right, AI can instead be an ally in addressing these harms and in advancing best practices for the “G,” or governance. For instance, instead of blindly deploying AI systems that have been built and trained on data sets mostly populated by Caucasian male users, we can enhance user safety by operating transparently and noting when an AI program has been tested and trained only on limited populations. Better yet, we can open our aperture and use AI to broaden our consumer base by ensuring that AI products are not just safe but also beneficial to broader swaths of the population. Likewise, AI is often considered a trigger for job loss, but if we are thoughtful and forward-thinking and implement measures such as upskilling programs, workers can benefit, along with companies and ultimately our economy, from a broader population able to service and support AI deployment.

The movement to include ESG factors in investment processes and decision-making is gaining traction and market share. The term “ESG” was first coined in 2005 and today is estimated to influence over $30 trillion in investments annually. Thousands of professionals around the world now hold the job title “ESG Analyst,” and companies are increasingly building ESG departments. Why? Not only is a focus on ESG appealing as a sign of social consciousness; investors are also increasingly recognizing that ESG metrics reveal the quality of a corporation’s management and indicate its projected future success. ESG efforts are increasingly understood both to reduce risk and to benefit a company’s bottom line, given their correlation with higher equity returns.

Board directors are best situated to ensure that their company is on track to reap the benefits of AI while avoiding its harms and litigation risks. Further, as in cybersecurity, current litigation trends indicate that directors are more likely to face personal liability for AI-supported mishaps as the potential impact on companies becomes clearer. Directors will be exposing the company and themselves to legal liabilities if they fail to uphold their fiduciary duty and mitigate preventable harms from AI systems created or deployed by the companies they govern.

How do we know that is the likely trajectory? A key tenet of corporate law 101 is board members’ fiduciary duties of care and loyalty to the corporation. These duties require the members of the board to make informed decisions in the best interest of the company. Two duties subsidiary to loyalty and care are the duties of oversight and non-delegation, which respectively require the implementation of an effective monitoring system to detect potential risks to the company and disallow the delegation of such essential tasks. As we will discuss below, the use of artificial intelligence in pivotal functions presents the opportunity, if not the likelihood, that risks and liabilities will transpire. It follows, then, that the responsible deployment of AI within a company falls squarely within the board’s purview.

Three Steps to Prepare for AI Governance

There are three key steps that board members, and their legal advisors, should take in preparing to install an AI governance program. First, to address this issue and comply with their responsibility, board directors must understand how pervasive AI bias already is. We’ve seen AI-enabled harms pop up in most sectors. In healthcare, biases in AI training data have led to class- and race-based inequities in the care offered to patients. Products designed for baby boomers often fail because they are targeted to the “elderly,” even though only 35 percent of people 75 or older consider themselves “old.” These mistakes caused harm to patients and to companies’ bottom lines, respectively, and could have been avoided with AI governance designed to root out biases.

Our country’s long history of housing discrimination is being replicated at scale in mortgage approval algorithms that determine creditworthiness using proxies for race and class. Studies have found that black loan applicants were 80 percent more likely to be denied than their white counterparts. And then there is GPT-3, a promising innovation in deep learning language modeling. It is highly regarded for its potential, but it has demonstrated problematic biases, such as generating stories depicting sexual encounters with children, as well as biases against people based on their religion, race, and gender. And algorithms are scaling gender bias in hiring systems at a time when we cannot afford to weed out top talent. The harms are pervasive and real, and we have only seen the tip of the iceberg of discrimination stemming from artificial intelligence.

Next, directors need a basic understanding of how bias infiltrates AI. Contrary to popular belief, AI is neither neutral nor infallible. Rather, an algorithm is like an opinion. Bias can embed itself at each of the human touchpoints throughout the AI lifecycle, from determining and framing the problem deemed worthy of an AI solution to product design, data collection, development, and testing. Each stage is limited by the experience and imagination of the designated team and reinforced by historical and learned biases in the data. But each touchpoint is also an opportunity to identify and eliminate harmful biases. As such, risk management should occur at each stage of the AI lifecycle.
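
To make the testing stage concrete, the following is a minimal sketch of one kind of check a team might run before and after deployment: comparing a model’s favorable-outcome rates across demographic groups against the familiar “four-fifths” rule of thumb. The column names, threshold, and toy data are illustrative assumptions, not a prescribed methodology.

```python
# A testing-stage bias check: compare the rate of favorable outcomes across
# demographic groups and flag any group falling below the "four-fifths" rule
# of thumb. Column names, the threshold, and the toy data are illustrative.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of favorable outcomes (e.g., loan approved, candidate advanced) per group."""
    return df.groupby(group_col)[outcome_col].mean()

def four_fifths_check(rates: pd.Series, threshold: float = 0.8) -> dict:
    """Return groups whose selection rate is below `threshold` times the highest group's rate."""
    best = rates.max()
    return {group: rate / best for group, rate in rates.items() if rate / best < threshold}

# Toy model outputs: 1 = favorable decision, 0 = adverse decision
decisions = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B"],
    "outcome": [1,   1,   0,   1,   0,   0,   0],
})

rates = selection_rates(decisions, "group", "outcome")
print(rates.to_dict())            # {'A': 0.666..., 'B': 0.25}
print(four_fifths_check(rates))   # {'B': 0.375}: group B's rate is well below 80% of group A's
```

In practice, teams use more robust fairness metrics and statistical tests, but even a check this simple illustrates the kind of evidence boards can ask to see at each stage.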

Working with internal and external teams to develop this basic understanding will help board members better fulfill their duty by knowing which checks and precautions should be in place at the various stages of AI design and use.

Finally, directors need a game plan. When directors fail to monitor or institute AI governance, we can expect shareholders to point to those who were in a position to act during this window when the harms are increasingly visible, especially as regulators clarify the rules of the AI road. And studies indicate that most executives are not currently in a good position to respond to this call from board members. A recent report released by FICO and the market intelligence firm Corinium found that most companies are deploying AI at significant risk: 65 percent of respondents’ companies could not explain how specific AI model decisions or predictions were made, 73 percent struggled to get executive support for prioritizing AI ethics and Responsible AI practices, and only one-fifth actively monitored their models in production for fairness and ethics. Boards will need to ensure that corporate executives get up to speed on responsible AI governance, and quickly.

Legal Context: Navigating the Unknown

Even though the term “artificial intelligence” was coined in the 1950s, our current pervasive use of AI is relatively recent. The combination of machine learning algorithms, a significant decline in the price of data storage, and improvements in computing power has allowed AI use to spread at scale, and that was before the emergence of COVID-19 accelerated AI deployment even further, as the need for tech solutions to hire, evaluate, transport, and compute increased exponentially. As a result, there is little legal precedent that is directly on point. But that will change; there is no question that litigation determining liability for AI harms will become more common.

New laws and legal frameworks are under construction across the globe, and, as our legal system was meant to work, established legal precedent will be instructive in navigating the path forward in the meantime. The challenge is that, by the time legal expectations and responsibilities become more certain, AI systems will already have been built, deployed, and commingled with other programs, and at that point unpacking the source of bias from intermingled or antiquated AI systems will be significantly more challenging.

New AI Laws on the Horizon

The European Union (EU) has demonstrated its intent to take the lead in AI regulation. The EU recently released a proposed legal framework for “trustworthy” AI that follows a risk-based approach (unacceptable risk, high risk, limited risk, or minimal risk). Systems deemed an “unacceptable risk” would be banned outright. Systems ranked as “high risk,” where AI is used in sensitive or essential areas (e.g., employment, critical infrastructure, education, law enforcement), would be subject to strict requirements such as demonstrating the use of high-quality datasets and appropriate human oversight. Limited-risk systems, such as chatbots, would face specific transparency obligations, such as ensuring consumers are aware that they are interacting with a machine. While this regulation will likely be slow moving and may not take effect for several years, we can expect it to come to fruition in light of similarly significant EU regulation, such as the General Data Protection Regulation (GDPR) on privacy.

On this side of the pond, the U.S. Congress, not historically known for its tech savvy, recently demonstrated increased AI awareness. We have seen bicameral hearings on how best to regulate and safeguard against AI-related harms. Congress also passed seminal legislation, the National Defense Authorization Act (NDAA) of 2020, that provided billions of dollars in AI investment to support US competitiveness in this space and mandated new roles and functions across government. For instance, the bill directed the National Institute of Standards and Technology (NIST), housed in the Department of Commerce, to establish an AI Risk Management Framework. Similar to NIST’s past seminal frameworks that have set the industry standard for privacy, cybersecurity, and several other areas, its current AI efforts will provide a set of guidelines and standards to which companies can expect to be held accountable.

The executive branch has been increasingly active in this space. Recently, Dr. Eric Lander and Dr. Alondra Nelson of the White House Office of Science and Technology Policy (OSTP) proposed the creation of an “AI bill of rights” to ensure that the public knows how AI is influencing decisions that affect their civil rights and civil liberties. In addition to requiring notice when AI is used that has not been audited for implicit biases or trained on sufficiently representative data sets, they propose meaningful recourse for individuals harmed by such algorithms.

The Federal Reserve Board (Fed) has also stepped up its interest in monitoring and promoting responsible AI. At the AI Academic Symposium earlier this year, Governor Lael Brainard highlighted the Fed’s commitment to supporting the responsible uses of AI to promote equitable outcomes in the financial sector. Specifically, Governor Brainard identified the significant risks of employing AI models built on historical data that can “amplify rather than ameliorate racial gaps in access to credit” and “result in discrimination by race, or even lead to digital redlining, if not intentionally designed to address this risk.”

The FTC has announced its intent to use all legal tools at its disposal (Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act) to hold companies accountable for misleading or harmful AI. Likewise, the new CFPB Director, Rohit Chopra, has indicated an intent to regulate algorithms, including a recent warning that companies cannot dodge fair lending laws through the use of algorithms.

The U.S. Equal Employment Opportunity Commission (EEOC) also just launched an initiative to ensure that AI tools used in hiring and other employment decisions comply with the federal civil rights laws under its purview and “do not become a high-tech pathway to discrimination.” In the meantime, we have started to see frameworks published for relevant stakeholders across the U.S. government (e.g., the GAO Framework, the DoD Ethical Principles for AI, and the AI Ethics Framework for the Intelligence Community).

States and localities are also getting in on the AI regulation action. Most recently, the New York City Council passed a bill requiring companies that sell AI technologies for hiring to obtain “bias audits” assessing the potential of those products to discriminate against job candidates.

Applicable Laws on the Books

There is significant legal precedent that can instruct how liability will be established in cases brought against companies developing or using AI, and how boards should plan accordingly. For instance, the Fair Credit Reporting Act (FCRA), 15 U.S.C. § 1681 et seq., provides that, upon request, a consumer reporting agency must provide the consumer with a statement and notice that includes “all of the key factors that adversely affected the credit score of the consumer in the model used.” The Equal Credit Opportunity Act (ECOA), 15 U.S.C. § 1691 et seq., prohibits creditors from discriminating against credit applicants on the basis of race, color, religion, national origin, sex, marital status, or age, because an applicant receives income from a public assistance program, or because an applicant has in good faith exercised any right under the Consumer Credit Protection Act. ECOA likewise requires creditors to notify applicants when an adverse action is taken and to provide a statement of specific reasons for the action taken. Yet it is not clear how creditors can meet these basic requirements and provide the required adverse action notices in compliance with the ECOA and the FCRA when the creditor relied on AI models in making the adverse action decision.
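
To illustrate why this requirement is manageable for simple scoring models but difficult for opaque AI, here is a minimal sketch of how a creditor using an interpretable model might rank the factors that pulled an applicant’s score down and map them to plain-language reasons for a notice. The coefficients, feature names, and reason codes are hypothetical, and nothing here speaks to whether such a disclosure actually satisfies ECOA or FCRA; for a black-box model, no such direct mapping exists, which is precisely the compliance gap described above.

```python
# Illustrative only: rank the features that contributed most negatively to an
# applicant's score under a hypothetical, interpretable scoring model, and map
# them to plain-language reasons. Coefficients, feature names, and reason codes
# are invented for this sketch.

COEFFICIENTS = {                      # hypothetical weights: higher score favors approval;
    "credit_utilization": -2.1,       # features are assumed to be standardized so that
    "recent_delinquencies": -1.4,     # contributions are roughly comparable
    "years_of_history": 0.8,
    "income_to_debt_ratio": 1.6,
}

REASON_CODES = {                      # hypothetical plain-language reasons
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "recent_delinquencies": "Recent delinquency on one or more accounts",
    "years_of_history": "Length of credit history is too short",
    "income_to_debt_ratio": "Income is low relative to outstanding debt",
}

def key_adverse_factors(applicant: dict, top_n: int = 3) -> list:
    """Return plain-language reasons for the features contributing most negatively to the score."""
    contributions = {f: COEFFICIENTS[f] * applicant[f] for f in COEFFICIENTS}
    ranked = sorted(contributions.items(), key=lambda kv: kv[1])  # most negative first
    return [REASON_CODES[f] for f, value in ranked[:top_n] if value < 0]

applicant = {"credit_utilization": 0.9, "recent_delinquencies": 2.0,
             "years_of_history": 1.5, "income_to_debt_ratio": 0.4}
print(key_adverse_factors(applicant))
# ['Recent delinquency on one or more accounts', 'Proportion of balances to credit limits is too high']
```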

In addition to these federal protections, the well-established legal frameworks of contract and tort law are areas where corporate executives and their board members should be on the lookout for potential liability involving AI technologies. For example, under contract law, there is an expectation of express and implied warranties guaranteeing the quality of products, and breaches of these warranties can give rise to liability. This could play out in the AI realm in several ways, for example, an allegation that a contract was breached when an algorithm licensed by a bank led to discriminatory outcomes in credit lending. Likewise, one can imagine a case in which an AI-supported health program failed to put users on notice that its results were only tested at the stated level of accuracy for a subset of the population, leading to unsupported recommendations and harmful results for underrepresented, vulnerable patients and for the physicians and health care workers who relied on it.

In situations where there are barriers to contractual liability for AI-enabled harms, we can expect to see tort law applied to bridge the gap. In particular, suits could be brought under tort theories of product liability, including negligence, design defects, manufacturing defects, failure to warn, misrepresentation, and breach of warranty. Of note, some of these theories, such as failure to warn, can carry strict liability, meaning the defendant company and/or its leadership would be held responsible for their actions or products without the plaintiff needing to prove negligence or fault. While some legal scholars argue for a restructuring of our legal models of corporate liability to better address AI-enabled harms, others argue, and our research supports, that long-established legal doctrines provide sufficient guidance for boards, executive leadership, and plaintiffs alike to understand and prepare for litigation stemming from AI-related hazards in the meantime.

Indeed, cases have demonstrated how the courts will apply traditional legal notions of product and employer liability in cases involving AI products, such as GPS systems, autonomous vehicles, and robots in the workplace. For instance, Toyota was sued for a software defect in the vehicles it produced that allegedly caused vehicles to accelerate notwithstanding the drivers’ efforts to stop. The court denied Toyota’s motion for summary judgment, which argued that there could be no liability because the plaintiffs were unable to identify a precise software design or manufacturing defect. Instead, the court found that a reasonable jury could conclude that the vehicle continued to accelerate and failed to slow or stop despite the plaintiff’s application of the brakes. In re Toyota Motor Corp. Unintended Acceleration Mktg., Sales Practices, & Prod. Liab. Litig., 978 F. Supp. 2d 1053, 1100-01 (C.D. Cal. 2013).

And if we look further back for analogous precedent, as courts will do, we see that decades ago, in Nelson v. American Airlines, Inc., 70 Cal. Rptr. 33 (Cal. Ct. App. 1968), the court found an inference of negligence by American Airlines relating to injuries suffered while one of its planes was on autopilot. This negligence theory was rooted in the doctrine of res ipsa loquitur, meaning “the thing speaks for itself”: the mere fact of an accident can open the door to the inference that the defendant was at fault.

Game Plan: Five Best Practices for Boards to Ensure Responsible AI Governance

In short, this is a critical moment for companies to take proactive mitigation measures to prevent harmful biases from becoming discriminatory practices that are the subject of litigation and front-page stories in the Wall Street Journal. It is not a stretch to surmise that companies, and their governing boards, can expect to see increasing liability in the near future if they are using AI in sensitive or critical functions. There is no time to waste in setting up AI governance to support the safety and oversee the legal compliance of these systems.

Here are five best practices of “good AI hygiene” to reduce risks and liability from AI use:

  1. Establish an AI governance framework. There are an increasing number of frameworks to guide efforts to identify and reduce harms from AI systems (e.g., the BSA Framework, the GAO Risk Framework, and NIST’s forthcoming AI Risk Management Framework). Our EqualAI Framework provides five “pillars” for responsible AI governance, which follow a timeline from long-term planning to diversify the pipeline through testing algorithms already in use.
  2. Identify the designated point of contact in the C-suite who will be responsible for AI governance. This person will own coordination of the team handling incoming questions and concerns (internal and external), oversee responses, and ensure that new challenges are identified and addressed with access to, and assurance of, management-level attention as and when necessary.
  3. Designate (and communicate) the stages of the AI lifecycle at which testing will be conducted (e.g., pre-design, design and development, deployment). This process should include the expected cadence for testing: AI will continually iterate and learn new patterns, and your checks must follow suit.
  4. Document relevant findings at the completion of each stage to promote consistency, accountability, and transparency. In particular, document the populations who are under- or overrepresented in the underlying datasets and for whom the AI system may have different success rates. This information will put on notice those re-testing for gaps and harms at later stages, as well as downstream users of the AI systems (think of nutritional labels and ingredient lists, often described as AI model cards, that document what and who is baked into the datasets).
  5. Implement routine auditing. Like biannual dentist visits and skin cancer screenings, AI used in pivotal functions should be subject to board-mandated routine audits in which AI systems are queried with hypothetical cases (a minimal sketch of such an audit follows this list). There is a growing body of outside experts who can perform this work at the direction of the board and legal team. This is not only good practice; it may soon be required, as under the New York City AI audit bill noted above and the proposed federal Algorithmic Accountability Act. In addition, routine audits help establish a record of intent to mitigate bias and harms from AI systems in the event of a lawsuit or a response to a regulatory body.
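
To make the auditing step concrete, below is a minimal sketch of one way an auditor might query a system with hypothetical cases: generating matched profiles that differ only in a sensitive attribute and flagging decisions that flip. The ToyModel, feature names, and decision rule are hypothetical stand-ins for whatever production system is actually under audit.

```python
# A routine audit that queries a system with hypothetical cases: create matched
# profiles that differ only in a sensitive attribute and flag any profile whose
# decision flips. ToyModel, the feature names, and the decision rule are
# hypothetical stand-ins for the production system actually under audit.

def counterfactual_variants(profile, sensitive_attr, values):
    """Copies of a profile with the sensitive attribute set to each candidate value."""
    return [{**profile, sensitive_attr: v} for v in values]

def audit(model, base_profiles, sensitive_attr, values):
    """Return the profiles whose decision changes when only the sensitive attribute changes."""
    flagged = []
    for profile in base_profiles:
        outcomes = {v[sensitive_attr]: model.predict(v)
                    for v in counterfactual_variants(profile, sensitive_attr, values)}
        if len(set(outcomes.values())) > 1:   # decision flipped across variants
            flagged.append({"profile": profile, "outcomes": outcomes})
    return flagged

class ToyModel:
    """Stand-in for the system under audit; a real audit would query the deployed model."""
    def predict(self, applicant):
        # deliberately biased rule, for illustration only
        return "approve" if applicant["income"] > 60000 or applicant["gender"] == "M" else "deny"

profiles = [{"income": 55000, "gender": "F"}, {"income": 72000, "gender": "F"}]
print(audit(ToyModel(), profiles, "gender", ["F", "M"]))
# [{'profile': {'income': 55000, 'gender': 'F'}, 'outcomes': {'F': 'deny', 'M': 'approve'}}]
```

An audit of this kind produces exactly the sort of documented, repeatable evidence that boards can review on a regular cadence and, if necessary, point to in litigation or a regulatory response.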

AI is quickly permeating our daily lives, but it is currently traveling on a highway without speed limits, road signs, or even marked lanes. The liabilities, litigation, and regulation (the highway patrols) are coming. Smart companies will prepare by establishing AI governance with the aligned goals of reducing risk, harm, and liability.

Effective ESG has become shorthand for corporate management that understands potential risks and liabilities and is proactively acting to mitigate them, and it must now include AI governance. AI is a known risk and will increasingly be a source of liability, discrimination, and harm if not monitored. Established ESG programs are a natural home for AI governance and for mitigation of these known and growing risks, just as corporate boards will be increasingly accountable for the legal compliance and safety of AI systems.
