Generative Artificial Intelligence and Corporate Boards: Cautions and Considerations

Lawrence A. Cunningham is Special Counsel, Arvin Maskin is a Partner, and James B. Carlson is Senior Counsel at Mayer Brown LLP. This post is based on a Mayer Brown memorandum by Mr. Cunningham, Mr. Maskin, Mr. Carlson, Joseph Castelluccio, Andrew J. Noreuil and Paul C. de Bernier.

Generative AI (i.e., AI creating original content using machine learning and neural networks) has captivated people everywhere, producing a range of responses from doomsday warnings of machines rendering humans extinct to rosy dreams where machines possess magical properties. In corporate boardrooms, however, a more sober conversation is occurring. It seeks a practical understanding of how boards might evaluate this powerful but error-prone new tool, and comes with both cautions about its downsides and considerations for potential upsides.

Companies are racing to harness the benefits of generative AI while trying to develop policies to protect against reputational and regulatory risks—all of which creates a clear role for boards of directors. The generative AI industry continues to debate and refine its offerings as well, which have become more effective with each subsequent iteration. Policymakers are weighing in with a flurry of regulatory initiatives and recommendations in the face of concern about the ethical implications and other risks of widespread adoption of this new tool.

In this alert, we offer corporate boards insight about generative AI along with practical cautions, noting both its perils and promise. We begin with current regulatory initiatives and legal issues for directors. We provide considerations on the relevance of generative AI to standing board committees such as audit, compensation, and nominating/governance committees, including suggesting ways that generative AI can help boards address thorny challenges, from setting their own pay to planning for their own succession. We also provide illustrations for how the full board or designated risk or strategy committees might at some point be able to use generative AI to add value to strategic planning and enterprise risk management.

Regulatory Initiatives in the US and Globally

Generative AI has been the subject of multiple regulatory and political initiatives worldwide, focused on potential risks in the use of AI and achieving a balance between innovation, accountability and transparency. While there is not a comprehensive legal framework for the regulation and oversight of AI in the United States, legislative efforts around AI indicate an increasing drive for Washington to assume a significant position in the regulation of AI.  For example, in the United States:

  • the White House issued a fact sheet outlining a series of executive actions addressing generative AI, including a blueprint for an AI “bill of rights,” and its Office of Science and Technology Policy issued a request for information on oversight of generative AI systems
  • the Department of Commerce’s National Institute of Standards and Technology (NIST) released a framework intended for voluntary use and to help incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems, notably suggesting that companies “establish policies that define the artificial intelligence risk management roles and responsibilities . . . including board of directors . . .”
  • the Federal Trade Commission (FTC), the Justice Department’s Civil Rights Division and the Equal Employment Opportunity Commission issued a joint statement focusing on generative AI’s risks of bias [1]
  • the FTC has also separately warned that certain generative AI usage could violate federal laws the FTC enforces
  • widely publicized hearings on generative AI recently occurred before the Senate Judiciary Committee and the House Judiciary Subcommittee on the Courts, Intellectual Property and the Internet and the Subcommittee on Cybersecurity, Information Technology and Innovation, providing an opportunity to discuss trends, implications and risks associated with AI and potential regulatory and oversight frameworks
  • state and local government initiatives are underway nationwide, including in California, Colorado, Illinois, Vermont, Washington, and New York City

Outside the United States, wide-ranging regulatory initiatives are being considered, including:

  • the European Union’s proposed AI Act and AI Liability Directive specifying obligations for providers of generative AI models [2]
  • the United Kingdom government’s AI regulation policy paper and AI white paper
  • Brazil’s proposed Legal Framework for Artificial Intelligence
  • Canada’s proposed Artificial Intelligence and Data Act  
  • the Cyberspace Administration of China’s proposed Administrative Measures for Generative Artificial Intelligence Services

This intense global focus on the potential uses and misuses—and related responsibilities and obligations—points toward the need for corporate boards to establish policies and processes to address generative AI risk management while evaluating how generative AI may be properly used to gain strategic and competitive advantages.

Evolving Scope

Generative AI produces content based on natural language inputs, such as memos, queries, or prompts. Output varies in quality, accuracy, and objectivity.

The more widely available, popular generative AI tools tend to be designed for general audiences. At this point, many lack the technical specifications and precision that companies or professional groups will find desirable, from the relevant databases and guardrails to depth of analysis, tone or diction, and references to authority.

Some industries are likely to be touched by the technology in more obvious ways than others—publishers and software firms possibly more at the moment than building contractors or mining companies, for instance. Oversight will correspondingly vary, as will required training, supervision, and restrictions on permissible uses.

Many companies are developing policies and procedures specifically applicable to the use of generative AI by officers and employees. They are updating their corporate policies to address concerns about potential risks and harms in the context of generative AI, such as bias/discrimination, confidentiality, consumer protection, cybersecurity, data security, privacy, quality control, and trade secrets.

Director Duties and Recommended Precautions

Generative AI does not change the bedrock fiduciary duties of corporate directors and using or otherwise incorporating AI into board decision making is certainly no substitute for the traditional means of discharging them. For example, directors must, consistent with their duty of care, act in an informed manner, with requisite care, and in what they in good faith believe to be the best interests of the corporation and its shareholders. They must act loyally, including by protecting the confidentiality of corporate information.

If generative AI evolves into a tool that poses challenges to corporate policy or effectiveness or creates material risk, it is reasonable to assume that the related oversight function would fall within the fiduciary duties of corporate boards. That would require the board to exercise good faith and act with reasonable care to attempt to assure that management maintains appropriate systems of control over generative AI.

For public companies using generative AI in financial reporting and securities filings, boards may need to confirm with management that the company appropriately uses generative AI’s capabilities in connection with its internal control over financial reporting as well as disclosure controls and procedures.

As generative AI tools proliferate and are incorporated into search and data products already in wide use, directors should consider both (1) the degree to which information they receive from management, auditors, consultants, or others may have been produced using generative AI and (2) whether they can and should use generative AI tools as an opportunity to support their duties and activities as directors.  For both purposes, directors must be mindful, like company officers and employees, of risks associated with the company’s use and reliance on generative AI. Three of the key considerations are:

First, generative AI tools are machines, not people. They have no knowledge, expertise, experience, or qualifications—in any field whatsoever, not least corporate governance or business administration. Unlike directors, generative AI owes no fiduciary duties and faces no liability for breach.

Second, generative AI results may be inaccurate, incomplete, or biased (with bogus AI information or output commonly called “hallucinations”). Generative AI can be a valuable tool to generate ideas, provide generally available factual information, spot issues, and create lists. But, at least at present, there are limits on these tools’ capabilities. Accordingly, outputs must be scrutinized and tested for trustworthiness, that is, for things like accuracy, completeness, lack of bias, and explainability (i.e., the ability to explain how and why the AI made a particular recommendation, prediction, or decision). Only then should the output be drawn upon and incorporated into the activity, discussion, or material of interest.

Third, generative AI processes and retains user interactions as training data, which is intended to improve the quality of its output in future versions, but also implicates privacy and cybersecurity risks and considerations, including the unintended disclosure of confidential information and other data.  Corporate directors must therefore take care to avoid generative AI being used in ways that could compromise such confidentiality or create legal exposure.

For example, in the case of confidential or sensitive company information, it is possible that data or documents provided as inputs (or received as outputs) might leak into the wider generative AI model, where they could be machine-read, used for training, or synthesized into future outputs. Accordingly, directors should consider some practical self-limitations, whether or not formalized in corporate policies. For example (a brief illustrative sketch of how such limits might be operationalized appears after this list):

  • not mentioning the company name or other company specific or identifying information in inputs or chats with generative AI
  • not mentioning any non-public or proprietary information or specific individual names or data in inputs or chats with generative AI
  • reviewing generative AI output for accuracy and completeness and not simply passing on generative AI output without a thorough review and modifications as necessary
  • using generative AI output internally rather than disseminating it publicly
  • when appropriate, identifying the generative AI output component of any product that involved the use of generative AI

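To make the first two self-limitations concrete, the minimal sketch below shows one way a company might screen a draft prompt for company-identifying or non-public terms before it is sent to an external generative AI service. This is a hypothetical illustration only: the term list, names, and example prompt are placeholders we invented, not part of any particular product or corporate policy, and such a screen would supplement, not replace, human judgment.

```python
# Hypothetical sketch only: screen a draft prompt for company-identifying or
# non-public terms before it is sent to an external generative AI service.
# The restricted terms below are invented placeholders, not a real policy.
import re

RESTRICTED_TERMS = [
    "Acme Corp",            # placeholder company name
    "Project Falcon",       # placeholder internal code name
    "board minutes",        # category of non-public material
    "unreleased earnings",  # category of non-public material
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (is_clear, matched_terms) for a draft prompt.

    A match does not block anything by itself; it simply surfaces the terms
    so a person can redact or rethink the prompt before submission.
    """
    matches = [
        term for term in RESTRICTED_TERMS
        if re.search(re.escape(term), prompt, flags=re.IGNORECASE)
    ]
    return (len(matches) == 0, matches)

if __name__ == "__main__":
    draft = "Summarize governance risks in Acme Corp's unreleased earnings guidance."
    is_clear, flagged = screen_prompt(draft)
    if not is_clear:
        print("Review before sending; flagged terms:", flagged)
    else:
        print("No restricted terms detected.")
```

Running the example flags “Acme Corp” and “unreleased earnings,” prompting the user to generalize the query, for example by asking about a hypothetical company of similar size and industry instead.
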
Of course, this practical guidance for directors may evolve as market practices and company generative AI policies evolve.

For now, in the case of companies that have not done so, boards may want to ask management for a high-level initial report on generative AI and discuss the subject with management, preferably with a designated management point person for AI oversight, usage, and risk management. The goal would be to assess the extent to which generative AI tools create opportunities—competitive, innovative, or strategic—and/or present risks—whether operational, compliance-related, or financial.

To explore these possibilities, a board might begin by asking management to put the topic on an upcoming board meeting agenda and receive both management’s views and perspectives from outside advisors.  As part of the process, directors could learn about generative AI by posing a series of questions to generative AI asking about these issues, consistent with the foregoing common sense precautions, which may add to the framework for discussion.

In the case of US companies that have made significant—or “mission-critical”—investments in AI, boards should consider being able to demonstrate board-level oversight of AI risks. This is particularly important due to potential claims based on standards from the Caremark case, which involve directors’ failure to oversee corporate compliance risks. While bringing Caremark claims has traditionally not been easy, the ability of some recent claims to survive motions to dismiss highlights the ongoing significance of this theory for directors responsible for overseeing critical company compliance operations. Therefore, even if a company is not in breach of its regulatory obligations, directors could still face legal claims if they were not sufficiently attentive to important “mission-critical” risks at the board level.

As such, and without detracting from the suggestions above, for companies where AI is associated with mission-critical regulatory compliance/safety risk, boards might want to consider: (a) showing board-level responsibility for managing AI risk (whether at the level of the full board or existing or new committees), including making AI matters a regular board agenda item and recording their consideration in board minutes, (b) the need for select board member AI expertise or training (using external consultants or advisors as appropriate), (c) a designated senior management person with primary AI oversight and risk responsibility, (d) relevant directors’ familiarity with company-critical AI risks and the availability/allocation of resources to address AI risk, (e) regular updates/reports to the board by management on significant AI incidents or investigations, and (f) proper systems to manage and monitor compliance/risk management, including formal and functioning policies and procedures (covering key areas like incident response, whistleblower processes, and AI-vendor risk) and training.

Boards should use these management discussions and reports to help determine the appropriate frequency and level of board engagement and oversight. This will range from periodic reviews by the full board to more regular discussions, including at one or more board committees.

Audit Committees

Public company board audit committees are responsible for identifying, monitoring, and assessing financial, legal, and regulatory risks. An audit committee could determine that these include generative AI risks.  Over time, the audit committee can determine whether its oversight of generative AI risks should be formalized into modified guidelines or charters for itself or recommend similar steps for other board committees.  At a minimum, audit committees will want to work closely with a company’s independent auditors to understand how generative AI is being used in the preparation and auditing of financial statements and with management in connection with generative AI’s role and impact on the company’s system of internal controls.

Nomination & Governance Committees

Public company board nominating and governance committees may likewise need to understand how consultants and other advisors use generative AI in their processes and analysis. This may range from how advisors identify best practices in corporate governance to how recruiters identify, vet and propose board candidates.

Nominating and governance committees may find generative AI useful for a variety of tasks. Consider, for example, another commonly challenging area for all boards: evaluating current directors’ skills and experience against the optimal collection of board skills and experience to pursue corporate strategy. The ideal process begins with a statement of corporate strategy. This is followed by an objective articulation of the collective skills and experience the board should have to achieve strategic objectives. Next comes the hardest part: going around the table discussing each director’s skills and experience, identifying gaps, and proceeding to search to fill them.

Such an arduous and candid undertaking is notoriously difficult. Given the inherent conflicts and difficulties underlying these assessments, it is possible to imagine how generative AI could mitigate some difficulties. For instance, with corporate strategy articulated by the board, generative AI could articulate relevant skill sets and experience. Then one could give generative AI a summary of existing skill sets and experience and receive back a statement of the remaining gaps. While such an exercise cannot replace the board’s judgment, it could produce additional useful information.

Compensation Committees

Public company board compensation committees should appreciate the extent to which generative AI is used by consultants or others in developing compensation models or proposals. Moreover, oversight by some compensation committees extends to human capital management and other human resources practices where generative AI considerations are increasingly coming into play.

Recent regulatory initiatives from New York City and the federal EEOC indicate that proposed regulations will generally prohibit applications of generative AI that would result in bias or unfair decisions in connection with hiring, evaluating, and compensating employees. If adopted, compliance with such regulations could become a standard feature of corporate human resources and management practices.

Generative AI may help with one of the prickliest tasks any board faces: setting its own compensation. This presents an inherent conflict of interest that cannot be avoided by delegation to unconflicted participants. That is why such board decisions have been subjected to strict judicial scrutiny for fairness.

The compensation committee or the nominating and governance committee, as applicable, could use generative AI for information and recommendations on director pay based on a series of specified parameters, such as board size, committee size, company size and industry, meeting frequency and so on. While the directors would have to review the information and make the final decision, and courts might still review those decisions, using the tool in an appropriate manner could add a degree of independence to the process that is otherwise elusive.
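
To illustrate how such specified parameters might feed into a query, the minimal sketch below simply assembles them into a structured benchmarking prompt. It is a hypothetical illustration, not a recommendation of any tool: the parameter names, example values, and prompt wording are our own assumptions, the code does not call any generative AI service, and any output obtained from such a prompt would still require committee review and a final human decision.

```python
# Hypothetical sketch only: assemble director-pay benchmarking parameters
# into a structured prompt. It builds text; it does not call any AI service,
# and the example values below are invented for illustration.
from dataclasses import dataclass

@dataclass
class DirectorPayQuery:
    industry: str
    revenue_range: str
    board_size: int
    committee_count: int
    meetings_per_year: int

    def to_prompt(self) -> str:
        # Compose a single benchmarking question from the parameters above.
        return (
            "Summarize typical ranges of non-employee director compensation "
            "(cash retainers, equity grants, and committee fees) for public "
            f"companies with this profile: industry: {self.industry}; "
            f"annual revenue: {self.revenue_range}; board size: "
            f"{self.board_size} directors; standing committees: "
            f"{self.committee_count}; board meetings per year: "
            f"{self.meetings_per_year}. Note the sources generally relied on "
            "and the limitations of the estimate."
        )

if __name__ == "__main__":
    query = DirectorPayQuery(
        industry="specialty manufacturing",  # illustrative values only
        revenue_range="$1-2 billion",
        board_size=9,
        committee_count=3,
        meetings_per_year=6,
    )
    print(query.to_prompt())
```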

Risk/Strategy Committees or Full Board

While not required, many boards have established special committees chartered with overseeing general areas, such as risk or strategy, or specific ones, such as cybersecurity or technology. For boards with such committees, generative AI would likely come within their scope; and for those that have not formed such committees, generative AI is certainly a topic of such breadth that the full board would benefit from exploring how this tool may make a board more efficient and more effective.  (For companies where AI is seen as mission-critical, see our comments noted above regarding Caremark standards that guide board action.)

In each case, rapidly changing and proliferating generative AI presents both risks and opportunities. Boards are asked to anticipate and address diverse risks—such as climate change, cybersecurity, human capital management, supply chain monitoring, and now generative AI. Pressure comes from numerous powerful and important constituents including shareholders, asset managers, proxy advisors, employees, customers, suppliers, regulators, and diverse activists.

Corporations increasingly seek to build and maintain “early warning” systems to bring into focus, and enable the company to anticipate, areas and trends that are expected to be relevant to the company down the road, whether in terms of risk management, corporate opportunity or operational impact. Boards do this in pursuing their mission of good corporate governance to enable executives to ascertain, analyze, and remediate in an effective, cost-efficient, and timely manner. Doing so also creates competitive advantages by enhancing a company’s risk management value.

Generative AI may present an opportunity to improve decision-making: decisions that will inevitably be subject to second-guessing and intense scrutiny from many quarters.  Generative AI may be able to help align the level of risk, structural incentives, internal policies, procedures and controls, emergency preparedness, communications strategies, commitment of resources, and business goals. For example, directors could:

  • pose general queries to generative AI about market or strategic developments that impact their industry, perhaps in various jurisdictions
  • pose queries about regulatory, litigation or other risks that impact their industry, also in various jurisdictions
  • ask how companies or industries confront and respond to various other specified risks, such as climate, cyber, human capital, political, or supply chain.

The raw results will require careful scrubbing and a critical eye, as with any other resource. But, carefully and cautiously used, the tool may add value and produce an alternative set of perspectives, such as the following:

Threat Profile. Generative AI could scour and monitor relevant news and geopolitical events which can shape and influence the ongoing threat profile of a company. It could follow independent resources, such as rating agencies, competitors, and other relevant groups. Of course, the key issues are how these signals are assessed and contextualized and the process of converting the data into strategic and operational options for decision-makers. Another key issue is that a generative AI model may provide information only up to its most recent training date and only on the types of data used in its training databases.

Complaints, Feedback and Trends. Generative AI might help analysts collate massive data to assess problem areas. For example, generative AI could be an effective tool to monitor indirect complaints against a company’s or industry’s products or operations, including those made to regulators and Congress. It might be able to monitor indirect feedback from websites, blogs, posts, and trends in litigation, proposed legislation or regulations, and government investigations. It might be used to monitor trends in litigation funding and online solicitation and aggregation of potential claimants.

Reality Check. Besides serving as a tool for listening to customers, regulators, competitors, independent groups, activist organizations, and even academics, generative AI may also offer a reality check to a company—a tool to defeat denialism. It may provide a compiled analysis on how competitors are forecasting and preparing for potential threats. It might provide information that companies can use to create a panoramic view of the global threat environment and support the notion that enterprise risk must be interwoven into all strategic and operational decision-making.

Enterprise Risk Management. Enterprise risk management is a complex process. Forecasting potential threats requires considering a broad range of potential forces, including political, technological, climate, cybersecurity, social, regulatory, economic, legal, competition, and terrorism. Generative AI might provide insight into all such current issues and holds the promise to help boards cope with this daunting mission—with eyes wide open.

Further Visibility and Experimentation

Boards can expect to see corporate management deploying generative AI tools in multiple applications and will add value when they understand the associated contexts, uses and limitations. For instance, generative AI may be used to:

  • develop a communications strategy by defining potential audiences, articulating corporate positions, refining messaging, and outlining strategies for taking charge of the messaging
  • collate lessons-learned and best practices across business sectors and from diverse origins in mapping a range of historic, recent, and potential events of interest
  • assess various scenarios in terms of potential impact to finance, operations, and reputation to help evaluate and calibrate risk/reward trade-offs
  • assist with stress testing, scenario planning, and forecasting of remote scenarios that may be overlooked in conventional modeling

As boards observe and experiment with using generative AI, it will become clearer where risks tend to congregate and how they manifest as well as what kinds of tasks generative AI can help perform. Depending on how generative AI evolves, it is possible that when properly used it may add value to everything from building meeting agendas and keeping meeting minutes to evaluating potential CEO successors and merger partners.

Skeptics and Response

Generative AI skeptics question the unintended side effects of adopting generative AI, such as displacing human capabilities by delegating tasks to machines that lack human attributes such as common sense, judgment, empathy, and wisdom. Optimists counter that generative AI use can be limited to the generation of information, outlines, lists and summaries to save people time. People can devote the time saved to applying and thereby refining the uniquely human skills, particularly, for boards, discernment, reasoning, debating, collaborating, and decision making.

Like any tool, generative AI requires understanding and judgment in its use. For example, using generative AI in measuring or extrapolating from past or recent events in corporate governance and enterprise risk management requires a multidisciplinary, real world, pragmatic approach. Boards should feel duty-bound to optimize all available sources of information to ensure that their company is equipped to respond to any risk event, with its finances, operations, reputation, good will, and culture intact. That means taking a cautious but curious approach to generative AI.

Endnotes

1. US FTC, DOJ, EEOC, and CFPB Release Joint Statement on AI, Discrimination and Bias | Perspectives & Events | Mayer Brown

2. EU Commission Proposes New Liability Rules on Products and AI | Perspectives & Events | Mayer Brown; The European Union Proposes New Legal Framework for Artificial Intelligence | Perspectives & Events | Mayer Brown
