The Governance of Corporate Use of Artificial Intelligence

Leo E. Strine, Jr. is the Michael L. Wachter Distinguished Fellow at the University of Pennsylvania Carey Law School; Senior Fellow, Harvard Program on Corporate Governance; Of Counsel, Wachtell, Lipton, Rosen & Katz; and former Chief Justice and Chancellor of the State of Delaware. This post is based on his recent article, forthcoming in the Journal of Corporation Law.

Artificial intelligence, or “AI,” is evolving rapidly and becoming embedded in the operations of many for-profit corporations. As with any novel, transformative technology, AI can tempt us to forget that large corporations have deployed society-changing innovations before, and to ignore the hard-earned lessons of that experience, lessons that might help us ensure that AI improves human life and does not cause harm.

In my article, “Using Experience Smartly to Ensure a Better Future: How the Hard-Earned Lessons of History Should Shape the External and Internal Governance of Corporate Use of Artificial Intelligence,” occasioned by the Rome Conference on AI, Ethics, and the Future of Corporate Governance and the 50th anniversary of the Journal of Corporation Law, I reflect on what our experience with prior corporate development and deployment of transformative technologies might teach us about how to approach AI’s accelerating role in corporate profit seeking. To be constructive, I underscore the need for a strong external system of regulation, one that takes into account that AI operates across borders and that the corporations leading the AI movement can exercise global power. For those reasons, I argue that international cooperation is necessary to ensure that wherever AI is deployed, the corporations that use it are required to do so with respect for the workers, consumers, and others the AI affects. We must also take care not to permit asset partitioning, akin to that used for tax avoidance, to enable corporations to escape fair accountability when they cause harm. To this end, I note the utility of soft law in the form of international understandings. I also highlight that the United States and the European Union share similar concerns about AI’s potential to create unintended harm if it is not developed and used with care (for example, by compounding historical invidious discrimination against Black people, women, and others), and that these shared concerns suggest a basis for common regulatory expectations and understandings.

Turning to internal corporate governance, I note that cutting-edge technology can challenge the capacities of boards of directors composed primarily of older, independent directors, most of whom are no longer active participants in the company’s industry and many of whom never were. To address this, I identify some useful practices that might enable boards and company managers to take better advantage of AI’s potential while avoiding harm to company stakeholders. In particular, I argue that directors and managers must understand how their companies use AI to make money and take the time to “touch and feel” the AI, in a practical way that focuses on the material ways in which the company is using it. For example, if a bank has its loan officers use AI to help determine who is eligible for a loan, then the board and top management should watch a loan officer perform that function, see how the AI factors into the ultimate lending decision, and thereby demystify the process. Similarly, if the corporation uses AI to winnow the pool of eligible job applicants, the same hands-on approach should apply. Pervading the article’s recommendations is a focus on ensuring that the humans who manage corporations remain responsible for understanding how their corporations use AI, and for ensuring that AI is used only with due regard for the legitimate expectations of workers, consumers, and society as a whole.