The Perennial Quest for Board Independence: Artificial Intelligence to the Rescue?

Dr. Akshaya Kamalnath is Lecturer in Corporate Law at Deakin Law School. This post is based on her recent article, forthcoming in the Albany Law Review.

The question of the ideal composition of company boards is unlikely to have a perfect answer. While the need for independent directors was emphasized in the early nineties and continues to be emphasized today, new ideas have crept in. The idea of board diversity, and especially gender diversity, has become popular in recent times. The rationale, at least in part, for most of these proposals is to ensure that the board is active, acts independently of management, and is able to consider various perspectives that might affect the company while making decisions. Could artificial intelligence (AI) help solve some of these problems?

In a recent article I argue that AI can help enhance board independence by reducing agency costs. Initially, AI can be used to help directors discharge their duties; as AI for boards becomes more reliable, corporate law will have to evolve so that the duties of officers ensure the safe and efficient use of AI. In proposing practical safeguards for the use of AI on boards, the article draws from processes followed in the context of cancer treatment, where AI is currently being used.

Board independence (along with disclosure) has been the chief tool thus far in the arsenal of corporate law to counter agency costs. Since these tools have not always succeeded, AI might be a significant solution, provided that the right set of incentives is put in place to ensure its effective and ethical use.

Boards often have to make important decisions on very short notice, and thus independent directors, being outsiders to the company, might not be able to digest all the required information in a short period of time. Even apart from this, other problems impede the board from exercising independent judgement. It has been argued that the problem lies in the way relevant legal rules define “independence”, mostly focusing on “financial” independence but not independence of “the mind”. Further, independent directors serve long terms, which fosters what have been called “fictive friendships” amongst directors. This sometimes leads independent directors to hesitate to challenge their “friends” on the board. In other words, most boards are susceptible to “groupthink” (i.e. a mode of thinking afflicting members of a cohesive group when the members’ striving for unanimity clouds their judgement). Different solutions, like educating boards about the phenomenon, having a director play devil’s advocate, and board diversity, have been proposed to help boards overcome groupthink.

AI can be a useful aid in countering groupthink. Even if it is merely used as a tool to analyse information and provide an opinion that the board of directors then considers, it will be able to provide its input without being influenced by groupthink. The AI would not be susceptible to human biases, unless bias is programmed into the system. Importantly, it will not be afraid to upset its friends. In a scenario where the board has failed to consider alternative courses of action (either because there was no time to read all the relevant information or because directors were hesitant to challenge management), the board will then have to evaluate the input suggested by the AI system. Any board members who are initially hesitant to voice a dissenting view might be encouraged to use the AI recommendation as the basis for voicing an opinion. Of course, it is still up to the directors to act upon these recommendations.

Thus, in the first instance, when AI is developed for corporate governance, it must be used as an aid or tool for board decision making. A similar model is followed by oncologists who use AI: a group of oncologists together makes decisions regarding treatment options, considering the recommendation provided by the AI alongside their own.

The current architecture of corporate law conceives of directors on the corporate board as natural persons and imposes a framework of duties and liabilities on them accordingly. Even if a liability regime for AI is worked out, it is important to remember that AI platforms are not likely to be completely accurate when they are first designed. Further, corporate law does not always have answers as to what is “accurate” in a given situation. While AI is unlikely to suffer from conflicts of interest, it will not have the business instincts and entrepreneurial flair of business persons. Leaving decisions entirely to AI might result in a failure to consider the interests of stakeholders like employees, or of society in general where decisions have adverse environmental impacts. Thus, when AI is first developed for corporate governance, it must be used as an aid or tool for board decision making, as in the case of oncologists using the AI platform.

For this model to work, it will be important to decide what information is relevant for board decision making. Relatedly, care must be taken not to code biased perspectives into the AI. It is also crucial to ensure that the information within the AI is adequately secured. Finally, for directors to be able to use AI, developers must ensure that the AI’s decision-making process is transparent. As an example, the AI used in oncology provides the medical studies on which its recommendation is based.

On a futuristic note, both corporate law and AI governance frameworks must also prepare for an eventuality in which companies appoint AI systems as board directors.

The complete article is available here.
