A New Governance Paradigm is Necessary for AI-Powered Boards

Alissa Kole is Managing Director at GOVERN. This post is based on her GOVERN memorandum.

Surprisingly, last month’s announcement that Abu Dhabi’s International Holding Company (IHC) had added an Artificial Intelligence (AI) member to its board does not appear to have attracted much global attention. Co-developed by the Emirati AI company G42 and Microsoft, Aiden Insight, the first AI board member in the Middle East, is positioned to be a game changer for corporate boards and their regulators worldwide.

In fact, this is not the first time an AI board member has been appointed to a corporate board. Exactly a decade ago, Hong Kong’s Deep Knowledge Ventures appointed Vital as the sixth member of its board of directors, marking the first attempt to bring AI into the boardroom not as an enabling mechanism but as a decision-maker. That experiment appears to have been ahead of its time and had, until last month, not been replicated. Over the past decade, however, the tables have turned.

In the past three years in particular, interest in AI’s role in supporting the development and implementation of corporate strategy has grown to the point that 36% of S&P 500 companies mentioned AI in their earnings calls last quarter. Yet only 13% of these companies have AI expertise on the board, and even in these companies – primarily IT firms – that expertise is held by a single human director; the board itself has no AI representative.

IHC, the largest listed company in the UAE and one poised to grow through an extensive acquisition spree in international markets, represents a break with this pattern. It is not only the first company in an emerging market to introduce an AI board member, but also the first sovereign-owned one to do so. As such, Aiden Insight marks the beginning of a tidal wave that will generate a novel nexus between governance and strategy through AI.

Appointing AI board members has the potential to propel boards forward, especially in firms operating in complex regulatory environments or executing diversified investment strategies. Given the potential repercussions for shareholders and stakeholders, the regulatory implications of AI board members need to be considered. So far, international standard setters such as the OECD and the EU have looked at AI through the prisms of consumer protection, trustworthiness and cross-border collaboration.

This thinking is suited to situations where AI merely serves as a tool supporting human directors or board committees charged with AI oversight, or to companies that have established an ethics board or a similar body to address AI-specific risks. It is far less suited, however, to situations where AI is itself driving the decision-making process.

So far, securities, banking and other corporate regulators have made little effort to consider the nexus of AI and governance in the boardroom, notably in this latter case where AI is an active participant in decision-making. This is largely because the few existing AI board members operate in a grey zone: they are neither full voting board members nor merely strategy-enabling tools.

Both Aiden Insight and Vital were introduced as non-voting members not legally bound by fiduciary duty. In the case of Vital, a formal directorship was not permissible under Hong Kong corporate law. At the same time, owing to its ability to process large amounts of data, Vital was given a critical role. At the time, Deep Knowledge Ventures’ Managing Director was quoted as saying that “as a board, we agreed that we would not make positive investment decisions without corroboration by Vital.”

In governance terms, this veto-like power is akin to that of lead independent directors in the UK, who have the prerogative to veto specific decisions such as related party transactions not made on arm’s length terms. While such a veto right might seem a technical matter, the London Stock Exchange’s proposal to remove this requirement in order to facilitate Saudi Aramco’s listing resulted in a significant investor uproar.

In considering the function and responsibilities of AI board members, their role needs to be envisioned not only from the perspective of AI ethical values, but also through the prism of corporate law. Presently, national legislation in countries such as the US and Australia does not provide for the possibility of board responsibilities being discharged by anyone but a “natural person”.

In other countries, legal representatives who are not natural persons are effectively allowed on boards, which – at least in principle – opens the possibility of elaborating the legal responsibilities of AI board members. At the same time, such responsibility raises the question of their liability, which would presumably rest with their developers, effectively upending the entire concept of board liability insurance.

A multitude of other questions remain unanswered in ongoing governance debates, where AI still does not feature prominently despite a growing expectation that it will play an increasing role in corporate boardrooms. A nearly decade-old World Economic Forum survey revealed that, even then, close to half of respondents believed AI directors would be appointed to boards by 2025. This has happened, though not as frequently as anticipated.

As a result, global and national governance standard-setters have until now not been forced to address the implications of AI board members. Neither the OECD governance principles nor national standards address the role of AI beyond the expectation that boards consider technology risks. If market regulators are not to be caught by surprise – as their peers have been in the face of cryptocurrency or car-sharing innovations – the potential role of AI board members needs to be considered now.

This consideration should focus not only on the legal responsibility of AI board members but also on the aspects of their work that can help companies create value. AI board members may, for instance, be required to participate in board risk or technology committees. The latter are still not required by most regulators and are consequently rarely present even in IT companies, which is a governance risk in itself.

The role of AI board members such as Aiden or Vital would need to be clearly defined, not only in terms of fiduciary duty but also from a broader philosophical perspective that would allow for the creation of a governance framework in which they are embedded. The relevant questions, ranging from AI directors’ potential contribution to board diversity to their role in board committees, are only now starting to surface. The latest announcement from Abu Dhabi highlights that there is no time to waste.

At the same time, emerging evidence around corporate disclosure of the use and risks of AI suggests that companies will likely be reluctant to share this information, which many may consider a source of competitive advantage. The battle between Disney’s management and the shareholders who requested disclosure of the company’s use of AI, and board oversight thereof, in its 2024 proxy materials foreshadows a broader corporate drama coming to theatres soon.
