Selecting Directors Using Machine Learning

Michael S. Weisbach is Ralph W. Kurtz Chair in Finance at The Ohio State University Fisher College of Business, and Research Associate at the National Bureau of Economic Research. This post is based on a recent paper by Professor Weisbach; Isil Erel, Distinguished Professor of Finance at The Ohio State University Fisher College of Business; Léa Stern, Assistant Professor of Finance and Business Economics at the University of Washington Foster School of Business; and Chenhao Tan, Assistant Professor at the University of Colorado Boulder.

In this paper, we present a machine learning approach to selecting the directors of publicly traded companies. In developing the machine learning algorithms, we contribute to our understanding of governance, and of boards of directors specifically, in at least three ways. First, we evaluate whether it is possible to construct an algorithm that accurately forecasts whether a particular individual will be successful as a director at a particular firm. Second, we compare alternative approaches to forecasting director performance, in particular evaluating how traditional econometric approaches compare with newer machine learning techniques. Third, we use the algorithms’ selections as benchmarks to understand the process through which directors are actually chosen and the types of individuals who are likely to be chosen as directors even when doing so runs counter to shareholders’ interests.

There are a number of methodological issues we must address before we can construct such an algorithm. First, to predict which potential directors will be of highest quality, we must be able to measure a director’s performance. Measuring directors’ performance is complicated by the fact that most of what directors do occurs in the privacy of the boardroom, where it cannot be observed by outsiders. In addition, most of directors’ work occurs within the structure of the board as a whole, so we cannot isolate their individual contributions. Our approach is based on the fraction of votes a director receives in shareholder elections. This vote, which prior literature has shown to be highly informative about director quality, reflects the support the director personally has from shareholders and should incorporate all publicly available information about the director’s performance.

In addition, while we can observe the fraction of support an existing director receives from shareholders, we cannot observe the votes a potential director who was not chosen would have received, nor whether a potential director would have been willing to accept the directorship. We address this issue by constructing a pool of potential directors from individuals who, around the time of the appointment, accepted a directorship at a smaller nearby company and so presumably would have been attracted to a directorship at a larger, neighboring company. To evaluate the performance of our algorithm, we use the fraction of votes such a potential director actually received at the company where he or she served as our measure of that director’s performance.

We find that our machine learning algorithms fit the data well. Realized performance following the appointment of a director is a monotonic function of predicted performance. Using publicly available data on firm, board, and director characteristics, our XGBoost algorithm can accurately predict the success of individual directors and, in particular, can identify which directors are likely to be unpopular with shareholders. In comparison, standard econometric models fit the data poorly out of sample: the observed performance of individual directors is unrelated to the performance predicted by an OLS model. The fact that the machine learning models dramatically outperform econometric approaches is consistent with the arguments of Athey and Imbens (2017) and Mullainathan and Spiess (2017) that machine learning is a promising approach for prediction problems in the social sciences.
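The intuition behind this contrast can be illustrated with a toy sketch. Everything below is hypothetical: the two features (board experience and network size), the data-generating process, and the hand-rolled gradient-boosting-on-stumps routine (a minimal stand-in for XGBoost, not the paper’s actual model or data) are ours. The point is only that a boosted tree model can capture a nonlinear relation out of sample that a linear benchmark misses.

```python
import random

# Hypothetical toy data: a director's vote share is a nonlinear function of
# two made-up features. This is illustrative only, not the paper's data.
random.seed(0)

def make_row():
    experience = random.uniform(0, 10)
    network = random.uniform(0, 10)
    # Nonlinear ground truth: board experience helps only up to a point.
    votes = 0.9 - 0.05 * abs(experience - 5) + 0.01 * network
    return (experience, network), votes

train = [make_row() for _ in range(400)]
test = [make_row() for _ in range(100)]

def fit_stump(rows, residuals):
    """Depth-1 regression tree: best single feature/threshold split."""
    best = None
    for f in range(2):
        for t in (i * 0.5 for i in range(1, 20)):
            left = [r for (x, _), r in zip(rows, residuals) if x[f] <= t]
            right = [r for (x, _), r in zip(rows, residuals) if x[f] > t]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            err = (sum((r - lm) ** 2 for r in left)
                   + sum((r - rm) ** 2 for r in right))
            if best is None or err < best[0]:
                best = (err, f, t, lm, rm)
    _, f, t, lm, rm = best
    return lambda x, f=f, t=t, lm=lm, rm=rm: lm if x[f] <= t else rm

def boost(rows, n_rounds=50, lr=0.3):
    """Gradient boosting on stumps: repeatedly fit stumps to residuals."""
    base = sum(y for _, y in rows) / len(rows)
    preds = [base] * len(rows)
    stumps = []
    for _ in range(n_rounds):
        residuals = [y - p for (_, y), p in zip(rows, preds)]
        stump = fit_stump(rows, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for (x, _), p in zip(rows, preds)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

def fit_ols(rows):
    """Closed-form univariate OLS of vote share on experience only --
    a deliberately crude linear benchmark."""
    xs = [x[0] for x, _ in rows]
    ys = [y for _, y in rows]
    n = len(rows)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
    return lambda x: my + slope * (x[0] - mx)

def mse(predict, rows):
    return sum((predict(x) - y) ** 2 for x, y in rows) / len(rows)

mse_boost = mse(boost(train), test)
mse_ols = mse(fit_ols(train), test)
print(f"boosted MSE: {mse_boost:.5f}  OLS MSE: {mse_ols:.5f}")
```

Because the true relation is V-shaped in experience, the linear benchmark finds essentially no slope and predicts near the mean, while the boosted stumps recover the shape; on this toy data the boosted model’s out-of-sample error comes in well below the linear one’s, mirroring in miniature the qualitative pattern described above.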

The differences between the directors suggested by the algorithm and those actually selected by firms allow us to assess the features that are overrated in the director nomination process. Comparing predictably unpopular directors to promising candidates suggested by the algorithm, it appears that firms choose directors who are much more likely to be male, have a large network, have a lot of board experience, currently serve on more boards, and have a finance background.

In a sense, the algorithm is saying exactly what institutional shareholders have been saying for a long time: directors who are not old friends of management and who come from different backgrounds are more likely to monitor management. Less connected directors may also offer different, and potentially more useful, perspectives on policy. For this reason, TIAA-CREF (now TIAA) has maintained a corporate governance policy aimed in large part at diversifying boards of directors since the 1990s.

Our finding that it is predictable which directors will or will not be popular with shareholders has important implications for corporate governance. Observers since Smith (1776) and Berle and Means (1932) have been concerned that managers intentionally select boards that serve their own interests rather than those of shareholders. In addition, a psychology literature started by Meehl (1954) has found that, because of behavioral biases, even simple algorithms can outperform humans in making personnel decisions. Consequently, it is easy to imagine that a machine learning algorithm, which is far more sophisticated than the algorithms relied on by psychologists, would allow firms to improve their board selection process.

A natural question concerns the practical applicability of algorithms such as the one we develop. The algorithms we present should be treated as “first pass” approaches; presumably more sophisticated models would predict director performance even better than the ones presented in this paper. In addition, our algorithms rely on publicly available data; with more detailed private data on director backgrounds, performance, and so on, one could improve the algorithm’s fit as well. If algorithms such as these come to be used in practice, as we suspect they will, practitioners will undoubtedly have access to much better data than we have and should be able to predict director performance more accurately than we do in this paper. An important benefit of algorithms is that they are not prone to the agency conflicts that arise when boards and CEOs together select new directors. Institutional investors are likely to find this attribute particularly appealing and to use their influence to encourage boards to rely on algorithms such as the one presented here for future director selections.

The complete paper is available for download here.
