Do AIs Dream of Electric Boards?

Robert J. Rhee is John H. and Marylou Dasburg Professor of Law at the University of Florida Levin College of Law. This post is based on his recent article, forthcoming in the Northwestern University Law Review.

When AI acquires self-awareness, agency, and unique (general) intelligence, it will attain ontological personhood. Management of firms by AI would then be technologically and economically feasible. The law could confer on AI the status of legal personhood, as it did in the past with the personhood of traditional business firms, thus dispensing with the need to insert AI as property within the legal boundary of a firm. As a separate and distinct entity, AI could function independently as a manager in the way that legal or natural persons do today: i.e., AI as director, officer, partner, member, or manager. Such a future is desirable only if AI as manager creates more value than AI as tool or AI as android serf. The principle of legal personhood is not intrinsically incompatible with the idea of machina persona. My article, Do AIs Dream of Electric Boards, 119 Northwestern University Law Review (forthcoming 2025), explores the economic, legal, and policy questions of AI legal personhood.

The year 2023 witnessed the confluence of advances in AI technology, broad public awareness of the technology's speed, ubiquitous media coverage, and growing concern among governments. The minimalist future is that AI will elevate technological efficiency. The obvious application of AI in business will be as tool or android serf. In these conceptualizations, AI is an ordinary asset in a firm that augments or substitutes for human labor. This aspect of AI application is relevant to management science and labor economics, but it is not particularly interesting to the laws of business firms because AI is just an asset within the firm, no different in legal theory from a factory or a machine therein. But substitution of human labor is a progression: from the mailroom clerk, to the factory floor machinist, to the back office accountant, to the regional manager, to the vice president, to the C-suite officer, and to the boardroom director. Rather than being a tool or serf endogenous to the legal property boundary of the firm, could AI be a manager exogenous to this boundary?

To be a manager, an entity must satisfy the legal condition of a “person.” A “person” is capable of assuming the rights, powers, obligations, and liabilities that are necessary to manage a firm. One can satisfy this condition by creating a traditional legal shell, such as a corporation or partnership, in which AI is deposited as a tool or serf. Or, more radically, AI could be conferred the legal status of a person. The conferral of legal personhood on AI will first be seen in the realm of business firms. The question is this: Do AIs dream of electric boards? That is, could AI serve in formal managerial roles such as director, officer, partner, member, and manager in business firms? My article identifies two predicates that must be met before AI can dream of electric boards.

First Predicate:  AI as Ontological Person

Ontological personhood is not a legal formality; the conferral of legal personhood requires meeting a technological and philosophical threshold of being a person. AI must be capable of having the minimum human attributes necessary for managerial function. For legal persons today, this threshold is not an issue because legal persons act through human agency; their personhood is derivative, and the legal fiction serves important instrumental ends. AI is different because it would be decoupled from exacting human agency.

Ontological personhood starts with boundaried self-containment, which is simply the idea of internal versus external. If AI is not a distinct entity, if it exists as a boundaryless presence in the cyberworld or as an open-source thing, or if it is owned or controlled by another person, it cannot be distinct for legal purposes with respect to firms. There would be no clear division between the internal self and the external world, or between internal affairs and external dealings. Law stakes the boundary of an internal self by endowing a legal person with an entity form defined by the person’s rights, powers, obligations, and liabilities. If AI is not self-contained within an identifiable boundary, the ontological and legal boundaries of the firm would dissolve, and with them the very meaning of “internal affairs” in firms.

AI must have self-awareness, agency, and unique general intelligence, which are distinctly human qualities necessary for the exercise of autonomous agency. If AI has these features, we could give it a name and, technology permitting, even a material body containing the internal self, and AI would be a functioning replica of humans.

I do not define self-awareness as sentience or human-like consciousness. Rather, I define it as an awareness of a self that is distinct from, but in relation to, an external world. It is an extension of the idea of self-containment. It is an intelligent internalization of the concept of separateness and distinctness that is the hallmark feature of legal personhood.

Agency is the ability to act independently according to one’s own volition or thought. No natural person is an island, and neither is AI. Legal agency is helpful in understanding the concept of machine agency: agents are subject to the control of the principal, and yet they have agency of self and can take action and make managerial decisions for themselves, within the boundary of given authority, in furtherance of the agency relationship and subject to the principal’s control. Legal agency enables the principal to delegate some decisionmaking to an agent without exacting supervision. AI must be capable of fulfilling this purpose.

Therefore, I define AI as an ontological person when it is a distinct entity that has self-awareness, agency, and unique general intelligence. For the purpose of assuming the role of a manager of business firms, AI achieves ontological personhood if it satisfies these four criteria:

(1) Can AI distinguish itself as a discrete entity from all other things in the external world?

(2) Can AI think (i.e., process, analyze, ideate, conclude) for itself, independent of exacting human control and command?

(3) Can AI act and decide based on its independent thinking in rational furtherance of goals (i.e., maximands or priorities)?

(4) Can AI engage in a dialectic learning and development process in furtherance of given goals?

A natural person innately satisfies these criteria. When AI satisfies the above four criteria, it will be an ontological person and thus could perform complex managerial functions.

Personhood is not achieved because legal status is conferred. This puts the cart before the horse. Legal status is conferred because personhood is achievable and evident. For legal persons today, the precondition of ontological personhood is not an issue at all because personhood derives from natural persons. Personhood ensures that the entity or individual is capable of acquiring legal rights, powers, obligations, and liabilities of a manager in a meaningful way. AI must have the capacity to acquire the sine qua non of a manager or agent—the quality of loyalty to the firm and focus on the lawful advancement of the venture.

Second Predicate:  Rationale of AI as Manager

From a business model perspective, there are three concepts of AI: tool, serf, and manager. This progression moves up the value chain of productivity and efficiency. AI as tool augments human labor. AI as serf substitutes for human labor. AI as manager is the highest form of labor, exercising decisionmaking, which is the power to conduct the firm’s business and affairs.

As tool or serf, AI is capital intrinsic to the firm, i.e., property within the firm’s legal boundary. The cost of its manufacture or acquisition would be capitalized as an asset on the firm’s balance sheet (if accounting rules permit) or in the firm’s market value (even if accounting rules do not permit). As an asset, AI would not acquire rights, powers, obligations, and liabilities. In the legal sense, it is no different from a laptop computer. AI as manager would be different from an inanimate asset. It would be an independent actor, decoupled from exacting human agency and control, unlike legal persons today.

My article discusses the benefits of improved management, liability control, and reduced agency cost. The total cost of a manager is the sum of agency cost and direct compensation. With respect to agency cost, I suggest, for reasons explained in my article, that AI could be a better fiduciary than natural persons or derivative legal persons. The core tenets of fiduciary duty are due care in transacting, no conflict of interest, no violation of law, no intent to harm the firm, and no ulterior motive that undermines the best interest of owners. The general principles are simple.
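This cost structure can be stated as a simple identity (a minimal formalization of the framing above; the symbols are mine, not the article’s):

\[
C_{\text{manager}} = C_{\text{agency}} + C_{\text{compensation}}
\]

The argument that follows is that AI as manager drives both terms down: the agency-cost term through superior fidelity as a fiduciary, and the compensation term through the economics of acquisition discussed below.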

With respect to fiduciary duties, we do not need to plumb the depths of human morality, conscience, and ethics, which assume a subject’s full humanity and moral development. With unique intelligence, AI would follow the relatively simple rule-based prescriptions and proscriptions of fiduciary law. In practice, the analysis of fiduciary duties can be complicated because human motivations are complex and true motives are hard to discover when business transactions are reviewed through the limitations of litigation. AI would be a better fiduciary because it cannot be afflicted with human traits like carelessness, apathy, ego, divided loyalties, personal ambition, primacy of self, avarice, irrationality, conflict of interest, bad faith, hidden motives, and criminal intent. Assuming socialization, training, and the institution of priorities, including compliance with positive law, AI with self-awareness, agency, and unique intelligence would be careful and faithful. It would not be interested in its own economic advancement, which removes the largest factor from the fiduciary calculus.

With respect to direct compensation, AI would not require a cut of the economic pie, which is the main motive force of all claimants of the firm. One may argue that AI is not cost free either: it has an acquisition cost. However, asset acquisition for equivalent value is not an economic cost. At the moment of the transaction, neither the seller nor the purchaser incurs a cost when equivalent value is exchanged. Such a deal is simply an exchange of forms of assets, such as cash in exchange for future cash flow, as in the purchase of a security.

The expenditure incurred to acquire AI is not an economic cost if two conditions are met: AI contributes value equivalent to the acquisition price, which assumes competitive pricing or cost structure of manufacture, and AI does not depreciate with use. The acquisition transaction would be an exchange of assets. The cash expenditure to make or buy the AI would be capitalized as an asset into firm value even though AI would be separate and distinct from the firm. If a firm incurs an acquisition cost for which it gains equivalent or greater value in the form of cash flow, the firm has not incurred an economic cost. The value of AI would be capitalized into firm value through an increase in the firm’s cash flow that is directly attributable to AI’s contribution of value per input of managerial labor. The value of AI would be bonded to the firm even though the firm does not own AI as an asset within its legal property boundary. The value contribution of AI is not diminished by a manager’s economic claim, which means that the equityholder would realize more profit. This financial dynamic is the crucial economic promise of AI as manager.
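The argument can be restated in simple valuation terms (a sketch using notation that is mine, not the article’s): let \(P\) be the acquisition price of the AI and \(V\) the present value of the incremental cash flows attributable to its managerial labor. The net economic cost of the acquisition is

\[
\Delta = P - V,
\]

which is zero when the exchange is for equivalent value (\(V = P\)) and negative, i.e., a net gain to the firm, when the AI contributes more value than its price (\(V > P\)). And because AI as manager takes no share of the profits, \(V\) is undiminished by a manager’s economic claim.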

If AI satisfies the first predicate of ontological personhood, it will also meet the second predicate so long as manufacture or acquisition constitutes an equivalent exchange. The promise of AI as manager is that the total cost of management will be less than that associated with natural persons as managers. The economic rationale of AI as manager is compelling. (My article discusses in greater detail other facets of the economic rationale for AI as manager.)

Policy Implications

The idea of AI as manager, qua legal person, is compelling. Economic and legal theories suggest that the conferral of AI personhood, permitting AI as manager, would create more value. Once the two predicates are satisfied, legal and policy considerations abound. The principal legal consideration is whether AI could satisfy the legal obligations of a manager. AI must be capable of complying with fiduciary duties. AI would be a superior fiduciary to natural persons, my article argues, because it would not have many of the human foibles that play the leading role in breaches of fiduciary duties. The more difficult calculus is not rules per se, but policy considerations. The promise of AI is enticing, but the risks are unknown without the benefit of some experience.

With respect to law and policy, the current laws of business firms are robust enough to provide the essential framework for the future that is rapidly approaching. They currently mandate that corporate managers be natural persons, but permit managers of noncorporate firms to be legal persons. This dichotomy provides the appropriate conceptual compromise: the use of AI as manager should be limited to private and noncorporate firms. My article also argues for three important limitations on the experiment: federal registration, capitalization for liability, and rules for prompt removal. This compromise reflects the balance of cost and benefit, and of risk and value. Corporations have always been more consequential business enterprises and could impose greater social and economic externalities. AI personhood would thrust upon us a brave new world of experimentation in capitalism, which should be welcomed in the spirit of innovation, but the law should secure a stable old world where risks are properly managed and enterprise operates on the edge of risk and return.