Investment Advisers’ Fiduciary Duties: The Use of Artificial Intelligence

Amy Caiazza and Rob Rosenblum are partners and Danielle Sartain is an associate at Wilson Sonsini Goodrich & Rosati. This post is based on their WSGR memorandum.

Artificial intelligence (AI) is an increasingly important technology in the investment management industry. It has been put to a variety of uses, including as the newest strategy for attempting to “beat the market” by outperforming passive index funds benchmarked against the S&P 500, despite long-standing evidence that index funds consistently win that contest.

Investment advisers who use AI should consider the unique issues the technology raises in light of an adviser’s fiduciary duty to its clients. In this client alert, we provide an overview of how investment advisers are using AI, the fiduciary duties applicable to investment advisers, and particular issues advisers should consider in designing AI-based programs to ensure they are acting in their clients’ best interests.

How Artificial Intelligence Is Being Adopted by Investment Advisers

AI is currently used by investment advisers in a variety of innovative ways:

  • Some investment funds use forms of AI (such as machine learning, including deep learning) to fine-tune the algorithms used to make trading decisions. While traditional computer-based algorithms follow set rules written and managed by human programmers, AI-based systems quickly analyze large amounts of data and derive new rules from connections among data points that humans cannot see. The system learns from that information and those connections, with the goal of making more accurate trading decisions over time. AI-based systems can also execute trades autonomously, without human confirmation.
  • Some AI systems analyze and learn from “alternative data.” These data can include a wide range of information, such as data from credit card transactions, social media posts, satellite images, and questions posed to Siri or Alexa, all of which are typically scraped and cleaned by vendors and sold to investment firms. AI can also scan publicly available information such as press releases and financial reports for keywords that could predict whether a stock might rise or fall.
  • Some “robo-advisers” use AI, for example to track clients’ account activity and automatically apply the tracked behavior to the advice they deliver. This can lessen the need to solicit information directly from the client through investor questionnaires.
  • Advisers can also use AI to monitor client accounts; for example, such systems can notify advisers when a client’s allocations fall outside certain parameters.
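To make the account-monitoring idea in the last bullet concrete, a minimal sketch is shown below. All names, targets, and tolerance levels are hypothetical illustrations, not any adviser’s actual system: it simply flags accounts whose allocations have drifted outside an agreed tolerance band around target weights.

```python
# Illustrative sketch: flag client accounts whose asset allocations
# drift outside agreed tolerance bands. Names and numbers are
# hypothetical, for illustration only.

def flag_drifted_accounts(accounts, targets, tolerance=0.05):
    """Return {account_id: {asset: (actual, target)}} for out-of-band allocations."""
    alerts = {}
    for account_id, allocations in accounts.items():
        drifted = {
            asset: (actual, targets[asset])
            for asset, actual in allocations.items()
            if abs(actual - targets[asset]) > tolerance
        }
        if drifted:
            alerts[account_id] = drifted
    return alerts

# Example: a 60/40 target portfolio; client-001's equities have drifted to 70%.
targets = {"equities": 0.60, "bonds": 0.40}
accounts = {
    "client-001": {"equities": 0.70, "bonds": 0.30},
    "client-002": {"equities": 0.62, "bonds": 0.38},
}
print(flag_drifted_accounts(accounts, targets))
```

In practice, the tolerance and target weights would come from each client’s investment policy, and the alert would feed a human review queue rather than trigger automatic trades.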

Issues Raised by an Investment Adviser’s Fiduciary Duties

Under federal law, an investment adviser is a fiduciary to its clients. An adviser’s fiduciary duty comprises a duty of care and a duty of loyalty, which, although not specifically defined in the Investment Advisers Act of 1940 (Advisers Act), have been addressed and developed through U.S. Securities and Exchange Commission (SEC) interpretive releases and guidance, as well as case law. As discussed below, these duties have implications for an adviser’s use of AI. The specific obligations flowing from an adviser’s fiduciary duty will depend on the functions the adviser has agreed to assume for the client. While the SEC has not issued guidance specific to advisers’ use of AI, its existing guidance raises issues unique to advisers that deploy the technology.

Duty of Loyalty

The duty of loyalty requires investment advisers not to place their own interest ahead of their clients’ interests. An adviser must make full and fair disclosure to its client of all material facts relating to the advisory relationship and employ reasonable care to avoid misleading clients. Information provided to clients must be sufficiently specific so that a client is able to understand the investment adviser’s business practices and conflicts of interest.

An adviser’s duty of loyalty raises, among others, the following issues with respect to AI-based investment management programs:

What facts does an adviser need to disclose about its use of AI?

Advisers should consider disclosing information such as the following:

  • Information about the adviser’s specific uses of AI to manage client accounts—e.g., that trades are identified and executed by the AI system; that individual client accounts are rebalanced by the AI system; and other facts.
  • Descriptions of the particular risks inherent in the use of the AI-based system to manage client accounts—e.g., that particular decisions may differ from those that would be made using traditional investment management methodologies; that the system might not operate as planned; and other risks.
  • A description of any circumstances that might cause the adviser to override the AI-based system.
  • An explanation of the degree of human involvement in the oversight and management of individual client accounts.
  • For advisers that use both AI and human financial advisers to make decisions, a description of the conflicts that can arise when both are making trade decisions on behalf of clients simultaneously, such as the AI system selling a stock at the same time a financial adviser is buying it.

How should advisers think about the tension between disclosure obligations and confidentiality regarding proprietary technologies?

It is important for advisers to disclose enough information for investors to make an informed decision about engaging, and then managing the relationship with, the investment adviser. Advisers should be careful not to mislead clients, and information provided to clients should be sufficiently specific that a client is able to understand the investment adviser’s business practices. However, highly technical information about the process behind the AI’s decisions might not be beneficial to a client’s understanding of the adviser’s platform.

Does an adviser need to disclose the historical success rate of returns from using artificial intelligence?

Historically, funds that employ AI have not outperformed the S&P 500. Investment advisers might therefore be expected to provide disclosures indicating that AI’s ability to predict securities prices has not been conclusively proven and that an AI-based strategy may not “beat the market.”

Duty of Care

The duty of care includes, among other things, the duty to provide advice appropriate for the client and the duty to monitor a client’s investments, and the ongoing suitability of those investments, over the course of the relationship. An adviser must develop a reasonable understanding of the client’s objectives and have a reasonable belief that the advice it provides is in the best interest of the client, based on the client’s portfolio and objectives.

An adviser’s duty of care raises, among others, the following issues with respect to AI-based investment management programs:

Can an adviser replace traditional suitability assessments with alternative data or other AI-based tools?

If an AI-based system makes investment choices on behalf of clients using deep or machine learning that develops on its own, by tracking client behavior, or by using alternative data, the adviser should pay particularly close attention to how the recommendations generated by those data might differ from, or conflict with, a client’s explicit preferences and investment objectives. An adviser using AI-based tools may reach different assessments of what is best or appropriate for the client than it would using more traditional tools, such as suitability questionnaires that ask a client about her risk profile, investment objectives, and other characteristics. As a result, an adviser using AI-based systems to generate investment recommendations may be at cross-purposes with the client, raising issues under the adviser’s duty of care.

How frequently should an investment adviser evaluate its AI program?

Because AI programs create their own rules based on the data they analyze, and autonomously make trading decisions, advisers should develop internal procedures for ensuring their programs are operating correctly. For example, advisers should adequately test their AI before it is integrated into the investment platform and periodically thereafter. In addition, advisers should develop procedures for adjusting their AI programs if they do not produce favorable results. Advisers should also monitor for possible cybersecurity threats.

How should an adviser review investment decisions directed by AI to ensure the decisions still fit within a client’s investment goals?

Advisers using AI should adopt and implement procedures to periodically review the performance of their AI systems, ensuring that performance remains within expected parameters and that decisions are not being made to the detriment of clients’ investment goals. Ultimately, the adviser is responsible for all decisions made by its AI-based program and therefore cannot let the program simply run without active monitoring.
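One way a compliance team might operationalize this kind of periodic review is sketched below. The field names, constraint types, and limits are hypothetical assumptions for illustration; the idea is simply to compare AI-directed trades against a client’s stated constraints and surface exceptions for human review.

```python
# Illustrative sketch of a periodic review of AI-directed trades
# against a client's stated constraints. Field names and limits are
# hypothetical, for illustration only.

def review_account(trades, constraints):
    """Return a list of exception strings for trades that breach constraints."""
    exceptions = []
    for trade in trades:
        # Flag trades in securities the client has restricted.
        if trade["symbol"] in constraints.get("restricted_symbols", set()):
            exceptions.append(f"restricted symbol traded: {trade['symbol']}")
        # Flag positions exceeding the client's concentration limit.
        if trade["position_pct"] > constraints.get("max_position_pct", 1.0):
            exceptions.append(
                f"position limit exceeded: {trade['symbol']} at {trade['position_pct']:.0%}"
            )
    return exceptions

constraints = {"restricted_symbols": {"XYZ"}, "max_position_pct": 0.10}
trades = [
    {"symbol": "ABC", "position_pct": 0.08},  # within limits
    {"symbol": "XYZ", "position_pct": 0.05},  # restricted for this client
    {"symbol": "DEF", "position_pct": 0.15},  # oversized position
]
for exc in review_account(trades, constraints):
    print(exc)
```

A real review program would cover far more than two constraint types, but the structure is the same: encode each client’s objectives as checkable rules, run them on a schedule, and route exceptions to a human for resolution.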

The complete publication, including footnotes, is available here.
