Tom Baker is William Maul Measey Professor at the University of Pennsylvania Law School, and Benedict C.G. Dellaert is Professor of Business Economics at Erasmus University Rotterdam. This post is based on their recent paper.
The growth of investment robo-advisors, web-based insurance exchanges, online credit comparison sites, and automated personal financial management services creates significant opportunities and risks for consumers that regulators across the financial services spectrum have yet even to assess, let alone address. Because of the scale that automation makes possible, these services have the potential to provide quality financial advice to more people, at lower cost than human advisors, and with greater transparency. But the fact that this potential exists hardly guarantees that it will be realized.
People design, model, program, implement, and market these automated advisors, and many automated advisors operate behind the scenes, assisting people who interact with clients and customers. Even setting fraud and other unsavory activities to the side, the riches to be won by those who succeed in “disrupting” the financial services industry provide more than enough incentive to rush technology to market. In addition, there are concerns that automation may entrench historical unfairness and promote a financial services monoculture marked by new kinds of unfairness and a greater vulnerability to catastrophic failure than would result from the less coordinated actions of humans working without automated advice.
Automated advice poses significant challenges for regulators seeking to preserve the integrity of financial markets. Along with the well-known privacy and security challenges that accompany the digitization of personal financial data, there are new regulatory challenges that are more specific to automated advice. These include developing the capacities to assess: the algorithms and data incorporated in the automated advisors; the choice architecture through which the advice is presented and acted upon; the underlying information technology infrastructure; and the downside risk from the scale that automation makes possible. Developing these capacities will require financial service authorities—the paradigmatic expert administrative agencies—to invest in new kinds of expertise.
The benefits of developing these capacities almost certainly exceed the costs, because the same returns to scale that make an automated advisor so cost-effective lead to similar returns to scale in assessing the quality of automated advisors. An expert administrative agency is well situated to realize those returns to scale. Moreover, the potential solvency and systemic risks posed by hundreds of thousands, or even millions, of consumers choosing their financial products based on the same or similar models are sufficiently large and different in kind from those traditionally posed by consumer financial product intermediaries that some regulatory attention is justified on those grounds alone.
At the same time, however, it is important not to over-react and not to set a higher bar for automated advisors than for human advisors. For now, the standard against which automated advisors should be compared is that of humans, who we know are far from perfect. A large body of research in diverse fields demonstrates that even simple algorithms regularly outperform humans in the kinds of tasks that robo advisors perform. There is ample reason to believe that the same could be true for automated financial advisors. Although it may be appropriate to hold automated advisors to a super-human standard someday, their market share is too small, and regulators have too much to learn, to do so today.
In this paper we identify the aspects of current financial services regulation that apply most directly to robo advice: the regulation of intermediaries such as securities brokers, insurance agents, and mortgage brokers. We set out the traditional goals of that regulation: promoting competence (to provide appropriate advice and associated services), honesty (of that advice and associated services), and suitability (of the financial products sold to, or recommended for, the specific consumer). We then explain why any well-designed robo advisor should meet those goals at least as well as a typical human advisor, most likely better, with the emphasis appropriately placed on the caveat, “well designed.” At the same time, however, robo advice raises new challenges for regulators, most immediately to develop the expertise to assess whether robo advisors in fact are well designed.
In beginning with these traditional goals, we have two objectives: first, to review why robo advisors are at least potentially superior to unassisted humans on these dimensions for most consumers; and, second, to create a conceptual link between existing regulatory goals and the new regulatory concerns. That conceptual link supports regulators’ efforts to proceed under their existing legal authority to develop the capacities they need to address these new concerns, recognizing that they will need to operationalize this authority in new ways.
We then identify the core technical components of robo advisors that regulators need to understand and develop procedures to assess: the algorithms and processes that generate personalized rankings of financial products for consumers; the consumer and financial product data that the algorithms ingest; the choice architecture through which that advice is delivered; and the associated information technology infrastructure. Our objective is to sketch the early stages of a regulatory trajectory that regulators can follow as robo advisors develop in sophistication and scale.
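To make concrete the kind of component regulators would need to assess, the toy sketch below illustrates, in Python, one way a personalized ranking algorithm might combine consumer data and product data to score and order financial products. The profile fields, product attributes, and scoring weights are all hypothetical illustrations, not drawn from the paper or from any actual robo advisor.

```python
from dataclasses import dataclass

@dataclass
class ConsumerProfile:
    risk_tolerance: float          # 0.0 (very conservative) to 1.0 (very aggressive); hypothetical scale
    investment_horizon_years: int  # how long the consumer expects to stay invested
    fee_sensitivity: float         # 0.0 (indifferent to fees) to 1.0 (highly fee-averse)

@dataclass
class Product:
    name: str
    expected_return: float  # annualized, e.g. 0.06 for 6%
    volatility: float       # annualized standard deviation of returns
    annual_fee: float       # expense ratio, e.g. 0.0025 for 0.25%

def suitability_score(consumer: ConsumerProfile, product: Product) -> float:
    """Toy scoring rule: reward expected return, penalize risk and fees
    in proportion to the consumer's stated preferences. The weights here
    are illustrative, not calibrated to any real advisory model."""
    risk_penalty = (1.0 - consumer.risk_tolerance) * product.volatility
    fee_penalty = consumer.fee_sensitivity * product.annual_fee * 10  # arbitrary scaling of fees
    horizon_bonus = min(consumer.investment_horizon_years / 30, 1.0) * product.expected_return
    return product.expected_return + horizon_bonus - risk_penalty - fee_penalty

def rank_products(consumer: ConsumerProfile, products: list[Product]) -> list[Product]:
    """Return products ordered from most to least suitable for this consumer."""
    return sorted(products, key=lambda p: suitability_score(consumer, p), reverse=True)

if __name__ == "__main__":
    consumer = ConsumerProfile(risk_tolerance=0.3, investment_horizon_years=20, fee_sensitivity=0.8)
    products = [
        Product("Aggressive equity fund", expected_return=0.08, volatility=0.18, annual_fee=0.0075),
        Product("Balanced index fund", expected_return=0.06, volatility=0.10, annual_fee=0.0015),
        Product("Short-term bond fund", expected_return=0.03, volatility=0.04, annual_fee=0.0010),
    ]
    for product in rank_products(consumer, products):
        print(f"{product.name}: score={suitability_score(consumer, product):.4f}")
```

Even in so simple a sketch, the regulatory questions identified above are visible: the quality of the advice turns on the model's assumptions, the consumer and product data it ingests, and the choice architecture through which the resulting ranking is presented to the consumer.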
Our analysis is conceptual and not specific to any particular governmental agency, private regulatory organization, or ex post liability regime; nor is it specific to any sector of the financial services market. At a conceptual level, our analysis applies to most, if not all, consumer financial products (if not now, then in the future) and to all the regulators of these products in all financial services sectors. Some agencies have taken preliminary steps to learn about robo advice as part of their larger efforts to engage with “FinTech,” but to date they have done so largely within their own regulatory silos and within their own countries. There is no formal inter-agency coordination in the U.S., only modest informal efforts; and international coordination is even less well developed. While there is no evidence that this lack of oversight and coordination has yet caused harm, it almost certainly will in the future, as the market simply cannot be counted upon to be self-correcting when robo advisors grow in scale to the point that they reshape financial product markets.
In concluding, we explore steps that authorities might take beyond demanding a minimum level of competence and honesty. We present some provisional ideas about how financial services regulation could facilitate quality-based competition and diversity among robo advisors, so that the performance of intermediaries who use robo advisors increasingly exceeds that of their unassisted competitors. In addition, as regulators gain confidence in their capacity to assess, monitor, and hold robo advisors accountable, and as robo advisors become a major force in the market, there may be less need for direct regulation of the forms and features of consumer financial products, provided that robo advisors have access to the data needed to adequately incorporate innovations in those forms and features into their personalized evaluation and ranking systems. Of course, these regulatory benefits cannot be counted upon to appear automatically. As any robo advisor entrepreneur can attest, innovation takes work.
The complete paper is available for download here.