Beyond Efficiency in Securities Regulation

The following post comes to us from Yesha Yadav of Vanderbilt Law School.

In my paper, Beyond Efficiency in Securities Regulation, recently made available on SSRN, I argue that the emergence of algorithmic trading calls into question the foundation underpinning today’s securities laws: the understanding that securities prices reflect all available information in the market. Securities regulation has long looked to the Efficient Capital Markets Hypothesis (ECMH) for theoretical validation of its most central tenets, such as mandatory disclosure, the Fraud-on-the-Market presumption in Rule 10b-5 litigation, and the architecture of today’s system of interconnected exchanges. It is easy to understand why. Laws that make markets more informative should also make them better at communicating with investors and at allocating capital across the economy. In this paper, I suggest that this connection between informational and allocative efficiencies can no longer be so readily assumed in the age of algorithmic trading. In other words, even as algorithmic trading pushes markets toward ever-greater levels of informational efficiency, processing vast swathes of data in milliseconds, understanding what this information means for the purposes of capital allocation becomes ever more uncertain. Recognizing that notions of informational efficiency are growing disconnected from the market’s ability to interpret what this information signifies for capital allocation, this paper proposes a thoroughgoing rethinking of the centrality of efficiency economics in regulatory design.

While algorithmic trading is not new to securities markets, having taken root as markets moved toward electronic trading in the 1980s and 1990s, its expansion into virtually every aspect of securities trading today is both striking and innovative. Algorithms—pre-programmed computerized instructions to place and execute orders in the market—account for around 70% of all equity trading volume in the U.S. The emergence of high-frequency trading (HFT) in recent years exemplifies the heavy reliance that traders place on algorithms for everyday trading. It is now commonplace for markets to transact at speeds measured in microseconds, far exceeding the bounds of human cognition, such that algorithms have become practically essential to trading. Importantly, at such speeds, algorithms must be sophisticated enough to receive, review and react to information in real time. That is, algorithms must be programmed to be intelligent, able to decide for themselves how best to trade and how to interact with other traders throughout the day. Modern algorithms perform these functions by deploying vast quantities of data, statistical methods and sophisticated computational analysis. At one level, then, algorithms help bring high informational efficiency to securities trading, reacting rapidly to news, drawing on enormous depth of analysis and ensuring that “discrepancies” in prices are noted and “corrected” in the blink of an eye, if not quicker.

Looking more deeply, however, this paper identifies new risks that raise questions about the ability of algorithmic markets to provide a window into what prices really mean for the purposes of fundamental-value efficiency and capital allocation. First, algorithmic markets foster a novel form of separation between the trader and her ability to fully control the operation of the algorithm. Traders develop complex, intelligent algorithms, but they cannot always predict, or indeed fully control, how such algorithms will behave. Algorithms showcase high intelligence in independently weighing incoming news and trading on such events. They must also be able to out-compete other similarly sophisticated algorithms in the market. While such intelligence makes algorithms immensely useful in the marketplace, it also leaves their behavior more unpredictable and thus difficult to prepare for ex ante. Programmers can make mistakes in designing algorithms, fail to fully anticipate how their creations will affect other traders, or deliberately develop algorithms that disrupt competitors, for example, by flooding the market with dummy orders so that others are precluded from trading. Indeed, traders have every incentive to make algorithms as sophisticated, complex and adaptable as possible. This intelligence ensures that their algorithms have the best chance of out-smarting the competition. More problematically, it also increases the costs to market actors, investors and regulators of separating the value-relevant information in prices from the background noise of the complex trading practices and programming intrinsic to algorithmic markets.

Second, the actors that we traditionally rely upon to decode complexity in markets—namely, informed, fundamental traders—have ever fewer incentives to enter algorithmic markets. Institutional traders face an especially intractable tension with algorithmic traders. High-speed algorithms provide benefits by offering a steady supply of liquidity to the market. But algorithmic actors are also especially adept at deciphering how fundamental traders are likely to trade and, owing to their superior speeds, at getting to these trades before anyone else. By being consistently front-run, informed traders can see their gains eroded over time—and the value of their time and investment in market research diminished. As a result, fundamental traders may enter the market only when their returns are likely to be large enough to justify losing some of their bounty to a high-speed trader. Or they may divert resources into more tactical trading that disrupts high-speed traders and diverts their attention, reducing the chances of being front-run. In either case, where informed traders become distracted or reluctant to trade, the market loses a key source of its strength as a communicator of economic value.

These insights pose a challenge to regulation’s long reliance on efficiency economics as the lynchpin of its basic design. The paper moves to compare regulation’s traditional belief in markets as interpreters of economic value with today’s algorithmic reality. Mandatory disclosure, materiality and notions of reliance on efficient markets are brought into sharper relief by their increasing incompatibility with the operation of algorithmic markets. Companies that make periodic disclosures face unexpected consequences where computerized trading focuses on certain disclosures but not others, or moves markets too quickly in one direction or another. The materiality of particular news items can be difficult to separate from the total mix of information in public markets when algorithms can react to all types of news. Similarly, for anti-fraud suits, evidencing efficiency becomes an easy task where high-frequency trading generates velocity and volume in securities trading, even where the underlying information may be thin. In concluding, this paper explores pathways for future reform to better equip markets to function as windows into allocative as well as informational efficiency.

The full paper can be found here.
