The Failure of Liability in Modern Markets

Yesha Yadav is an Associate Professor of Law at Vanderbilt Law School. This post is based on an article authored by Professor Yadav.

In April 2015, the Justice Department indicted Navinder Sarao, a 36-year-old trader operating out of his parents’ basement, for actions resulting in the Flash Crash of May 2010. [1] According to the complaint, Sarao’s use of fake or “spoof” orders was damaging enough to precipitate a near 1,000-point plunge in the Dow Jones Index. It is telling that, today, a single trader can stand accused of contributing to this extraordinary drop in the value of the stock market. The complaint throws into relief the central challenge facing securities trading: with markets approaching ever-fuller levels of automation and driven by complex algorithms, even small-time traders like Sarao can create costs far in excess of either the seriousness of their conduct or their capacity to pay for what they do. As I argue in The Failure of Liability in Modern Markets, to be published in the Virginia Law Review, the liability framework anchoring modern, algorithmic markets struggles both to control harmful risks and to punish harmful conduct satisfactorily. Where instances of mistake, carelessness and fraud can neither be reliably controlled nor adequately punished, the law’s capacity to create a fair, richly informed marketplace must come under serious doubt.

The article puts forward two arguments. First, error is endemic to algorithmic markets, particularly those moving at high speed. This inherent risk of error limits the ability of the law to constrain traders. Automated markets rely on algorithms, or pre-set computerized instructions, to fulfill virtually every aspect of trading. Particularly where securities trade in milliseconds, algorithms must be programmed with precision in advance of the trading day, capable of executing a strategy and navigating the market independently in real time. In the absence of certain knowledge about how future markets will behave, predictive programming will necessarily result in inaccurate or imprecise reactions to trading conditions.
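To make this concrete, consider a deliberately simplified sketch, in Python, of a pre-set quoting rule. The rule, its spread threshold and its order size are hypothetical assumptions invented for this illustration, not drawn from the article or from any real strategy; the point is only that parameters fixed before the trading day can misfire once conditions depart from the programmer’s forecast.

```python
# Hypothetical illustration only: a quoting rule whose parameters are
# fixed before the trading day. All numbers are invented for the example.

ANTICIPATED_SPREAD = 0.05   # the spread the programmer expects to prevail
ORDER_SIZE = 100            # shares per quote, set in advance

def quote(best_bid: float, best_ask: float) -> dict:
    """Post passive buy/sell quotes a fixed distance around the midpoint.

    The rule embodies a forecast: spreads will stay near
    ANTICIPATED_SPREAD. It has no way to revisit that forecast mid-day.
    """
    mid = (best_bid + best_ask) / 2
    return {
        "buy":  (round(mid - ANTICIPATED_SPREAD / 2, 3), ORDER_SIZE),
        "sell": (round(mid + ANTICIPATED_SPREAD / 2, 3), ORDER_SIZE),
    }

# In the calm market the programmer anticipated, the rule works as intended:
print(quote(100.00, 100.05))  # quotes sit at the prevailing bid and ask

# In a stressed market, the forecast misfires: with a $2 spread signaling
# genuine uncertainty, the rule still quotes a nickel wide at the midpoint,
# offering far more aggressive prices than conditions warrant.
print(quote(99.00, 101.00))
```

Nothing in this sketch is careless in any obvious sense; the rule simply encodes a prediction about future conditions that turns out to be wrong.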

Second, the ability of the law to constrain and punish is weak where errors can spread “contagiously” across the system. It is well established that algorithmic trading generates informational gains. Algorithms can react instantly to enormous volumes of information without waiting for human traders to catch up. The impact of these efficiencies is felt broadly. As outlined by Professor Gerig, automation fosters strong informational linkages between exchanges, synchronizing prices as algorithms transact in response to changing prices on various trading venues. But just as information travels across the system, so do errors, and the same linkages exaggerate their impact. [2] Single bad acts can proliferate and amplify where their effect reverberates across multiple markets. The May 2010 Flash Crash is a case in point. Whether it was Navinder Sarao’s spoof orders or some other event, a disruption in the futures market spiraled in minutes into the historic collapse of the Dow Jones Index. [3]
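A stylized sketch of this contagion dynamic, with invented venue names and prices, might look like the following. The single “re-quote everything at the last print” rule below is an assumption made for illustration; real cross-venue algorithms are far more sophisticated, but the propagation logic is the same in kind.

```python
# Hypothetical illustration: venues linked by price-synchronizing
# algorithms. Venue names and prices are invented for the example.

venues = {"A": 100.0, "B": 100.0, "C": 100.0}

def propagate(source: str, price: float) -> None:
    """Record a print on one venue, then re-quote every venue to match it.

    Algorithms watching venue A adjust their quotes on B and C within
    microseconds, long before any human can ask whether the new price
    reflects information or error.
    """
    venues[source] = price
    for venue in venues:
        venues[venue] = price

# A single erroneous print on venue A (say, a fat-fingered sale at $85)...
propagate("A", 85.0)

# ...is copied across the system before anyone can intervene:
print(venues)  # {'A': 85.0, 'B': 85.0, 'C': 85.0}
```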

The inherent risk of error in algorithmic markets, combined with hyper-efficiencies and exchange interconnection, challenges traditional liability standards and places the workability of the current regulatory framework in jeopardy. To be sure, this framework comprises a deep thicket of interacting rules and regulations. Despite their volume, however, securities laws benchmark compliance against three well-established standards of liability: strict liability, where the very fact of breach is sufficient to ground liability; negligence, where conduct that fails to meet an objectively reasonable standard of care becomes punishable; and intentional misfeasance, notably fraud and manipulation, which sanctions willfully bad behavior.

These core standards of liability sit poorly with the structural features of algorithmic markets. For one, strict liability is unworkable where pre-set, predictive programming necessarily implies imprecision and error. Similarly, reliance on the negligence standard to control misbehavior is unhelpful. Conventionally, the standard creates liability for risk-taking that is unreasonable in nature. In algorithmic markets, even reasonable, low-level risks can result in large harms. Take the infamous case of Knight Capital. Instead of sending 212 orders to the New York Stock Exchange, Knight mistakenly released four million orders to trade in 397 million shares. Owing to a misfiring router, Knight incurred $460 million in costs in just over 45 minutes, pushing the firm towards collapse and causing serious disruption across the market. [4] Where interconnected markets facilitate information cascades and prompt spiraling automated reactions to even the smallest disturbance, acceptable risk-taking can produce damaging, unexpected consequences. Finally, sanctioning only intentional fraudulent or manipulative misconduct will leave vast swathes of costly, careless conduct unpunished.
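Public accounts of the Knight episode describe a router that, because of dormant legacy code, failed to track the fills it was receiving and so kept sending child orders. The sketch below is a hypothetical reconstruction of that failure mode, not Knight’s actual code; the function name, quantities, and cap are invented for the illustration. It shows how small the gap can be between routine behavior and a runaway: a single state flag.

```python
# Hypothetical reconstruction of a runaway-router failure mode, loosely
# inspired by public accounts of the Knight incident. Not actual code;
# all names and quantities are invented for this illustration.

def route(parent_qty: int, child_qty: int, fills_recorded: bool) -> int:
    """Send child orders until the parent order is filled.

    Returns the number of child orders sent. If fills are never
    recorded (the bug), the loop has no natural stopping point.
    """
    filled = 0
    sent = 0
    while filled < parent_qty:
        sent += 1                      # send one child order
        if fills_recorded:
            filled += child_qty        # intended behavior: count the fill
        elif sent >= 4_000_000:        # artificial cap so the example halts
            break
    return sent

print(route(212, 1, fills_recorded=True))   # 212 orders, as intended
print(route(212, 1, fills_recorded=False))  # 4,000,000 orders: a runaway
```

Seen through a negligence lens, the striking feature is how modest the underlying lapse is relative to the hundreds of millions of dollars it produced.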

This failure of liability in today’s markets creates far-reaching social and economic costs. For one, constraints apply weakly, as detailed above. Further, compensatory conventions also fail. As exemplified by Sarao and Knight Capital, traders can create costs in algorithmic markets far in excess of what they can realistically bear. Weak constraints and inadequate compensation can produce a perverse outcome: traders end up with incentives to take excessive risks because they will not internalize the full costs of their behavior. In turn, investors responding rationally to these risks should discount the capital they commit to the marketplace.

In concluding, the article explores pathways for reform. Where liability rules fall short, it proposes structural solutions to fill the gap. It suggests that exchanges play a stronger role in supervising algorithmic traders, and that they be held financially accountable in cases of failure. Exchanges are best placed to oversee misbehavior across a swathe of the market, to discipline traders, and to react to spreading harms (e.g., by triggering circuit breakers). When individual traders cannot pay for the damage they cause, exchanges may become liable to cover the shortfall. To bolster accountability and to ensure that investors can be compensated, the article proposes that exchanges contribute to the creation of a “Market Disruption Fund” that pays out for harms caused by structural disruptions. The solution is not perfect: exchange liability and an industry fund may, for example, encourage traders to take greater risks. But its rationale is grounded in the reality of modern markets. Ever-greater levels of automation, underpinned by sophisticated, predictive algorithms, are transforming market structure and the costs of trading. The law has, so far, failed to keep pace.
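As a small illustration of the kind of automated discipline exchanges already operate, a circuit-breaker check can be expressed in a few lines. The 10% band and the prices below are assumptions chosen for the example, not the article’s proposal or any particular exchange’s rule.

```python
# Hypothetical sketch of a price-band circuit breaker. The 10% band and
# prices are illustrative assumptions, not any exchange's actual rule.

def breaker_tripped(reference_price: float, last_price: float,
                    band: float = 0.10) -> bool:
    """Halt trading when price strays more than `band` from the reference."""
    return abs(last_price - reference_price) / reference_price > band

# A print at $85 against a $100 reference breaches a 10% band:
if breaker_tripped(100.0, 85.0):
    print("Trading halted pending review.")
```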

The full article is available for download here.

Endnotes:

[1] John Cassidy, The Day Trader and the Flash Crash: Unanswered Questions, New Yorker (Apr. 23, 2015).

[2] Austin Gerig, High-Frequency Trading Synchronizes Prices in Financial Markets (Working Paper, Nov. 2015).

[3] In 2010, the CFTC and the SEC posited that the crash may have been due to a single sell order from a Kansas mutual fund. Staffs of the CFTC & SEC, Findings Regarding the Market Events of May 6, 2010, at 45 (2010).

[4] Nanex, Knightmare on Wall Street, http://www.nanex.net/aqck2/3522.html.
