The SEC’s Short-Sale Experiment: Evidence on Causal Channels and on the Importance of Specification Choice in Randomized and Natural Experiments

Hemang Desai is Distinguished Professor of Accounting at Southern Methodist University Cox School of Business. This post is based on a recent paper authored by Mr. Desai; Bernard Black, Nicholas J. Chabraja Professor of Finance at Northwestern University Law School; Katherine Litvak, Professor of Law at Northwestern University Pritzker School of Law; Woongsun Yoo, Assistant Professor of Finance at Central Michigan University; and Jeff Jiewei Yu, Associate Professor of Accounting at the University of Arizona Eller College of Management.

In July 2004, the SEC announced a randomized experiment to study the effects of short-sale restrictions on securities markets. The experiment was announced as part of the short-sale regulations in Regulation SHO. In the experiment, the SEC suspended short-sale restrictions (price tests) for one-third of the firms (“pilot” firms) in the Russell 3000 Index (R3000) that traded on the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), or the Nasdaq National Market (Nasdaq). For the pilot firms, the SEC suspended the uptick rule for NYSE and AMEX firms and the similar but less restrictive bid test for Nasdaq firms during a roughly two-year period (May 2, 2005 through July 5, 2007). It left some but not all of the prior short-sale restrictions in place for the remaining firms (“controls”). The uptick rule and the bid test essentially forbade short sales at a price below the last trade. The SEC’s objective in conducting the experiment was to study the effects of removing these short-sale restrictions on market volatility, share prices, and liquidity.

It was unclear at the time to what extent the short-sale restrictions affected substantive (valuation-based) short selling. Many market developments had weakened whatever effect the restrictions may once have had. Consistent with doubts about the importance of the short-sale restrictions, initial studies of the experiment found little to no direct impact of removing the restrictions on open short interest, share returns, or volatility (SEC Office of Economic Analysis, 2007; Alexander and Peterson, 2008; Diether, Lee, and Werner, 2009). As expected, these studies did find some improvement in the volume of short sales and the speed of execution of short trades, especially for NYSE firms. Based on these findings, the SEC removed these restrictions for all firms in 2007.

Despite this limited evidence of any direct impact of the Reg SHO experiment on pilot firms, more than 60 more recent papers in accounting, finance, and economics report that suspension of the price tests had wide-ranging indirect effects on pilot firms, including effects on earnings management, investment, leverage, acquisitions, management compensation, workplace safety, and more (see Internet Appendix, Table IA-1 for a summary). Some of these papers find that the Reg SHO experiment also affected the behavior of third parties such as auditors and analysts.

The broad range of indirect effects attributed to the Reg SHO experiment is surprising. First, since the uptick rule and bid test did not meaningfully constrain short selling, one would not expect lifting these restrictions to generate wide-ranging indirect effects. Second, to credibly attribute indirect effects to the Reg SHO experiment, there should be evidence supporting a causal channel through which removing the short-sale restrictions could generate those indirect effects. However, prior work found no reliable evidence that removing the price tests affected short interest, share returns, price efficiency, or share price volatility. Without a clear causal channel, reported indirect effects are more likely to be false positives.

In this paper, we conduct two major analyses. First, we present evidence on the three principal causal channels posited by the indirect-effects literature. Using a larger sample than prior studies and a longer time period (December 2003 to December 2007), we reexamine the evidence on short interest and returns for pilot and control firms. Similar to the earlier papers cited above, and contrary to Grullon, Michenaud, and Weston (2015), we do not find any difference in short interest or returns for pilot firms relative to control firms, either before or during the experiment period. We also examine a third possible channel: managerial fear of “bear raids” by short sellers (e.g., Fang, Huang, and Karpoff, 2016). To assess this channel, we undertake a detailed examination of media coverage of the experiment and of managerial comments to the SEC. We find very little coverage of the experiment in the media and minimal evidence of managerial opposition to the experiment. While our analysis cannot disprove the fear channel, it casts doubt on the plausibility and potential magnitude of this channel.
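To fix ideas, the following minimal sketch shows the kind of pilot-versus-control comparison of short interest that this channel evidence rests on. It is not the authors' code: the input file, the column names, and the `short_interest` measure are hypothetical placeholders.

```python
# Illustrative pilot-vs-control comparison of short interest.
# NOT the authors' code: the input file, column names, and the
# `short_interest` measure are hypothetical placeholders.
import pandas as pd
from scipy import stats

# Firm-month panel with columns:
#   firm_id        - firm identifier
#   pilot          - 1 for pilot firms, 0 for control firms
#   during         - 1 for months in the suspension window (May 2005 - Jul 2007)
#   short_interest - e.g., short interest scaled by shares outstanding
panel = pd.read_csv("short_interest_panel.csv")

# Mean short interest by group (rows) and period (columns).
print(panel.pivot_table(index="pilot", columns="during",
                        values="short_interest", aggfunc="mean"))

# Simple test of the pilot-vs-control difference during the experiment.
during = panel[panel["during"] == 1]
pilot_si = during.loc[during["pilot"] == 1, "short_interest"]
control_si = during.loc[during["pilot"] == 0, "short_interest"]
print(stats.ttest_ind(pilot_si, control_si, equal_var=False))
```

A firm-month t-test like this ignores within-firm correlation; the regression analogue with firm-clustered standard errors is sketched after the next paragraph.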

Given the absence of evidence for the principal causal channels, there is reason to be skeptical about the robustness of the results in many of the indirect-effects papers. Many could be false positives that reflect the particular specification choices made by the authors. To assess whether this skepticism is justified, we re-examine four papers that attribute significant indirect effects to the Reg SHO experiment: Fang, Huang, and Karpoff (JF, 2016); Hope, Hu, and Zhao (JAE, 2017); Lin, Liu, and Sun (AER, 2019); and Grullon, Michenaud, and Weston (RFS, 2015). We use a pre-specified sample that comes as close as we can to the actual experiment, and a pre-specified research design for addressing the questions asked in these papers. We assess the core results from each paper and examine which of those results survive with our pre-specified sample and specification. Across all four papers and multiple outcomes, none of the results are statistically significant with our pre-specified sample and specification.
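For readers less familiar with the underlying design, the sketch below shows the generic form such a pre-specified test takes: a difference-in-differences regression in which the treatment effect is the coefficient on the Pilot × During interaction, with standard errors clustered by firm. It is illustrative only, not the authors' specification; the data file, the outcome `y`, and the column names are hypothetical.

```python
# Minimal difference-in-differences sketch for a Reg SHO-style outcome.
# NOT the authors' specification: `reg_sho_panel.csv`, the outcome `y`,
# and the column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Firm-period panel with columns:
#   firm_id - firm identifier (integer)
#   pilot   - 1 if the firm is in the SEC's pilot group, 0 for controls
#   during  - 1 for periods in the suspension window (May 2005 - Jul 2007)
#   y       - the outcome being tested (e.g., an earnings-management proxy)
df = pd.read_csv("reg_sho_panel.csv")

# The coefficient on pilot:during is the difference-in-differences estimate.
did = smf.ols("y ~ pilot + during + pilot:during", data=df).fit(
    cov_type="cluster",
    cov_kwds={"groups": df["firm_id"]},  # cluster standard errors by firm
)
print(did.summary())
```

Published Reg SHO specifications typically add firm and period fixed effects and control variables; the sketch only pins down what "pre-specified sample and specification" refers to.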

We then do our best to match the sample and specification in each paper, based on what each paper says (“best-match” specifications). We still cannot come close to most of the reported results. Almost all outcomes remain insignificant, and the two that achieve mild significance are fragile. Our failure to obtain similar results, with either our own sample and specification or with the best-match specifications, has strong implications both for other papers reporting indirect effects of the short-sale experiment and for natural-experiment-based research more generally. If results from a true randomized experiment are this sensitive to specification, researchers need to be more careful with, and readers more skeptical of, natural-experiment studies generally, including studying potential causal channels and verifying whether those channels actually exist. We provide suggestions, drawn from our study, on how to design and implement studies involving natural experiments so that they are less prone to false positives.

We believe that our study makes two core contributions. The first is to stress the importance, for difference-in-differences (DiD) research using a natural or randomized experiment, of being confident that a causal channel exists, including providing evidence supporting the channel. Our second contribution is to highlight the role of specification choice in generating statistically significant results from natural experiments when other reasonable choices would not produce them. We also find evidence that two-way clustered standard errors, clustered on firm and year, can be severely downward biased relative to standard errors clustered on firm alone; this is a separately important result.
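To make the clustering comparison concrete, here is a minimal sketch that fits the same difference-in-differences regression twice, once clustering standard errors by firm alone and once two-way clustering on firm and year, so the two sets of standard errors can be compared directly. It is not the authors' code, the data file and column names are hypothetical, and whether two-way clustering understates the standard errors in a given setting is the empirical question the paper examines.

```python
# Illustrative comparison of one-way (firm) and two-way (firm and year)
# clustered standard errors on the same DiD regression.
# NOT the authors' code: the data file and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("reg_sho_panel.csv")  # hypothetical firm-year panel
model = smf.ols("y ~ pilot + during + pilot:during", data=df)

# One-way clustering on firm.
res_firm = model.fit(cov_type="cluster",
                     cov_kwds={"groups": df["firm_id"]})

# Two-way clustering on firm and year (statsmodels accepts a two-column
# group array for two-way clustering).
res_twoway = model.fit(
    cov_type="cluster",
    cov_kwds={"groups": np.column_stack([df["firm_id"], df["year"]])},
)

# Compare the standard errors on the treatment-effect coefficient.
print("firm-clustered SE:    ", res_firm.bse["pilot:during"])
print("two-way clustered SE: ", res_twoway.bse["pilot:during"])
```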

The complete paper is available for download here.
