Detecting Deceptive Discussions in Conference Calls

The following post comes to us from David Larcker, Professor of Accounting at Stanford University, and Anastasia Zakolyukina of the Department of Accounting at Stanford University.

Considerable accounting and finance research has attempted to identify whether reported financial statements have been manipulated by executives. Most of these classification models are built from accounting and financial market explanatory variables, yet despite extensive prior work, their ability to identify accounting manipulations remains modest. In our paper, Detecting Deceptive Discussions in Conference Calls, forthcoming in the Journal of Accounting Research, we take a different approach to detecting financial statement manipulation by analyzing linguistic features of CEO and CFO narratives during quarterly earnings conference calls. Drawing on prior theoretical and empirical research on deception detection from psychology and linguistics, we select word categories that theory suggests should reveal deceptive behavior by executives. We use these linguistic features to develop classification models for a very large sample of quarterly conference call transcripts.
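
To make the approach concrete, here is a minimal sketch of word-category feature extraction and classification, assuming purely illustrative word lists and toy labels; the paper's actual categories come from an established psychosocial dictionary, and its models are estimated on thousands of transcripts.

```python
import re
from sklearn.linear_model import LogisticRegression

# Hypothetical word categories standing in for the paper's dictionary-based lists.
CATEGORIES = {
    "extreme_positive": {"fantastic", "superb", "tremendous"},
    "general_knowledge": {"everybody", "always", "obviously"},
    "negation": {"no", "not", "never"},
}

def category_rates(narrative):
    """Rate per 1,000 words of each word category in an executive narrative."""
    words = re.findall(r"[a-z']+", narrative.lower())
    n = max(len(words), 1)
    return [1000.0 * sum(w in cat for w in words) / n
            for cat in CATEGORIES.values()]

# Toy sample: one feature vector per call; label 1 = financials later restated.
narratives = [
    "results were fantastic and everybody obviously loves the product",
    "we did not meet our target and costs were higher than planned",
]
labels = [1, 0]

X = [category_rates(t) for t in narratives]
model = LogisticRegression().fit(X, labels)
print(model.predict_proba(X)[:, 1])  # estimated probability of deception
```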

A novel feature of our methodology is that we know whether the financial statements discussed in each conference call were restated in subsequent periods. Because the CEO and CFO are likely to know when financial statements have been manipulated, we can reasonably identify which executive discussions are actually “deceptive.” Thus, we can estimate a linguistic-based model for detecting deception and test the out-of-sample performance of this classification method.
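
As a rough illustration of this labeling step, the snippet below marks a call as “deceptive” when the fiscal quarter it discusses was later restated; the data structures and field names are hypothetical, not the paper's.

```python
# Hypothetical labeling step; structures and names are illustrative.
# calls: (firm_id, fiscal_quarter, transcript) tuples.
# restated: set of (firm_id, fiscal_quarter) pairs later restated.
def label_calls(calls, restated):
    """Return (transcript, label) pairs; label 1 marks a discussion of a
    quarter whose financial statements were subsequently restated."""
    return [(text, int((firm, quarter) in restated))
            for firm, quarter, text in calls]

calls = [("ACME", "2005Q3", "we are very confident in our numbers"),
         ("ACME", "2005Q4", "results reflect continued execution")]
restated = {("ACME", "2005Q3")}
print(label_calls(calls, restated))  # labels: 1 for 2005Q3, 0 for 2005Q4
```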

Our linguistic classification models based on CEO or CFO narratives perform significantly better than a random guess, by 6% to 16%. In terms of the linguistic features of the narratives, deceptive CEOs and CFOs both use more references to general knowledge, fewer non-extreme positive emotion words, and fewer references to shareholder value. However, the pattern of deception for CEOs differs from that for CFOs. Specifically, deceptive CEOs use more extreme positive emotion words and fewer anxiety words. In contrast, deceptive CFOs use more negation words and, under the most restrictive deception criterion (an SEC Accounting and Auditing Enforcement Release, or AAER), more extreme negative emotion words and swear words. In addition, under less restrictive criteria, deceptive CFO narratives contain fewer self-references and fewer impersonal pronouns.

In terms of predictive performance, the linguistic-based models either dominate or are statistically equivalent to five contemporary models based on accounting and financial variables. Finally, a trading strategy for the representative firm based on the CFO linguistic model produces a statistically significant annualized alpha (estimated using the Carhart [1997] four-factor model) of between -4% and -11%, depending on the deception criterion and portfolio selection method. The CEO linguistic model does not produce a statistically significant alpha. Based on the strength of these exploratory performance results, we believe it is worthwhile for researchers to consider linguistic cues when attempting to measure the quality of reported financial statements.
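
For readers unfamiliar with how such an alpha is estimated, the sketch below runs a Carhart four-factor regression on simulated placeholder data; the factor and portfolio return series are stand-ins, not the paper's, and the actual strategy forms portfolios from model-implied deception scores.

```python
import numpy as np

# Carhart (1997) four-factor alpha on simulated monthly data.
rng = np.random.default_rng(0)
T = 48                                    # months of portfolio returns
factors = rng.normal(0.0, 0.03, (T, 4))   # stand-ins for MKT-RF, SMB, HML, UMD
loadings = np.array([1.0, 0.2, 0.1, -0.3])
portfolio_excess = factors @ loadings + rng.normal(-0.005, 0.01, T)

# OLS of portfolio excess returns on an intercept plus the four factors;
# the intercept is the monthly alpha.
X = np.column_stack([np.ones(T), factors])
coefs, *_ = np.linalg.lstsq(X, portfolio_excess, rcond=None)
annualized_alpha = (1.0 + coefs[0]) ** 12 - 1.0
print(f"annualized alpha: {annualized_alpha:.1%}")
```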

As with any exploratory study, our findings are subject to a number of limitations. First, we cannot be completely certain that the CEO and/or CFO knew about the manipulation when answering questions during the conference call; this causes our deception outcome to be measured with error. Second, simply counting words (a “bag-of-words” approach) ignores important context and background knowledge; for example, a raw count treats “we are not confident” much the same as “we are confident.” Third, we rely on a general psychosocial dictionary, LIWC (Linguistic Inquiry and Word Count), which may not be completely appropriate for capturing business communication. Fourth, although we have a large, comprehensive set of conference calls, our sample consists of relatively large and profitable firms, which limits our ability to generalize our results to the whole population of firms. Finally, our sample covers only the period from September 2003 to May 2007. Because this was shortly after the implementation of Sarbanes-Oxley and many restatements were observed during the period, our results may not generalize to periods with fewer regulatory changes.

The full paper is available for download here.
