Too Big to Fail: Larger Samples and False Discoveries

Title: Too Big to Fail: Larger Samples and False Discoveries
Publication Type: Working Paper
Year of Publication: 2009
Authors: Lin, M., H. C. Lucas, and G. Shmueli
Series Title: Working Paper RHS 06-068
Institution: Smith School of Business, University of Maryland
Keywords: large samples, p-values, practical significance, statistical significance
Abstract

The Internet presents great opportunities for research about information technology, allowing IS researchers to collect very large and rich datasets. It is common to see research papers with tens or even hundreds of thousands of data points, especially when reading about electronic commerce. Large samples are better than smaller samples in that they provide greater statistical power and produce more precise estimates. However, statistical inference using p-values does not scale up to large samples and often leads to erroneous conclusions. We find evidence of an over-reliance on p-values in large sample IS studies in top IS journals and conferences. In this commentary, we focus on interpreting effects of individual independent variables on a dependent variable in regression-type models. We discuss how p-values become deflated with a large sample and illustrate this deflation in analyzing data from over 340,000 digital camera auctions on eBay. The commentary recommends that IS researchers be more conservative in interpreting statistical significance in large sample studies, and instead, interpret results in terms of practical significance. In particular, we suggest that authors of large-sample IS studies report and discuss confidence intervals for independent variables of interest rather than coefficient signs and p-values. We also suggest taking advantage of a large dataset for examining how coefficients and p-values change as sample size increases, and for estimating models on multiple subsamples to further test robustness.
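The deflation the abstract describes can be illustrated with a small simulation: for a fixed, practically negligible true effect, the p-value shrinks toward zero as the sample grows, even though the estimated effect stays tiny. This is a minimal sketch, not code from the paper; the `simulate` helper, the 0.02 effect size, and the normal-approximation p-value are illustrative assumptions.

```python
import math
import random
import statistics

def two_sided_p(sample, mu0=0.0):
    """Normal-approximation two-sided p-value for the sample mean vs. mu0."""
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    z = (statistics.fmean(sample) - mu0) / se
    return math.erfc(abs(z) / math.sqrt(2))

def simulate(n, effect=0.02, seed=1):
    """Draw n points with a practically negligible true mean shift (illustrative)."""
    rng = random.Random(seed)
    sample = [rng.gauss(effect, 1.0) for _ in range(n)]
    return statistics.fmean(sample), two_sided_p(sample)

if __name__ == "__main__":
    # Same tiny effect, growing n: the estimate barely moves, the p-value collapses.
    for n in (100, 10_000, 1_000_000):
        mean, p = simulate(n)
        print(f"n={n:>9,}  mean={mean:+.4f}  p={p:.2e}")
```

Running the loop shows why the commentary recommends confidence intervals and practical-significance reasoning: at n = 1,000,000 the effect is "highly significant" by p-value despite being substantively negligible.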

URL: http://ssrn.com/abstract=1336700