Posted January 9, 2017

While conducting your analyses and writing up your results, your mentor, committee member, or another reviewer may suggest that you include confidence intervals in your report. Indeed, reporting statistics such as confidence intervals and effect sizes alongside your p-values is good practice and is often required if you aspire to publish your research in a quality journal.

Reporting confidence intervals is all well and good, but if you do include them in your results, it is important to understand what they are. Interestingly, confidence intervals are among the most commonly misunderstood concepts in statistics, and these misconceptions have even been documented by scientific research. For example, Hoekstra and colleagues (2014) showed that individuals with varying levels of statistical expertise, from students to expert researchers, frequently endorsed false statements about confidence intervals. This suggests that misconceptions are common not only among statistics novices, but among experienced researchers as well! Some of the most common misconceptions about confidence intervals are:

- “There is a 95% chance that the true population mean falls within the confidence interval.” (FALSE)
- “The mean will fall within the confidence interval 95% of the time.” (FALSE)

So what exactly is a confidence interval? A confidence interval is an estimate of the possible values of a population mean, and the key word here is estimate. Just as with any statistic estimated from a sample, the upper and lower bounds of the confidence interval will vary from sample to sample. For a given population, the 95% confidence interval from one random sample might be between 2 and 5, but for another random sample it might be between 1 and 4. Some of the intervals calculated from these random samples will contain the true population mean, and some will not. A 95% level of confidence means that 95% of the confidence intervals calculated from these random samples will contain the true population mean. In other words, if you conducted your study 100 times, you would produce 100 different confidence intervals, and you would expect about 95 of them to contain the true population mean.
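The repeated-sampling idea can be illustrated with a short simulation. This is just a sketch: the population mean of 10, standard deviation of 3, sample size of 50, and number of repeated "studies" are all arbitrary choices made for this example, and the intervals use the normal-approximation formula (mean ± 1.96 standard errors).

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is reproducible

TRUE_MEAN = 10.0  # population mean (known only because we invented the population)
TRUE_SD = 3.0     # population standard deviation
N = 50            # sample size for each simulated "study"
STUDIES = 1000    # how many times we repeat the study
Z = 1.96          # critical value for a 95% normal-approximation CI

covered = 0
for _ in range(STUDIES):
    # Draw one random sample, as if running the study once.
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]
    mean = statistics.fmean(sample)
    sem = statistics.stdev(sample) / N ** 0.5  # standard error of the mean
    lower, upper = mean - Z * sem, mean + Z * sem
    # Each sample yields a different interval; count those that
    # happen to contain the true population mean.
    if lower <= TRUE_MEAN <= upper:
        covered += 1

coverage = covered / STUDIES
print(f"{coverage:.1%} of the intervals contain the true mean")
```

Run it and the printed coverage should land close to 95%, which is exactly what the confidence level promises: it describes the long-run behavior of the procedure across many samples, not the probability for any single interval.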

To conclude, confidence intervals can be a bit difficult to wrap your head around, whether you are a beginner in statistics or an expert. But being aware of the misconceptions and avoiding them in your interpretation will help you (and your readers) develop an accurate understanding of your results.

Reference:

Hoekstra, R., Morey, R. D., Rouder, J. N., & Wagenmakers, E. J. (2014). Robust misinterpretation of confidence intervals. *Psychonomic Bulletin & Review*, *21*(5), 1157-1164.