As you conduct your analyses and write up your results, your mentor, committee member, or another reviewer may suggest that you include confidence intervals in your report. Reporting statistics such as confidence intervals and effect sizes alongside your p-values is good practice, and it is often required for publication in a quality journal.
Reporting confidence intervals is all well and good, but if you do include them, it is important to understand what they actually are. Researchers commonly misunderstand confidence intervals, and research has demonstrated as much: Hoekstra and colleagues (2014) found that individuals across the full range of statistics expertise, from first-year students to veteran researchers, frequently endorsed false statements about confidence intervals. In other words, these misconceptions are common not only among statistics novices but among experienced researchers as well. Some of the most common misconceptions are:
- There is a 95% probability that the true population mean lies within this particular interval.
- 95% of the observations in the sample (or the population) fall within the interval.
- If the study were repeated, 95% of the new sample means would fall within the interval.

None of these statements is correct. A confidence interval is an estimate of a population parameter, such as the mean, and its upper and lower bounds vary from sample to sample. The correct interpretation of a 95% confidence interval concerns the procedure: if you drew 100 samples and computed a 95% confidence interval from each, you would expect roughly 95 of those 100 intervals to contain the true population mean. Any single interval either contains the true mean or it does not.
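To make the repeated-sampling interpretation concrete, here is a minimal simulation sketch in Python (the use of NumPy and SciPy, and all variable names, are assumptions chosen for illustration). It draws 100 samples from a population with a known mean, builds a 95% t-interval for the mean from each sample, and counts how many intervals cover the true value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

true_mean = 100.0   # population mean (known only because we simulate it)
true_sd = 15.0      # population standard deviation
n = 30              # observations per sample
n_studies = 100     # number of repeated samples

covered = 0
for _ in range(n_studies):
    sample = rng.normal(true_mean, true_sd, size=n)
    # 95% t-interval for the mean: mean +/- t* x s / sqrt(n)
    margin = stats.t.ppf(0.975, df=n - 1) * sample.std(ddof=1) / np.sqrt(n)
    lower, upper = sample.mean() - margin, sample.mean() + margin
    if lower <= true_mean <= upper:
        covered += 1

print(f"{covered} of {n_studies} intervals contained the true mean")
```

On any given run, roughly 95 of the 100 intervals contain the true mean, though the exact count varies from run to run; no individual interval carries a 95% probability of containing it.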
Understanding confidence intervals can be challenging, but avoiding these common misconceptions will help you interpret and report your results accurately.
Reference:
Hoekstra, R., Morey, R. D., Rouder, J. N., & Wagenmakers, E. J. (2014). Robust misinterpretation of confidence intervals. Psychonomic Bulletin & Review, 21(5), 1157-1164.