Posted December 7, 2017

As much as researchers, journals, and newspapers might like to think otherwise, statistics is definitely not a foolproof science. Statistics is a game of probability, and we can never know for certain whether our statistical conclusions are correct. Whenever there is uncertainty, there is the possibility of making an error. In statistics, when you are testing hypotheses, there are two types of statistical conclusion errors you can make: Type I and Type II.

A Type I error occurs when you incorrectly reject a true null hypothesis. If you got tripped up on that definition, do not worry—a shorthand way to remember just what the heck that means is that a Type I error is a “false positive.” Say you did a study comparing happiness levels between people who were given a puppy to hold versus a puppy to merely look at. Your null hypothesis would be that there is no statistically significant difference in happiness levels between those who held and those who looked at a puppy.

Now suppose that there was no real difference in happiness between the groups—which is to say, people are actually just as happy holding a puppy as looking at one. If your statistical test nonetheless came back significant, you would have committed a Type I error, because the null hypothesis is actually true. In other words, you found a significant result merely due to chance.
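You can watch Type I errors happen by simulating the puppy study many times with no real effect built in. The sketch below is illustrative: it assumes happiness scores are normally distributed with a known spread, so a simple two-sample z-test applies (a real analysis would typically use a t-test), and the group names and numbers are made up.

```python
import math
import random

random.seed(1)

def two_sample_z_p(a, b, sigma=1.0):
    """Two-sided p-value for a two-sample z-test with known sd = sigma."""
    se = sigma * math.sqrt(1 / len(a) + 1 / len(b))
    z = (sum(a) / len(a) - sum(b) / len(b)) / se
    # Standard normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# The null hypothesis is TRUE by construction: both groups share a mean of 5.
trials, false_positives = 2000, 0
for _ in range(trials):
    hold = [random.gauss(5.0, 1.0) for _ in range(30)]
    look = [random.gauss(5.0, 1.0) for _ in range(30)]
    if two_sample_z_p(hold, look) < 0.05:
        false_positives += 1  # a Type I error: significant by chance alone

print(false_positives / trials)  # hovers near 0.05
```

Even though holding and looking are identical here, roughly 5% of the simulated studies come back “significant”—exactly the false-positive rate that α = .05 promises.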

The flip side of this issue is committing a Type II error: failing to reject a false null hypothesis. This would be a “false negative.” Using our puppy example, suppose that you found no statistically significant difference between your groups, but in reality, people who hold puppies are much, much happier. In this case, you incorrectly failed to reject the null hypothesis, because you said there was no difference when one actually exists.
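Type II errors can be simulated the same way, this time baking a real effect into the data. As before, this is an illustrative sketch assuming normal scores with known spread (so a z-test applies); the effect size of 0.5 points is invented for the example.

```python
import math
import random

random.seed(2)

def two_sample_z_p(a, b, sigma=1.0):
    """Two-sided p-value for a two-sample z-test with known sd = sigma."""
    se = sigma * math.sqrt(1 / len(a) + 1 / len(b))
    z = (sum(a) / len(a) - sum(b) / len(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# The null is FALSE by construction: holders really are 0.5 points happier.
trials, misses = 2000, 0
for _ in range(trials):
    hold = [random.gauss(5.5, 1.0) for _ in range(30)]
    look = [random.gauss(5.0, 1.0) for _ in range(30)]
    if two_sample_z_p(hold, look) >= 0.05:
        misses += 1  # a Type II error: a real effect went undetected

print(misses / trials)  # the Type II error rate (beta); power = 1 - beta
```

With 30 subjects per group and this effect size, about half the simulated studies miss a difference that genuinely exists—a reminder that a non-significant result is not proof of no effect.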

*Image source: unbiasedresearch.blogspot.com*

The chances of committing these two types of errors are inversely related—all else being equal, decreasing the Type I error rate increases the Type II error rate, and vice versa. Your risk of committing a Type I error is represented by your alpha level (the *p*-value threshold below which you reject the null hypothesis). The commonly accepted α = .05 means that, when the null hypothesis is true, you will incorrectly reject it approximately 5% of the time. To decrease your chance of committing a Type I error, simply make your alpha level more stringent (say, .01 instead of .05). Chances of committing a Type II error are related to your analyses’ statistical power. To reduce your chance of committing a Type II error, increase your analyses’ power by either increasing your sample size or relaxing your alpha level!
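Both levers—sample size and alpha—can be seen directly in a power calculation. The function below is a sketch for the simplified two-sample z-test setting (normal scores, known spread, equal group sizes); the effect size and sample sizes are illustrative.

```python
import math
from statistics import NormalDist

ND = NormalDist()

def power_two_sample_z(delta, sigma, n, alpha=0.05):
    """Analytic power of a two-sided two-sample z-test, n subjects per group."""
    se = sigma * math.sqrt(2 / n)       # standard error of the mean difference
    z_crit = ND.inv_cdf(1 - alpha / 2)  # rejection threshold for this alpha
    shift = delta / se
    # Probability the test statistic lands beyond either critical value
    return ND.cdf(shift - z_crit) + ND.cdf(-shift - z_crit)

base   = power_two_sample_z(delta=0.5, sigma=1.0, n=30)              # ~0.49
more_n = power_two_sample_z(delta=0.5, sigma=1.0, n=120)             # ~0.97
looser = power_two_sample_z(delta=0.5, sigma=1.0, n=30, alpha=0.10)  # ~0.61
print(round(base, 2), round(more_n, 2), round(looser, 2))
```

Quadrupling the sample size lifts power from about .49 to about .97, while relaxing alpha to .10 helps less and raises the Type I error rate in exchange—which is why adding subjects is usually the preferred fix.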

Depending on your field and your specific study, one type of error may be costlier than the other. Suppose you conducted a study looking at whether a plant derivative could prevent deaths from certain cancers. If you falsely concluded that it could not prevent cancer-related deaths when it really could (a Type II error), you could potentially cost people their lives! If you were looking at whether people’s happiness levels were higher when they held versus looked at a puppy, neither type of error would be particularly costly.