Posted May 10, 2017

In your statistics classes, you probably worked with datasets that were more or less “perfect.” Normality could probably be assumed, and there was most likely an adequate sample size. Missing cases were not problematic. In your research design classes, you might have come across literature that talked about low response rates to surveys, but generally that was probably glossed over in favor of what procedures to follow during and after your data collection. So, what do you do if your sample size is not big enough?

In your actual research, you will find the situation is rarely textbook. Many researchers find that they cannot achieve a necessary sample size. Sometimes this is due to missing data. In a perfect world, everybody who participates would answer every question we have. But, people get tired, sick, or bored. Or perhaps you are researching a highly specific population, making it impractical to sample a large number of people. Or perhaps people simply did not want to or were unable to participate.

Whatever the case, you have ended up with an inadequate sample size. When your sample size is inadequate for the alpha level and analyses you have chosen, your study will have reduced statistical power: the probability of detecting a statistical effect in your sample when that effect truly exists in the population. But do not fret! There are several things you can do to address this issue. Your required sample size depends on your selected alpha level, the analysis you are going to perform, the anticipated effect size, and your desired power level. Changing any of these will change how large a sample you need to achieve appropriate statistical power.
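To see how these pieces fit together, here is a minimal sketch in Python, assuming the statsmodels package is installed. It uses an independent-samples t-test as a stand-in analysis (your own test and effect-size metric will differ), solving for the sample size implied by a chosen alpha, effect size, and power:

```python
# Sketch: alpha, effect size, and desired power jointly determine sample size.
# Assumes statsmodels is installed; a two-sample t-test stands in for your analysis.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()

# Participants needed per group for a medium effect (Cohen's d = 0.5),
# alpha = .05, and power = .80:
n_per_group = solver.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n_per_group))  # roughly 64 per group
```

Leaving any three of the four quantities fixed and solving for the fourth is exactly what tools like G*Power do behind the scenes.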

**Sampling.** The most obvious strategy is simply to sample more of your population. Keep your survey open, contact more potential participants, or consider widening the population. Just remember to stay within the bounds of your IRB approval!

**Alpha Level.** This is also known as the significance level, and the standard cutoff is .05 for most social science research. It refers to the probability of a false positive result, or detecting a significant effect when that effect does not truly exist in the population. An alpha of .05 means there is a 5% chance that a significant result is a false positive; an alpha of .01 means there is just a 1% chance. These are the standard levels of acceptable risk. Generally, if you change the alpha level, you make it more stringent (e.g., .05 to .01), indicating more rigor and less chance of Type I error. A Type I error involves rejecting a true null hypothesis; in other words, it is a false positive. Increasing your alpha level is not recommended, because it raises the chance of a Type I error. However, if you initially set an alpha of .01, you could consider relaxing it to the standard .05 level if you did not achieve an adequate sample size.
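The sample-size cost of a stricter alpha is easy to see with a power solver. A brief sketch, again assuming statsmodels and using a two-sample t-test as an illustrative analysis:

```python
# Sketch: a stricter alpha (.01 vs. .05) demands a larger sample for the
# same effect size and power. Assumes statsmodels is installed.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
n_05 = solver.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
n_01 = solver.solve_power(effect_size=0.5, alpha=0.01, power=0.80)
print(round(n_05), round(n_01))  # the .01 criterion requires notably more people
```

This is why relaxing an unusually strict alpha back to the conventional .05 can rescue a study whose recruitment fell short.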

**Analyses.** More complicated analyses require a larger sample size to achieve adequate power. If you are planning a highly complicated analysis, such as a MANCOVA (multivariate analysis of covariance) or a multiple linear regression with many predictors, consider simplifying! Remove non-essential variables, and your power should increase. However, do not remove theoretically important variables just because you want to find a significant effect. You may also consider using a nonparametric test. Although these generally have less power, they are better suited to the non-normal distributions you often find with a small sample size.
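If you are weighing a nonparametric test, a small simulation can estimate how the two options compare on data like yours. The sketch below, using scipy and numpy, pits a t-test against its nonparametric counterpart (Mann-Whitney U) on skewed data; the distributions, sample size, and replication count are illustrative assumptions, not a recipe:

```python
# Sketch: Monte Carlo power estimate for a t-test vs. Mann-Whitney U on
# skewed (exponential) data with a small sample. All settings are illustrative.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(2017)
n, reps, alpha = 15, 2000, 0.05
t_hits = u_hits = 0
for _ in range(reps):
    a = rng.exponential(scale=1.0, size=n)  # group 1
    b = rng.exponential(scale=2.0, size=n)  # group 2 (a true difference exists)
    t_hits += ttest_ind(a, b).pvalue < alpha
    u_hits += mannwhitneyu(a, b).pvalue < alpha

print(f"t-test power ~ {t_hits / reps:.2f}, Mann-Whitney power ~ {u_hits / reps:.2f}")
```

The proportion of replications that reach significance is the estimated power of each test under these assumed conditions.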

**Power.** If you lower your desired level of power, you will need fewer participants. Generally in social science research, a power of .80 is desired. If you set your power level higher during your sample size calculations, try setting it at .80. Or, you can simply accept the fact that you have reduced power and acknowledge this in your limitations section.
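If you go the limitations-section route, you can compute the power your actual sample affords and report it honestly. A sketch, assuming statsmodels and a t-test stand-in with hypothetical numbers (30 participants per group, a medium effect):

```python
# Sketch: the power actually achieved with the sample you have, for honest
# reporting in a limitations section. The n and effect size are hypothetical.
from statsmodels.stats.power import TTestIndPower

achieved = TTestIndPower().power(effect_size=0.5, nobs1=30, alpha=0.05)
print(f"achieved power ~ {achieved:.2f}")  # below the conventional .80
```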

**Effect size.** The previous options are impractical in many situations. Sometimes you cannot sample more. Perhaps you have already set your alpha at .05, have a simple analysis, and your committee will not allow you to move forward with reduced power. What now? Look to the literature! Your power (and thus sample size) is also related to your anticipated *effect size*. Your effect size (i.e., the strength of an association or difference) can be classified as small, medium, or large. To find a large effect, you need fewer participants; to find a small effect, you need more. Think of it like needing glasses to read fine print! Perhaps your original sample size calculation was based on a small or medium effect size. If you can find literature (i.e., peer-reviewed, published studies) demonstrating large effect sizes in studies similar to yours, you can use that in your sample size calculation. For example, say you are running a study where you only need to conduct one simple linear regression. Using G*Power (a sample size and power calculator), a simple linear regression with a medium effect size, an alpha of .05, and a power level of .80 requires a sample size of 55 individuals. Perhaps you were only able to collect 21 participants, in which case (according to G*Power) that would be enough to find a large effect with a power of .80. In our case of a simple linear regression, you would look for similar correlational studies that reported an *R*² of .30 or more, which corresponds to a large effect in G*Power.
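The regression figures quoted above can be reproduced without G*Power itself, using scipy's noncentral F distribution. This sketch assumes G*Power's conventions for regression (noncentrality λ = f² × N, numerator df equal to the number of predictors), and converts the *R*² of .30 to Cohen's f² via f² = R²/(1 − R²):

```python
# Sketch: required N for a simple linear regression via the noncentral F
# distribution, following G*Power's conventions (lambda = f^2 * N).
from scipy.stats import f as f_dist, ncf

def regression_power(n, f2, n_predictors=1, alpha=0.05):
    """Power of the overall regression F-test at total sample size n."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    crit = f_dist.ppf(1 - alpha, df1, df2)      # critical F under the null
    return 1 - ncf.cdf(crit, df1, df2, f2 * n)  # tail beyond it under the alternative

def required_n(f2, power=0.80, n_predictors=1, alpha=0.05):
    """Smallest total N reaching the desired power."""
    n = n_predictors + 3  # smallest n with usable error df
    while regression_power(n, f2, n_predictors, alpha) < power:
        n += 1
    return n

print(required_n(f2=0.15))         # medium effect: 55, as quoted above
print(required_n(f2=0.30 / 0.70))  # R^2 = .30: about 21, as quoted above
```

Working the calculation both ways, from anticipated effect size to required N and from achieved N back to the smallest detectable effect, is a useful sanity check before committing to the literature-based effect size.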