Quantitative Results

In statistics, there are many types of correlations you can conduct to assess the relationship between two variables, including the Pearson, Spearman, point-biserial, and Kendall correlations. You may also have come across the terms “zero-order,” “partial,” and “part” in reference to correlations. These terms refer to correlations that involve more than two variables; specifically, they are relevant when you have a dependent (outcome) variable, an independent (explanatory) variable, and one or more confounding (control) variables. Here we will explain the differences between zero-order, partial, and part correlations.

First, a zero-order correlation simply refers to the correlation between two variables (i.e., the independent and dependent variable) without controlling for the influence of any other variables. Essentially, this means that a zero-order correlation is the same thing as a Pearson correlation. So why discuss the zero-order correlation here? When conducting an analysis with more than two variables (i.e., multiple independent variables or control variables), it may be of interest to know the simple bivariate relationships among the variables to get a better sense of what happens once you begin to control for other variables. This is why SPSS gives you the option to report zero-order correlations when running a multiple linear regression analysis.
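As a quick illustration, a zero-order correlation is just the plain Pearson r between two columns of data. The sketch below uses NumPy with made-up numbers for company tenure and stress scores (the data are purely hypothetical):

```python
import numpy as np

# Hypothetical data: years with the company and a stress score (illustrative only)
tenure = np.array([1.0, 3.0, 4.0, 6.0, 8.0, 10.0])
stress = np.array([2.0, 3.0, 5.0, 5.0, 7.0, 8.0])

# Zero-order correlation: the ordinary Pearson r between the two variables,
# with no other variables controlled for
r = np.corrcoef(tenure, stress)[0, 1]
```

Because nothing is partialled out, this is exactly what a standard Pearson correlation procedure would report for the same two variables.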

Next, a partial correlation is the correlation between an independent variable and a dependent variable after controlling for the influence of other variables on both the independent and dependent variable. For instance, a researcher studying occupational stress may be interested in the correlation between the length of time a person has worked with a company and their level of stress while controlling for one or more potentially confounding variables, such as age and pay rate. In a partial correlation, the influence of the control variables on both the independent and dependent variables is taken into account. In our example, this would mean that the partial correlation between time with the company and stress would take into account the impact of age and pay rate on both time with the company AND stress.
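One way to see what "controlling for" means here is the residual-based recipe: regress each variable on the controls, then correlate what is left over. The sketch below is a minimal NumPy illustration of that idea; the variable names (tenure, stress, age) and the simulated data are hypothetical, invented for this example:

```python
import numpy as np

def partial_corr(x, y, controls):
    """Partial correlation of x and y: the controls are regressed out of
    BOTH x and y, and the two sets of residuals are correlated."""
    Z = np.column_stack([np.ones(len(x)), *controls])
    # Residuals of x and y after an ordinary least-squares fit on the controls
    x_res = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    y_res = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(x_res, y_res)[0, 1]

# Simulated occupational-stress data: age drives both tenure and stress,
# inflating their zero-order correlation
rng = np.random.default_rng(0)
age = rng.normal(40, 10, 500)
tenure = 0.5 * age + rng.normal(0, 5, 500)
stress = 0.1 * age + rng.normal(0, 2, 500)

zero_order = np.corrcoef(tenure, stress)[0, 1]
partial = partial_corr(tenure, stress, [age])
# Once age is controlled out of both variables, the association shrinks toward zero
```

In this simulation the only link between tenure and stress runs through age, so the partial correlation lands near zero even though the zero-order correlation is clearly positive.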

This brings us to the part correlation, which is sometimes referred to as the “semipartial” correlation. Like the partial correlation, the part correlation is the correlation between two variables (independent and dependent) after controlling for one or more other variables. However, for the part correlation, only the influence of the control variables on the independent variable is taken into account. In other words, the part correlation does not control for the influence of the confounding variables on the dependent variable. In terms of our earlier example, this means that the part correlation between time with the company and stress would only take into account the impact of age and pay rate on time with the company. You might wonder why you would want to control for effects on the independent variable but not the dependent variable. The primary reason for conducting the part correlation is to see how much unique variance the independent variable explains in relation to the total variance in the dependent variable, rather than just the variance unaccounted for by the control variables.
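In the same residual-based framing, the part (semipartial) correlation differs in exactly one step: the controls are regressed out of the independent variable only, while the dependent variable is left as-is. A minimal sketch, again with hypothetical variables:

```python
import numpy as np

def part_corr(x, y, controls):
    """Part (semipartial) correlation: the controls are regressed out of the
    independent variable x only; the dependent variable y keeps its full variance."""
    Z = np.column_stack([np.ones(len(x)), *controls])
    x_res = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # y is left untouched: we correlate raw y with the residualized x
    return np.corrcoef(x_res, y)[0, 1]
```

Because the dependent variable retains the variance already explained by the controls, the part correlation can never exceed the corresponding partial correlation in absolute value, and its square gives the unique share of the total variance in the dependent variable attributable to the independent variable.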