Quantitative Results

In past blogs, we have discussed how to interpret odds ratios from binary logistic regressions and simple beta values from linear regressions. Here we will take a leap into the unknown with multinomial logistic regression! As a quick background, this regression is used when we want to predict the odds of falling into one of three or more groups. It is distinctly different from ordinal logistic regression, which assesses the odds of being placed in a higher-level group when the groups can be meaningfully ordered from low to high (e.g., high school, college, and graduate levels of education). Instead, multinomial logistic regression uses a set of predictors to determine whether you are more likely to be in a particular group when the groups have no meaningful “low to high” order (e.g., the choice of a food delivery app such as GrubHub, UberEats, or Doordash).

Aligning theoretical framework, gathering articles, synthesizing gaps, articulating a clear methodology and data plan, and writing about the theoretical and practical implications of your research are part of our comprehensive dissertation editing services.

- Bring dissertation editing expertise to chapters 1-5 in a timely manner.
- Track all changes, then work with you to bring about scholarly writing.
- Ongoing support to address committee feedback, reducing revisions.

As with any regression, the first step is to look at the model fitting information. This tells you whether you should look further into the model or retain the null hypothesis. You will need to report the values for the chi-square, degrees of freedom, and *p* (sometimes called sig.), but the *p* really tells you all you need to know about the significance of the model. Some reviewers will also request the -2 Log Likelihood, so it is not a bad idea to at least take note of it just in case. If your *p* value suggests significance (i.e., less than .05), you are in the clear to look at the parameter estimates, which should have a set of outputs for each independent (predictor) variable in the model.

This is where things get interesting, but much easier than they seem. The output will show a set of results for each category of the dependent variable except one, called the reference category. What is really going on is basically an individual binary logistic regression for each of the remaining categories, which assesses the likelihood of being in that group compared to being in the reference group. For example, if you are looking at the likelihood of participants choosing to use GrubHub, UberEats, or Doordash, with Doordash set as the reference category, the results for UberEats would explain how likely participants were to choose that app rather than Doordash, while the results for GrubHub would explain how likely they were to choose GrubHub rather than Doordash. To get the full picture, you would interpret each of these sets of results in turn (and you can rerun the model with a different reference category if you need the remaining comparison). For more detail on how to read these individual results, you can visit this blog on odds ratios!