Running Multiple Linear Regression (MLR) & Interpreting the Output: What Your Results Mean


Once the data are prepared and assumptions considered, the next step is to run the Multiple Linear Regression analysis and interpret its output. This stage translates numerical results into meaningful findings relevant to the dissertation’s research questions.

Overview of the Process

Statistical software packages are commonly used to perform MLR. The process generally involves specifying the dependent variable and the set of independent variables within the software’s regression module.

Services that use Intellectus Statistics offer a distinct advantage. The platform is designed to streamline the entire analysis pipeline: it performs the regression, automates crucial assumption checks, and, importantly, generates output in plain English. This significantly reduces the complexity and risk of misinterpretation that dissertation students often face, contributing to quicker progress and potentially lowering costs by minimizing extensive consultations for basic interpretation.

Key Output Components for Your Dissertation

The output from an MLR analysis typically includes several key tables and statistics. Understanding these is essential for a comprehensive dissertation results chapter.


  1. Model Summary Table: This table provides an overview of the model’s overall fit and predictive power.
    a. R (Multiple Correlation Coefficient): This value indicates the strength of the linear relationship between the set of all predictor variables (taken together) and the dependent variable. Because it is the correlation between the observed and predicted Y values, it ranges from 0 to 1 and is always positive in this context.
    b. R-Square (R², Coefficient of Determination): This is a critical statistic representing the proportion of the total variance in the dependent variable that is explained, or accounted for, by the set of independent variables included in the model. For example, an R² of 0.45 means that 45% of the variability in the dependent variable can be attributed to the combined effect of the predictors in the model. This is crucial for discussing the practical significance of the findings.
    c. Adjusted R-Square (Adjusted R²): This is a modified version of R² that accounts for the number of predictors in the model and the sample size. It provides a more conservative estimate of the variance explained, especially when comparing models with different numbers of predictors or when generalizing the model to the population. R² tends to increase as more predictors are added, even if they don't genuinely improve the model; adjusted R² penalizes the inclusion of unnecessary predictors and can decrease if a new predictor does not add sufficient explanatory power. A substantially smaller adjusted R² compared to R² can be a warning sign that the model contains too many predictors.
  2. ANOVA (Analysis of Variance) Table (F-test for Overall Model Significance): This table tests the overall significance of the regression model.
    a. F-ratio (F-statistic): This statistic tests the null hypothesis that all the regression coefficients for the independent variables are simultaneously equal to zero (H0: β1 = β2 = … = βp = 0). In simpler terms, it tests whether the model, as a whole, has any predictive capability beyond what would be expected by chance. It assesses if the independent variables, collectively, are effective in predicting the dependent variable.
    b. Sig. (p-value associated with the F-ratio): This is the probability of observing the obtained F-ratio (or a more extreme one) if the null hypothesis (that all true regression coefficients are zero) is true. If this p-value is statistically significant (typically p < .05), the null hypothesis is rejected. This indicates that the regression model is useful and explains a statistically significant amount of variance in the dependent variable.
  3. Coefficients Table (Individual Predictor Contributions): This table provides detailed information about each independent variable in the model.
    a. Unstandardized Coefficients (B): These represent the estimated change in the dependent variable associated with a one-unit increase in the corresponding independent variable, while holding all other independent variables in the model constant. The units of B are the original units of the dependent variable per unit of the independent variable. These coefficients are used to write the regression equation.
    b. Standardized Coefficients (Beta, β): These coefficients are expressed in standard deviation units, meaning they represent the change in the dependent variable (in standard deviations) for a one standard deviation increase in the predictor variable, holding other predictors constant. Standardized coefficients allow for a comparison of the relative strength, or importance, of predictors that are measured on different scales. The predictor with the largest absolute Beta value has the strongest relative effect on the dependent variable.
    c. t-value and Sig. (p-value) for each coefficient: For each independent variable, a t-test assesses whether its unstandardized coefficient (B) is statistically significantly different from zero, after accounting for the effects of all other predictors in the model. A significant p-value (e.g., p < .05) suggests that the predictor makes a meaningful contribution to predicting the dependent variable.
    d. Confidence Intervals for B (e.g., 95% CI): These provide a range of plausible values for the true population regression coefficient for each predictor. If the confidence interval does not include zero, the coefficient is statistically significant at the corresponding alpha level (e.g., 0.05 for a 95% CI).
    e. Multicollinearity Statistics (Tolerance and VIF): As discussed under assumptions, these values help diagnose whether multicollinearity is a problem among the predictors in the model.

Interpreting these outputs requires moving beyond simply noting statistical significance. For a dissertation, it is important to discuss the direction and magnitude of effects (B and Beta coefficients), the overall explanatory power of the model (R²), and the statistical significance of both the overall model (F-test) and individual predictors (t-tests). This holistic understanding allows for a richer discussion of the findings in relation to the research questions and existing literature.

The following table provides a summary to aid in interpreting common MLR output components:

Table 1: Multiple Linear Regression Output Interpretation Summary

| Output Section | Statistic(s) | What it Tells You | Look For… |
| --- | --- | --- | --- |
| Model Summary | R | Strength of the overall linear relationship between all predictors and the dependent variable. | Higher value indicates a stronger relationship (closer to 1). |
| | R-Square (R²) | Proportion of variance in the dependent variable explained by the model. | Higher percentage indicates better explanatory power. |
| | Adjusted R-Square | R² adjusted for the number of predictors and sample size; a more conservative estimate of model fit. | Often preferred over R², especially for model comparison or generalization. A large drop from R² may indicate overfitting. |
| ANOVA | F-ratio (F-statistic) | Tests if the overall regression model is statistically significant (i.e., if at least one predictor is non-zero). | Higher F-value suggests a more significant model. |
| | Sig. (p-value for F) | Probability of observing the F-ratio if the null hypothesis (no relationship) is true. | p < .05 (typically) indicates the overall model is statistically significant. |
| Coefficients | Unstandardized Coefficients (B) | Change in the dependent variable for a one-unit change in the predictor, holding others constant. | Sign (+/−) indicates direction of relationship; magnitude indicates size of effect in original units. Used for the regression equation. |
| | Standardized Coefficients (Beta, β) | Change in the dependent variable (in SD units) for a one-SD change in the predictor; allows comparison of predictors. | Larger absolute Beta value indicates stronger relative predictive power. |
| | t-value | Tests if an individual predictor's coefficient (B) is significantly different from zero. | Larger absolute t-value suggests greater significance. |
| | Sig. (p-value for t) | Probability of observing the t-value if the predictor has no effect (B = 0). | p < .05 (typically) indicates the predictor is statistically significant. |
| | Confidence Intervals for B | Range of plausible values for the true population coefficient. | If the interval does not contain 0, the predictor is statistically significant. |
| | Tolerance / VIF (Variance Inflation Factor) | Indicates multicollinearity among predictors. | Tolerance < 0.1 or VIF > 10 suggests problematic multicollinearity. |

This structured approach to output interpretation helps ensure that students extract the most critical information for their results chapter, thereby supporting a robust and well-defended dissertation.
