# MANOVA in SPSS


Multivariate Analysis of Variance (MANOVA) in SPSS is similar to ANOVA, except that instead of one metric dependent variable there are two or more. Like ANOVA, MANOVA is concerned with differences between groups, but it examines those group differences across multiple dependent variables simultaneously.

MANOVA in SPSS is appropriate when there are two or more dependent variables that are correlated. If the dependent variables are uncorrelated or orthogonal, a separate ANOVA on each dependent variable is more appropriate than MANOVA.


Let us take an example. Suppose that four groups, each consisting of 100 randomly selected individuals, are exposed to four different commercials about a detergent. After watching the commercial, each individual rates his or her preference for the product, for the manufacturing company, and for the commercial itself. Since these three variables are correlated, MANOVA should be conducted to determine which commercial received the highest preference across the three preference variables.
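To make the setup concrete, the sketch below simulates data shaped like this example. The group means and the correlation structure are assumed for illustration; they are not real study values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulation of the detergent example: four commercials,
# 100 viewers each, three correlated preference ratings per viewer
# (product, company, commercial). The covariance matrix is assumed.
cov = np.array([[1.0, 0.6, 0.5],
                [0.6, 1.0, 0.4],
                [0.5, 0.4, 1.0]])

# Assumed mean preference vectors for the four commercials
group_means = [np.array([5.0, 5.0, 5.0]),
               np.array([5.5, 5.3, 5.4]),
               np.array([6.0, 5.8, 6.1]),
               np.array([4.8, 4.9, 4.7])]

data = np.vstack([rng.multivariate_normal(m, cov, size=100)
                  for m in group_means])
commercial = np.repeat(np.arange(4), 100)  # group label for each viewer

print(data.shape)  # (400, 3): 400 viewers, 3 dependent variables
```

A dataset in this shape (one grouping factor, three metric dependent variables) is exactly what the SPSS Multivariate dialog expects.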

MANOVA in SPSS is done by selecting “Analyze,” “General Linear Model” and “Multivariate” from the menus.

As in ANOVA, the first step is to identify the dependent and independent variables. MANOVA in SPSS involves two or more metric dependent variables. Metric variables are those which are measured using an interval or ratio scale. The dependent variable is generally denoted by Y and the independent variable is denoted by X.

In MANOVA in SPSS, the null hypothesis is that the vectors of means on multiple dependent variables are equal across groups.

As in ANOVA, MANOVA in SPSS also involves decomposing the total variation, but the decomposition is carried out on all the dependent variables simultaneously. The total variation in Y is denoted by SSy, which can be broken down into two components:

SSy = SSbetween + SSwithin

Here the subscripts ‘between’ and ‘within’ refer to the categories of X in MANOVA in SPSS. SSbetween is the portion of the sum of squares in Y which is related to the independent variable or factor X. Thus, it is generally referred to as the sum of squares of X. SSwithin is the variation in Y which is related to the variation within each category of X. It is generally referred to as the sum of squares for errors in MANOVA in SPSS.
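This decomposition can be verified numerically. The sketch below uses a small hypothetical dataset (three categories of X, three observations each, one dependent variable); the values are illustrative only.

```python
import numpy as np

# Hypothetical data: three categories of X, one dependent variable Y
groups = [np.array([1.0, 2.0, 3.0]),
          np.array([4.0, 5.0, 6.0]),
          np.array([7.0, 8.0, 9.0])]

all_y = np.concatenate(groups)
grand_mean = all_y.mean()

# Total variation in Y (SSy)
ss_y = ((all_y - grand_mean) ** 2).sum()

# Between-group sum of squares (the sum of squares of X)
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# Within-group sum of squares (the sum of squares for error)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

print(ss_y, ss_between, ss_within)  # 60.0 54.0 6.0
assert np.isclose(ss_y, ss_between + ss_within)
```

Here SSy = 60 splits into SSbetween = 54 and SSwithin = 6, confirming the identity above.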

Thus in MANOVA in SPSS, for all the dependent variables (say) Y1,Y2 (and so on), the decomposition of the total variation is done simultaneously.
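With several dependent variables, the same decomposition applies to the matrices of sums of squares and cross-products (SSCP): the total SSCP matrix T equals the between-groups matrix B plus the within-groups matrix W. A minimal sketch with two hypothetical dependent variables:

```python
import numpy as np

# Hypothetical data: three groups, two dependent variables (columns Y1, Y2)
groups = [np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]]),
          np.array([[4.0, 5.0], [5.0, 4.0], [6.0, 6.0]]),
          np.array([[7.0, 8.0], [8.0, 9.0], [9.0, 7.0]])]

all_obs = np.vstack(groups)
grand_mean = all_obs.mean(axis=0)

# Total SSCP matrix T: cross-products of deviations from the grand mean
dev = all_obs - grand_mean
T = dev.T @ dev

# Between-groups SSCP matrix B: group-mean deviations, weighted by group size
B = sum(len(g) * np.outer(g.mean(axis=0) - grand_mean,
                          g.mean(axis=0) - grand_mean) for g in groups)

# Within-groups SSCP matrix W: deviations from each group's own mean vector
W = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)

assert np.allclose(T, B + W)  # the decomposition holds element by element
```

The diagonal entries of T, B, and W are exactly the univariate SSy, SSbetween, and SSwithin for each dependent variable; the off-diagonal entries carry the cross-products that make the analysis multivariate.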

The next task in MANOVA in SPSS is to measure the effects of X on Y1, Y2 (and so on). This is generally done through the sum of squares of X. The relative magnitude of the sum of squares of X increases as the differences among the means of Y1, Y2 (and so on) across the categories of X increase, and as the variation in Y1, Y2 (and so on) within the categories of X decreases.

The strength of the effects of X on Y1, Y2 (and so on) is measured with the help of η². The value of η² varies between 0 and 1. η² assumes a value of 0 when all the category means are equal, indicating that X has no effect on Y1, Y2 (and so on). η² assumes a value of 1 when there is no variability within each category of X but there is some variability between the categories.
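For a single dependent variable, η² is the ratio of the between-group sum of squares to the total sum of squares. Continuing the hypothetical three-group dataset used above:

```python
import numpy as np

# Same hypothetical data: three categories of X, one dependent variable
groups = [np.array([1.0, 2.0, 3.0]),
          np.array([4.0, 5.0, 6.0]),
          np.array([7.0, 8.0, 9.0])]

all_y = np.concatenate(groups)
grand_mean = all_y.mean()

ss_total = ((all_y - grand_mean) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# Eta squared: proportion of total variation explained by X
eta_squared = ss_between / ss_total
print(eta_squared)  # 0.9
```

A value of 0.9 indicates that most of the variation in Y lies between the categories of X rather than within them; equal group means would drive ss_between, and hence η², to 0.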

The final step in MANOVA in SPSS is to calculate the mean squares, obtained by dividing each sum of squares by its corresponding degrees of freedom. The null hypothesis of equal mean vectors is tested with an F statistic, the ratio of the mean square for the independent variable to the mean square for error. For the multivariate test itself, SPSS reports statistics such as Wilks' Lambda and Pillai's Trace, each converted to an approximate F statistic.
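As a sketch of this final step for a single dependent variable: with k groups and N observations in total, the between and within sums of squares are divided by k − 1 and N − k degrees of freedom, and the ratio of the resulting mean squares is the F statistic. The data are the same hypothetical values used above.

```python
import numpy as np

# Same hypothetical data: three categories of X, one dependent variable
groups = [np.array([1.0, 2.0, 3.0]),
          np.array([4.0, 5.0, 6.0]),
          np.array([7.0, 8.0, 9.0])]

k = len(groups)                        # number of categories of X
n_total = sum(len(g) for g in groups)  # total number of observations

all_y = np.concatenate(groups)
grand_mean = all_y.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_between = ss_between / (k - 1)       # mean square for X, df = k - 1
ms_within = ss_within / (n_total - k)   # mean square for error, df = N - k

f_stat = ms_between / ms_within
print(f_stat)  # 27.0
```

Here MSbetween = 54/2 = 27 and MSwithin = 6/6 = 1, giving F = 27 on (2, 6) degrees of freedom, a large value consistent with the clearly separated group means in this illustrative dataset.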