# Analysis Of Variance

ANOVA (Analysis of Variance) is a parametric statistical test. The one-way ANOVA is used when the dependent variable is measured on an interval or ratio scale and the independent variable consists of three or more categories (groups/levels). The two-way (or N-way) ANOVA is used when the dependent variable is measured on an interval or ratio scale and there are two (or more) independent variables.

### Fascinating Facts about Analysis of Variance (ANOVA)

ANOVA stands for “Analysis of Variance” and is used to compare differences between groups. Whereas the t-test can only be used to compare TWO groups, analysis of variance can be used to compare TWO OR MORE groups.

EXAMPLE 1: Three groups of people with different sets of instructions look at a list of words:

Group A uses visual encoding: Is the word in capital letters?

Group B uses acoustic encoding: Does the word rhyme with “bear?”

Group C uses semantic encoding: Does the word represent something you can eat?

We are wondering whether the groups will be different in the number of words they can recall under the different types of encoding.

Analysis of variance is used to test these research and null hypotheses about our three groups:

• Research Hypothesis:  H1:  At least one population mean is different from the others (At least one of the groups will recall a significantly different number of words than the other groups).
• Null Hypothesis:  H0:  Pop M1 = Pop M2 = Pop M3  (All of the means are the same. In other words, the samples came from the same population and any differences among sample means are due to random variability.)

The technique of analysis of variance is used to analyze the variability of the groups, specifically to compare between groups variability to within groups variability.

Between groups variability is variability from one group to the next (e.g., Does Group C come from a population with a different mean from Groups A or B?). Within groups variability is variability within one population (e.g., The distribution of scores from Group A). With ANOVA we compute a statistic called the F-statistic. The F-statistic is a ratio of between groups variability to within groups variability. Your book mentions that this is like a “signal to noise ratio.” Between groups variability is the “signal” that we are looking for. We are trying to hear the “signal” amongst the “noise,” which is the within groups variability. We need the signal to be loud enough to overcome the noise.

There are 4 steps to follow in computing the F-statistic:

1. Assess between and within groups variability:

SSB = Sum of Squares between groups: a measure of between groups variability.

SSW = Sum of Squares within groups: a measure of within groups variability

SST = Sum of Squares total: SST = SSB + SSW

2. Take into account degrees of freedom:

dfB = Between groups degrees of freedom:

dfB = NGroups – 1 (number of groups minus one)

dfW = Within groups degrees of freedom:

dfW = N – NGroups (total number of subjects minus the number of groups)

dfT = Total degrees of freedom

dfT = N – 1 (total number of subjects minus one)

dfT = dfB + dfW

3. Compute a ratio of Sum of Squares (SS) to degrees of freedom (df):

MSB = Mean Square between groups: MSB = SSB/dfB

MSW = Mean Square within groups: MSW = SSW/dfW

4. Compute an F ratio (or F-statistic): a ratio of Mean Square Between (MSB) to Mean Square Within (MSW):

F(Sample) = MSB / MSW
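The four steps above can be sketched as a short function. This is an illustrative sketch, not from the text; the function and variable names are ours. It weights each group's squared mean deviation by that group's own n, so it also works when group sizes differ.

```python
def f_statistic(groups):
    """Compute F = MSB / MSW for a list of lists of scores."""
    n_groups = len(groups)
    N = sum(len(g) for g in groups)
    means = [sum(g) / len(g) for g in groups]
    grand_mean = sum(x for g in groups for x in g) / N

    # Step 1: between- and within-groups sums of squares
    ssb = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ssw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)

    # Step 2: degrees of freedom
    df_b = n_groups - 1
    df_w = N - n_groups

    # Step 3: mean squares (SS divided by df)
    msb = ssb / df_b
    msw = ssw / df_w

    # Step 4: the F ratio
    return msb / msw
```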

Now we’re going to use a new example to walk through the computations of analysis of variance and the steps of Hypothesis Testing.

EXAMPLE 2: A researcher hypothesizes that room temperature will affect learning performance. She randomly assigns three groups of subjects to learn and solve math problems under three different room temperatures. Group 1 is assigned to a room that is 50 degrees Fahrenheit, Group 2 is assigned to a room that is 70 degrees Fahrenheit, and Group 3 is assigned to a room that is 90 degrees Fahrenheit. The dependent variable is the number of problems each subject solves correctly. The data are presented below:

**Group 1 (50 degrees Fahrenheit):** M1 = 1, n1 = 5, SS1 = Σ(X – M1)² = 6

| X | X – M1 | (X – M1)² |
| --- | --- | --- |
| 0 | 0 – 1 = -1 | 1 |
| 1 | 1 – 1 = 0 | 0 |
| 3 | 3 – 1 = 2 | 4 |
| 1 | 1 – 1 = 0 | 0 |
| 0 | 0 – 1 = -1 | 1 |

**Group 2 (70 degrees Fahrenheit):** M2 = 4, n2 = 5, SS2 = Σ(X – M2)² = 6

| X | X – M2 | (X – M2)² |
| --- | --- | --- |
| 4 | 4 – 4 = 0 | 0 |
| 3 | 3 – 4 = -1 | 1 |
| 6 | 6 – 4 = 2 | 4 |
| 3 | 3 – 4 = -1 | 1 |
| 4 | 4 – 4 = 0 | 0 |

**Group 3 (90 degrees Fahrenheit):** M3 = 1, n3 = 5, SS3 = Σ(X – M3)² = 4

| X | X – M3 | (X – M3)² |
| --- | --- | --- |
| 1 | 1 – 1 = 0 | 0 |
| 2 | 2 – 1 = 1 | 1 |
| 2 | 2 – 1 = 1 | 1 |
| 0 | 0 – 1 = -1 | 1 |
| 0 | 0 – 1 = -1 | 1 |

Sum of squares for each group is the sum of the squared mean deviation scores:

SS = Σ(X – M)(X – M) = Σ(X – M)²   Note: This is the numerator of the variance!

Total number of subjects: (Big N!)

N = n1 + n2 + n3 = 5 + 5 + 5 = 15

Total number of groups:

NGroups = 3

Grand mean:

GM = (M1 + M2 + M3)/NGroups = (1 + 4 + 1) / 3 = 6/3 = 2 (averaging the group means gives the grand mean here only because all groups have the same n; with unequal group sizes, average all N scores instead)
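The summary statistics above can be checked with a few lines of Python. This is an illustrative script; the helper names are ours, not the text's.

```python
# Scores from Example 2, one list per room-temperature group.
group1 = [0, 1, 3, 1, 0]   # 50 degrees
group2 = [4, 3, 6, 3, 4]   # 70 degrees
group3 = [1, 2, 2, 0, 0]   # 90 degrees

def mean(scores):
    return sum(scores) / len(scores)

def ss(scores):
    """Sum of squared mean deviations: SS = sum((X - M)^2)."""
    m = mean(scores)
    return sum((x - m) ** 2 for x in scores)

M1, M2, M3 = mean(group1), mean(group2), mean(group3)   # 1, 4, 1
SS1, SS2, SS3 = ss(group1), ss(group2), ss(group3)      # 6, 6, 4
N = len(group1) + len(group2) + len(group3)             # 15
GM = (M1 + M2 + M3) / 3                                 # 2 (valid because the n's are equal)
```

Note that averaging the three group means equals the mean of all 15 scores only because every group has the same n.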

The steps of Hypothesis Testing are the same as before:

STEP 1: State Hypotheses:

H0: Pop M1 = Pop M2 = Pop M3

H1: At least one population mean is different from the others.

STEP 2: Comparison Distribution

The comparison distribution for analysis of variance is the F distribution. We said above that the F-statistic is a ratio of between groups variability to within groups variability (or a signal to noise ratio). Like the t distribution, the F distribution changes shape with different degrees of freedom. Unlike the t distribution, the F distribution is ALWAYS positive and thus has only one tail. It also has two different kinds of degrees of freedom: degrees of freedom between, which is also known as the numerator degrees of freedom, and degrees of freedom within, which is also known as the denominator degrees of freedom:

dfB = Between groups degrees of freedom (numerator):

• dfB = NGroups – 1
• dfB = 3 – 1 = 2

dfW = Within groups degrees of freedom (denominator):

• dfW = N – NGroups
• dfW = 15 – 3 = 12

For our example, the dfB (numerator) is equal to 2, and the dfW (denominator) is equal to 12. You will sometimes see this written as F(2, 12). The comparison distribution is the F distribution with 2 and 12 degrees of freedom.

STEP 3: Cutoff score on comparison distribution:

We use the degrees of freedom to look up a critical F ratio (or F-statistic) on an F Table (Table A-3, p. 417), with which we will later compare our sample’s F-statistic. For α = .05, the critical F ratio for our example (numerator df = 2, denominator df = 12) is 3.89.
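If a printed table isn't handy, the same critical value can be looked up with scipy's F distribution. This assumes scipy is installed; it is not part of the original notes.

```python
# Critical F for alpha = .05 with 2 (numerator) and 12 (denominator) df.
from scipy.stats import f

alpha = 0.05
f_crit = f.ppf(1 - alpha, dfn=2, dfd=12)  # inverse CDF at the 95th percentile
print(round(f_crit, 2))  # 3.89, matching the table value
```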

STEP 4: Now we can compute the F-statistic for our sample:

1. Assess between and within groups variability:

SSB = Sum of Squares between groups:

• SSB = n[(M1 – GM)(M1 – GM) + (M2 – GM)(M2 – GM) + (M3 – GM)(M3 – GM)]
• SSB = 5[(1 – 2)(1 – 2) + (4 – 2)(4 – 2) + (1 – 2)(1 – 2)]
• SSB = 5[(-1)(-1) + (2)(2) + (-1)(-1)]
• SSB = 5[(1) + (4) + (1)]
• SSB = 5(6) = 30

Note: don’t confuse n with N. Little n is the number of subjects in each group (5). Big N is the TOTAL number of subjects (15).

SSW = Sum of Squares within groups:

• SSW = SS1 + SS2 + SS3
• SSW = 6 + 6 + 4 = 16

SST = Sum of Squares total:

• SST = SSB + SSW
• SST = 30 + 16 = 46

2. Take into account degrees of freedom:

dfB = Between groups degrees of freedom:

• dfB = NGroups – 1
• dfB = 3 – 1 = 2

dfW = Within groups degrees of freedom:

• dfW = N – NGroups
• dfW = 15 – 3 = 12

dfT = Total degrees of freedom

• dfT = N – 1
• dfT = 15 – 1 = 14
• Note:  dfT = dfB + dfW = 2 + 12 = 14

3. Compute a ratio of Sum of Squares (SS) to degrees of freedom (df):

MSB = Mean Square between groups: MSB = SSB / dfB = 30/2 = 15

MSW = Mean Square within groups: MSW = SSW / dfW = 16/12 = 1.3333

4. Compute an F ratio (or F-statistic): a ratio of Mean Square Between (MSB) to Mean Square Within (MSW):

F (Sample) = MSB / MSW = 15/1.3333 = 11.25

We can arrange the computations from above into this ANOVA Table:

| Source | SS | df | MS | F |
| --- | --- | --- | --- | --- |
| Between | SSB = 30 | dfB = 2 | MSB = 15 | F = 11.25 |
| Within | SSW = 16 | dfW = 12 | MSW = 1.3333 | |
| Total | SST = 46 | dfT = 14 | | |
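The computations in the table can be re-traced as a quick script (illustrative; plain arithmetic, no libraries, with the numbers taken from the text above):

```python
# Steps 1-4 of Example 2, spelled out.
GM = 2                                                      # grand mean
SSB = 5 * ((1 - GM) ** 2 + (4 - GM) ** 2 + (1 - GM) ** 2)   # n * sum of squared mean deviations = 30
SSW = 6 + 6 + 4                                             # SS1 + SS2 + SS3 = 16
SST = SSB + SSW                                             # 46
dfB, dfW = 3 - 1, 15 - 3                                    # 2 and 12
MSB = SSB / dfB                                             # 15.0
MSW = SSW / dfW                                             # 1.3333...
F = MSB / MSW
print(round(F, 2))  # 11.25
```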

STEP 5: Make a decision:

Just like the t-test, we compare our sample’s F to the critical F, but we don’t have to worry about positives and negatives because F is always positive:

If F(Sample) > or = F(Critical), reject H0

If F(Sample) < F(Critical), fail to reject H0

For our example: 11.25 > 3.89, so we reject the null hypothesis and conclude that at least one mean is different from the others. Note: Ultimately, we want to know which means are different, but this involves making comparisons between means and is beyond where we have time to go this semester.
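The whole analysis can be cross-checked in one call with scipy's built-in one-way ANOVA (again assuming scipy is installed):

```python
# One-call check of Example 2 with scipy's one-way ANOVA.
from scipy.stats import f_oneway

result = f_oneway([0, 1, 3, 1, 0], [4, 3, 6, 3, 4], [1, 2, 2, 0, 0])
print(round(result.statistic, 2))  # 11.25, the F we computed by hand
print(result.pvalue < 0.05)        # True -> reject H0
```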

One final note about the F-statistic: The F-statistic is a ratio of signal to noise, so if the signal is only as loud as the noise, the signal and the noise will be the same. If they are the same, then the numerator and denominator of the F-statistic will be the same and F will equal 1.0. The louder the signal gets, the better it is able to overcome the noise. As the signal gets louder (as differences between means get larger relative to the variability within groups), the numerator of the F-statistic grows and the F-statistic gets larger. We want the F-statistic to be significantly larger than 1.0.
