Analysis of variance (ANOVA)

Intros
Lessons
  1. What is Analysis of Variance (ANOVA)?
  2. Hypothesis Testing with F-Distribution
Examples
Lessons
  1. Determining Degrees of Freedom
    A test was done to study the reaction time of car drivers at different periods of the day.

    Reaction Time of Drivers (seconds)

    | Morning | Afternoon | Evening | Night |
    | --- | --- | --- | --- |
    | 1.32 | 0.25 | 2.34 | 1.54 |
    | 0.71 | 1.48 | 1.75 | 1.98 |
    | 2.27 | 1.24 | 0.64 | 1.76 |
    | 0.57 | 0.89 | 0.98 | 1.15 |
    | \overline{x}=1.2175 | \overline{x}=0.965 | \overline{x}=1.4275 | \overline{x}=1.6075 |

    Grand Mean: \overline{x}=1.304375

    1. What are the degrees of freedom for each time of day?
    2. How many degrees of freedom are there if we wanted to measure the Grand Mean (the mean of all the groups)?
  2. Determining the Sum of Squares
    The following case study was done on what type of beverages office workers drink in the morning and their productivity.

    | Juice/Milk Drinkers | Tea Drinkers | Coffee Drinkers |
    | --- | --- | --- |
    | 3 | 5 | 8 |
    | 5 | 5 | 6 |
    | 3 | 6 | 7 |
    | 1 | 4 | 7 |

    1. What is the Total Sum of Squares (TSS or SST) for this case study? Also what are the degrees of freedom for this group?
    2. What is the Sum of Squares Within Groups (SSW)? Also what is the number of degrees of freedom for all these groups?
    3. What is the Sum of Squares Between Groups (SSB)? Also what is the number of degrees of freedom for this calculation?
    4. Verify that: TSS=SSW+SSB for both the variation and the degrees of freedom.
  3. Hypothesis Testing with F-Distribution
    The following case study was done on what type of beverages office workers drink in the morning and their productivity.

    | Juice/Milk Drinkers | Tea Drinkers | Coffee Drinkers |
    | --- | --- | --- |
    | 3 | 5 | 8 |
    | 5 | 5 | 6 |
    | 3 | 6 | 7 |
    | 1 | 4 | 7 |

    With a significance level of \alpha=0.05, test the claim that "what you drink in the morning does not affect how productive you are at work."

    Use the fact that in the previous example we found that SSW=12 with 9 degrees of freedom. And we also had that SSB=32 with 2 degrees of freedom.
    Topic Notes

    Introduction to Analysis of Variance (ANOVA)

    Welcome to our exploration of Analysis of Variance, commonly known as ANOVA! This powerful statistical technique is essential for comparing data across multiple groups. ANOVA helps researchers determine if there are significant differences between group means, making it invaluable in fields like psychology, biology, and social sciences. By watching our introductory video, you'll gain a solid foundation in understanding ANOVA's purpose and application. The video breaks down complex concepts into digestible chunks, perfect for students new to statistical analysis. ANOVA's importance lies in its ability to minimize the risk of Type I errors when conducting multiple comparisons. It's like a mathematical detective, helping us uncover patterns and relationships in data that might otherwise go unnoticed. As we delve deeper into ANOVA, you'll see how it becomes an indispensable tool in your statistical toolkit, enabling you to make informed decisions based on data-driven insights.

    Understanding the nuances of ANOVA is crucial for any researcher. The technique not only helps in identifying Type I errors but also in ensuring the reliability of the results. By focusing on patterns and relationships within the data, ANOVA provides a comprehensive view that is essential for thorough analysis. This methodical approach is what makes ANOVA a cornerstone in the realm of statistical methods.

    Understanding ANOVA: Comparing Multiple Data Sets

    Analysis of Variance, commonly known as ANOVA, is a powerful statistical technique used to compare multiple data sets simultaneously. This method is particularly useful when researchers need to determine if there are significant differences between group means. ANOVA extends beyond the limitations of t-tests, which can only compare two groups at a time, making it an essential tool in various fields of study.

    The Basics of ANOVA

    At its core, ANOVA is a hypothesis testing method that examines the variation between and within groups. It allows researchers to analyze the impact of one or more factors on a dependent variable. By comparing the means of different groups, ANOVA helps determine if the observed differences are statistically significant or merely due to chance.

    Comparing Test Scores: A Practical Example

    To illustrate the concept of ANOVA, let's consider an example from education. Imagine a school wants to compare the effectiveness of three different teaching methods on student performance. They implement these methods in three separate classes and conduct a standardized test at the end of the semester. The resulting test scores form our data sets for comparison.

    In this scenario, ANOVA can help answer the question: "Is there a significant difference in test scores among the three classes?" This analysis goes beyond simply comparing average scores; it takes into account the variability within each class and between the classes.

    How ANOVA Works

    ANOVA operates by partitioning the total variance in the data into two components:

    1. Variance between groups (explained variance)
    2. Variance within groups (unexplained variance or error)

    The technique then compares these variances using the F-statistic. A large F-value suggests that the differences between group means are significant relative to the variability within the groups.
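    This partition of variance can be sketched in a few lines of code. The following is a minimal illustration with made-up scores for three groups, not any particular library's implementation:

```python
# Minimal sketch of ANOVA's variance partition (hypothetical scores).
from statistics import mean

groups = [[4, 6, 8], [1, 2, 3], [6, 8, 10]]   # three groups of scores
grand = mean(x for g in groups for x in g)    # grand (overall) mean

# Between-group variation: how far each group mean sits from the grand mean
ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
# Within-group variation: how far each score sits from its own group mean
ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)

k = len(groups)                               # number of groups
n = sum(len(g) for g in groups)               # total sample size
f_stat = (ssb / (k - 1)) / (ssw / (n - k))    # large F => group means differ
print(round(f_stat, 2))
```

    With these made-up numbers the between-group variation dominates the within-group variation, so the F-value comes out large.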

    Determining Significant Differences

    ANOVA helps in determining if there are significant differences between group means by testing the null hypothesis. In our example, the null hypothesis would state that there is no significant difference in mean test scores among the three classes. If ANOVA yields a p-value below the chosen significance level (typically 0.05), we reject the null hypothesis and conclude that at least one group mean differs significantly from the others.

    Advantages of ANOVA in Group Comparison

    ANOVA offers several advantages in comparing multiple data sets:

    • Efficiency: It allows for the simultaneous comparison of multiple groups, reducing the risk of Type I errors that might occur with multiple t-tests.
    • Versatility: ANOVA can be applied to various experimental designs and can handle both categorical and continuous independent variables.
    • Insight into Interactions: More complex ANOVA designs can reveal interactions between different factors affecting the dependent variable.

    Limitations and Considerations

    While ANOVA is a robust tool for group comparison, it's important to note its limitations. ANOVA assumes that the data is normally distributed within each group and that the variances are equal across groups (homogeneity of variance). Violations of these assumptions may require alternative statistical approaches or data transformations.

    In conclusion, ANOVA stands as a cornerstone technique in statistical analysis, offering researchers a powerful method to compare multiple data sets and uncover significant differences between group means. By understanding and applying ANOVA, researchers can gain valuable insights into complex relationships within their data, leading to more informed decisions and robust scientific conclusions.

    The Role of Variance in ANOVA

    Variance plays a crucial role in Analysis of Variance (ANOVA), serving as the foundation for determining significant differences between group means. Understanding the importance of variance in ANOVA is essential for researchers and statisticians alike. At its core, ANOVA uses variance to assess whether the variability between groups is significantly larger than the variability within groups, thus indicating meaningful differences among the groups being studied.

    The concept of variance in ANOVA is divided into two key components: between-group variability and within-group variability. Between-group variability refers to the differences among group means, while within-group variability represents the spread of individual scores within each group. By comparing these two types of variability, ANOVA can determine whether the observed differences between groups are statistically significant or merely due to chance.

    To illustrate these concepts, let's consider a simple example. Imagine we're studying the effects of three different teaching methods on student test scores. We have three groups of students, each taught using a different method. The between-group variability would measure how much the average test scores differ across the three teaching methods. On the other hand, the within-group variability would assess how much individual student scores vary within each teaching method group.

    ANOVA calculates an F-ratio by dividing the between-group variability by the within-group variability. A large F-ratio indicates that the differences between group means are more substantial than what we would expect by chance, suggesting that the teaching methods have a significant effect on test scores. Conversely, a small F-ratio suggests that the differences between group means are not significant enough to conclude that the teaching methods have different effects.

    The importance of variance in ANOVA extends beyond just identifying significant differences. It also helps quantify the magnitude of these differences through effect size measures. By understanding the proportion of variance explained by the independent variable (e.g., teaching methods) versus the unexplained variance, researchers can gauge the practical significance of their findings.

    Moreover, the analysis of variance components in ANOVA provides valuable insights into the sources of variability in the data. This information can guide further research, help in identifying confounding variables, and inform decision-making processes in various fields, from education and psychology to business and healthcare.

    In conclusion, variance is the cornerstone of ANOVA, enabling researchers to make informed decisions about group differences. By carefully examining between-group and within-group variability, ANOVA provides a powerful tool for hypothesis testing in ANOVA and understanding complex relationships in data across diverse disciplines.

    F-Distribution and Hypothesis Testing in ANOVA

    The F-distribution plays a crucial role in statistical analysis, particularly in hypothesis testing in ANOVA. This powerful tool allows researchers to compare multiple group means simultaneously, making it an essential component of hypothesis testing in ANOVA in various fields. Understanding the F-distribution and its application in ANOVA is vital for anyone working with statistical data.

    The F-distribution is a continuous probability distribution that arises in the testing of whether two population variances are equal. It is named after Sir Ronald Fisher, a prominent statistician who developed this distribution. In ANOVA, the F-distribution is used to determine whether there are any statistically significant differences between the means of three or more independent groups.

    Hypothesis testing using the F-distribution involves several key steps. First, we formulate the null hypothesis (H0) and the alternative hypothesis (H1). The null hypothesis typically states that there are no significant differences between group means, while the alternative hypothesis suggests that at least one group mean differs from the others. Next, we calculate the F-statistic, which is the ratio of between-group variability to within-group variability.

    The F-statistic, also known as the test statistic, is a crucial element in hypothesis testing. It quantifies the evidence against the null hypothesis. A larger F-statistic indicates a greater likelihood that the null hypothesis should be rejected. To interpret the F-statistic, we compare it to a critical value derived from the F-distribution.

    The critical value is a threshold that determines whether we reject or fail to reject the null hypothesis. It is obtained from the F-distribution table based on the chosen significance level (α) and the degrees of freedom. In ANOVA, there are two types of degrees of freedom: between-groups degrees of freedom (dfb) and within-groups degrees of freedom (dfw). These values are calculated based on the number of groups and the total sample size.

    To interpret F-distribution tables, follow these steps:

    1. Determine the significance level (α) for your test, typically 0.05 or 0.01.
    2. Calculate the degrees of freedom for both the numerator (dfb) and denominator (dfw).
    3. Locate the appropriate F-distribution table based on your chosen significance level.
    4. Find the row corresponding to your numerator degrees of freedom (dfb).
    5. Find the column corresponding to your denominator degrees of freedom (dfw).
    6. The value at the intersection of this row and column is your critical F-value.

    Once you have the critical value, compare it to your calculated F-statistic. If the F-statistic exceeds the critical value, you reject the null hypothesis, indicating that there are significant differences between group means. Conversely, if the F-statistic is less than the critical value, you fail to reject the null hypothesis, suggesting no significant differences between group means.
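    When no printed table is at hand, the critical value can also be computed. The sketch below uses a closed-form shortcut that is valid only when the numerator degrees of freedom is exactly 2 (a common case with three groups); the F-statistic used in the comparison is just an assumed example value:

```python
# Critical value of the F-distribution, for the special case dfb = 2.
# When dfb = 2 the F CDF has a closed form, P(F <= x) = 1 - (1 + 2x/dfw)**(-dfw/2),
# which can be inverted directly. General (dfb, dfw) pairs need an F-table or a
# statistics library (e.g. scipy.stats.f.ppf).

def f_critical_dfb2(alpha: float, dfw: int) -> float:
    """Critical value of F(2, dfw) at significance level alpha."""
    return (dfw / 2) * (alpha ** (-2 / dfw) - 1)

alpha = 0.05
f_crit = f_critical_dfb2(alpha, dfw=6)   # e.g. three groups of three: dfb = 2, dfw = 6
f_stat = 6.75                            # assumed example value of the F-statistic

print(round(f_crit, 2))                  # about 5.14
reject_null = f_stat > f_crit            # True: F exceeds the critical value
```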

    It's important to note that modern statistical software often provides p-values along with F-statistics, simplifying the interpretation process. A p-value less than your chosen significance level (e.g., p < 0.05) indicates statistical significance, allowing you to reject the null hypothesis without directly consulting F-distribution tables.

    In conclusion, the F-distribution is a powerful tool for hypothesis testing in ANOVA, enabling researchers to compare multiple group means efficiently. By understanding the concepts of critical value, test statistic, and degrees of freedom, and knowing how to interpret F-distribution tables, you can confidently analyze and draw conclusions from your statistical data. This knowledge is invaluable across various fields, including psychology, biology, economics, and social sciences, where comparing group differences is a common research objective.

    Calculating the Test Statistic in ANOVA

    Analysis of Variance (ANOVA) is a powerful statistical technique used to compare means across multiple groups. At the heart of ANOVA lies the calculation of the test statistic, which is crucial for determining whether there are significant differences between group means. This article will provide a detailed explanation of how to calculate the test statistic in ANOVA, focusing on the formula involving the sum of squares between groups and the sum of squares within groups.

    The test statistic in ANOVA, denoted as F, is calculated using the following formula:

    F = (Sum of Squares Between Groups / Degrees of Freedom Between) / (Sum of Squares Within Groups / Degrees of Freedom Within)

    Let's break down each component:

    1. Sum of Squares Between Groups (SSB): This represents the variation between group means and the overall mean. It's calculated as:

    SSB = Σ ni(X̄i - X̄)²

    Where ni is the sample size of each group, X̄i is the mean of each group, and X̄ is the overall mean.

    2. Sum of Squares Within Groups (SSW): This represents the variation within each group. It's calculated as:

    SSW = Σ(Xij - X̄i)²

    Where Xij is each individual observation, and X̄i is the mean of the group it belongs to.

    3. Degrees of Freedom: For between groups, it's (k-1) where k is the number of groups. For within groups, it's (N-k) where N is the total sample size.

    To illustrate this calculation, let's consider a simple numerical example:

    Suppose we have three groups with the following data:

    Group 1: 5, 7, 9 (n1 = 3, X̄1 = 7)

    Group 2: 2, 4, 6 (n2 = 3, X̄2 = 4)

    Group 3: 8, 10, 12 (n3 = 3, X̄3 = 10)

    The overall mean X̄ = (7 + 4 + 10) / 3 = 7

    Calculating SSB:

    SSB = 3(7-7)² + 3(4-7)² + 3(10-7)² = 0 + 27 + 27 = 54

    Calculating SSW:

    SSW = [(5-7)² + (7-7)² + (9-7)²] + [(2-4)² + (4-4)² + (6-4)²] + [(8-10)² + (10-10)² + (12-10)²]

    SSW = 8 + 8 + 8 = 24

    Degrees of freedom: Between groups = 3-1 = 2, Within groups = 9-3 = 6

    Now, we can calculate the F statistic:

    F = (54/2) / (24/6) = 27 / 4 = 6.75
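    The arithmetic in this example can be double-checked with a short script that mirrors the formulas above:

```python
# Verify the worked example: SSB = 54, SSW = 24, F = 6.75.
from statistics import mean

groups = [[5, 7, 9], [2, 4, 6], [8, 10, 12]]
grand = mean(x for g in groups for x in g)                  # overall mean: 7

ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)  # 0 + 27 + 27 = 54
ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)    # 8 + 8 + 8 = 24

df_between = len(groups) - 1                                # 3 - 1 = 2
df_within = sum(len(g) for g in groups) - len(groups)       # 9 - 3 = 6
f_stat = (ssb / df_between) / (ssw / df_within)             # 27 / 4 = 6.75
```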

    Interpreting ANOVA Results and Making Decisions

    Interpreting ANOVA (Analysis of Variance) results is a crucial step in statistical analysis, allowing researchers to draw meaningful conclusions from their data. The process involves comparing the test statistic with the critical value to make decisions about the null hypothesis. Understanding this process is essential for anyone working with statistical data.

    To begin, let's focus on the null hypothesis. In ANOVA, the null hypothesis typically states that there are no significant differences between group means. The alternative hypothesis suggests that at least one group mean is different from the others. The goal is to determine whether to reject or fail to reject the null hypothesis based on the evidence provided by the data.

    The key to interpreting ANOVA results lies in comparing the calculated F-statistic (test statistic) with the critical F-value. If the F-statistic exceeds the critical F-value, we reject the null hypothesis. This indicates that there is statistical significance, meaning that the observed differences between group means are unlikely to have occurred by chance alone.

    Rejecting the null hypothesis has important implications. It suggests that there are significant differences between at least two group means in the population. This finding can lead to further investigation to determine which specific groups differ and by how much. Researchers often use post-hoc tests, such as Tukey's HSD, to explore these differences in detail.

    On the other hand, failing to reject the null hypothesis implies that there is not enough evidence to conclude that significant differences exist between group means. It's crucial to note that this doesn't prove that the groups are identical; it simply means that any observed differences could be due to random chance rather than a true effect in the population.

    The concept of statistical significance is central to interpreting ANOVA results. A result is considered statistically significant if the p-value associated with the F-statistic is less than the chosen significance level (often 0.05). This indicates that the probability of obtaining such results by chance is low, strengthening the case for rejecting the null hypothesis.

    However, it's important to remember that statistical significance doesn't always equate to practical significance. Researchers must consider the real-world implications of their findings, even when results are statistically significant. This is where effect size measures, such as eta-squared or omega-squared, become valuable in assessing the magnitude of differences between groups.

    ANOVA results have numerous practical applications across various fields. In marketing, for example, ANOVA can be used to compare the effectiveness of different advertising strategies on sales. Researchers might analyze sales data from regions exposed to different marketing campaigns, using ANOVA to determine if there are significant differences in sales performance.

    In healthcare, ANOVA can help evaluate the efficacy of different treatments. For instance, researchers might compare the outcomes of three different medications for reducing blood pressure. By applying ANOVA to the collected data, they can determine if there are significant differences in the effectiveness of these treatments, potentially influencing medical decisions and patient care.

    Educational researchers often use ANOVA to assess the impact of teaching methods on student performance. By comparing test scores from classes taught using different instructional approaches, educators can gain insights into which methods may be most effective, informing curriculum development and teaching practices.

    In conclusion, interpreting ANOVA results requires a careful comparison of the test statistic with the critical value, leading to decisions about the null hypothesis. Whether rejecting or failing to reject the null hypothesis, researchers must consider both statistical and practical significance. The versatility of ANOVA makes it a powerful tool across various fields, providing valuable insights that can drive decision-making and advance knowledge in numerous domains.

    Conclusion: The Power and Limitations of ANOVA

    ANOVA, a powerful tool in statistical analysis, has been thoroughly explored in this article. The introduction video serves as a crucial foundation for understanding this complex concept. ANOVA's strength lies in its ability to compare means across multiple groups, making it invaluable for data interpretation in various fields. However, it's important to recognize its limitations, such as the assumption of normal distribution and equal variances. The article has covered key points including one-way and two-way ANOVA, their applications, and the interpretation of results. For those seeking to deepen their knowledge, exploring advanced ANOVA topics like repeated measures and MANOVA is highly recommended. These advanced techniques offer even more sophisticated approaches to data analysis. By mastering ANOVA and its variations, researchers and analysts can unlock powerful insights from their data, leading to more informed decision-making and robust scientific conclusions.

    What is Analysis of Variance (ANOVA)?

    Analysis of Variance, commonly referred to as ANOVA, is a statistical method used to compare the means of three or more samples to understand if at least one of the sample means is significantly different from the others. This technique is widely used in various fields such as psychology, education, and business to test hypotheses and make data-driven decisions.

    Step 1: Introduction to ANOVA

    ANOVA stands for Analysis of Variance. It is a statistical method that helps in comparing different data sets to determine if there are any statistically significant differences between the means of these sets. For instance, if you have data from multiple classes or groups, ANOVA can help you understand if the average performance (mean) of one group is different from another.

    Step 2: Understanding the Purpose of ANOVA

    The primary purpose of ANOVA is to test for significant differences between group means. For example, if you have test scores from students in three different classes (Class A, Class B, and Class C), ANOVA can help you determine if the average test scores of these classes are significantly different from each other. This is particularly useful when you have more than two groups to compare.

    Step 3: Data Collection and Grouping

    To perform ANOVA, you first need to collect data from different groups. In our example, let's say you have polled several students from each class. You might have three students from Class A, three from Class B, and three from Class C. Each student's test score is recorded, and these scores are grouped according to their respective classes.

    Step 4: Comparing Group Means

    Once the data is collected, ANOVA helps in comparing the means of these groups. It checks if the average test score of students from Class A is different from those in Class B and Class C. This comparison is crucial because it helps in understanding if the observed differences in test scores are due to actual differences in group performance or just random variations.

    Step 5: Hypothesis Testing

    ANOVA involves hypothesis testing to determine if there are significant differences between group means. The null hypothesis (H0) states that all group means are equal, while the alternative hypothesis (H1) states that at least one group mean is different. By performing ANOVA, you can test these hypotheses and make informed conclusions about your data.

    Step 6: Role of Variance in ANOVA

    Variance plays a crucial role in ANOVA. It measures the extent to which individual data points differ from the mean. In the context of ANOVA, variance helps in understanding how much the group means differ from each other. If the variance between groups is significantly larger than the variance within groups, it indicates that the group means are different.

    Step 7: Calculating ANOVA

    To calculate ANOVA, you need to compute the following:

    • Total Sum of Squares (SST): Measures the total variation in the data.
    • Between-Group Sum of Squares (SSB): Measures the variation between group means.
    • Within-Group Sum of Squares (SSW): Measures the variation within each group.
    Using these sums of squares, you can calculate the F-statistic, which helps in determining if the observed differences between group means are statistically significant.

    Step 8: Interpreting ANOVA Results

    After calculating the F-statistic, you compare it with a critical value from the F-distribution table. If the calculated F-statistic is greater than the critical value, you reject the null hypothesis and conclude that there are significant differences between group means. Otherwise, you fail to reject the null hypothesis, indicating that the group means are not significantly different.

    Step 9: Practical Applications of ANOVA

    ANOVA is widely used in various fields to compare group means and make data-driven decisions. For example, in education, it can be used to compare the effectiveness of different teaching methods. In business, it can help in comparing the performance of different marketing strategies. By understanding and applying ANOVA, you can gain valuable insights from your data and make informed decisions.

    FAQs

    1. What is the main purpose of ANOVA?

      The main purpose of ANOVA (Analysis of Variance) is to compare means across multiple groups simultaneously. It helps researchers determine if there are statistically significant differences between group means, which is particularly useful when dealing with three or more groups. ANOVA is more efficient than conducting multiple t-tests, as it reduces the risk of Type I errors.

    2. How does ANOVA differ from a t-test?

      While both ANOVA and t-tests are used to compare means, they differ in the number of groups they can analyze. T-tests are limited to comparing two groups at a time, whereas ANOVA can compare three or more groups simultaneously. ANOVA is more versatile and efficient for multiple group comparisons, making it a preferred choice when dealing with complex experimental designs.

    3. What are the assumptions of ANOVA?

      ANOVA relies on several key assumptions: 1) Independence of observations, 2) Normal distribution of the dependent variable in each group, 3) Homogeneity of variances across groups, and 4) No significant outliers. Violating these assumptions can affect the reliability of ANOVA results, so it's important to check and address any violations before proceeding with the analysis.

    4. What is the difference between one-way and two-way ANOVA?

      One-way ANOVA examines the impact of a single independent variable on a dependent variable, comparing means across multiple groups. Two-way ANOVA, on the other hand, investigates the effects of two independent variables on a dependent variable, allowing researchers to examine both main effects and interactions between the variables. Two-way ANOVA is more complex but provides a more comprehensive analysis of factorial designs.

    5. How do you interpret the F-statistic in ANOVA?

      The F-statistic in ANOVA represents the ratio of between-group variability to within-group variability. A larger F-value suggests greater differences between group means relative to the variability within groups. To interpret the F-statistic, compare it to the critical F-value or examine the associated p-value. If the F-statistic exceeds the critical value or the p-value is less than the significance level (typically 0.05), you can reject the null hypothesis and conclude that there are significant differences between group means.

    Prerequisite Topics for Analysis of Variance (ANOVA)

    Understanding Analysis of Variance (ANOVA) requires a solid foundation in several key statistical concepts. One of the most crucial prerequisites is the understanding of null hypothesis and alternative hypothesis. These fundamental concepts form the basis of hypothesis testing in ANOVA, allowing researchers to make informed decisions about their data.

    Another essential prerequisite is the knowledge of Type 1 and type 2 errors. In ANOVA, understanding these errors is crucial for interpreting results accurately and avoiding false conclusions. Type II errors, in particular, are important to consider when assessing the power of an ANOVA test.

    ANOVA also relies heavily on the concept of sum of squares between groups. This algebraic concept is fundamental to calculating the variance between different groups in an ANOVA analysis. Familiarity with squares and cubic conversions helps in comprehending the calculations involved in ANOVA's sum of squares.

    Additionally, an introduction to normal distribution is crucial for ANOVA. The normal distribution assumption is a key aspect of ANOVA, as it underlies many of the statistical tests used in this analysis method.

    These prerequisite topics are interconnected and build upon each other to provide a comprehensive understanding of ANOVA. For instance, the null hypothesis in ANOVA often assumes no difference between group means, which directly relates to the normal distribution of data. Similarly, understanding type 1 and type 2 errors is essential when interpreting the results of hypothesis tests in ANOVA.

    Moreover, the sum of squares concept is integral to calculating the F-statistic in ANOVA, which in turn is used to test the null hypothesis. This demonstrates how algebraic skills combine with statistical concepts in ANOVA analysis.

    By mastering these prerequisite topics, students can approach ANOVA with a solid foundation. This understanding allows for more insightful interpretation of ANOVA results, better experimental design, and the ability to critically evaluate research that uses ANOVA.

    In conclusion, while ANOVA itself is a powerful statistical tool, its effective use and interpretation heavily depend on a strong grasp of these fundamental concepts. Students who take the time to thoroughly understand these prerequisites will find themselves better equipped to tackle the complexities of ANOVA and apply it effectively in their research and data analysis endeavors.

    Recall:
    \sigma^2=\frac{(x_1-\mu)^2+(x_2-\mu)^2+\cdots+(x_n-\mu)^2}{n}

    \overline{x}=\frac{x_1+x_2+\cdots+x_n}{n}
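    These two recalled formulas map directly onto Python's statistics module; here is a small sketch using the Morning reaction times from the first example:

```python
# Sample mean and population variance (divide by n, matching the formulas above).
from statistics import mean, pvariance

data = [1.32, 0.71, 2.27, 0.57]        # Morning reaction times (seconds)
x_bar = mean(data)                     # (x1 + x2 + ... + xn) / n  ->  1.2175
sigma_sq = pvariance(data, mu=x_bar)   # sum of (xi - mu)^2, divided by n
```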

    Degrees of Freedom
    The degrees of freedom for a calculation is the number of values that are free to vary. Think of calculating the mean of n values: once the mean is fixed, only n-1 of the values can vary freely, because the last one is then determined.

    d.f.=n-1

    Sums of Squares:

    The Sum of Squares Within Groups (SSW) is calculated by first finding the sum of squared deviations of each item from its own group's mean, and then adding these sums together across all the groups.

    The Sum of Squares Between Groups (SSB) is calculated by first finding the mean of all the groups (the Grand Mean) and then summing, for each group, the squared deviation of that group's mean from the Grand Mean, weighted by the group's size.

    The Total Sum of Squares (TSS or SST) is just the sum of squared deviations of every single item from the Grand Mean. Just imagine that all the groups come together to form one big group.

    Total Sum of Squares = Sum of Squares Within + Sum of Squares Between (TSS=SSW+SSB)
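    The identity TSS = SSW + SSB, and the matching split of the degrees of freedom, can be checked numerically on any small data set; the sketch below uses made-up groups:

```python
# Check that total variation splits into within-group plus between-group parts.
from statistics import mean

groups = [[2, 4], [6, 8], [1, 3]]      # made-up data: three groups of two
all_items = [x for g in groups for x in g]
grand = mean(all_items)                # the Grand Mean

tss = sum((x - grand) ** 2 for x in all_items)              # one big group
ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)    # within groups
ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)  # between groups

n, k = len(all_items), len(groups)
assert abs(tss - (ssw + ssb)) < 1e-9   # TSS = SSW + SSB
assert (n - 1) == (n - k) + (k - 1)    # degrees of freedom split the same way
```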

    Hypothesis Testing with F-Distribution
    This method tests whether the group means are all equal, by comparing the variability between groups to the variability within groups.

    F=\frac{between\;group\;variability}{within\;group\;variability}=\frac{SSB/df_{SSB}}{SSW/df_{SSW}}

    F_{\alpha}(df_{SSB},df_{SSW}) is the critical value for an F-distribution with df_{SSB} numerator and df_{SSW} denominator degrees of freedom