ANOVA (Analysis of Variance): One-Way ANOVA, Post-Hoc Tests & Assumptions

Michael Brenndoerfer · January 5, 2026 · 23 min read

One-way ANOVA, post-hoc tests, assumptions, and when to use ANOVA. Learn how to compare means across three or more groups while controlling Type I error rates.


ANOVA (Analysis of Variance)

Imagine you're running a clinical trial comparing four different medications for lowering blood pressure. You've recruited patients, randomly assigned them to treatment groups, and measured their outcomes. Now you need to determine: do any of these medications work differently from the others?

Your first instinct might be to run t-tests comparing each pair of medications: Drug A vs B, A vs C, A vs D, B vs C, B vs D, and C vs D. That's six separate tests. But here's the problem: even if all four drugs are equally effective (no real differences), you'd falsely conclude that at least one pair differs about 26% of the time, not the 5% you expect from a single test at $\alpha = 0.05$. With more groups, this gets worse. Ten groups would require 45 pairwise comparisons, with a false positive rate exceeding 90%.

This is the multiple comparisons problem, and it's why Analysis of Variance (ANOVA) exists. ANOVA tests whether any group means differ using a single test that maintains your chosen Type I error rate. Developed by Ronald Fisher in the 1920s for agricultural experiments, ANOVA has become one of the most widely used statistical techniques in science, medicine, psychology, and industry.

This chapter builds directly on the F-distribution and F-tests you learned previously. ANOVA uses the same logic: partition total variation into components, then test whether the ratio of variance estimates is larger than expected by chance. By the end of this chapter, you'll understand not just how to perform ANOVA, but why it works and when to use it.

The Multiple Comparisons Problem

Before diving into ANOVA, let's understand exactly why multiple t-tests fail.

When you conduct a single hypothesis test at $\alpha = 0.05$, you accept a 5% chance of a Type I error (false positive) when the null is true. But with multiple tests, each test adds another opportunity for error.

The probability of at least one false positive across $m$ independent tests is:

$$P(\text{at least one Type I error}) = 1 - (1 - \alpha)^m$$

For $\alpha = 0.05$:

| Number of Groups | Pairwise Comparisons | P(At least one Type I error) |
|---|---|---|
| 3 | 3 | 14.3% |
| 4 | 6 | 26.5% |
| 5 | 10 | 40.1% |
| 10 | 45 | 90.1% |
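
These percentages follow directly from the formula above. Here's a minimal sketch that reproduces the table (any significance level or group counts can be substituted):

alpha = 0.05
for k in [3, 4, 5, 10]:
    m = k * (k - 1) // 2  # number of pairwise comparisons among k groups
    p_error = 1 - (1 - alpha) ** m
    print(f"{k} groups, {m} comparisons: P(at least one false positive) = {p_error:.1%}")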

This explosion of false positives makes multiple t-tests unreliable. ANOVA solves this by asking a single question: "Do any of the group means differ?" rather than asking many questions about specific pairs.

Out[2]:
Visualization
Line plot showing false positive rate increasing with number of groups.
The multiple comparisons problem: as the number of groups increases, the probability of at least one false positive rises dramatically when using separate pairwise tests. With 10 groups (45 comparisons), you'd expect to find at least one 'significant' difference over 90% of the time, even when all groups are identical.

The Logic of ANOVA: Comparing Variances to Test Means

ANOVA's name, "Analysis of Variance," seems paradoxical. If we want to compare means, why analyze variances? The genius of ANOVA lies in recognizing that differences among means manifest as a specific pattern of variance.

The Core Insight

Consider three groups with potentially different population means $\mu_1$, $\mu_2$, and $\mu_3$. When we collect samples from these groups, we observe two types of variation:

1. Within-group variation: How much individuals vary around their own group mean. This reflects natural variability in the data, measurement error, and individual differences. Crucially, this variation exists regardless of whether the population means differ.

2. Between-group variation: How much the group means vary around the overall (grand) mean. This reflects sampling variation plus any true differences among population means.

Here's the key insight: if all population means are equal, the between-group variation should be similar to what we'd expect from sampling variation alone. The group means will differ somewhat, but only because of random sampling, not because the populations are truly different.

But if population means differ, the between-group variation will be inflated beyond what sampling alone would produce. The group means will be more spread out than chance would predict.

ANOVA formalizes this by computing the ratio:

$$F = \frac{\text{Between-group variance estimate}}{\text{Within-group variance estimate}}$$

  • If $F \approx 1$: between- and within-group variation are similar; no evidence of mean differences
  • If $F \gg 1$: between-group variation is much larger than within; evidence that means differ
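
You can see this behavior in a small simulation. The sketch below (with an arbitrary seed and made-up population parameters) draws three samples from identical populations, then from populations with shifted means, and compares the resulting F-statistics:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # arbitrary seed for reproducibility

# Null true: all three groups come from the same population
same_means = [rng.normal(loc=50, scale=5, size=30) for _ in range(3)]
f_null, _ = stats.f_oneway(*same_means)

# Null false: population means differ
diff_means = [rng.normal(loc=mu, scale=5, size=30) for mu in (45, 50, 55)]
f_alt, _ = stats.f_oneway(*diff_means)

print(f"Equal population means:     F = {f_null:.2f}")  # typically near 1
print(f"Different population means: F = {f_alt:.2f}")   # much larger than 1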
Out[3]:
Visualization
Two-panel figure comparing ANOVA scenarios with similar and different group means.
The ANOVA concept illustrated with two scenarios. Left: Groups with similar means. The between-group variation (spread of group means around the grand mean) is comparable to within-group variation, yielding F near 1. Right: Groups with different means. The between-group variation is much larger than within-group variation, yielding a large F.

The Mathematics of One-Way ANOVA

Let's formalize these ideas mathematically. One-way ANOVA compares means across $k$ groups, where "one-way" means there's a single grouping factor.

Notation

  • $k$ = number of groups
  • $n_j$ = sample size of group $j$ (for $j = 1, 2, \ldots, k$)
  • $N = \sum_{j=1}^{k} n_j$ = total sample size
  • $x_{ij}$ = observation $i$ in group $j$
  • $\bar{x}_j = \frac{1}{n_j}\sum_{i=1}^{n_j} x_{ij}$ = mean of group $j$
  • $\bar{x} = \frac{1}{N}\sum_{j=1}^{k}\sum_{i=1}^{n_j} x_{ij}$ = grand mean (overall mean)

The ANOVA Decomposition

The total variation in the data can be decomposed into two components:

$$\underbrace{\sum_{j=1}^{k}\sum_{i=1}^{n_j}(x_{ij} - \bar{x})^2}_{SS_{Total}} = \underbrace{\sum_{j=1}^{k} n_j(\bar{x}_j - \bar{x})^2}_{SS_{Between}} + \underbrace{\sum_{j=1}^{k}\sum_{i=1}^{n_j}(x_{ij} - \bar{x}_j)^2}_{SS_{Within}}$$

Let's understand each component:

Total Sum of Squares ($SS_{Total}$): Measures total variation in the data. Each observation's squared deviation from the grand mean, summed across all observations. This is the same quantity we'd use to compute the variance of the entire dataset.

Between-Group Sum of Squares ($SS_{Between}$): Measures variation of group means around the grand mean. Each group mean's squared deviation from the grand mean, weighted by sample size. Larger groups contribute more because their means are more reliable.

Within-Group Sum of Squares ($SS_{Within}$): Measures variation within each group. Each observation's squared deviation from its own group mean, summed across all groups. This represents the "noise" or baseline variability.

From Sums of Squares to Mean Squares

Sums of squares grow with sample size, so we convert them to mean squares (variance estimates) by dividing by degrees of freedom:

$$MS_{Between} = \frac{SS_{Between}}{k - 1} \quad \text{and} \quad MS_{Within} = \frac{SS_{Within}}{N - k}$$

Why these degrees of freedom?

  • Between-groups ($k - 1$): We have $k$ group means, but they must average to the grand mean (one constraint), leaving $k - 1$ independent pieces of information.
  • Within-groups ($N - k$): We have $N$ observations, but we estimate $k$ group means (one per group), losing $k$ degrees of freedom.

The F-Statistic

The F-statistic is the ratio of mean squares:

$$F = \frac{MS_{Between}}{MS_{Within}} = \frac{SS_{Between}/(k-1)}{SS_{Within}/(N-k)}$$

Under the null hypothesis that all population means are equal ($\mu_1 = \mu_2 = \cdots = \mu_k$), this F-statistic follows an F-distribution with $df_1 = k - 1$ and $df_2 = N - k$ degrees of freedom.

Intuition:

  • $MS_{Within}$ estimates $\sigma^2$ (the common variance within groups) regardless of whether the null is true
  • $MS_{Between}$ estimates $\sigma^2$ only if the null is true; if means differ, it estimates something larger
  • If $H_0$ is true, $F \approx 1$; if $H_0$ is false, $F > 1$
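
For a concrete sense of scale, the rejection threshold comes from the F-distribution's upper tail. A quick sketch for the design used in the worked example below ($k = 3$ groups, $N = 30$):

from scipy import stats

df1, df2 = 3 - 1, 30 - 3  # k - 1 and N - k
f_crit = stats.f.ppf(0.95, df1, df2)
print(f"Reject H0 at alpha = 0.05 when F > {f_crit:.2f}")  # roughly 3.35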

The Hypotheses

Null hypothesis: $H_0: \mu_1 = \mu_2 = \cdots = \mu_k$ (all population means are equal)

Alternative hypothesis: $H_a$: At least one population mean differs from the others

Note that rejecting $H_0$ tells us that some means differ, but not which ones. For that, we need post-hoc tests (discussed later).

Worked Example: Comparing Fertilizers

An agricultural researcher tests three fertilizers to see if they produce different plant growth. She randomly assigns 10 plants to each fertilizer and measures growth (in cm) after 6 weeks.

In[4]:
Code
import numpy as np
from scipy import stats

# Plant growth data (cm) for three fertilizers
fertilizer_a = np.array([23, 25, 21, 24, 22, 26, 23, 24, 25, 22])
fertilizer_b = np.array([28, 31, 29, 30, 27, 32, 28, 29, 31, 30])
fertilizer_c = np.array([25, 27, 24, 26, 28, 25, 26, 27, 24, 26])

# Combine into groups
groups = [fertilizer_a, fertilizer_b, fertilizer_c]
group_names = ["Fertilizer A", "Fertilizer B", "Fertilizer C"]

# Step 1: Calculate basic statistics
k = len(groups)
ns = [len(g) for g in groups]
N = sum(ns)
means = [np.mean(g) for g in groups]
grand_mean = np.mean(np.concatenate(groups))

# Step 2: Calculate Sum of Squares Between (SSB)
ss_between = sum(n * (m - grand_mean) ** 2 for n, m in zip(ns, means))

# Step 3: Calculate Sum of Squares Within (SSW)
ss_within = sum(np.sum((g - m) ** 2) for g, m in zip(groups, means))

# Step 4: Calculate Total Sum of Squares (SST)
all_data = np.concatenate(groups)
ss_total = np.sum((all_data - grand_mean) ** 2)

# Verify decomposition: SST = SSB + SSW
assert np.isclose(ss_total, ss_between + ss_within)

# Step 5: Calculate degrees of freedom
df_between = k - 1
df_within = N - k
df_total = N - 1

# Step 6: Calculate Mean Squares
ms_between = ss_between / df_between
ms_within = ss_within / df_within

# Step 7: Calculate F-statistic
f_stat = ms_between / ms_within

# Step 8: Calculate p-value
p_value = stats.f.sf(f_stat, df_between, df_within)

# Step 9: Calculate effect size (eta-squared)
eta_squared = ss_between / ss_total

Let's trace through the calculation step by step:

Step 1: Basic Statistics

$$\bar{x}_A = 23.5, \quad \bar{x}_B = 29.5, \quad \bar{x}_C = 25.8$$
$$\bar{x} = \frac{23.5 \times 10 + 29.5 \times 10 + 25.8 \times 10}{30} = 26.27$$

Step 2: Sum of Squares Between

$$SS_{Between} = \sum_{j=1}^{3} n_j(\bar{x}_j - \bar{x})^2 = 10(23.5 - 26.27)^2 + 10(29.5 - 26.27)^2 + 10(25.8 - 26.27)^2 = 183.27$$

Step 3: Sum of Squares Within

$$SS_{Within} = \sum_{j=1}^{3}\sum_{i=1}^{10}(x_{ij} - \bar{x}_j)^2 = 22.5 + 22.5 + 15.6 = 60.60$$

Step 4: Mean Squares and F-Statistic

$$MS_{Between} = \frac{SS_{Between}}{k-1} = \frac{183.27}{2} = 91.63$$
$$MS_{Within} = \frac{SS_{Within}}{N-k} = \frac{60.60}{27} = 2.24$$
$$F = \frac{MS_{Between}}{MS_{Within}} = 40.83$$
Out[5]:
Console
One-Way ANOVA: Effect of Fertilizer on Plant Growth
=================================================================

Group Statistics:
-----------------------------------------------------------------
  Fertilizer A: n = 10, mean = 23.50 cm
  Fertilizer B: n = 10, mean = 29.50 cm
  Fertilizer C: n = 10, mean = 25.80 cm
  Grand mean: 26.27 cm

ANOVA Table:
-----------------------------------------------------------------
Source                  SS     df         MS          F      p-value
-----------------------------------------------------------------
Between             183.27      2      91.63      40.83     6.87e-09
Within               60.60     27       2.24
Total               243.87     29
-----------------------------------------------------------------

Effect size (eta-squared): 0.752
  Interpretation: 75.2% of variance in growth is explained by fertilizer type

Decision (alpha = 0.05):
-----------------------------------------------------------------
F(2, 27) = 40.83, p < 0.001
Reject H_0: At least one fertilizer produces different growth
In[6]:
Code
# Verify our calculation with scipy
f_scipy, p_scipy = stats.f_oneway(fertilizer_a, fertilizer_b, fertilizer_c)
print("Verification with scipy.stats.f_oneway:")
print(f"  F = {f_scipy:.2f}, p = {p_scipy:.2e}")
Out[6]:
Console
Verification with scipy.stats.f_oneway:
  F = 40.83, p = 6.87e-09
Out[7]:
Visualization
Two-panel figure showing boxplots and F-distribution for fertilizer ANOVA.
ANOVA results for the fertilizer experiment. Left: Boxplots showing the distribution of plant growth for each fertilizer, with group means marked. Fertilizer B clearly produces taller plants. Right: The F-distribution with the observed F-statistic far into the rejection region, confirming the significant difference.

The ANOVA reveals a highly significant difference among fertilizers ($F(2, 27) = 40.83$, $p < 0.001$). The effect size $\eta^2 = 0.75$ indicates that 75% of the variation in plant growth is explained by fertilizer type, a very large effect.

Post-Hoc Tests: Which Groups Differ?

When ANOVA rejects the null hypothesis, we know at least one group mean differs from the others, but not which specific pairs differ. Post-hoc tests make pairwise comparisons while controlling the family-wise error rate.

The Need for Multiple Comparison Corrections

Without correction, testing all pairwise differences would inflate the Type I error rate, exactly the problem ANOVA was designed to solve. Post-hoc tests apply corrections to maintain the overall error rate at α\alpha.

Tukey's Honest Significant Difference (HSD)

Tukey's HSD is the most commonly used post-hoc test for ANOVA. It controls the family-wise error rate while comparing all pairs of means.

The test compares the difference between any two group means to a critical value based on the studentized range distribution:

$$\text{HSD} = q_{\alpha, k, N-k} \times \sqrt{\frac{MS_{Within}}{n}}$$

where $q_{\alpha, k, N-k}$ is the critical value from the studentized range distribution and $n$ is the per-group sample size (assuming equal group sizes).

Two means are significantly different if $|\bar{x}_i - \bar{x}_j| > \text{HSD}$.
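
If your SciPy version is recent enough (1.8 or later, an assumption about your environment), scipy.stats provides Tukey's HSD directly; a minimal sketch:

from scipy import stats

# Tukey's HSD over all pairs (requires scipy >= 1.8)
res = stats.tukey_hsd(fertilizer_a, fertilizer_b, fertilizer_c)
print(res)  # table of pairwise mean differences with p-values and confidence intervals

The block below takes a more manual route, using Bonferroni-corrected t-tests as a conservative stand-in.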

In[8]:
Code
from scipy import stats
from itertools import combinations

# Perform pairwise comparisons with Bonferroni correction
# (conservative alternative when studentized range tables unavailable)

groups = [fertilizer_a, fertilizer_b, fertilizer_c]
group_names = ["Fertilizer A", "Fertilizer B", "Fertilizer C"]

# Calculate pooled within-group variance (MSW)
k = len(groups)
N = sum(len(g) for g in groups)
means = [np.mean(g) for g in groups]
ss_within = sum(np.sum((g - m) ** 2) for g, m in zip(groups, means))
ms_within = ss_within / (N - k)

print("Post-Hoc Pairwise Comparisons (Bonferroni-corrected)")
print("=" * 65)
print()

n_comparisons = k * (k - 1) // 2
alpha_corrected = 0.05 / n_comparisons

for (i, gi), (j, gj) in combinations(enumerate(groups), 2):
    mean_diff = means[i] - means[j]
    ni, nj = len(gi), len(gj)
    se = np.sqrt(ms_within * (1 / ni + 1 / nj))
    t_stat = mean_diff / se
    df = N - k
    p_raw = 2 * stats.t.sf(abs(t_stat), df)
    p_corrected = min(p_raw * n_comparisons, 1.0)
    sig = "*" if p_corrected < 0.05 else ""

    print(f"{group_names[i]} vs {group_names[j]}:")
    print(f"  Mean difference: {mean_diff:+.2f} cm")
    print(f"  p-value (corrected): {p_corrected:.4f} {sig}")
    print()

print(
    "* indicates significant difference at alpha = 0.05 (Bonferroni-corrected)"
)
Out[8]:
Console
Post-Hoc Pairwise Comparisons (Bonferroni-corrected)
=================================================================

Fertilizer A vs Fertilizer B:
  Mean difference: -6.00 cm
  p-value (corrected): 0.0000 *

Fertilizer A vs Fertilizer C:
  Mean difference: -2.30 cm
  p-value (corrected): 0.0058 *

Fertilizer B vs Fertilizer C:
  Mean difference: +3.70 cm
  p-value (corrected): 0.0000 *

* indicates significant difference at alpha = 0.05 (Bonferroni-corrected)

The post-hoc analysis reveals:

  • Fertilizer B produces significantly more growth than both A and C
  • Fertilizer C also produces significantly more growth than A (corrected p = 0.0058)

All three pairwise differences are significant. This is consistent with the visual impression from the boxplots: Fertilizer B is clearly the best performer.

Effect Size: Eta-Squared

Statistical significance tells us whether an effect exists, but not how large it is. Effect size measures quantify practical significance.

Eta-Squared ($\eta^2$)

The most common effect size for ANOVA is eta-squared:

$$\eta^2 = \frac{SS_{Between}}{SS_{Total}}$$

This represents the proportion of total variance explained by group membership. It ranges from 0 to 1:

| $\eta^2$ | Interpretation |
|---|---|
| 0.01 | Small effect |
| 0.06 | Medium effect |
| 0.14+ | Large effect |

In our fertilizer example, $\eta^2 = 0.75$ is a very large effect: fertilizer type explains 75% of the variation in plant growth.

Omega-Squared ($\omega^2$)

Eta-squared tends to overestimate the population effect size. Omega-squared provides a less biased estimate:

$$\omega^2 = \frac{SS_{Between} - (k-1) \times MS_{Within}}{SS_{Total} + MS_{Within}}$$
In[9]:
Code
# Calculate both effect sizes
eta_sq = ss_between / ss_total
omega_sq = (ss_between - (k - 1) * ms_within) / (ss_total + ms_within)

print("Effect Size Measures:")
print("-" * 40)
print(f"Eta-squared (eta^2):     {eta_sq:.3f}")
print(f"Omega-squared (omega^2): {omega_sq:.3f}")
print()
print("Both indicate a very large effect.")
print("Omega-squared is generally preferred as it's less biased.")
Out[9]:
Console
Effect Size Measures:
----------------------------------------
Eta-squared (eta^2):     0.752
Omega-squared (omega^2): 0.726

Both indicate a very large effect.
Omega-squared is generally preferred as it's less biased.

Assumptions of ANOVA

Like the t-test, ANOVA relies on several assumptions. Understanding these helps you interpret results appropriately and choose alternatives when needed.

1. Independence

Observations must be independent both within and between groups. This is primarily a study design issue:

  • Random sampling from populations
  • Random assignment to treatment groups
  • No clustering or repeated measures (for one-way ANOVA)

2. Normality

Data within each group should be approximately normally distributed. ANOVA is moderately robust to violations of normality, especially when:

  • Sample sizes are similar across groups
  • Sample sizes are reasonably large (n > 15-20 per group)
  • Distributions are not extremely skewed

For small samples or severely non-normal data, consider the Kruskal-Wallis test (non-parametric alternative).
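
As a quick check in code, the Shapiro-Wilk test can be applied per group. A sketch reusing the groups and group_names lists defined in the worked example:

from scipy import stats

# Shapiro-Wilk normality test within each group
for name, g in zip(group_names, groups):
    w_stat, p_norm = stats.shapiro(g)
    print(f"{name}: W = {w_stat:.3f}, p = {p_norm:.4f}")  # p > 0.05: no evidence of non-normality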

3. Homogeneity of Variance (Homoscedasticity)

All groups should have similar variances. This assumption is important because the F-test uses a pooled variance estimate.

Checking: Use Levene's test or visual inspection of group spreads.

In[10]:
Code
# Check homogeneity of variance
levene_stat, levene_p = stats.levene(fertilizer_a, fertilizer_b, fertilizer_c)

print("Assumption Check: Homogeneity of Variance")
print("=" * 50)
print(f"Levene's test: W = {levene_stat:.3f}, p = {levene_p:.4f}")
print()
if levene_p > 0.05:
    print("No evidence of unequal variances (p > 0.05)")
    print("Standard ANOVA is appropriate.")
else:
    print("Evidence of unequal variances (p < 0.05)")
    print("Consider Welch's ANOVA or transformation.")
Out[10]:
Console
Assumption Check: Homogeneity of Variance
==================================================
Levene's test: W = 0.471, p = 0.6295

No evidence of unequal variances (p > 0.05)
Standard ANOVA is appropriate.

When Assumptions Are Violated

| Violation | Alternative |
|---|---|
| Non-normality | Kruskal-Wallis test |
| Unequal variances | Welch's ANOVA |
| Both | Kruskal-Wallis or permutation test |
In[11]:
Code
# Demonstrate alternatives

# Welch's ANOVA (doesn't assume equal variances)
# Note: scipy doesn't have Welch's ANOVA directly, but you can use Alexander-Govern test
# For demonstration, we'll use Kruskal-Wallis

# Kruskal-Wallis test (non-parametric)
kw_stat, kw_p = stats.kruskal(fertilizer_a, fertilizer_b, fertilizer_c)

print("Alternative Tests:")
print("-" * 50)
print(f"Kruskal-Wallis H = {kw_stat:.2f}, p = {kw_p:.4f}")
print()
print("The Kruskal-Wallis test also finds a significant difference,")
print("confirming our ANOVA results.")
Out[11]:
Console
Alternative Tests:
--------------------------------------------------
Kruskal-Wallis H = 22.15, p = 0.0000

The Kruskal-Wallis test also finds a significant difference,
confirming our ANOVA results.
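
SciPy has no dedicated Welch's ANOVA function, but the statistic is straightforward to compute by hand. Below is a minimal sketch following Welch's (1951) formulation, applied to the fertilizer data; the function name welch_anova is our own:

import numpy as np
from scipy import stats

def welch_anova(*samples):
    """One-way Welch's ANOVA: compares means without assuming equal variances."""
    k = len(samples)
    ns = np.array([len(s) for s in samples])
    means = np.array([np.mean(s) for s in samples])
    variances = np.array([np.var(s, ddof=1) for s in samples])

    w = ns / variances                 # precision weights
    x_w = (w * means).sum() / w.sum()  # variance-weighted grand mean

    a = (w * (means - x_w) ** 2).sum() / (k - 1)
    b = ((1 - w / w.sum()) ** 2 / (ns - 1)).sum()
    f = a / (1 + 2 * (k - 2) / (k ** 2 - 1) * b)
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * b)
    return f, df1, df2, stats.f.sf(f, df1, df2)

f_w, df1, df2, p_w = welch_anova(fertilizer_a, fertilizer_b, fertilizer_c)
print(f"Welch's ANOVA: F({df1}, {df2:.1f}) = {f_w:.2f}, p = {p_w:.2e}")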

Visualizing ANOVA Results

Good visualization helps communicate ANOVA findings effectively.

Out[12]:
Visualization
Bar chart showing group means with error bars representing 95% confidence intervals.
Comprehensive visualization of ANOVA results. The plot shows group means with 95% confidence intervals. Non-overlapping intervals suggest significant differences. The dashed line shows the grand mean for reference.

When to Use ANOVA vs. Other Tests

| Scenario | Recommended Test |
|---|---|
| 2 groups, comparing means | t-test |
| 3+ groups, comparing means | One-way ANOVA |
| 3+ groups, non-normal data | Kruskal-Wallis |
| 3+ groups, unequal variances | Welch's ANOVA |
| Multiple factors | Two-way ANOVA, factorial ANOVA |
| Repeated measures | Repeated measures ANOVA |

Summary

Analysis of Variance (ANOVA) is a powerful technique for comparing means across three or more groups while maintaining control over the Type I error rate.

Core concepts:

  • ANOVA compares variances to test means: between-group variation vs. within-group variation
  • If group means truly differ, between-group variance will be inflated relative to within-group variance
  • The F-statistic is the ratio of these variance estimates

The ANOVA decomposition:

$$SS_{Total} = SS_{Between} + SS_{Within}$$

The F-statistic:

$$F = \frac{MS_{Between}}{MS_{Within}} = \frac{SS_{Between}/(k-1)}{SS_{Within}/(N-k)}$$

Key results:

  • ANOVA tests the omnibus hypothesis: are all means equal?
  • A significant F-test indicates at least one mean differs, but not which ones
  • Post-hoc tests (like Tukey's HSD) identify specific pairwise differences
  • Effect sizes ($\eta^2$, $\omega^2$) quantify the magnitude of group differences

Assumptions:

  • Independence of observations
  • Normality within groups (moderately robust)
  • Equal variances across groups (Levene's test to check)

When assumptions fail:

  • Unequal variances: Welch's ANOVA
  • Non-normality: Kruskal-Wallis test

What's Next

This chapter covered one-way ANOVA for comparing means across multiple groups. The following chapters extend your understanding of hypothesis testing:

  • Type I and Type II Errors explores the two ways hypothesis tests can fail: false positives and false negatives. Understanding these errors is essential for interpreting ANOVA results and designing studies.

  • Power and Sample Size shows how to plan studies with adequate power to detect meaningful effects, connecting directly to the effect sizes you learned to calculate here.

  • Effect Sizes provides deeper coverage of measuring practical significance, including alternatives to eta-squared and guidelines for interpretation.

  • Multiple Comparisons revisits the family-wise error rate problem in depth, covering correction methods like Bonferroni, Holm, and false discovery rate control.

The ANOVA framework you've learned here extends naturally to more complex designs: two-way ANOVA (two grouping factors), factorial ANOVA (multiple factors with interactions), and repeated measures ANOVA (same subjects measured multiple times). These advanced topics build directly on the variance decomposition logic you now understand.

