The Z-Test: One-Sample, Two-Sample & Proportion Tests Complete Guide

Michael Brenndoerfer · Updated January 12, 2026 · 21 min read

Complete guide to z-tests including one-sample, two-sample, and proportion tests. Learn when to use z-tests, how to calculate test statistics, and interpret results when population variance is known.


The Z-Test

You've learned what p-values mean. You understand how to set up null and alternative hypotheses. You know what assumptions matter. Now it's time to put all of that knowledge into practice with your first complete hypothesis test.

The z-test is the natural starting point because it's the simplest parametric test. When you know the population standard deviation, the z-test gives you exact inference based on the familiar normal distribution. No approximations, no complex formulas, just a straightforward application of everything you've learned so far.

But here's the thing: knowing the population standard deviation is rare in practice. So why learn the z-test at all? Two reasons. First, it's the foundation for understanding more complex tests like the t-test (next chapter). Second, z-tests are the standard approach for testing proportions, which is one of the most common hypothesis testing scenarios in data science, from A/B testing to clinical trials to quality control.

When Should You Use a Z-Test?

The z-test is appropriate in three scenarios:

Scenario 1: Known Population Standard Deviation

This is rare but does occur:

  • Calibrated measurement systems: Manufacturing processes often have extensively characterized measurement instruments where σ is known from calibration studies
  • Standardized tests: IQ tests, for example, are designed to have σ = 15 in the general population
  • Quality control: When historical data from millions of measurements establishes σ reliably

Scenario 2: Testing Proportions

This is the most common real-world use of z-tests. When testing hypotheses about population proportions, the variance is determined by the proportion itself ($\sigma^2 = p(1-p)$). You don't need to estimate it separately, so the z-test is appropriate.

Scenario 3: Very Large Samples

When sample sizes exceed 100 or so, the t-distribution converges to the normal distribution. In these cases, z-tests and t-tests give essentially identical results. However, using the t-test is never wrong even with large samples, so there's no real advantage to using z here.

Out[2]:
Visualization
Visual diagram showing the three z-test scenarios with icons.
The three scenarios where z-tests are appropriate. For proportions, the variance is determined by p, making z-tests the natural choice. For means, you need to either know σ from prior information or have very large samples.

The Z-Test Formula: A Deep Dive

The Core Formula

The z-test statistic is:

$$z = \frac{\bar{x} - \mu_0}{\sigma / \sqrt{n}}$$

This formula looks simple, but each component has deep meaning. Let's understand why this particular combination of quantities gives us exactly what we need for hypothesis testing.

Step 1: The Numerator Measures the Discrepancy

$$\bar{x} - \mu_0$$

This is the difference between what you observed (sample mean $\bar{x}$) and what the null hypothesis claims (population mean $\mu_0$).

Example: If the null hypothesis claims the population mean is 100 and you observe a sample mean of 95, the numerator is $95 - 100 = -5$.

But here's the problem: a difference of 5 could be huge or trivial depending on context. If you're measuring heights in millimeters, 5 mm is negligible. If you're measuring earthquake magnitudes, 5 points is catastrophic. We need to standardize.

Step 2: The Denominator Provides Context

$$\frac{\sigma}{\sqrt{n}}$$

This is the standard error, and it answers: "How much do sample means typically vary from sample to sample?"

The standard error depends on two factors:

Factor 1: Population variability (σ)

More variable populations produce more variable sample means. If individual observations vary a lot, the averages of groups of observations will also vary.

Factor 2: Sample size (n)

Larger samples produce more stable estimates. The $\sqrt{n}$ in the denominator reflects a fundamental principle: averaging reduces noise. When you average n independent observations, random fluctuations tend to cancel out.
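The square-root effect is easy to verify numerically. This short sketch (using an illustrative σ = 20) tabulates the standard error for a few sample sizes:

```python
import numpy as np

# Standard error sigma / sqrt(n) for an illustrative population sd of 20
sigma = 20
for n in [4, 25, 100, 400]:
    se = sigma / np.sqrt(n)
    print(f"n = {n:3d}  ->  SE = {se:5.2f}")
```

Quadrupling the sample size halves the standard error: precision improves with n, but with diminishing returns.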

Out[3]:
Visualization
Two normal distributions showing different spreads due to different σ values.
Effect of population variability (σ) on the sampling distribution. When individual observations are more variable, sample means are also more variable. Both distributions are for n=25.
Two normal distributions showing different spreads due to different sample sizes.
Effect of sample size (n) on the sampling distribution. Larger samples produce more precise estimates of the population mean. Both distributions have σ=20.

Step 3: The Ratio Gives Standardized Evidence

When you divide the observed discrepancy by the standard error:

$$z = \frac{\bar{x} - \mu_0}{\sigma / \sqrt{n}}$$

You get a standardized score that tells you: "How many standard errors away from the null hypothesis is my observation?"

  • $z = 1$ means your sample mean is 1 standard error away from $\mu_0$
  • $z = 2$ means your sample mean is 2 standard errors away
  • $z = -1.5$ means your sample mean is 1.5 standard errors below $\mu_0$

The beauty of standardization is that it puts all tests on the same scale. Whether you're measuring heights, weights, temperatures, or dollars, a z-score of 2 always has the same interpretation: your observation is 2 standard errors from what the null predicts.
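You can check this unit-independence directly. The helper below (with made-up numbers for two unrelated scales) returns the same z whenever the observation sits the same number of standard errors from the null value:

```python
import numpy as np

def z_stat(x_bar, mu_0, sigma, n):
    """Standardized distance between a sample mean and the null value."""
    return (x_bar - mu_0) / (sigma / np.sqrt(n))

# Heights in cm and revenue in dollars: different units, same evidence
print(z_stat(x_bar=172.0, mu_0=170.0, sigma=10.0, n=100))        # 2.0
print(z_stat(x_bar=50200.0, mu_0=50000.0, sigma=1000.0, n=100))  # 2.0
```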

Why the Standard Normal Distribution?

Under the null hypothesis $H_0: \mu = \mu_0$, if either:

  1. The population is normally distributed, OR
  2. The sample size is large enough for the Central Limit Theorem to apply

then the z-statistic follows a standard normal distribution $N(0, 1)$.

This is the key insight that makes z-tests work. We know everything about the standard normal distribution. We can calculate exact probabilities for any z-value. So once we compute z, we can immediately determine how surprising our result is under the null hypothesis.
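Those exact probabilities are one line each with `scipy.stats.norm`. For instance, the familiar 68–95–99.7 coverage figures fall out directly:

```python
from scipy import stats

# Probability that a standard normal variable lands within +/- k of zero
for k in [1, 2, 3]:
    prob = stats.norm.cdf(k) - stats.norm.cdf(-k)
    print(f"P(|Z| <= {k}) = {prob:.4f}")
```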

Out[4]:
Visualization
Standard normal distribution with shaded regions showing common probability areas.
The standard normal distribution and its relationship to z-values. Under the null hypothesis, the z-statistic follows this distribution. About 68% of z-values fall within ±1, 95% within ±2, and 99.7% within ±3. Extreme z-values (beyond ±2 or so) provide evidence against the null hypothesis.

Computing P-Values from Z-Statistics

Once you have a z-statistic, you need to convert it to a p-value. The process depends on whether you're doing a one-sided or two-sided test.

Two-Sided Test

For a two-sided test (testing whether the mean differs from $\mu_0$ in either direction):

$$\text{p-value} = 2 \times P(Z > |z|) = 2 \times \Phi(-|z|)$$

where $\Phi$ is the standard normal CDF. We double the probability because we're interested in extreme values in both tails.

One-Sided Tests

For a right-tailed test (testing whether the mean is greater than $\mu_0$):

$$\text{p-value} = P(Z > z)$$

For a left-tailed test (testing whether the mean is less than $\mu_0$):

$$\text{p-value} = P(Z < z)$$
Out[5]:
Visualization
Standard normal with two-sided p-value regions shaded.
Two-sided p-value: For z = 2.1, we calculate the probability of being more than 2.1 standard deviations away from zero in either direction. The p-value is the total area in both tails.
Standard normal with one-sided p-value region shaded.
One-sided p-value (right-tailed): For the same z = 2.1, the one-sided p-value only includes the right tail. This makes one-sided tests more powerful in the hypothesized direction but unable to detect effects in the opposite direction.
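In code, the two calculations for the z = 2.1 illustrated above differ only in whether the tail probability is doubled (`stats.norm.sf` is the survival function, 1 − CDF):

```python
from scipy import stats

z = 2.1  # the z-statistic from the example above

p_two_sided = 2 * stats.norm.sf(abs(z))  # both tails
p_right = stats.norm.sf(z)               # right tail only

print(f"two-sided:    p = {p_two_sided:.4f}")  # 0.0357
print(f"right-tailed: p = {p_right:.4f}")      # 0.0179
```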

One-Sample Z-Test: Complete Worked Example

Let's work through a complete example with every calculation shown explicitly.

The Problem

A coffee machine manufacturer claims their machines dispense exactly 200 mL per cup, with a known standard deviation of 5 mL (established through extensive quality testing). A consumer group tests 49 cups from a randomly selected machine and finds a mean of 197.5 mL. Is there evidence that this machine dispenses less than the claimed amount?

Step 1: State the Hypotheses

Since we're specifically asking whether the machine dispenses less than claimed, this is a one-sided test:

  • $H_0: \mu = 200$ (The machine dispenses the claimed amount)
  • $H_1: \mu < 200$ (The machine dispenses less than claimed)

Step 2: Identify the Known Values

  • Sample mean: $\bar{x} = 197.5$ mL
  • Hypothesized mean: $\mu_0 = 200$ mL
  • Population standard deviation: $\sigma = 5$ mL
  • Sample size: $n = 49$
  • Significance level: $\alpha = 0.05$

Step 3: Calculate the Standard Error

$$SE = \frac{\sigma}{\sqrt{n}} = \frac{5}{\sqrt{49}} = \frac{5}{7} \approx 0.714 \text{ mL}$$

This tells us that sample means of 49 cups typically vary by about 0.714 mL from the true population mean.

Step 4: Calculate the Z-Statistic

$$z = \frac{\bar{x} - \mu_0}{SE} = \frac{197.5 - 200}{0.714} = \frac{-2.5}{0.714} = -3.50$$

The sample mean is 3.50 standard errors below the hypothesized value. This is quite extreme!

Step 5: Calculate the P-Value

For a left-tailed test:

$$\text{p-value} = P(Z < -3.50) \approx 0.00023$$

Step 6: Make a Decision

Since $p = 0.00023 < \alpha = 0.05$, we reject the null hypothesis. There is strong evidence that the machine dispenses less than 200 mL per cup.

In[6]:
Code
import numpy as np
from scipy import stats

# Given values
x_bar = 197.5  # Sample mean
mu_0 = 200  # Hypothesized mean
sigma = 5  # Population standard deviation
n = 49  # Sample size
alpha = 0.05  # Significance level

# Step 3: Standard error
se = sigma / np.sqrt(n)

# Step 4: Z-statistic
z = (x_bar - mu_0) / se

# Step 5: P-value (left-tailed)
p_value = stats.norm.cdf(z)

# Critical value for left-tailed test
z_critical = stats.norm.ppf(alpha)

print("One-Sample Z-Test: Coffee Machine")
print("=" * 50)
print(f"Sample mean:          x̄ = {x_bar} mL")
print(f"Hypothesized mean:    μ₀ = {mu_0} mL")
print(f"Population std:       σ = {sigma} mL")
print(f"Sample size:          n = {n}")
print()
print(f"Standard error:       SE = {se:.4f} mL")
print(f"Z-statistic:          z = {z:.3f}")
print(f"P-value (left-tailed): p = {p_value:.6f}")
print(f"Critical value:       z_α = {z_critical:.3f}")
print()
print(f"Decision: {'Reject H₀' if p_value < alpha else 'Fail to reject H₀'}")
print()
print("Interpretation: There is strong evidence that the machine")
print("dispenses less than the claimed 200 mL per cup.")
Out[6]:
Console
One-Sample Z-Test: Coffee Machine
==================================================
Sample mean:          x̄ = 197.5 mL
Hypothesized mean:    μ₀ = 200 mL
Population std:       σ = 5 mL
Sample size:          n = 49

Standard error:       SE = 0.7143 mL
Z-statistic:          z = -3.500
P-value (left-tailed): p = 0.000233
Critical value:       z_α = -1.645

Decision: Reject H₀

Interpretation: There is strong evidence that the machine
dispenses less than the claimed 200 mL per cup.
Out[7]:
Visualization
Standard normal distribution with rejection region and observed z-statistic marked.
Visualization of the coffee machine z-test. The observed z-statistic of -3.50 falls deep in the left tail, well beyond the critical value of -1.645. This provides strong evidence against the null hypothesis.

Confidence Interval Interpretation

We can also construct a 95% confidence interval for the true mean:

In[8]:
Code
# 95% CI for the population mean
z_crit_ci = stats.norm.ppf(0.975)  # For two-sided CI
margin = z_crit_ci * se
ci_lower = x_bar - margin
ci_upper = x_bar + margin

print(f"95% Confidence Interval: [{ci_lower:.2f}, {ci_upper:.2f}] mL")
print()
print("The claimed value of 200 mL is outside this interval,")
print("which is consistent with rejecting the null hypothesis.")
Out[8]:
Console
95% Confidence Interval: [196.10, 198.90] mL

The claimed value of 200 mL is outside this interval,
which is consistent with rejecting the null hypothesis.

Two-Sample Z-Test

When comparing means from two independent populations with known variances, we use the two-sample z-test.

The Formula

$$z = \frac{(\bar{x}_1 - \bar{x}_2) - D_0}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}}$$

Where:

  • $\bar{x}_1, \bar{x}_2$: Sample means from groups 1 and 2
  • $D_0$: Hypothesized difference (usually 0 for testing equality)
  • $\sigma_1, \sigma_2$: Known population standard deviations
  • $n_1, n_2$: Sample sizes

Why Do Variances Add?

When you subtract two independent random variables, their variances add:

$$\text{Var}(\bar{X}_1 - \bar{X}_2) = \text{Var}(\bar{X}_1) + \text{Var}(\bar{X}_2) = \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}$$

This might seem counterintuitive (why add when we're subtracting?), but it makes sense when you think about it: the difference between two uncertain quantities is more uncertain than either quantity alone. Uncertainty compounds.
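A small simulation makes this concrete. Using arbitrary illustrative values, the empirical variance of a difference of independent sample means matches the sum of the two theoretical variances:

```python
import numpy as np

rng = np.random.default_rng(0)

sigma1, n1 = 2.5, 40  # illustrative values
sigma2, n2 = 3.0, 50
reps = 100_000

# Draw many independent samples from each population and take their means
means1 = rng.normal(0, sigma1, size=(reps, n1)).mean(axis=1)
means2 = rng.normal(0, sigma2, size=(reps, n2)).mean(axis=1)

empirical = np.var(means1 - means2)
theoretical = sigma1**2 / n1 + sigma2**2 / n2
print(f"empirical variance:   {empirical:.4f}")
print(f"theoretical variance: {theoretical:.4f}")
```

The two numbers agree to within simulation noise, even though each individual draw is subtracted, not added.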

Example: Comparing Two Production Lines

A factory has two production lines making the same component. Historical quality data shows Line A has $\sigma_A = 2.5$ units and Line B has $\sigma_B = 3.0$ units. A sample of 40 items from Line A has mean 50.2, and a sample of 50 items from Line B has mean 48.7. Do the lines produce different mean outputs?

In[9]:
Code
import numpy as np
from scipy import stats

# Data
x_bar_A, sigma_A, n_A = 50.2, 2.5, 40
x_bar_B, sigma_B, n_B = 48.7, 3.0, 50

# Standard error of the difference
se_diff = np.sqrt((sigma_A**2 / n_A) + (sigma_B**2 / n_B))

# Observed difference
observed_diff = x_bar_A - x_bar_B

# Z-statistic (testing H0: μA = μB, i.e., D0 = 0)
z = observed_diff / se_diff

# Two-sided p-value
p_value = 2 * stats.norm.sf(abs(z))

# 95% CI for the difference
z_crit = stats.norm.ppf(0.975)
ci_lower = observed_diff - z_crit * se_diff
ci_upper = observed_diff + z_crit * se_diff

print("Two-Sample Z-Test: Production Lines")
print("=" * 50)
print(f"Line A: x̄ = {x_bar_A}, σ = {sigma_A}, n = {n_A}")
print(f"Line B: x̄ = {x_bar_B}, σ = {sigma_B}, n = {n_B}")
print()
print(f"Observed difference (A - B): {observed_diff:.2f}")
print(f"Standard error of difference: {se_diff:.4f}")
print(f"Z-statistic: {z:.3f}")
print(f"P-value (two-sided): {p_value:.4f}")
print()
print(f"95% CI for difference: [{ci_lower:.3f}, {ci_upper:.3f}]")
print()
print(f"Decision: {'Reject H₀' if p_value < 0.05 else 'Fail to reject H₀'}")
Out[9]:
Console
Two-Sample Z-Test: Production Lines
==================================================
Line A: x̄ = 50.2, σ = 2.5, n = 40
Line B: x̄ = 48.7, σ = 3.0, n = 50

Observed difference (A - B): 1.50
Standard error of difference: 0.5799
Z-statistic: 2.587
P-value (two-sided): 0.0097

95% CI for difference: [0.363, 2.637]

Decision: Reject H₀

The p-value of about 0.010 provides strong evidence that the two lines have different mean outputs. The 95% CI for the difference [0.36, 2.64] doesn't include 0, consistent with our rejection of the null hypothesis.

Z-Test for Proportions

The z-test for proportions is the most common real-world application of z-tests. It's used extensively in A/B testing, survey analysis, clinical trials, and quality control.

Why Z-Tests Work for Proportions

For a binomial proportion, the variance is determined by the proportion itself:

$$\text{Var}(\hat{p}) = \frac{p(1-p)}{n}$$

Under the null hypothesis $H_0: p = p_0$, we use $p_0$ to calculate the standard error:

$$SE = \sqrt{\frac{p_0(1-p_0)}{n}}$$

The test statistic is:

$$z = \frac{\hat{p} - p_0}{\sqrt{\frac{p_0(1-p_0)}{n}}}$$

where $\hat{p} = x/n$ is the sample proportion (x successes out of n trials).

When Is the Normal Approximation Valid?

The z-test for proportions uses the normal approximation to the binomial. This works well when:

  • $n \cdot p_0 \geq 10$ (expected successes)
  • $n \cdot (1 - p_0) \geq 10$ (expected failures)

For smaller samples or proportions near 0 or 1, use exact binomial tests instead.
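In recent SciPy versions the exact test is available as `scipy.stats.binomtest` (which replaced the older `binom_test`). A hypothetical small-sample case where the normal approximation would be unreliable:

```python
from scipy import stats

# Hypothetical small sample: 3 successes in 20 trials, testing H0: p = 0.05.
# Expected successes n * p0 = 1, far below 10, so a z-test is not trustworthy.
result = stats.binomtest(k=3, n=20, p=0.05, alternative="greater")
print(f"exact binomial p-value: {result.pvalue:.4f}")
```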

Example: A/B Testing

An e-commerce company wants to test whether a new checkout design improves their conversion rate. Their current conversion rate is 3.2%. After showing the new design to 5000 visitors, 185 converted. Is there evidence of improvement?

In[10]:
Code
import numpy as np
from scipy import stats

# Data
successes = 185
n = 5000
p_0 = 0.032  # Current/null conversion rate

# Sample proportion
p_hat = successes / n

# Standard error under null hypothesis
se = np.sqrt(p_0 * (1 - p_0) / n)

# Z-statistic
z = (p_hat - p_0) / se

# One-sided p-value (testing for improvement)
p_value = stats.norm.sf(z)

# Check normal approximation conditions
expected_successes = n * p_0
expected_failures = n * (1 - p_0)

print("Proportion Z-Test: A/B Test")
print("=" * 50)
print(f"Sample: {successes} conversions out of {n} visitors")
print(f"Sample proportion: {p_hat:.4f} ({p_hat * 100:.2f}%)")
print(f"Null hypothesis: p = {p_0} ({p_0 * 100:.1f}%)")
print()
print(f"Standard error: {se:.5f}")
print(f"Z-statistic: {z:.3f}")
print(f"P-value (one-sided, right): {p_value:.4f}")
print()
print("Normal approximation check:")
print(
    f"  Expected successes: {expected_successes:.1f} ≥ 10? {'✓' if expected_successes >= 10 else '✗'}"
)
print(
    f"  Expected failures: {expected_failures:.1f} ≥ 10? {'✓' if expected_failures >= 10 else '✗'}"
)
print()
print(f"Decision: {'Reject H₀' if p_value < 0.05 else 'Fail to reject H₀'}")
Out[10]:
Console
Proportion Z-Test: A/B Test
==================================================
Sample: 185 conversions out of 5000 visitors
Sample proportion: 0.0370 (3.70%)
Null hypothesis: p = 0.032 (3.2%)

Standard error: 0.00249
Z-statistic: 2.009
P-value (one-sided, right): 0.0223

Normal approximation check:
  Expected successes: 160.0 ≥ 10? ✓
  Expected failures: 4840.0 ≥ 10? ✓

Decision: Reject H₀

The p-value of about 0.022 is less than 0.05, so we have evidence that the new design improves conversion rates. The improvement from 3.2% to 3.7% represents a 15.6% relative increase.

Out[11]:
Visualization
Normal distribution showing sampling distribution for proportion with observed value and rejection region marked.
Visualization of the A/B test result. The observed conversion rate of 3.7% corresponds to a z-statistic of 2.01, which falls in the rejection region for a one-sided test at α = 0.05.

Two-Sample Proportion Test

When comparing proportions from two groups (e.g., control vs treatment), use:

$$z = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1-\hat{p})\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}$$

Where $\hat{p} = \frac{x_1 + x_2}{n_1 + n_2}$ is the pooled proportion under the null hypothesis that $p_1 = p_2$.

In[12]:
Code
import numpy as np
from scipy import stats

# Example: Comparing two treatments
# Treatment A: 45 successes out of 200
# Treatment B: 65 successes out of 200
x1, n1 = 45, 200
x2, n2 = 65, 200

# Sample proportions
p1 = x1 / n1
p2 = x2 / n2

# Pooled proportion
p_pooled = (x1 + x2) / (n1 + n2)

# Standard error
se = np.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))

# Z-statistic
z = (p1 - p2) / se

# Two-sided p-value
p_value = 2 * stats.norm.sf(abs(z))

print("Two-Proportion Z-Test: Treatment Comparison")
print("=" * 50)
print(f"Treatment A: {x1}/{n1} = {p1:.3f} ({p1 * 100:.1f}%)")
print(f"Treatment B: {x2}/{n2} = {p2:.3f} ({p2 * 100:.1f}%)")
print()
print(f"Difference: {(p2 - p1) * 100:.1f} percentage points")
print(f"Pooled proportion: {p_pooled:.3f}")
print(f"Standard error: {se:.4f}")
print(f"Z-statistic: {z:.3f}")
print(f"P-value (two-sided): {p_value:.4f}")
print()
print(f"Decision: {'Reject H₀' if p_value < 0.05 else 'Fail to reject H₀'}")
Out[12]:
Console
Two-Proportion Z-Test: Treatment Comparison
==================================================
Treatment A: 45/200 = 0.225 (22.5%)
Treatment B: 65/200 = 0.325 (32.5%)

Difference: 10.0 percentage points
Pooled proportion: 0.275
Standard error: 0.0447
Z-statistic: -2.240
P-value (two-sided): 0.0251

Decision: Reject H₀

Relationship to the T-Test

The z-test and t-test are closely related. The key differences:

| Aspect | Z-Test | T-Test |
| --- | --- | --- |
| Population σ | Known | Unknown (estimated from sample) |
| Test statistic distribution | Standard normal | t-distribution |
| Tail heaviness | Lighter tails | Heavier tails (especially for small n) |
| Critical values | Fixed (e.g., ±1.96 for 95%) | Depend on degrees of freedom |

As sample size increases, the t-distribution converges to the normal distribution. For n > 100, the difference is negligible.

Rule of thumb: If you're unsure whether to use a z-test or t-test, use the t-test. It's valid in all cases where the z-test is valid, plus it handles the much more common case where σ is unknown.
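You can see the convergence directly by comparing two-sided 95% critical values:

```python
from scipy import stats

# Two-sided 95% critical values: t approaches the z value as df grows
print(f"z:            {stats.norm.ppf(0.975):.3f}")
for df in [5, 30, 100, 1000]:
    print(f"t (df={df:4d}): {stats.t.ppf(0.975, df):.3f}")
```

With 5 degrees of freedom the t critical value is about 2.57; by df = 1000 it is essentially the z value of 1.96.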

Summary

This chapter covered the z-test comprehensively:

When to Use Z-Tests

  • Known population standard deviation (rare for means)
  • Testing proportions (most common application)
  • Very large samples where t ≈ z

The Core Mathematics

  • Test statistic: $z = \frac{\bar{x} - \mu_0}{\sigma / \sqrt{n}}$
  • Under $H_0$, z follows a standard normal distribution
  • The standard error $\sigma/\sqrt{n}$ quantifies how much sample means vary

Test Procedures

  • One-sample z-test: Test whether a mean differs from a specific value
  • Two-sample z-test: Test whether two means differ from each other
  • Proportion z-test: Test hypotheses about population proportions
  • Two-proportion z-test: Compare proportions between groups (A/B testing)

Key Points

  • Always verify normal approximation conditions for proportion tests
  • Confidence intervals provide the same information as hypothesis tests
  • The z-test is foundational for understanding more complex tests

What's Next

In the next chapter, The T-Test, we tackle the far more common situation: testing hypotheses when the population standard deviation is unknown. The t-test uses the sample standard deviation instead, which introduces additional uncertainty accounted for by the t-distribution's heavier tails.

You'll learn about one-sample t-tests, independent two-sample t-tests (including Welch's robust version), and paired t-tests for dependent samples. The conceptual framework is identical to what you've learned here; only the distribution changes.

After t-tests, subsequent chapters cover F-tests for comparing variances, ANOVA for comparing multiple groups, and advanced topics including power analysis, effect sizes, and multiple comparison corrections.

Quiz

Ready to test your understanding? Take this quick quiz to reinforce what you've learned about the z-test.

