Market Risk Measurement: VaR, Expected Shortfall & Stress Tests

Michael Brenndoerfer · December 20, 2025 · 49 min read

Learn VaR calculation using parametric, historical, and Monte Carlo methods. Explore Expected Shortfall and stress testing for market risk management.


Market Risk Measurement - VaR and Beyond

The collapse of Barings Bank in 1995 and the near-failure of Long-Term Capital Management in 1998 demonstrated that even sophisticated financial institutions could face catastrophic losses from market risk exposures they poorly understood. These events accelerated the adoption of Value-at-Risk (VaR), a deceptively simple metric that answers a question every risk manager needs answered: "How bad could things get?"

VaR distills the complexity of a portfolio's risk exposures into a single number representing the maximum expected loss over a specified time horizon at a given confidence level. If a portfolio has a one-day 99% VaR of $10 million, this means that under normal market conditions, the portfolio should not lose more than $10 million in a single day with 99% probability. Equivalently, losses exceeding $10 million should occur only about 1% of trading days, or roughly 2-3 times per year.

The Basel Committee on Banking Supervision embedded VaR into the regulatory framework for bank capital requirements in 1996, cementing its role as the industry standard for market risk measurement. However, the 2008 financial crisis exposed serious limitations: VaR tells you where the cliff is but not how far you might fall. This chapter develops VaR from first principles, implements the three major calculation methods, and then moves beyond VaR to Expected Shortfall and stress testing techniques that address its shortcomings.

The VaR Framework

Before diving into calculation methods, we need to establish the formal definition of VaR and understand what it does and does not measure. At its heart, VaR addresses a fundamental question in risk management: given the inherent uncertainty in financial markets, what is the worst loss we should reasonably expect under normal conditions? The answer requires us to think probabilistically about potential outcomes rather than seeking a single deterministic prediction.

We cannot know exactly what loss will occur tomorrow. Instead, we can characterize the full distribution of possible losses and then identify a threshold that separates "normal" adverse outcomes from truly extreme events. This threshold becomes our VaR estimate. By specifying a confidence level, we acknowledge that losses beyond this threshold will sometimes occur, but we want such exceedances to be rare enough that we consider them exceptional rather than routine.

Value-at-Risk (VaR)

The Value-at-Risk at confidence level α over time horizon h is defined as the smallest loss ℓ such that the probability of experiencing a loss greater than ℓ is at most 1 − α:

\text{VaR}_{\alpha} = \inf\{\, \ell \in \mathbb{R} : P(L > \ell) \leq 1 - \alpha \,\}

where:

  • VaR_α: Value-at-Risk at confidence level α
  • ℓ: a potential loss value
  • L: portfolio loss (positive values indicate losses)
  • α: confidence level
  • inf: infimum (smallest value)

Equivalently, VaR is the α-quantile of the loss distribution.

This formal definition uses the infimum (greatest lower bound) to handle distributions that may have discontinuities or discrete probability masses. For most practical purposes with continuous return distributions, you can think of VaR simply as the loss value that separates the worst (1 − α) fraction of outcomes from the better α fraction. The definition ensures that we identify the precise point on the loss distribution where cumulative probability reaches our target confidence level.

VaR depends on three key parameters:

  • Confidence level (α): Typically 95% or 99%. Higher confidence levels produce larger VaR estimates but are harder to backtest due to fewer tail observations.
  • Time horizon (h): Usually 1 day for trading desks or 10 days for regulatory capital calculations. Longer horizons capture more risk but assume portfolio composition remains static.
  • Portfolio value: VaR is typically expressed in absolute currency terms, though it can also be expressed as a percentage of portfolio value.

The sign convention varies in practice. Some practitioners define VaR as a positive number representing potential loss, while others express it as a negative return. We follow the convention where VaR is positive, so a VaR of $10 million means the portfolio could lose up to $10 million.

Relationship to Return Distributions

Understanding how VaR connects to the underlying return distribution is essential for all calculation methods. This relationship provides the mathematical bridge between the abstract definition of VaR and the practical task of computing it from return data or models.

For a portfolio with current value V and return R over the horizon, the loss is L = −V · R (since negative returns create positive losses). This transformation is crucial: when we observe a return of −3%, the portfolio experiences a positive loss equal to 3% of its value. By expressing everything in terms of losses, we ensure that VaR comes out as a positive number that can be directly interpreted as potential monetary damage.

If returns follow a continuous distribution with cumulative distribution function F_R, then VaR at confidence level α corresponds to the (1 − α) quantile of the return distribution:

\text{VaR}_{\alpha} = -V \cdot F_R^{-1}(1-\alpha)

where:

  • VaR_α: Value-at-Risk at confidence level α
  • V: current portfolio value
  • F_R^{-1}: inverse cumulative distribution function of returns
  • 1 − α: probability level for the tail

To understand why VaR uses the (1 − α) quantile rather than the α quantile, consider what we are seeking. For 99% VaR, we want the loss that is exceeded only 1% of the time. This corresponds to returns that fall below the 1st percentile of the return distribution. The inverse CDF F_R^{-1}(0.01) gives us exactly this value: the return below which only 1% of outcomes fall. Multiplying by −V converts this worst-case return into the corresponding dollar loss.

This relationship is the foundation for all VaR calculation methods. The methods differ in how they estimate the return distribution and its quantiles. Parametric methods assume a specific distributional form and estimate its parameters. Historical simulation uses the empirical distribution directly. Monte Carlo simulation generates a synthetic distribution from a specified stochastic model. Each approach represents a different answer to the question: "How do we characterize the distribution of portfolio returns?"
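To make this quantile relationship concrete, here is a minimal sketch that computes VaR both from an inverse CDF and from an empirical quantile of simulated returns. The $10 million portfolio value and the normal return parameters are illustrative assumptions, not estimates from real data:

import numpy as np
from scipy import stats

V = 10_000_000             # assumed portfolio value
mu, sigma = 0.0005, 0.02   # assumed daily return mean and volatility
alpha = 0.99               # confidence level

# VaR from the inverse CDF of the assumed return distribution
var_from_cdf = -V * stats.norm.ppf(1 - alpha, loc=mu, scale=sigma)

# VaR from the empirical (1 - alpha) quantile of simulated returns
rng = np.random.default_rng(0)
sample_returns = rng.normal(mu, sigma, 100_000)
var_from_quantile = -V * np.quantile(sample_returns, 1 - alpha)

print(f"VaR from inverse CDF:        ${var_from_cdf:,.0f}")
print(f"VaR from empirical quantile: ${var_from_quantile:,.0f}")

Both numbers estimate the same (1 − α) quantile; the empirical version differs only by sampling noise.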

Out[2]:
Visualization
Normal distribution curve with left tail shaded showing VaR concept.
VaR as a quantile of the return distribution. The visualization illustrates how the 99% VaR threshold isolates extreme tail events, separating the most severe 1% of potential losses from standard market fluctuations.

Parametric VaR: The Variance-Covariance Method

The parametric approach assumes portfolio returns follow a known distribution, typically the normal distribution. This allows closed-form calculation of VaR using only the mean and standard deviation of returns.

The parametric method is simple and computationally efficient. If we accept the assumption that returns are normally distributed, we can exploit the well-known properties of the normal distribution to derive an explicit formula for VaR. This avoids the need for simulation or extensive historical databases, making the method particularly attractive for large portfolios where computational speed matters.

Single-Asset Parametric VaR

For a single asset with normally distributed returns having mean μ and standard deviation σ, we can determine any quantile of the distribution using the standard normal transformation. The key insight is that any normal distribution can be expressed as a linear transformation of the standard normal distribution, which has mean zero and standard deviation one.

The (1 − α) quantile of returns is:

R_{1-\alpha} = \mu + \sigma \cdot z_{1-\alpha}

where:

  • R_{1−α}: (1 − α) quantile of returns
  • μ: mean return
  • σ: return standard deviation
  • z_{1−α}: (1 − α) quantile of the standard normal distribution

This formula follows directly from the properties of normal distributions. If R ~ N(μ, σ²), then Z = (R − μ)/σ follows the standard normal N(0, 1). Solving for R at the point where Z = z_{1−α} gives the quantile formula above.

For common confidence levels, z_{0.05} ≈ −1.645 and z_{0.01} ≈ −2.326. These values indicate how many standard deviations below the mean we must go to reach the specified tail probability. The negative sign reflects that we are looking at the left tail of the distribution, where the worst returns reside.

The parametric VaR for a portfolio of value V is derived by scaling the return quantile by the portfolio value:

\begin{aligned} \text{VaR}_{\alpha} &= -V \cdot R_{1-\alpha} && \text{(scale return quantile by value)} \\ &= -V(\mu + \sigma \cdot z_{1-\alpha}) && \text{(substitute parametric return model)} \end{aligned}

where:

  • VaR_α: Value-at-Risk
  • V: portfolio value
  • μ: expected return
  • σ: volatility
  • z_{1−α}: standard normal quantile

Let us trace through this derivation more carefully. We start with the relationship VaR_α = −V · R_{1−α}, which converts the worst expected return into a dollar loss. Substituting our expression for the return quantile, we obtain VaR_α = −V · (μ + σ · z_{1−α}). Since z_{1−α} is negative for tail quantiles (we are looking at returns below the mean), the product σ · z_{1−α} is also negative; multiplying by −V flips the sign and turns this worst-case return into a positive loss figure.

For short time horizons like one day, the expected return μ is typically negligible compared to volatility (μ ≈ 0). To see why, consider that a typical stock might have an expected daily return of 0.04% (about 10% annualized), while daily volatility might be 2%. The volatility term dominates the calculation by roughly a factor of 50. Since z_{1−α} is negative for tail quantiles, −z_{1−α} = |z_{1−α}|, allowing us to simplify:

\text{VaR}_{\alpha} \approx V \cdot \sigma \cdot |z_{1-\alpha}|

where:

  • VaR_α: simplified Value-at-Risk
  • V: portfolio value
  • σ: volatility
  • |z_{1−α}|: absolute value of the quantile (e.g., 1.645 for 95% confidence)

This simplified formula reveals the intuitive structure of parametric VaR. The risk measure is proportional to portfolio value, volatility, and a multiplier that depends on the confidence level. For 95% confidence, the multiplier is 1.645; for 99% confidence, it increases to 2.326. This scaling reflects the deeper penetration into the tail required for higher confidence levels.

Let's implement this for a stock portfolio using historical data to estimate volatility:

In[3]:
Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

plt.rcParams.update(
    {
        "figure.figsize": (6.0, 4.0),
        "figure.dpi": 300,
        "figure.constrained_layout.use": True,
        "font.family": "sans-serif",
        "font.sans-serif": [
            "Noto Sans CJK SC",
            "Apple SD Gothic Neo",
            "DejaVu Sans",
            "Arial",
        ],
        "font.size": 10,
        "axes.titlesize": 11,
        "axes.titleweight": "bold",
        "axes.titlepad": 8,
        "axes.labelsize": 10,
        "axes.labelpad": 4,
        "xtick.labelsize": 9,
        "ytick.labelsize": 9,
        "legend.fontsize": 9,
        "legend.title_fontsize": 10,
        "legend.frameon": True,
        "legend.loc": "best",
        "lines.linewidth": 1.5,
        "lines.markersize": 5,
        "axes.grid": True,
        "grid.alpha": 0.3,
        "grid.linestyle": "--",
        "axes.spines.top": False,
        "axes.spines.right": False,
        "axes.prop_cycle": plt.cycler(
            color=["#1f77b4", "#ff7f0e", "#2ca02c", "#d62728", "#7f7f7f"]
        ),
    }
)

np.random.seed(42)

# Simulate 2 years of daily returns for demonstration
n_days = 504
returns = np.random.normal(
    0.0005, 0.02, n_days
)  # ~0.05% daily mean, 2% daily vol

# Portfolio parameters
portfolio_value = 10_000_000  # $10 million portfolio

# Estimate return statistics
mu = returns.mean()
sigma = returns.std()

# Calculate parametric VaR at 95% and 99% confidence
alpha_95 = 0.95
alpha_99 = 0.99

z_05 = stats.norm.ppf(1 - alpha_95)  # -1.645
z_01 = stats.norm.ppf(1 - alpha_99)  # -2.326

var_95_parametric = -portfolio_value * (mu + sigma * z_05)
var_99_parametric = -portfolio_value * (mu + sigma * z_01)
Out[4]:
Console
Parametric VaR Calculation
========================================
Portfolio Value: $10,000,000
Daily Mean Return: 0.0715%
Daily Volatility: 1.9664%

95% VaR: $316,294
99% VaR: $450,303

The 99% VaR of approximately $450,000 means we expect daily losses to exceed this amount on only about 1% of trading days, roughly 2-3 times per year.

Out[5]:
Visualization
Line plot showing VaR increasing with confidence level from 90% to 99.9%.
VaR sensitivity to confidence level. As the confidence level increases from 90% to 99.9%, the risk measure penetrates deeper into the tail of the distribution, resulting in significantly higher loss estimates.

Multi-Asset Parametric VaR

For a portfolio of multiple assets, we need to account for correlations between asset returns. As we discussed in Part IV on Modern Portfolio Theory, the variance of a portfolio depends on the covariance matrix of its constituents. When assets move together, portfolio risk is higher; when they move independently or inversely, diversification reduces total risk below the weighted sum of individual risks.

The mathematical framework for multi-asset VaR builds directly on portfolio theory. Consider a portfolio with n assets, where we invest fraction w_i in asset i. The portfolio return is the weighted sum of individual returns: R_p = Σ_{i=1}^{n} w_i R_i. Since a weighted sum of jointly normal returns is itself normally distributed, the portfolio return inherits normality from its constituents.

For a portfolio with weight vector w and asset covariance matrix Σ, the portfolio variance is:

\sigma_p^2 = \mathbf{w}^T \boldsymbol{\Sigma} \mathbf{w}

where:

  • σ_p²: portfolio variance
  • w: vector of portfolio weights
  • Σ: covariance matrix of asset returns
  • T: transpose operator

This quadratic form captures all pairwise interactions between assets. The diagonal elements of Σ contribute weighted variances, while the off-diagonal elements contribute weighted covariances. When assets are less than perfectly correlated, the cross-product terms partially cancel, reducing portfolio variance below what a simple weighted average would suggest.

The parametric VaR extends naturally by substituting the portfolio volatility σ_p = √(wᵀΣw) into our single-asset formula:

\text{VaR}_{\alpha} = V \cdot \sqrt{\mathbf{w}^T \boldsymbol{\Sigma} \mathbf{w}} \cdot |z_{1-\alpha}|

where:

  • VaR_α: Value-at-Risk
  • V: total portfolio value
  • w: weight vector
  • Σ: covariance matrix
  • z_{1−α}: standard normal quantile

This formula demonstrates that portfolio VaR depends on the entire covariance structure, not just individual asset volatilities. Two portfolios with identical asset volatilities can have dramatically different VaR depending on their correlation structure. A portfolio of highly correlated assets concentrates risk, while a portfolio of uncorrelated or negatively correlated assets disperses it.

In[6]:
Code
# Create a 3-asset portfolio
n_assets = 3
n_obs = 504

# Correlation structure for the three assets
correlation_matrix = np.array(
    [[1.0, 0.6, 0.3], [0.6, 1.0, 0.4], [0.3, 0.4, 1.0]]
)

# Daily volatilities
daily_vols = np.array([0.02, 0.025, 0.015])  # 2%, 2.5%, 1.5%

# Construct covariance matrix
cov_matrix = np.outer(daily_vols, daily_vols) * correlation_matrix

# Portfolio weights
weights = np.array([0.4, 0.35, 0.25])

# Calculate portfolio volatility
portfolio_vol = np.sqrt(weights @ cov_matrix @ weights)

# Multi-asset parametric VaR
z_05 = stats.norm.ppf(1 - alpha_95)  # -1.645
z_01 = stats.norm.ppf(1 - alpha_99)  # -2.326

var_95_multi = portfolio_value * portfolio_vol * abs(z_05)
var_99_multi = portfolio_value * portfolio_vol * abs(z_01)
Out[7]:
Console
Multi-Asset Parametric VaR
========================================
Portfolio Weights: [0.4  0.35 0.25]
Asset Volatilities: [0.02  0.025 0.015]

Correlation Matrix:
[[1.  0.6 0.3]
 [0.6 1.  0.4]
 [0.3 0.4 1. ]]

Portfolio Volatility: 1.6819%

95% VaR: $276,646
99% VaR: $391,266

Notice that the portfolio volatility (approximately 1.7%) is lower than a simple weighted average of individual volatilities would suggest. This is the diversification benefit: imperfect correlations reduce portfolio risk.

Out[8]:
Visualization
Line plot showing portfolio VaR decreasing as correlation decreases.
Impact of asset correlation on portfolio VaR. Lower correlations provide diversification benefits, reducing portfolio risk.

Advantages and Limitations of Parametric VaR

The parametric method has compelling advantages. It is computationally fast, requires only estimates of means, volatilities, and correlations, and provides intuitive decomposition of risk contributions. Financial institutions with thousands of positions can compute VaR almost instantaneously.

However, the normality assumption is problematic. As we explored in the chapter on Stylized Facts of Financial Returns, actual return distributions exhibit fat tails and excess kurtosis. The normal distribution underestimates the probability of extreme events. A return of 4 standard deviations should occur about once every 126 years under normality, yet markets regularly experience such moves several times per decade.
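A quick back-of-the-envelope check of that frequency claim, assuming roughly 250 trading days per year:

from scipy import stats

p_4sigma = stats.norm.cdf(-4)       # P(Z < -4) under normality
days_between = 1 / p_4sigma         # expected trading days between such losses
years_between = days_between / 250  # assuming ~250 trading days per year

print(f"P(return < -4 sigma): {p_4sigma:.2e}")
print(f"Implied frequency: once every {years_between:,.0f} years")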

Historical Simulation VaR

Historical simulation sidesteps distributional assumptions by using the actual empirical distribution of past returns. The method revalues the portfolio using historical return scenarios and directly reads VaR from the resulting distribution of portfolio values.

Historical simulation avoids theoretical models by using actual data. Whatever patterns exist in historical returns, including fat tails, skewness, and complex dependencies, are automatically captured when we use actual historical scenarios as the basis for risk measurement.

The Historical Simulation Approach

The procedure begins by assembling a database of historical returns for all assets in the portfolio. Each historical day provides one complete scenario: a vector of returns for every asset that actually occurred together in the market. This preserves the natural co-movement patterns that existed on each date, including any unusual correlation structures during market stress periods.

The steps proceed systematically:

  1. Collect historical returns for all portfolio assets over a lookback window (typically 250-500 days)
  2. Apply each historical return scenario to the current portfolio
  3. Calculate the portfolio profit or loss for each scenario
  4. Sort the P&L outcomes and select the appropriate quantile as VaR

For 500 historical observations and 99% confidence, VaR is the 5th worst loss (since 1% of 500 is 5).

The key conceptual shift from parametric VaR is that we no longer assume any particular distributional form. Instead, we treat the historical return vectors as equally likely future scenarios. Each past day contributes one potential outcome, and the empirical distribution of these outcomes becomes our estimate of the true return distribution. VaR is then read directly from this empirical distribution as the appropriate order statistic.

In[9]:
Code
# Generate historical scenarios for 3 assets using Cholesky decomposition
np.random.seed(42)
cholesky = np.linalg.cholesky(cov_matrix)
random_normals = np.random.randn(n_obs, n_assets)
asset_returns = random_normals @ cholesky.T

# Calculate portfolio returns for each historical scenario
portfolio_returns = asset_returns @ weights

# Calculate portfolio P&L for each scenario
portfolio_pnl = portfolio_value * portfolio_returns

# Sort P&L from worst to best
sorted_pnl = np.sort(portfolio_pnl)

# Historical simulation VaR
# For 95% confidence with 504 observations, floor(5% of 504) = 25, i.e., the 26th-worst outcome
idx_95 = int(np.floor(n_obs * (1 - alpha_95)))
idx_99 = int(np.floor(n_obs * (1 - alpha_99)))

var_95_hist = -sorted_pnl[idx_95]
var_99_hist = -sorted_pnl[idx_99]
Out[10]:
Console
Historical Simulation VaR
========================================
Number of scenarios: 504
Index for 95% VaR: 25 (worst outcome #26)
Index for 99% VaR: 5 (worst outcome #6)

95% VaR: $233,226
99% VaR: $303,960

The historical simulation results are subject to sampling noise given the limited history (504 days). With 99% confidence, we are looking at the 5th or 6th worst outcome, making the estimate sensitive to individual extreme points in the dataset.

Visualizing the P&L Distribution

Out[11]:
Visualization
Histogram of portfolio P&L with vertical lines marking VaR thresholds.
Historical P&L distribution showing the 95% and 99% VaR thresholds. The empirical distribution demonstrates how most outcomes cluster near zero, while the shaded tail regions highlight the significant but infrequent losses captured by VaR.

The histogram shows that losses beyond the 99% VaR threshold are rare but do occur. Historical simulation captures the actual shape of the return distribution, including any fat tails present in the data.

Advantages and Limitations of Historical Simulation

Historical simulation requires no distributional assumptions and automatically captures fat tails, skewness, and nonlinear dependencies present in historical data. It handles options and other nonlinear instruments naturally since you revalue the actual portfolio under each scenario.

The method has significant drawbacks, however. It assumes the past is representative of the future: if the lookback window contains only calm markets, VaR will underestimate risk. The method is also sensitive to the choice of window length. Too short a window produces noisy estimates; too long a window includes stale data that may no longer reflect current market dynamics. Additionally, rare extreme events in the lookback period can cause VaR to jump discontinuously as those observations enter or exit the window.
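One way to see the window-length sensitivity is to recompute historical VaR over windows of different lengths on the same return series. The sketch below uses simulated fat-tailed returns purely for illustration; the window sizes and the t-distribution parameters are arbitrary choices, not recommendations:

import numpy as np

rng = np.random.default_rng(42)
returns = rng.standard_t(df=4, size=2000) * 0.01  # assumed fat-tailed daily returns
portfolio_value = 10_000_000
alpha = 0.99

for window in (250, 500, 1000):
    # 99% historical VaR from the most recent `window` observations
    recent = returns[-window:]
    var_hist = -portfolio_value * np.quantile(recent, 1 - alpha)
    print(f"{window}-day window: 99% VaR = ${var_hist:,.0f}")

The estimates can move materially as the window grows or shrinks, even though the underlying data-generating process never changes.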

Monte Carlo VaR

Monte Carlo simulation combines the flexibility of historical simulation with the ability to model forward-looking dynamics. Rather than using actual historical returns, Monte Carlo VaR generates scenarios from a specified stochastic model of asset returns.

Monte Carlo simulation addresses the scarcity of extreme events in historical data. By specifying a stochastic model, we can generate arbitrarily many scenarios, including rare combinations that have never occurred historically but remain plausible. This allows us to explore the full range of potential outcomes implied by our risk model.

The Monte Carlo Approach

Building on our coverage of Monte Carlo simulation in Part III for derivative pricing, the procedure for VaR is analogous:

  1. Specify a stochastic model for asset returns (e.g., multivariate normal, multivariate t, or more complex dynamics)
  2. Estimate model parameters from historical data
  3. Generate thousands of random scenarios from the model
  4. Revalue the portfolio under each scenario
  5. Calculate VaR from the simulated P&L distribution

The key advantage is flexibility: you can incorporate time-varying volatility (such as GARCH models we studied in Part III), fat-tailed distributions, or scenario-specific assumptions.

The quality of Monte Carlo VaR depends critically on the specified model. If the model accurately captures the true dynamics of asset returns, the simulated distribution will converge to the true distribution as the number of scenarios increases. However, if the model is misspecified, even infinitely many simulations will converge to the wrong answer. This emphasizes that Monte Carlo simulation does not eliminate model risk; it merely shifts the burden from distributional assumptions to model specification.

In[12]:
Code
# Monte Carlo VaR with multivariate normal model
np.random.seed(42)
n_simulations = 10000

# Generate simulated returns using the same covariance structure
random_normals_mc = np.random.randn(n_simulations, n_assets)
simulated_returns = random_normals_mc @ cholesky.T

# Calculate portfolio returns and P&L
simulated_portfolio_returns = simulated_returns @ weights
simulated_pnl = portfolio_value * simulated_portfolio_returns

# Sort and calculate VaR
sorted_simulated_pnl = np.sort(simulated_pnl)

idx_95_mc = int(np.floor(n_simulations * (1 - alpha_95)))
idx_99_mc = int(np.floor(n_simulations * (1 - alpha_99)))

var_95_mc = -sorted_simulated_pnl[idx_95_mc]
var_99_mc = -sorted_simulated_pnl[idx_99_mc]
Out[13]:
Console
Monte Carlo VaR (Normal Distribution)
========================================
Number of simulations: 10,000

95% VaR: $274,905
99% VaR: $384,912

These results are similar to the parametric method because the underlying simulation used a normal distribution and the same covariance structure. The slight differences arise from sampling noise in the random number generation, which would decrease as the number of simulations increases.

Monte Carlo VaR with Fat Tails

To address the fat-tail problem, we can replace the normal distribution with a Student's t-distribution, which has heavier tails controlled by the degrees of freedom parameter ν. Lower values of ν produce fatter tails; as ν → ∞, the t-distribution converges to the normal.

The t-distribution provides a natural extension that preserves much of the mathematical tractability of the normal while allowing for heavier tails. Empirical studies suggest that financial returns are often well-approximated by t-distributions with degrees of freedom between 3 and 8, depending on the asset class and time period. The choice of ν represents a modeling decision about how fat we believe the tails to be.

Constructing multivariate t-distributed random variables requires a subtle technique. We cannot simply substitute t-distributed marginals into a correlation structure because that would not produce the correct joint distribution. Instead, we exploit a mathematical property: a multivariate t-distribution can be represented as a multivariate normal divided by the square root of an independent chi-squared random variable scaled by its degrees of freedom. This representation allows us to generate correlated t-distributed returns while preserving the desired dependency structure.

In[14]:
Code
# Monte Carlo VaR with multivariate t-distribution
np.random.seed(42)
degrees_of_freedom = 5  # Produces fatter tails than normal

# Generate t-distributed random variables
# Multivariate t can be constructed from normal and chi-squared
chi2_samples = np.random.chisquare(degrees_of_freedom, n_simulations)
scaling = np.sqrt(degrees_of_freedom / chi2_samples)

# Scale the normal simulations to get t-distributed returns
t_simulated_returns = simulated_returns * scaling[:, np.newaxis]

# Calculate portfolio P&L
t_simulated_portfolio_returns = t_simulated_returns @ weights
t_simulated_pnl = portfolio_value * t_simulated_portfolio_returns

# Sort and calculate VaR
sorted_t_pnl = np.sort(t_simulated_pnl)

var_95_mc_t = -sorted_t_pnl[idx_95_mc]
var_99_mc_t = -sorted_t_pnl[idx_99_mc]
Out[15]:
Console
Monte Carlo VaR (Student's t-Distribution, df=5)
========================================

95% VaR: $327,621
99% VaR: $551,620

Increase vs. Normal:
  95% VaR: 19.2%
  99% VaR: 43.3%

The t-distribution produces substantially higher VaR estimates, particularly at the 99% level where tail behavior dominates. This illustrates why distributional assumptions matter significantly for risk measurement.

Out[16]:
Visualization
Line plot comparing probability densities of normal and t-distributions in the tail region.
Comparison of normal and t-distribution tails. The t-distribution (df=5) assigns significantly higher probability to extreme returns, explaining why t-based VaR estimates are larger.

Comparing All Three Methods

Out[17]:
Visualization
Bar chart comparing VaR estimates from parametric, historical, and Monte Carlo methods.
Comparison of VaR estimates across the three calculation methods. The t-distribution Monte Carlo produces the highest estimates due to fatter tails.

Limitations of VaR

Despite its widespread adoption, VaR has fundamental limitations that became painfully apparent during the 2008 financial crisis. Understanding these limitations is essential for proper risk management.

VaR Tells You Nothing About Tail Severity

The most critical limitation is that VaR is silent about losses beyond the threshold. A 99% VaR of $10 million says losses will exceed this level 1% of the time, but it provides no information about whether those tail losses average $11 million or $50 million. Two portfolios with identical VaR can have vastly different risk profiles in the tail.

Consider a portfolio that sells deep out-of-the-money options. Most days it earns small premium income, producing a favorable VaR. But on the rare days when the options go in-the-money, losses can be catastrophic. VaR completely misses this asymmetric risk profile.

VaR Is Not Sub-Additive

A mathematically troubling property is that VaR can violate sub-additivity: the VaR of a combined portfolio can exceed the sum of individual VaRs. This means diversification can appear to increase risk under VaR, which contradicts financial intuition and makes risk aggregation problematic.
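A minimal numerical example makes the violation concrete. Suppose two independent positions each have a 4% chance of losing $1 million and otherwise lose nothing. Each position's 95% VaR is zero, but the combined portfolio loses at least $1 million about 7.8% of the time (1 − 0.96²), so its 95% VaR is $1 million, exceeding the sum of the individual VaRs. The sketch below, using these hypothetical positions, checks the numbers by simulation:

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
loss_a = np.where(rng.random(n) < 0.04, 1_000_000, 0.0)  # position A: 4% chance of a $1M loss
loss_b = np.where(rng.random(n) < 0.04, 1_000_000, 0.0)  # position B: independent, same profile

var_a = np.quantile(loss_a, 0.95)                   # 95% VaR of A alone
var_b = np.quantile(loss_b, 0.95)                   # 95% VaR of B alone
var_combined = np.quantile(loss_a + loss_b, 0.95)   # 95% VaR of the combined portfolio

print(f"VaR(A) = {var_a:,.0f}, VaR(B) = {var_b:,.0f}")
print(f"VaR(A+B) = {var_combined:,.0f}  >  VaR(A) + VaR(B) = {var_a + var_b:,.0f}")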

Coherent Risk Measures

A risk measure ρ is coherent if it satisfies four axioms:

  1. Monotonicity: If portfolio A always loses more than portfolio B, then ρ(A) ≥ ρ(B)
  2. Sub-additivity: ρ(A + B) ≤ ρ(A) + ρ(B) (diversification cannot increase risk)
  3. Positive homogeneity: ρ(λA) = λρ(A) for λ > 0 (doubling a position doubles its risk)
  4. Translation invariance: Adding an amount of cash c reduces risk by exactly c, i.e., ρ(A + c) = ρ(A) − c

VaR satisfies all axioms except sub-additivity for general distributions.

Backtesting Challenges

Validating VaR models requires backtesting: comparing predicted VaR to actual realized P&L. For 99% VaR, you expect exceedances about 1% of trading days. With 250 trading days per year, that's only 2-3 expected exceedances, making it statistically difficult to distinguish a good model from a bad one. You need years of data to have statistical power, but by then market dynamics may have changed.
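The arithmetic behind this is binomial: under a correctly specified 99% model, the number of exceedances over 250 days follows Binomial(250, 0.01). The short sketch below shows how dispersed that count is, which is why one year of data has little power to distinguish a sound model from a mildly optimistic one:

from scipy import stats

n_days, p = 250, 0.01                 # one year of daily observations, 99% VaR
exceedances = stats.binom(n_days, p)

print(f"Expected exceedances: {n_days * p:.1f}")
for k in range(7):
    print(f"P(exactly {k} exceedances) = {exceedances.pmf(k):.3f}")
print(f"P(5 or more exceedances)   = {exceedances.sf(4):.3f}")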

Model Risk and Parameter Uncertainty

All three VaR methods depend on historical data to estimate parameters or scenarios. When markets enter regimes not represented in the lookback period, VaR estimates can be dangerously wrong. The 2008 crisis saw correlations spike toward 1.0 across asset classes, volatilities explode, and return distributions shift dramatically. VaR models calibrated to pre-crisis data severely underestimated risk.

Expected Shortfall: A Coherent Alternative

Expected Shortfall (ES), also known as Conditional VaR (CVaR) or Average VaR, addresses VaR's silence about tail severity by measuring the average loss given that the loss exceeds VaR.

Expected Shortfall accounts for the entire tail rather than just the threshold. VaR marks the boundary of the danger zone, but Expected Shortfall tells us the average severity of outcomes within that zone. This provides a more complete picture of tail risk exposure.

Expected Shortfall

Expected Shortfall at confidence level α is the expected loss conditional on the loss exceeding VaR:

\text{ES}_{\alpha} = E[\, L \mid L > \text{VaR}_{\alpha} \,]

where:

  • ES_α: Expected Shortfall at confidence level α
  • E[·]: expected value operator
  • L: portfolio loss
  • VaR_α: Value-at-Risk threshold

Equivalently, ES is the average of all losses in the tail beyond VaR.

The definition has a clear practical interpretation. Expected Shortfall answers the question: "Given that we have entered the danger zone, how bad do things typically get?" This conditional expectation provides exactly the information that VaR withholds.

Expected Shortfall is always greater than or equal to VaR at the same confidence level, and it satisfies sub-additivity, making it a coherent risk measure. The Basel Committee recognized these advantages and incorporated ES into the Basel III framework, requiring banks to calculate 97.5% ES for market risk capital.

Calculating Expected Shortfall

For historical simulation or Monte Carlo, ES is simply the average of losses beyond the VaR threshold. This calculation is straightforward once we have generated or collected the distribution of portfolio outcomes:

In[18]:
Code
# Expected Shortfall from historical simulation
losses_hist = -portfolio_pnl  # Convert P&L to losses
sorted_losses_hist = np.sort(losses_hist)[::-1]  # Sort from largest to smallest

# ES is average of losses beyond VaR threshold
alpha_95 = 0.95
alpha_99 = 0.99
n_tail_95 = int(np.ceil(n_obs * (1 - alpha_95)))
n_tail_99 = int(np.ceil(n_obs * (1 - alpha_99)))

es_95_hist = sorted_losses_hist[:n_tail_95].mean()
es_99_hist = sorted_losses_hist[:n_tail_99].mean()

# Expected Shortfall from Monte Carlo (normal)
losses_mc = -simulated_pnl
sorted_losses_mc = np.sort(losses_mc)[::-1]

n_tail_95_mc = int(np.ceil(n_simulations * (1 - alpha_95)))
n_tail_99_mc = int(np.ceil(n_simulations * (1 - alpha_99)))

es_95_mc = sorted_losses_mc[:n_tail_95_mc].mean()
es_99_mc = sorted_losses_mc[:n_tail_99_mc].mean()

# Expected Shortfall from Monte Carlo (t-distribution)
losses_mc_t = -t_simulated_pnl
sorted_losses_mc_t = np.sort(losses_mc_t)[::-1]

es_95_mc_t = sorted_losses_mc_t[:n_tail_95_mc].mean()
es_99_mc_t = sorted_losses_mc_t[:n_tail_99_mc].mean()
Out[19]:
Console
Expected Shortfall Comparison
==================================================

Historical Simulation:
  95% ES: $290,377 (VaR: $233,226)
  99% ES: $369,021 (VaR: $303,960)

Monte Carlo (Normal):
  95% ES: $341,885 (VaR: $274,905)
  99% ES: $435,761 (VaR: $384,912)

Monte Carlo (t-distribution):
  95% ES: $473,235 (VaR: $327,621)
  99% ES: $747,620 (VaR: $551,620)

Expected Shortfall is consistently higher than VaR because it averages all losses in the tail rather than just identifying the threshold. The difference is most pronounced in the t-distribution model, where fat tails lead to significantly larger expected losses beyond the VaR cutoff.

Parametric Expected Shortfall

For normally distributed returns, Expected Shortfall has a closed-form solution that follows from the properties of the truncated normal distribution. The derivation involves computing the expected value of a normal random variable conditional on it falling below a specified threshold.

\text{ES}_{\alpha} = V \cdot \sigma \cdot \frac{\phi(z_{1-\alpha})}{1-\alpha}

where:

  • ES_α: Expected Shortfall
  • V: portfolio value
  • σ: volatility
  • φ: standard normal probability density function
  • z_{1−α}: (1 − α) quantile of the standard normal distribution
  • 1 − α: tail probability

The term φ(z_{1−α}) / (1 − α) (often called the inverse Mills ratio) represents the expected value of the standardized tail. It tells us how far the average tail event is from the mean in standard deviation units. To understand this intuitively, note that φ(z_{1−α}) is the height of the density at the VaR threshold, while the denominator (1 − α) is the total probability in the tail. Their ratio captures the shape of the tail and determines how far beyond VaR we expect to go on average.

By multiplying this factor by the portfolio volatility σ and value V, we convert the standardized tail expectation into a currency loss. This formula connects the properties of the normal distribution to practical risk measurement.

In[20]:
Code
# Parametric ES for normal distribution
def parametric_es(confidence, volatility, portfolio_val):
    """Calculate parametric ES assuming normal distribution."""
    alpha = 1 - confidence
    z = stats.norm.ppf(alpha)
    phi_z = stats.norm.pdf(z)
    return portfolio_val * volatility * phi_z / alpha


es_95_parametric = parametric_es(alpha_95, portfolio_vol, portfolio_value)
es_99_parametric = parametric_es(alpha_99, portfolio_vol, portfolio_value)
Out[21]:
Console
Parametric Expected Shortfall (Normal)
==================================================
95% ES: $346,925 (VaR: $276,646)
99% ES: $448,259 (VaR: $391,266)

ES/VaR Ratio at 95%: 1.25
ES/VaR Ratio at 99%: 1.15

For the normal distribution, the ratio of ES to VaR is relatively stable (approximately 1.15 for 99% confidence). This ratio is often used as a rule of thumb, but it dangerously underestimates tail risk if the underlying distribution has fat tails (excess kurtosis).
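As a rough illustration of how much the rule of thumb moves once tails fatten, the sketch below approximates the 99% ES/VaR ratio for a standard normal and for a Student's t with ν = 5 (the same value used in the earlier Monte Carlo example) by averaging quantiles over the tail numerically; the grid resolution is an arbitrary choice:

import numpy as np
from scipy import stats


def es_var_ratio(dist, confidence=0.99, n_grid=100_000):
    """Approximate ES/VaR by averaging quantiles over the (1 - confidence) tail."""
    tail_probs = np.linspace(1e-7, 1 - confidence, n_grid)
    var = -dist.ppf(1 - confidence)     # loss at the VaR threshold
    es = -dist.ppf(tail_probs).mean()   # average loss beyond the threshold
    return es / var


print(f"Normal       99% ES/VaR: {es_var_ratio(stats.norm):.2f}")
print(f"Student t(5) 99% ES/VaR: {es_var_ratio(stats.t(df=5)):.2f}")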

Visualizing VaR vs. Expected Shortfall

Out[22]:
Visualization
Histogram showing P&L distribution with VaR threshold and ES level marked.
Comparison of VaR and Expected Shortfall thresholds on a P&L distribution. While 99% VaR identifies the point where the worst 1% of losses begin, 99% ES averages all outcomes beyond that point to capture the full severity of the tail.
Out[23]:
Visualization
Bar chart comparing ES/VaR ratios for normal and t-distribution models.
Ratio of Expected Shortfall to Value-at-Risk across different distributions. The t-distribution (df=5) exhibits a significantly higher ratio than the normal distribution, highlighting how fat tails increase the expected severity of extreme losses relative to the VaR threshold.

Stress Testing and Scenario Analysis

VaR and ES are statistical measures based on historical patterns or distributional assumptions. Stress testing complements these by explicitly examining portfolio performance under extreme but plausible scenarios that may not be well-represented in historical data.

Historical Stress Tests

Historical stress tests revalue the portfolio using returns from actual crisis periods. This ensures the scenarios are internally consistent: correlations, volatility spikes, and return magnitudes all reflect what actually happened in real market dislocations.

Common historical stress scenarios include:

  • Black Monday (October 19, 1987): S&P 500 fell 22.6% in a single day
  • LTCM Crisis (August 1998): Flight to quality sent credit spreads sharply wider while equity volatility spiked
  • Global Financial Crisis (September-October 2008): Broad market declines with correlation breakdown
  • COVID Crash (March 2020): Rapid 34% decline in equities with liquidity dislocations
In[24]:
Code
# Define historical stress scenarios
# Returns are approximate single-day or short-period moves
stress_scenarios = {
    "Black Monday 1987": {
        "equities": -0.226,
        "bonds": 0.03,
        "commodities": -0.05,
    },
    "LTCM Crisis 1998": {
        "equities": -0.06,
        "bonds": 0.02,
        "commodities": -0.08,
    },
    "Lehman Bankruptcy 2008": {
        "equities": -0.09,
        "bonds": -0.02,
        "commodities": -0.12,
    },
    "COVID Crash 2020": {
        "equities": -0.12,
        "bonds": 0.01,
        "commodities": -0.15,
    },
    "Flash Crash 2010": {
        "equities": -0.09,
        "bonds": 0.01,
        "commodities": -0.03,
    },
}

# Current portfolio (using same weights as before)
# Assume Asset 1 = equities, Asset 2 = commodities, Asset 3 = bonds
asset_mapping = ["equities", "commodities", "bonds"]
portfolio_value = 10_000_000
weights = np.array([0.4, 0.35, 0.25])

stress_results = {}
for scenario_name, scenario_returns in stress_scenarios.items():
    # Map scenario returns to our 3-asset portfolio
    returns_vector = np.array(
        [
            scenario_returns["equities"],
            scenario_returns["commodities"],
            scenario_returns["bonds"],
        ]
    )

    # Calculate portfolio P&L
    portfolio_return = weights @ returns_vector
    portfolio_loss = -portfolio_value * portfolio_return

    stress_results[scenario_name] = {
        "portfolio_return": portfolio_return,
        "portfolio_loss": portfolio_loss,
    }
Out[25]:
Console
Historical Stress Test Results
============================================================
Portfolio Value: $10,000,000
Weights: Equities 40%, Commodities 35%, Bonds 25%

------------------------------------------------------------
Black Monday 1987         Return: -10.04%  Loss: $1,004,000
LTCM Crisis 1998          Return:  -4.70%  Loss: $470,000
Lehman Bankruptcy 2008    Return:  -8.30%  Loss: $830,000
COVID Crash 2020          Return:  -9.80%  Loss: $980,000
Flash Crash 2010          Return:  -4.40%  Loss: $440,000
------------------------------------------------------------

For comparison:
  99% VaR (Monte Carlo): $384,912
  99% ES (Monte Carlo):  $435,761

The stress test results reveal that all historical crisis scenarios produce losses substantially exceeding the 99% VaR. This is expected: VaR estimates typical tail behavior, while stress tests examine genuine market dislocations that may occur once per decade or less frequently.

Hypothetical Stress Scenarios

Beyond historical events, risk managers construct hypothetical scenarios to test vulnerabilities specific to their portfolios. These might include interest rate shocks, currency devaluations, sector-specific collapses, or combinations that stress multiple risk factors simultaneously.

In[26]:
Code
# Hypothetical stress scenarios
hypothetical_scenarios = {
    "Interest Rate Shock +200bp": {
        "equities": -0.05,
        "bonds": -0.08,
        "commodities": -0.03,
    },
    "Dollar Collapse": {"equities": -0.03, "bonds": -0.02, "commodities": 0.15},
    "Stagflation": {"equities": -0.15, "bonds": -0.05, "commodities": 0.10},
    "Deflationary Spiral": {
        "equities": -0.20,
        "bonds": 0.08,
        "commodities": -0.25,
    },
    "Correlation Breakdown": {
        "equities": -0.10,
        "bonds": -0.05,
        "commodities": -0.10,
    },
}

hypothetical_results = {}
portfolio_value = 10_000_000
weights = np.array([0.4, 0.35, 0.25])

for scenario_name, scenario_returns in hypothetical_scenarios.items():
    returns_vector = np.array(
        [
            scenario_returns["equities"],
            scenario_returns["commodities"],
            scenario_returns["bonds"],
        ]
    )

    portfolio_return = weights @ returns_vector
    portfolio_loss = -portfolio_value * portfolio_return

    hypothetical_results[scenario_name] = {
        "portfolio_return": portfolio_return,
        "portfolio_loss": portfolio_loss,
    }
Out[27]:
Console
Hypothetical Stress Test Results
============================================================
Interest Rate Shock +200bp     Return:  -5.05%  Loss: $505,000
Dollar Collapse                Return:   3.55%  Gain:  $355,000
Stagflation                    Return:  -3.75%  Loss: $375,000
Deflationary Spiral            Return: -14.75%  Loss: $1,475,000
Correlation Breakdown          Return:  -8.75%  Loss: $875,000

The hypothetical scenarios highlight that events like a deflationary spiral or correlation breakdown could generate losses approaching $1.5 million, significantly outpacing the standard VaR estimates.

Reverse Stress Testing

Traditional stress tests ask "what happens if scenario X occurs?" Reverse stress testing inverts the question: "what scenarios would cause losses exceeding threshold Y?" This helps identify hidden vulnerabilities and concentration risks.

For a portfolio heavily weighted in technology stocks, reverse stress testing might reveal that a 15% tech sector decline combined with rising interest rates would breach the firm's risk limits, even if such a scenario seems unlikely based on recent history.
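A crude but illustrative way to automate this search is to scan a grid of joint shocks and record the combinations whose loss breaches a chosen limit. The sketch below does this for the same three-asset portfolio used throughout the chapter; the $1 million loss limit and the shock ranges are arbitrary assumptions for illustration:

import numpy as np

portfolio_value = 10_000_000
weights = np.array([0.4, 0.35, 0.25])   # equities, commodities, bonds
loss_limit = 1_000_000                   # assumed risk limit to reverse-engineer

# Grid of candidate shocks for each asset class (illustrative ranges)
equity_shocks = np.linspace(-0.25, 0.0, 26)
commodity_shocks = np.linspace(-0.25, 0.0, 26)
bond_shocks = np.linspace(-0.10, 0.05, 16)

breaching = []
for eq in equity_shocks:
    for co in commodity_shocks:
        for bo in bond_shocks:
            loss = -portfolio_value * (weights @ np.array([eq, co, bo]))
            if loss > loss_limit:
                breaching.append((eq, co, bo, loss))

print(f"{len(breaching)} of {26 * 26 * 16} scenario combinations breach the ${loss_limit:,} limit")
# Mildest breaching scenario (smallest loss above the limit)
mildest = min(breaching, key=lambda s: s[3])
print(f"Mildest breach: equities {mildest[0]:+.0%}, commodities {mildest[1]:+.0%}, "
      f"bonds {mildest[2]:+.0%} -> loss ${mildest[3]:,.0f}")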

Visualizing Stress Test Results

Out[28]:
Visualization
Horizontal bar chart showing stress test losses with VaR reference lines.
Stress test losses compared to VaR thresholds. Most crisis scenarios produce losses far exceeding statistical risk measures.

Integrating VaR, ES, and Stress Testing

Effective market risk management uses VaR, ES, and stress testing as complementary tools, not substitutes. Each provides different information essential for a complete risk picture.

VaR serves as the primary day-to-day risk metric and regulatory capital benchmark. It is computed daily, compared against limits, and used to allocate risk capital across desks and strategies. However, VaR alone is insufficient because it ignores tail severity and assumes historical patterns continue.

Expected Shortfall addresses tail severity and is increasingly mandated by regulators. The shift from 99% VaR to 97.5% ES in Basel III reflects recognition that measuring average tail losses provides more robust capital requirements.

Stress testing examines scenarios that statistical models might miss. It forces explicit consideration of "what if" questions and identifies concentration risks. Unlike VaR and ES, stress tests can incorporate forward-looking views about emerging risks not present in historical data.

Out[29]:
Visualization
Multi-panel dashboard with risk metrics and stress test comparisons.
Comprehensive risk dashboard integrating VaR, Expected Shortfall, and stress test results. The multi-panel view contrasts statistical metrics with stress scenarios, highlighting how stress tests capture extreme tail risks that VaR and ES may underestimate.

Key Parameters

The market risk models implemented here rely on several key parameters:

  • n_obs: The lookback window for historical simulation (504 days).
  • n_simulations: The number of scenarios generated for Monte Carlo methods (10,000).
  • degrees_of_freedom: The shape parameter for the t-distribution (ν = 5), controlling tail heaviness.
  • weights: The portfolio allocation vector across assets.
  • confidence_level: The probability threshold (α) for VaR and ES calculations (95% or 99%).

Practical Considerations

Implementing VaR systems in production requires addressing several practical challenges beyond the mathematical framework.

Data Quality and History

VaR calculations are only as good as the underlying data. Missing prices, stale quotes, and corporate actions (splits, dividends) can introduce significant errors. Historical simulation is particularly sensitive: a data error creating a spurious extreme return will directly impact VaR estimates. Production systems require robust data validation and cleaning procedures.

Time Horizon Scaling

Regulators often require 10-day VaR for capital calculations, but most institutions compute daily VaR internally. A common approximation scales daily VaR by √h, where h is the horizon in days:

\text{VaR}_{h\text{-day}} \approx \text{VaR}_{1\text{-day}} \times \sqrt{h}

where:

  • VaR_{h-day}: VaR for an h-day horizon
  • VaR_{1-day}: VaR for a 1-day horizon
  • h: time horizon in days

This scaling assumes returns are independent and identically distributed, which is violated by volatility clustering. When volatility is elevated, the square-root-of-time rule underestimates multi-day risk because high volatility tends to persist.
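Applying the rule itself is a one-liner. The sketch below scales the one-day 99% Monte Carlo VaR reported earlier (about $385,000) to longer horizons under the i.i.d. assumption:

import numpy as np

var_1day = 384_912                 # one-day 99% VaR from the earlier Monte Carlo example
for h in (5, 10, 20):
    var_h = var_1day * np.sqrt(h)  # square-root-of-time scaling, assumes i.i.d. returns
    print(f"{h:2d}-day 99% VaR (sqrt-of-time): ${var_h:,.0f}")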

Out[30]:
Visualization
Line plot showing VaR scaling with time horizon from 1 to 20 days.
VaR scaling over time horizons from 1 to 20 days. The square root of time rule illustrates how the risk estimate grows as the holding period increases, providing a standard baseline for multi-day risk assessment.

Backtesting and Model Validation

Robust VaR systems include ongoing backtesting that compares predicted VaR to actual P&L. The Basel framework specifies a traffic light approach: too few exceedances suggest the model is overly conservative (inefficient capital usage), while too many indicate underestimation of risk and potential capital inadequacy.

A formal backtest statistic is Kupiec's proportion-of-failures test, which compares the observed exceedance rate to the expected rate under the null hypothesis of a correctly specified model:

In[31]:
Code
import numpy as np
from scipy import stats


def kupiec_test(n_observations, n_exceedances, confidence_level):
    """
    Kupiec's proportion of failures test for VaR backtesting.
    Returns the likelihood ratio test statistic and p-value.
    """
    expected_rate = 1 - confidence_level
    observed_rate = n_exceedances / n_observations

    if observed_rate == 0:
        observed_rate = 0.0001  # Avoid log(0)
    if observed_rate == 1:
        observed_rate = 0.9999

    # Likelihood ratio statistic
    lr = -2 * (
        n_exceedances * np.log(expected_rate / observed_rate)
        + (n_observations - n_exceedances)
        * np.log((1 - expected_rate) / (1 - observed_rate))
    )

    # P-value from chi-squared distribution with 1 degree of freedom
    p_value = 1 - stats.chi2.cdf(lr, 1)

    return lr, p_value, observed_rate


# Example backtest
n_obs_backtest = 250
n_exceedances_99 = 4  # Observed 4 exceedances vs. expected 2.5

lr_stat, p_val, obs_rate = kupiec_test(n_obs_backtest, n_exceedances_99, 0.99)
Out[32]:
Console
VaR Backtest Results (Kupiec Test)
==================================================
Observations: 250
Exceedances observed: 4
Exceedances expected: 2.5
Observed exceedance rate: 1.60%
Expected exceedance rate: 1.00%

Likelihood ratio statistic: 0.769
P-value: 0.380

Conclusion: Do not reject model at 5% significance

The Kupiec test p-value indicates that we cannot reject the null hypothesis of a correct model at the 5% significance level. The observed number of exceedances is statistically consistent with the expected number given the sample size, suggesting the VaR model is calibrated correctly for this period.

The next chapter on Credit Risk Fundamentals extends our risk measurement framework from market risk to the distinct challenges of measuring and managing default and credit spread risk.

Summary

This chapter developed the three major approaches to market risk measurement and explored their limitations and extensions.

Value-at-Risk quantifies the maximum expected loss at a given confidence level and time horizon. The parametric method assumes normally distributed returns and provides fast, closed-form calculations but underestimates tail risk. Historical simulation uses actual past returns without distributional assumptions but depends heavily on the lookback window. Monte Carlo simulation offers flexibility to incorporate complex dynamics like fat tails and time-varying volatility.

VaR limitations include silence about tail severity, potential violations of sub-additivity, and dependence on historical patterns that may not persist. The 2008 financial crisis demonstrated that VaR can provide false comfort when market dynamics shift dramatically.

Expected Shortfall addresses tail severity by measuring the average loss given that VaR is exceeded. It satisfies the coherence axioms that VaR violates and has become the primary regulatory metric under Basel III.

Stress testing complements statistical measures by examining portfolio performance under extreme but plausible scenarios. Historical stress tests use actual crisis returns, while hypothetical scenarios explore vulnerabilities not present in historical data. Together, VaR, ES, and stress testing provide a comprehensive view of market risk that no single measure can deliver alone.

Quiz

Ready to test your understanding? Take this quick quiz to reinforce what you've learned about Value-at-Risk, Expected Shortfall, and stress testing methodologies.

