Credit Risk Modeling: Merton, Hazard Rates & Copulas

Michael Brenndoerfer · December 22, 2025 · 71 min read

Master credit risk modeling from Merton's structural framework to reduced-form hazard rates and Gaussian copula portfolio models with Python implementations.


Credit Risk Modeling Approaches

Credit risk modeling transforms qualitative assessments of borrower creditworthiness into quantitative estimates of default probability, loss given default, and portfolio risk. Building on the credit risk fundamentals from the previous chapter, we explore the mathematical frameworks that underpin modern credit risk management.

Two philosophically different approaches dominate the field: structural models and reduced-form models. Structural models, pioneered by Robert Merton in 1974, view default as an economic event driven by the firm's asset value falling below its debt obligations. These models use option pricing theory from Part III to treat equity as a call option on the firm's assets. Reduced-form models take a different path, modeling default as a random event governed by a hazard rate process without explicitly modeling the firm's balance sheet, connecting directly to the credit default swap pricing we covered in Part II.

Beyond individual obligor models, credit risk managers must assess portfolio-level risk. A portfolio of 1,000 loans does not have 1,000 times the risk of a single loan because defaults are correlated. Economic downturns cause multiple borrowers to default simultaneously. Portfolio credit risk models like CreditMetrics and the Gaussian copula framework capture these dependencies, enabling calculation of credit Value-at-Risk (VaR) for loan portfolios.

This chapter develops each approach from first principles with working implementations, preparing you to build credit risk systems that combine theoretical rigor with practical applicability.

Structural Models: The Merton Framework

Structural models derive default probability from the fundamental economics of a firm's capital structure. The key insight is straightforward: shareholders hold a residual claim on the firm's assets after debt is paid. If assets exceed debt at maturity, shareholders receive the difference. If assets fall short, they walk away with nothing while creditors absorb the loss. This simple observation, when combined with the machinery of option pricing theory, yields a powerful framework for quantifying credit risk.

Equity as a Call Option on Assets

To understand the Merton framework, recognize the option-like nature of equity in a leveraged firm. Consider a firm with total asset value $V$ financed by equity $E$ and zero-coupon debt with face value $D$ maturing at time $T$. At maturity, the firm's assets must be distributed between debt holders and equity holders according to the priority of claims established by corporate law. Debt holders have senior claims and must be paid first. Only after debt is fully satisfied do equity holders receive any value.

This priority structure means that at maturity, the payoff to equityholders is:

$$E_T = \max(V_T - D, 0)$$

where:

  • $E_T$: equity value at maturity
  • $V_T$: total firm asset value at maturity
  • $D$: face value of debt (the strike price of the option)
  • $T$: time to maturity in years
  • $\max(V_T - D, 0)$: equityholders receive the residual after paying debt, or nothing if assets are insufficient

This payoff structure is precisely that of a European call option on the firm's assets with strike price $D$. The analogy reveals deep insights about how equity and debt behave. Equity holders enjoy limited downside. They can walk away with nothing if the firm fails without being forced to inject additional capital. At the same time, equity holders have unlimited upside potential if assets appreciate significantly above the debt level. This asymmetry, the ability to participate in gains while limiting losses, is the defining characteristic of an option.

The debtholders' position is the complement of equity: at maturity, debtholders receive

$$B_T = \min(V_T, D) = D - \max(D - V_T, 0)$$

where:

  • $B_T$: debt value at maturity
  • $V_T$: total firm asset value at maturity
  • $D$: face value of debt
  • $\min(V_T, D)$: debtholders receive the lesser of asset value or promised payment
  • $\max(D - V_T, 0)$: payoff of a put option on assets struck at $D$ (represents credit risk)
  • $D - \max(D - V_T, 0)$: decomposition showing debt equals risk-free bond minus a put option

The second form of this equation provides a crucial decomposition. The debt payoff equals the face value minus a put option on assets struck at $D$. This decomposition reveals that bondholders are effectively short a put option: the put option represents the credit risk embedded in corporate debt. When a firm defaults (assets fall below debt), bondholders suffer losses exactly equal to the intrinsic value of this embedded put. This insight transforms credit risk analysis into option pricing, allowing us to apply the tools developed for derivatives valuation.
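To make the decomposition concrete, here is a minimal sketch that evaluates both payoffs state by state and checks the identity $B_T = D - \max(D - V_T, 0)$. The debt face value of 80 matches the figure below; the grid of terminal asset values is chosen purely for illustration.

import numpy as np

# Terminal asset values spanning default and solvency scenarios (illustrative)
V_T = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
D = 80.0  # debt face value, matching the figure below

equity_payoff = np.maximum(V_T - D, 0)  # call option payoff
put_payoff = np.maximum(D - V_T, 0)     # embedded put (credit risk component)
debt_payoff = np.minimum(V_T, D)        # what creditors actually receive

# The decomposition B_T = D - max(D - V_T, 0) holds state by state
assert np.allclose(debt_payoff, D - put_payoff)

for v, e, b in zip(V_T, equity_payoff, debt_payoff):
    print(f"V_T = {v:6.1f} | equity = {e:6.1f} | debt = {b:6.1f}")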

Out[3]:
Visualization
Line chart showing equity and debt payoffs as functions of firm asset value at maturity.
Equity and debt payoffs at maturity versus firm asset value under the Merton structural model with debt face value D=80. Equity exhibits a call option payoff structure, starting at zero when assets fall below 80 and increasing linearly above, while debt payoff rises linearly with asset value below 80 then flattens once assets exceed the debt face value. The red shaded region shows the default zone where asset value falls below debt obligations, causing equity to expire worthless while creditors absorb losses. These payoff diagrams reveal that equity is a call option on firm assets struck at debt face value, and risky debt equals risk-free debt minus an embedded put option representing the credit risk component.

The shaded region in the figure shows where default occurs. When the terminal asset value falls below the debt face value, equity holders receive nothing, as their call option finishes out of the money. Meanwhile, debtholders recover only $V_T$ instead of the promised $D$ and absorb losses proportional to the shortfall. The visual representation makes clear why leverage matters: as debt increases relative to assets, the default region expands, and the probability of equity finishing worthless increases correspondingly.

The Merton Model Derivation

Now we develop the mathematical framework for valuing equity and debt while extracting default probabilities. The key modeling assumption is that asset value follows geometric Brownian motion under the physical measure:

$$dV_t = \mu V_t\, dt + \sigma_V V_t\, dW_t$$

where:

  • $V_t$: firm asset value at time $t$
  • $\mu$: expected asset return (drift rate under the physical measure)
  • $\sigma_V$: asset volatility, the standard deviation of asset returns
  • $W_t$: standard Brownian motion
  • $dW_t$: infinitesimal Brownian increment
  • $dt$: infinitesimal time increment
  • $dV_t$: infinitesimal change in asset value over time $dt$

This stochastic differential equation describes asset dynamics with two components. The deterministic drift term $\mu V_t\, dt$ captures the expected appreciation of assets over time, reflecting the firm's earning power and investment returns. The stochastic term $\sigma_V V_t\, dW_t$ introduces randomness that scales with the current asset level, ensuring percentage changes in asset value have constant volatility regardless of the firm's size.

As we derived in Part III when developing Black-Scholes, this stochastic differential equation implies that log asset returns are normally distributed, which means the terminal asset value has a lognormal distribution. Applying Itô's lemma to $\ln V_t$ and integrating over the interval from 0 to $T$ yields:

$$\ln V_T = \ln V_0 + \left(\mu - \frac{\sigma_V^2}{2}\right)T + \sigma_V \sqrt{T}\, Z$$

where:

  • $V_T$: asset value at maturity time $T$
  • $V_0$: current asset value
  • $\ln V_T$ and $\ln V_0$: natural logarithms of asset values
  • $\mu$: expected asset return (physical drift)
  • $\sigma_V$: asset volatility
  • $T$: time to maturity in years
  • $Z$: standard normal random variable, $Z \sim N(0,1)$
  • $\mu - \frac{\sigma_V^2}{2}$: drift adjustment accounting for Itô's lemma, the volatility drag that reduces geometric growth
  • $\sigma_V \sqrt{T}$: total volatility (standard deviation) accumulated over period $T$
  • The formula shows that log asset returns are normally distributed with mean $(\mu - \sigma_V^2/2)T$ and variance $\sigma_V^2 T$

The drift adjustment term $\mu - \frac{\sigma_V^2}{2}$ is important to understand. This correction arises from Itô's lemma and reflects the difference between arithmetic and geometric average returns. When returns are volatile, the geometric mean (which determines wealth accumulation) falls below the arithmetic mean by approximately half the variance, a phenomenon sometimes called volatility drag. This explains why highly volatile assets can have high expected returns yet still disappoint long-term investors.
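A quick simulation makes the volatility drag tangible. The sketch below, with purely illustrative parameters, simulates terminal log growth under geometric Brownian motion and compares the average log (geometric) growth, which is reduced by $\sigma_V^2/2$, against the average simple (arithmetic) growth.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, T, n = 0.10, 0.25, 1.0, 100_000  # illustrative parameters

# Terminal log growth of GBM: ln(V_T / V_0) = (mu - sigma^2/2) T + sigma sqrt(T) Z
log_growth = (mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * rng.standard_normal(n)

print(f"Mean log (geometric) growth:     {log_growth.mean():.4f}")  # near mu - sigma^2/2 = 0.0688
print(f"Mean simple (arithmetic) growth: {np.expm1(log_growth).mean():.4f}")  # near e^mu - 1 = 0.1052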

Physical vs. Risk-Neutral Default Probability

The Merton model produces two default probabilities depending on the measure used. The physical default probability uses the actual asset drift $\mu$ and represents real-world default likelihood. The risk-neutral default probability uses the risk-free rate $r$ for pricing credit derivatives. The two differ by the market price of risk.

Now we can derive the default probability by working through the mathematical condition for default. Default occurs when $V_T < D$, meaning assets at maturity are insufficient to cover debt obligations. We derive the default condition by starting from this inequality and substituting the lognormal distribution:

$$
\begin{aligned}
V_T &< D && \text{(default condition)} \\
\ln V_T &< \ln D && \text{(take logarithms)} \\
\ln V_0 + \left(\mu - \frac{\sigma_V^2}{2}\right)T + \sigma_V \sqrt{T}\, Z &< \ln D && \text{(substitute lognormal distribution)} \\
\sigma_V \sqrt{T}\, Z &< \ln D - \ln V_0 - \left(\mu - \frac{\sigma_V^2}{2}\right)T && \text{(isolate terms with } Z \text{)} \\
Z &< \frac{\ln(D/V_0) - \left(\mu - \frac{\sigma_V^2}{2}\right)T}{\sigma_V \sqrt{T}} && \text{(divide both sides by } \sigma_V \sqrt{T} \text{)}
\end{aligned}
$$

The derivation proceeds by first taking the logarithm of both sides, which is valid since both $V_T$ and $D$ are positive. Next, we substitute the expression for $\ln V_T$ from our lognormal distribution result. Rearranging to isolate the random component $Z$ on one side yields a threshold that $Z$ must fall below for default to occur. Since $Z$ is standard normal, the probability that default occurs is the probability that $Z$ falls below this threshold. Defining the distance to default $d_2$ as:

$$d_2 = \frac{\ln(V_0/D) + \left(\mu - \frac{\sigma_V^2}{2}\right)T}{\sigma_V \sqrt{T}}$$

The physical default probability is:

$$P(\text{default}) = \Phi(-d_2)$$

where:

  • $d_2$: distance to default measured in standard deviations, indicating how many standard deviations the asset value is above the default threshold
  • $V_0$: current asset value
  • $D$: debt face value (default barrier)
  • $\ln(V_0/D)$: log moneyness, measuring the relative position of assets versus debt
  • $\mu - \frac{\sigma_V^2}{2}$: risk-adjusted drift
  • $T$: time to maturity
  • $\sigma_V \sqrt{T}$: total volatility over the horizon
  • $\Phi(\cdot)$: cumulative distribution function of the standard normal distribution
  • The sign flip from $\ln(D/V_0)$ to $\ln(V_0/D)$ accounts for taking the probability of the complementary event

The quantity $d_2$ is called the distance to default because it measures, in standard deviations, how far the expected log asset value sits above the log default threshold. A higher $d_2$ means the firm is further from default: a larger adverse shock is required to trigger insolvency. A firm with $d_2 = 3$ would need assets to fall by more than three standard deviations to default, an event with very low probability under the normal distribution.
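As a quick numerical check, the short sketch below computes $d_2$ and $\Phi(-d_2)$ directly. The parameters anticipate the worked example implemented later in this section (asset value 100, debt 70, 10% drift, 25% volatility, one-year horizon).

import numpy as np
from scipy.stats import norm

# Parameters anticipate the example implemented below
V0, D, mu, sigma_V, T = 100.0, 70.0, 0.10, 0.25, 1.0

d2 = (np.log(V0 / D) + (mu - 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
print(f"Distance to default: {d2:.3f} std devs")             # ~1.702
print(f"Physical default probability: {norm.cdf(-d2):.2%}")  # ~4.44%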

Valuing Equity and Debt

Having established the default probability framework, we now turn to valuation. Using the Black-Scholes framework from Part III Chapter 6, we can value equity as a European call option on the firm's assets. Since equity holders receive $\max(V_T - D, 0)$ at maturity, and the underlying asset $V$ follows geometric Brownian motion, the current equity value equals the call option value:

$$E_0 = V_0 \Phi(d_1) - D e^{-rT} \Phi(d_2^Q)$$

where:

  • $E_0$: current equity value
  • $V_0$: current asset value
  • $D$: debt face value (strike price)
  • $r$: risk-free rate
  • $T$: time to maturity
  • $\Phi(\cdot)$: cumulative standard normal distribution
  • $d_1$ and $d_2^Q$: risk-neutral distance to default parameters (defined below)
  • $V_0 \Phi(d_1)$: present value of the asset value in scenarios where the option is exercised, weighted by delta
  • $D e^{-rT} \Phi(d_2^Q)$: present value of the strike price payment, weighted by risk-neutral probability of exercise
  • This is the standard Black-Scholes call option formula applied to equity as a call on firm assets

This formula has a clear economic interpretation: the first term represents the expected asset value received by equity holders in favorable scenarios, discounted and weighted by the hedge ratio. The second term represents the present value of the debt payment, weighted by the probability that equity finishes in the money. The difference captures the value of equity's limited liability and upside potential.

The parameters $d_1$ and $d_2^Q$ are given by:

$$d_1 = \frac{\ln(V_0/D) + \left(r + \frac{\sigma_V^2}{2}\right)T}{\sigma_V \sqrt{T}}, \qquad d_2^Q = d_1 - \sigma_V \sqrt{T}$$

where:

  • $V_0$: current asset value
  • $D$: debt face value
  • $r$: risk-free rate (replaces $\mu$ in the risk-neutral measure)
  • $\sigma_V$: asset volatility
  • $T$: time to maturity
  • $d_1$: measures moneyness adjusted for volatility and time
  • $d_2^Q$: risk-neutral distance to default
  • Superscript $Q$ indicates the risk-neutral (pricing) measure
  • $\Phi(d_2^Q)$ represents the risk-neutral probability that equity finishes in-the-money

The superscript $Q$ indicates risk-neutral parameters, distinguishing them from the physical measure quantities: the key difference is that risk-neutral valuation replaces the physical drift $\mu$ with the risk-free rate $r$, reflecting the fundamental theorem of asset pricing. Debt value can be decomposed as risk-free debt minus the embedded put option:

$$B_0 = D e^{-rT} - P_0 = D e^{-rT}\Phi(d_2^Q) + V_0 \Phi(-d_1)$$

where:

  • $B_0$: current value of risky debt
  • $D$: debt face value
  • $r$: risk-free rate
  • $T$: time to maturity
  • $V_0$: current asset value
  • $P_0$: value of the put option on assets (credit risk component)
  • $\Phi(\cdot)$: cumulative standard normal distribution
  • $d_1, d_2^Q$: risk-neutral distance to default parameters
  • $D e^{-rT}$: present value of the debt face value (risk-free component)
  • $D e^{-rT}\Phi(d_2^Q)$: present value of debt payment in non-default scenarios
  • $V_0 \Phi(-d_1)$: present value of asset recovery in default scenarios
  • $\Phi(-d_1)$ and $\Phi(-d_2^Q)$: probabilities associated with default outcomes

This decomposition reveals the fundamental nature of credit risk. Corporate debt can be valued as risk-free debt minus a put option, where the put option represents the creditors' exposure to default. When assets fall below the debt face value, creditors receive only the asset value rather than the full promised payment. The difference between risk-free and risky debt value represents the credit spread that compensates lenders for bearing default risk. This spread increases with leverage, volatility, and time to maturity, exactly as the option framework predicts.

Implementation: The Merton Model

Let's implement the complete Merton model with both physical and risk-neutral default probabilities:

In[4]:
Code
import numpy as np
from scipy.stats import norm


class MertonModel:
    """
    Merton structural model for credit risk.

    Parameters
    ----------
    V0 : float
        Current asset value
    D : float
        Face value of debt (default barrier)
    T : float
        Time to maturity in years
    sigma_V : float
        Asset volatility
    r : float
        Risk-free rate
    mu : float
        Expected asset return (physical measure)
    """

    def __init__(self, V0, D, T, sigma_V, r, mu=None):
        self.V0 = V0
        self.D = D
        self.T = T
        self.sigma_V = sigma_V
        self.r = r
        self.mu = mu if mu is not None else r  # Defaults to risk-neutral

    def _d1_d2(self, drift):
        """Calculate d1 and d2 for given drift rate."""
        d1 = (
            np.log(self.V0 / self.D) + (drift + 0.5 * self.sigma_V**2) * self.T
        ) / (self.sigma_V * np.sqrt(self.T))
        d2 = d1 - self.sigma_V * np.sqrt(self.T)
        return d1, d2

    def physical_default_prob(self):
        """Physical (real-world) probability of default."""
        _, d2 = self._d1_d2(self.mu)
        return norm.cdf(-d2)

    def risk_neutral_default_prob(self):
        """Risk-neutral probability of default."""
        _, d2 = self._d1_d2(self.r)
        return norm.cdf(-d2)

    def distance_to_default(self):
        """Distance to default in standard deviations (physical measure)."""
        _, d2 = self._d1_d2(self.mu)
        return d2

    def equity_value(self):
        """Black-Scholes equity value (call on assets)."""
        d1, d2 = self._d1_d2(self.r)
        return self.V0 * norm.cdf(d1) - self.D * np.exp(
            -self.r * self.T
        ) * norm.cdf(d2)

    def debt_value(self):
        """Value of risky debt."""
        d1, d2 = self._d1_d2(self.r)
        # Debt = Risk-free debt - Put option on assets
        return self.D * np.exp(-self.r * self.T) * norm.cdf(
            d2
        ) + self.V0 * norm.cdf(-d1)

    def credit_spread(self):
        """Credit spread over risk-free rate (annualized)."""
        B = self.debt_value()
        # Yield on risky debt: -ln(B/D)/T
        risky_yield = -np.log(B / self.D) / self.T
        return risky_yield - self.r


## Example: A firm with moderate leverage
model = MertonModel(
    V0=100,  # Asset value
    D=70,  # Debt face value (70% leverage)
    T=1,  # 1-year horizon
    sigma_V=0.25,  # 25% asset volatility
    r=0.05,  # 5% risk-free rate
    mu=0.10,  # 10% expected asset return
)
Out[5]:
Console
Merton Model Analysis
========================================
Asset Value:          $100.00
Debt Face Value:      $70.00
Leverage Ratio:       70.0%
Asset Volatility:     25.0%

Risk Metrics:
  Distance to Default:    1.702 std devs
  Physical Default Prob:  4.4406%
  Risk-Neutral PD:        6.6587%

Valuation:
  Equity Value:           $33.86
  Debt Value:             $66.14
  Total Firm Value:       $100.00
  Credit Spread:          66.7 bps

The distance to default of 1.70 standard deviations indicates the firm has moderate default risk. A higher distance indicates lower credit risk, as larger adverse shocks are required to trigger default. The risk-neutral default probability exceeds the physical probability because the risk-neutral drift (the risk-free rate) is lower than the physical drift (the expected asset return), making downward paths more likely under the pricing measure. This difference reflects the market price of risk. The credit spread compensates lenders for bearing this default risk.

Out[6]:
Visualization
Line chart showing default probability declining from near 50% at distance to default of 0 to near 0% at distance of 4 standard deviations.
Relationship between distance to default and default probability in the Merton framework. The S-shaped curve shows default probability declining from 50% at distance zero to near 0% at four standard deviations. Three risk zones partition the credit spectrum: high risk (DD < 1.5, PD > 7%), moderate risk (1.5 < DD < 2.5, 0.6% < PD < 7%), and low risk (DD > 2.5, PD < 0.6%). The example firm at DD = 1.70 corresponds to roughly 4.4% default probability, illustrating how distance to default metrics apply in credit analysis and regulatory frameworks.

The figure illustrates how distance to default maps to default probability through the cumulative normal distribution. Firms with distance to default below 1.5 standard deviations face elevated default risk exceeding 7%, placing them in the high-risk zone. The moderate risk zone spans 1.5 to 2.5 standard deviations, with default probabilities between 0.6% and 7%. Firms with distance to default above 2.5 standard deviations are considered low risk, with default probability below 0.6%. Our example firm, with distance to default of 1.70, sits within the moderate risk zone.

Key Parameters

These key parameters drive model behavior and credit risk dynamics.

  • V0: Current firm asset value. This represents the total value of the firm's assets, including both tangible and intangible assets. Higher asset values relative to debt reduce default probability by increasing the buffer between current value and the default barrier.
  • D: Face value of debt (the default barrier). This parameter determines the threshold that assets must exceed to avoid default. Higher debt increases leverage and default risk by raising the barrier that must be cleared.
  • T: Time to maturity in years. Longer horizons increase uncertainty about future asset values and option value. The relationship with default probability is nuanced. Longer time allows both more opportunity for assets to appreciate and more opportunity for adverse shocks.
  • sigma_V: Asset volatility. Higher volatility increases both equity value (through increased option value) and default probability (through increased probability of large adverse movements). This dual effect explains why equity holders may prefer higher volatility while debt holders prefer lower volatility.
  • r: Risk-free rate. Used in the risk-neutral pricing measure for valuation purposes.
  • mu: Expected asset return under the physical measure. Used for physical default probability calculations to assess real-world default risk.

Sensitivity Analysis

Credit risk depends critically on leverage and asset volatility. Let's examine how default probability varies with these parameters to build intuition for the model's behavior:
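The code behind the figure is not reproduced here, but a minimal sketch of the underlying computation might look like the following. The drift and horizon are assumptions carried over from the earlier example, so the exact values in the figure may reflect slightly different settings.

import numpy as np
from scipy.stats import norm

# Assumed to match the earlier example: 10% drift, 1-year horizon
V0, mu, T = 100.0, 0.10, 1.0

for sigma_V in (0.15, 0.25, 0.35):
    for lev in (0.4, 0.6, 0.8):
        D = lev * V0  # debt face value implied by the leverage ratio
        d2 = (np.log(V0 / D) + (mu - 0.5 * sigma_V**2) * T) / (
            sigma_V * np.sqrt(T)
        )
        print(f"vol={sigma_V:.0%}  leverage={lev:.0%}  PD={norm.cdf(-d2):.2%}")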

Out[8]:
Visualization
Line chart showing default probability versus leverage ratio for three volatility levels.
Physical default probability versus leverage ratio for three asset volatility levels (15%, 25%, 35%) under the Merton model with one-year horizon. Default probability increases nonlinearly with leverage. At 60% leverage, the model predicts 2% default probability at low volatility versus 8% at high volatility. Beyond 70% leverage, default probability accelerates dramatically (reaching 40% at 80% leverage for high-volatility scenarios), demonstrating why credit analysts consider both leverage and volatility critical to risk assessment.

The nonlinear relationship between leverage and default probability reflects the option-like nature of credit risk. At low leverage (30-40%), even large changes in the leverage ratio have minimal impact on default probability, remaining below 5% even at 25% volatility. This corresponds to deeply in-the-money equity options where the probability of finishing out of the money is negligible. As leverage increases beyond 60%, credit risk rises exponentially. This acceleration occurs because the equity option moves from deep in-the-money toward at-the-money, where option values are most sensitive to changes in moneyness. At 80% leverage with 35% volatility, default probability reaches nearly 40%, consistent with the behavior of deep out-of-the-money options becoming at-the-money. The interaction between leverage and volatility is also apparent: at low leverage, volatility has modest effects, but at high leverage, the difference between 15% and 35% volatility represents the difference between manageable and severe default risk.

Estimating Unobservable Asset Value and Volatility

The Merton model's key practical challenge is that asset value $V$ and asset volatility $\sigma_V$ are not directly observable. While equity prices trade on public exchanges, a firm's total asset value includes intangible assets, growth options, and synergies that cannot be directly measured. We observe equity value $E$ (market capitalization) and equity volatility $\sigma_E$ from option prices or historical returns. We invert the model to recover asset parameters from these observables.

Applying Itô's lemma (as we did when deriving the Black-Scholes PDE in Part III), we can derive how equity volatility relates to asset volatility through the chain rule of stochastic calculus:

$$\sigma_E = \frac{\partial E}{\partial V} \cdot \frac{V}{E} \cdot \sigma_V = \frac{\Phi(d_1)\, V}{E}\, \sigma_V$$

where:

  • $\sigma_E$: equity volatility (observable from market prices)
  • $\sigma_V$: asset volatility (unobservable, must be estimated)
  • $E$: equity value
  • $V$: asset value
  • $\frac{\partial E}{\partial V}$: delta of the equity call option, equal to $\Phi(d_1)$
  • $\frac{V}{E}$: leverage ratio in the option context
  • $\Phi(d_1)$: the option delta, measuring sensitivity of equity value to asset value changes
  • The product $\frac{\Phi(d_1) V}{E}$ represents the leverage multiplier: equity volatility exceeds asset volatility because equity is a levered claim

This relationship captures the leverage effect in credit markets. Since equity represents a call option on assets, small percentage changes in asset value are magnified into larger percentage changes in equity value. The leverage multiplier $\frac{\Phi(d_1) V}{E}$ exceeds one for leveraged firms, explaining why equity volatility typically exceeds asset volatility. A highly leveraged firm near default will have extremely high equity volatility even if asset volatility is moderate, because small asset fluctuations translate into large percentage changes in the thin equity cushion.
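Running the relationship forward makes the multiplier concrete. This sketch reuses the firm from the earlier Merton example (assets 100, debt 70, 25% asset volatility, assumed here) and computes the equity volatility implied by the delta and leverage terms.

import numpy as np
from scipy.stats import norm

# Firm from the earlier Merton example (assumed parameters)
V0, D, r, sigma_V, T = 100.0, 70.0, 0.05, 0.25, 1.0

d1 = (np.log(V0 / D) + (r + 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
d2 = d1 - sigma_V * np.sqrt(T)
E = V0 * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2)  # Black-Scholes equity value

multiplier = norm.cdf(d1) * V0 / E
print(f"Equity value:        {E:.2f}")          # ~33.86, matching the earlier output
print(f"Leverage multiplier: {multiplier:.2f}") # ~2.8x
print(f"Implied equity vol:  {multiplier * sigma_V:.1%}")  # ~71%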

This gives us two equations to work with:

  1. Equity pricing: $E = V\Phi(d_1) - De^{-rT}\Phi(d_2)$
  2. Volatility relationship: $\sigma_E E = \Phi(d_1) V \sigma_V$

Both equations involve the same unknowns ($V$ and $\sigma_V$), and both can be expressed using market observables ($E$, $\sigma_E$, $D$, $r$, $T$). We solve this system numerically:

In[9]:
Code
import numpy as np
from scipy.stats import norm
from scipy.optimize import fsolve


def estimate_merton_params(E, sigma_E, D, T, r):
    """
    Estimate asset value and volatility from equity market data.

    Parameters
    ----------
    E : float
        Market value of equity
    sigma_E : float
        Equity volatility
    D : float
        Face value of debt
    T : float
        Time to maturity
    r : float
        Risk-free rate

    Returns
    -------
    V : float
        Estimated asset value
    sigma_V : float
        Estimated asset volatility
    """

    def equations(params):
        V, sigma_V = params
        d1 = (np.log(V / D) + (r + 0.5 * sigma_V**2) * T) / (
            sigma_V * np.sqrt(T)
        )
        d2 = d1 - sigma_V * np.sqrt(T)

        # Equation 1: Equity pricing equation
        eq1 = V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2) - E

        # Equation 2: Volatility relationship
        eq2 = norm.cdf(d1) * V * sigma_V - sigma_E * E

        return [eq1, eq2]

    # Initial guess: V = E + D, sigma_V = sigma_E
    initial_guess = [E + D * np.exp(-r * T), sigma_E]
    V, sigma_V = fsolve(equations, initial_guess)

    return V, sigma_V


## Example: Estimate from market data
E_market = 35  # Market cap
sigma_E_market = 0.50  # Equity volatility (50%)
D_face = 70  # Debt face value
T_horizon = 1  # 1 year
r_free = 0.05  # Risk-free rate

V_est, sigma_V_est = estimate_merton_params(
    E_market, sigma_E_market, D_face, T_horizon, r_free
)
Out[10]:
Console
Merton Model Parameter Estimation
========================================
Market Observables:
  Equity Value:       $35.00
  Equity Volatility:  50%
  Debt Face Value:    $70.00

Estimated Parameters:
  Asset Value:        $101.55
  Asset Volatility:   17.3%
  Leverage (D/V):     68.9%

Model Outputs:
  Distance to Default: 2.637
  Physical PD:         0.42%
  Credit Spread:       5 bps

The estimation successfully backs out the unobservable asset parameters from market data. The estimated asset value of approximately $101.55 million exceeds the $35 million equity value by approximately the present value of debt, consistent with the identity that assets equal equity plus debt. The estimated asset volatility of 17.3% is roughly a third of the 50% equity volatility, reflecting the leveraging effect where debt acts as a fixed obligation that amplifies equity movements. Think of a seesaw: if total assets move by 1%, equity (the smaller end) must move by proportionally more to maintain balance. The resulting 68.9% leverage ratio and distance to default of about 2.6 standard deviations indicate relatively low credit risk, translating to a physical default probability of 0.42% and a credit spread of roughly 5 basis points.

Out[11]:
Visualization
Equity volatility increases nonlinearly with leverage ratio (D/V) for fixed asset volatility of 25%, accelerating sharply above 60% leverage to reach nearly 100% at 80% leverage. The steep acceleration reflects how equity's compressed cushion magnifies small percentage asset movements into large percentage equity swings. Accurate asset parameter estimation requires accounting for this leverage multiplier effect, as ignoring it substantially overestimates unlevered asset volatility.

The leverage effect visualization demonstrates why equity volatility systematically exceeds asset volatility for leveraged firms. At 30% leverage, equity volatility is only modestly above asset volatility because the equity cushion is substantial. As leverage increases, the equity cushion shrinks, and any percentage change in asset value translates into a proportionally larger percentage change in equity value. At 80% leverage, equity volatility climbs to several times asset volatility, approaching 100% against the assumed 25% asset volatility. This relationship has important implications for credit risk assessment: when estimating asset volatility from equity volatility, failing to account for the leverage effect would substantially overestimate asset risk.

Reduced-Form Models: Intensity-Based Approach

Reduced-form models take a fundamentally different approach from structural models. Rather than deriving default from balance sheet dynamics and capital structure, they model default as a random event governed by a hazard rate (or default intensity). Default occurs exogenously, without explicit modeling of economic mechanisms. Instead of asking why the firm defaults, reduced-form models focus on when: estimating the instantaneous default probability at any moment given that the firm has survived to that time.

This approach offers several practical advantages. First, it's easier to calibrate to market prices since the hazard rate connects directly to observable credit spreads. Second, the framework handles multiple credit events naturally (including rating downgrades and restructurings), not just binary default. Third, it doesn't require unobservable asset values, making implementation more straightforward.

The Hazard Rate Framework

The mathematical foundation of reduced-form models rests on the concept of a hazard rate, which originates from survival analysis in statistics and actuarial science. The default time $\tau$ is the first time a default-triggering event occurs. The survival probability to time $t$ is the likelihood that the obligor remains solvent through time $t$:

$$S(t) = P(\tau > t)$$

where:

  • $S(t)$: probability of surviving (not defaulting) until time $t$
  • $\tau$: random default time
  • $P(\tau > t)$: probability that default occurs after time $t$

The survival probability function captures the declining likelihood of remaining solvent as time progresses. It starts at $S(0) = 1$ (the firm is solvent today) and decreases toward zero as the time horizon extends (eventually all firms default or cease to exist). The rate at which this probability declines at any moment is governed by the hazard rate.

Hazard Rate

The hazard rate (or default intensity) $\lambda(t)$ represents the instantaneous probability of default given survival to time $t$:

$$\lambda(t) = \lim_{\Delta t \to 0} \frac{P(t < \tau \leq t + \Delta t \mid \tau > t)}{\Delta t}$$

where:

  • $\lambda(t)$: hazard rate (default intensity) at time $t$
  • $\tau$: random default time
  • $t$: current time
  • $\Delta t$: small time increment
  • $P(t < \tau \leq t + \Delta t \mid \tau > t)$: conditional probability of defaulting in the interval $(t, t+\Delta t]$, given survival to time $t$
  • The limit as $\Delta t \to 0$ gives the instantaneous default rate

The hazard rate is the conditional rate of default for a firm that has survived until time $t$.

The hazard rate has an intuitive interpretation as the instantaneous default probability per unit time: if the hazard rate is 5% per year, it means that at any moment, a firm that has survived faces approximately a 5% chance of defaulting over the next year. The conditioning on survival is crucial: we only ask about default probability for firms that haven't already defaulted.

The survival probability relates to the hazard rate through the exponential of the cumulative hazard. To see why, note that the survival probability must satisfy $\frac{dS(t)}{dt} = -\lambda(t) S(t)$ because the rate of decrease in survival probability equals the hazard rate times the probability of having survived so far. This differential equation simply states that the rate at which firms leave the surviving population equals the hazard rate times the population size. Solving this differential equation with initial condition $S(0) = 1$ gives:

$$S(t) = \exp\left(-\int_0^t \lambda(s)\, ds\right)$$

where:

  • $S(t)$: survival probability to time $t$
  • $\lambda(s)$: hazard rate (default intensity) at time $s$
  • $s$: integration variable representing time from 0 to $t$
  • $\int_0^t \lambda(s)\, ds$: cumulative hazard, the total default risk accumulated from time 0 to $t$
  • $\exp(\cdot)$: exponential function
  • The exponential function converts cumulative hazard to survival probability through the relationship $S(t) = e^{-\text{cumulative hazard}}$
  • This relationship follows from solving the differential equation $\frac{dS(t)}{dt} = -\lambda(t) S(t)$ with initial condition $S(0) = 1$

The integral $\int_0^t \lambda(s)\, ds$ represents the cumulative hazard, capturing the total default risk accumulated from time 0 to time $t$. The exponential transformation converts this accumulated risk into a survival probability. Higher cumulative hazard means lower survival probability, with the exponential ensuring the probability remains between 0 and 1.

For a constant hazard rate $\lambda$, this simplifies to:

$$S(t) = e^{-\lambda t}$$

where the integral becomes simply $\lambda t$, giving exponential decay in survival probability. This exponential decay is the hallmark of the constant hazard model: the survival probability decreases at a rate proportional to itself.

The cumulative default probability is:

$$P(\tau \leq t) = 1 - S(t) = 1 - e^{-\lambda t}$$

where:

  • $P(\tau \leq t)$: probability of defaulting by time $t$
  • $\tau$: random default time
  • $S(t) = e^{-\lambda t}$: survival probability with constant hazard
  • $1 - S(t)$: complement gives cumulative default probability

Connection to Poisson Processes

The constant hazard rate model implies default arrives as a Poisson process with intensity $\lambda$. This connection provides deep insight into reduced-form model mechanics. The Poisson process describes events that occur randomly over time with a constant average rate, and the exponential distribution $P(\tau \leq t) = 1 - e^{-\lambda t}$ characterizes the waiting time until the first such event.

The key property of the Poisson process is the memoryless property. Given that a firm has survived to time $t$, the probability distribution of the remaining time to default is the same as if we started fresh at time zero. This may seem counterintuitive (shouldn't older firms be closer to default?), but it captures the idea that conditional on survival, the firm faces the same ongoing default intensity. The waiting time until default follows an exponential distribution with mean $1/\lambda$, so a firm with a 2% annual hazard rate has an expected time to default of 50 years, though actual default could occur much sooner or later due to the high variance of exponential distributions.
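A short simulation, using an illustrative 2% hazard, confirms both the exponential waiting-time distribution and the memoryless property: survivors at year 10 face the same expected remaining time as a fresh firm.

import numpy as np

rng = np.random.default_rng(42)
lam = 0.02  # illustrative 2% annual hazard rate

# Constant hazard: default time is exponential with mean 1/lambda = 50 years
tau = rng.exponential(scale=1 / lam, size=100_000)

print(f"Mean time to default: {tau.mean():.1f} years (theory: {1 / lam:.0f})")
print(f"P(default within 10y): {(tau <= 10).mean():.2%} "
      f"(theory: {1 - np.exp(-lam * 10):.2%})")

# Memoryless property: remaining life of 10-year survivors looks like a fresh start
survivors = tau[tau > 10] - 10
print(f"Mean remaining time for 10y survivors: {survivors.mean():.1f} years")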

In[12]:
Code
import numpy as np


class ReducedFormModel:
    """
    Reduced-form credit risk model with hazard rates.
    """

    def __init__(self, hazard_rates, times):
        """
        Parameters
        ----------
        hazard_rates : array-like
            Hazard rates for each time period
        times : array-like
            Time points (in years)
        """
        self.hazard_rates = np.array(hazard_rates)
        self.times = np.array(times)

    def survival_probability(self, t):
        """Survival probability to time t."""
        if t <= 0:
            return 1.0

        # Integrate the piecewise-constant hazard rate from 0 to t,
        # walking through each period until we pass t
        integral = 0.0
        prev_t = 0.0
        for i in range(len(self.times)):
            if prev_t >= t:
                break
            dt = min(self.times[i], t) - prev_t
            integral += self.hazard_rates[i] * dt
            prev_t = self.times[i]

        # Extrapolate with the last hazard rate beyond the final grid point
        if t > self.times[-1]:
            integral += self.hazard_rates[-1] * (t - self.times[-1])

        return np.exp(-integral)

    def default_probability(self, t):
        """Cumulative default probability to time t."""
        return 1 - self.survival_probability(t)

    def marginal_default_probability(self, t1, t2):
        """Probability of default between t1 and t2."""
        return self.survival_probability(t1) - self.survival_probability(t2)
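The class can be exercised with an assumed upward-sloping hazard curve. The rates below are illustrative, chosen in the spirit of the 2%-to-4% term structure plotted in the figures that follow, not the exact calibration behind them.

import numpy as np

# Assumed piecewise-constant hazard curve rising from 2% to 4% over 10 years
times_grid = np.arange(1, 11)
hazards = np.linspace(0.02, 0.04, len(times_grid))

model_rf = ReducedFormModel(hazards, times_grid)
for t in (1, 5, 10):
    print(
        f"t={t:2d}y  survival={model_rf.survival_probability(t):.1%}  "
        f"default={model_rf.default_probability(t):.1%}"
    )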
Out[14]:
Visualization
Line chart showing survival probability declining from 100% to approximately 67% over 10 years.
Survival probability over a 10-year horizon under an upward-sloping hazard rate term structure rising from 2% to 4%. The curve declines from 100% to approximately 67% at maturity, reflecting cumulative default risk as hazard rates increase over time. This upward-sloping structure reflects the empirical pattern that longer-dated credits face higher default risk due to increased uncertainty.
Out[15]:
Visualization
Line chart showing cumulative default probability rising from 0% to approximately 33% over 10 years.
Cumulative default probability over a 10-year horizon under time-varying hazard rates rising from 2% to 4% annually. The curve increases from 0% to approximately 33% by year 10, reflecting accumulated default risk as hazard rates increase over time and enabling CDS pricing across maturities.

The survival probability declines from 100% to approximately 67% over 10 years, while cumulative default probability rises from 0% to 33%. The upward-sloping hazard rate curve, starting at 2% annually and rising to 4%, reflects the common empirical pattern that credit quality deteriorates over time as uncertainty about the firm's future increases. Short-term default is relatively unlikely because financial distress develops gradually, but as the horizon extends, more scenarios for adverse outcomes become possible. This term structure captures the market's expectation that default risk increases with the time horizon.

Calibrating to CDS Spreads

The primary advantage of reduced-form models is their ability to be calibrated directly to market data. As we discussed in Part II Chapter 13 on credit default swaps, CDS spreads reflect the market's assessment of default risk. The CDS market provides a direct window into market-implied default probabilities, making reduced-form models highly practical for trading and risk management applications.

For a CDS with spread $s$ and recovery rate $R$, no-arbitrage pricing requires that the present value of premium payments (premium leg) equals the present value of protection payments (protection leg). This equilibrium condition reflects the fact that a fairly priced CDS has zero initial value: neither party pays upfront because the expected benefits equal the expected costs. This equilibrium allows us to back out the implied hazard rate from observable CDS spreads.

The premium leg represents the periodic payments made by the protection buyer.

$$\text{Premium\_Leg} = s \sum_{i=1}^n \Delta t_i \cdot S(t_i) \cdot D(t_i)$$

where:

  • $s$: CDS spread (annualized premium rate paid by protection buyer)
  • $n$: number of premium payment periods
  • $i$: index for payment periods, running from 1 to $n$
  • $\Delta t_i$: length of payment period $i$ in years
  • $t_i$: time of the $i$-th payment
  • $S(t_i)$: survival probability to time $t_i$ (premium only paid if no default has occurred)
  • $D(t_i)$: discount factor to time $t_i$, equal to $e^{-r t_i}$ for continuous discounting
  • The sum captures the expected present value of all premium payments

The summation reflects that premiums are only paid if the reference entity has not defaulted. Each term multiplies the premium rate by the payment period length, the probability of survival to that date, and the appropriate discount factor. The survival probability weighting is crucial: if default occurs before payment date $t_i$, that premium is never paid.

$$\text{Protection\_Leg} = (1-R) \int_0^T \lambda(t) S(t) D(t)\, dt$$

where:

  • $R$: recovery rate (fraction of notional recovered upon default)
  • $(1-R)$: loss given default (fraction not recovered)
  • $T$: CDS maturity
  • $\lambda(t)$: hazard rate (default intensity) at time $t$
  • $S(t)$: survival probability to time $t$
  • $D(t)$: discount factor for payment at time $t$
  • $\lambda(t) S(t)\, dt$: probability of default occurring in the infinitesimal interval $[t, t+dt]$
  • The integral sums the expected present value of protection payments across all possible default times from 0 to $T$

where $D(t)$ is the discount factor to time $t$. The protection leg integral captures the expected payout to the protection buyer. Default can occur at any time between now and maturity, and when it does, the protection seller pays $(1-R)$ times the notional. The integrand $\lambda(t) S(t)$ represents the probability density of defaulting at exactly time $t$: the hazard rate times the probability of having survived to that point.

For a flat hazard rate $\lambda$, we can derive an approximate relationship between the CDS spread and the hazard rate under the assumptions that discount rates are small and survival probabilities do not vary too much over the CDS tenor:

$$
\begin{aligned}
\text{Premium\_Leg} &\approx s \sum_{i=1}^n \Delta t_i \cdot S(t_i) && \text{(sum of premium payments weighted by survival)} \\
&\approx s \cdot T \cdot \bar{S} && \text{(approximate as constant average survival } \bar{S} \text{)}
\end{aligned}
$$

Similarly, the protection leg becomes:

$$
\begin{aligned}
\text{Protection\_Leg} &\approx (1-R) \int_0^T \lambda S(t)\, dt && \text{(integral of loss payment hazard)} \\
&\approx (1-R)\, \lambda T \bar{S} && \text{(approximate with constant } \bar{S} \text{)}
\end{aligned}
$$

Setting these equal (no-arbitrage condition):

$$s \cdot T \cdot \bar{S} = (1-R)\, \lambda T \bar{S}$$

Canceling $T \bar{S}$ from both sides gives:

$$s \approx \lambda (1 - R)$$

where:

  • $s$: CDS spread (annualized premium rate)
  • $\lambda$: constant hazard rate (default intensity per year)
  • $R$: recovery rate
  • $(1-R)$: loss given default (LGD)
  • The approximation holds when discount rates and survival probabilities are relatively stable over the CDS tenor

This relationship shows that CDS spreads compensate for two components of credit risk: the probability of default ($\lambda$) and the severity of loss when default occurs ($1-R$). The approximation becomes increasingly accurate for shorter maturities and lower hazard rates where discounting effects are minimal.

Solving for the hazard rate provides a direct link between observable CDS spreads and default intensity:

$$\lambda \approx \frac{s}{1 - R}$$

where:

  • $\lambda$: implied hazard rate (annual default intensity)
  • $s$: observed CDS spread
  • $R$: assumed recovery rate
  • The formula shows that CDS spreads reflect both default risk ($\lambda$) and loss severity ($1-R$)

This simple inversion is remarkably powerful: it allows us to extract market-implied default intensities directly from CDS prices without complex modeling assumptions, making reduced-form models easy to calibrate. The hazard rate interpretation also helps explain why CDS spreads differ across credits: investment-grade companies have low spreads because their hazard rates are low, while distressed companies have high spreads reflecting elevated default intensity.

In[16]:
Code
def hazard_rate_from_cds(cds_spread, recovery_rate=0.4):
    """
    Estimate hazard rate from CDS spread.

    Parameters
    ----------
    cds_spread : float
        CDS spread in decimal (e.g., 0.02 for 200 bps)
    recovery_rate : float
        Expected recovery rate (default 40%)

    Returns
    -------
    float
        Estimated hazard rate
    """
    return cds_spread / (1 - recovery_rate)


def cds_spread_from_hazard(hazard_rate, recovery_rate=0.4):
    """Calculate CDS spread from hazard rate."""
    return hazard_rate * (1 - recovery_rate)


## Example: Convert CDS spreads to hazard rates
cds_spreads_bps = np.array([50, 100, 200, 500, 1000])  # Basis points
cds_spreads = cds_spreads_bps / 10000  # Convert to decimal
recovery = 0.40

hazard_rates_implied = [hazard_rate_from_cds(s, recovery) for s in cds_spreads]
annual_default_probs = [1 - np.exp(-h) for h in hazard_rates_implied]


In[17]:
Code

import pandas as pd
import numpy as np

survival_5y = [np.exp(-h * 5) * 100 for h in hazard_rates_implied]

df = pd.DataFrame(
    {
        'CDS Spread (bps)': cds_spreads_bps,
        'Hazard Rate (%)': np.array(hazard_rates_implied) * 100,
        '1-Year PD (%)': np.array(annual_default_probs) * 100,
        '5-Year Survival (%)': survival_5y,
    }
)
print("CDS Spread to Default Probability Mapping (Recovery = 40%)")
print("=" * 60)
print(df.to_string(index=False, float_format=lambda x: f'{x:.2f}'))

The table reveals the relationship between CDS spreads and default risk across the credit spectrum. A 200 basis point CDS spread implies a hazard rate of 3.33% and annual default probability of 3.27%, with 5-year survival probability of 84.5%. For investment-grade credits (50-100 bps spreads), the mapping is approximately linear because the exponential function is nearly linear for small arguments. However, for distressed credits with spreads of 1000 bps, the hazard rate reaches 16.67% annually, leading to only 43.35% probability of surviving 5 years. At these high spread levels, the approximation becomes less accurate due to the nonlinear relationship between hazard rates and survival probabilities, and practitioners often use more sophisticated models with term structure effects.
Out[18]:
Visualization
Line chart showing hazard rate increasing linearly with CDS spread, with three lines for different recovery rate assumptions.
Hazard rates implied by CDS spreads under three recovery rate scenarios (R=30%, 40%, 50%) showing the relationship λ = s/(1-R). At 500 basis points, hazard rates range from 7.1% (R=30%) to 10.0% (R=50%). This demonstrates how recovery assumptions substantially affect default probability estimates, underscoring the importance of accurate market-based or historical recovery rates in credit models.

The visualization reveals how recovery rate assumptions affect hazard rate inference. For a given CDS spread, a lower recovery rate implies a lower hazard rate because the expected loss per default is higher. At 500 bps, assuming 30% recovery yields an implied hazard rate of approximately 7.1%, while assuming 50% recovery yields approximately 10%. This sensitivity highlights the importance of accurate recovery rate assumptions when calibrating reduced-form models, as the same CDS spread can imply substantially different default probabilities depending on the assumed recovery.

Stochastic Intensity Models

For more realistic dynamics, the hazard rate itself can be modeled as a stochastic process. Rather than assuming a constant default intensity, we allow it to fluctuate over time in response to changing economic conditions. A firm's credit quality varies with business cycles, competitive pressures, management decisions, and countless other factors. Capturing this time variation requires modeling the hazard rate as a random process.

A common specification is the Cox-Ingersoll-Ross (CIR) process, which ensures the hazard rate remains positive and mean-reverts to a long-term level:

$$d\lambda_t = \kappa(\theta - \lambda_t)\, dt + \sigma\sqrt{\lambda_t}\, dW_t$$

where:

  • $\lambda_t$: hazard rate (default intensity) at time $t$
  • $d\lambda_t$: infinitesimal change in the hazard rate over time $dt$
  • $\kappa$: speed of mean reversion (how quickly $\lambda_t$ returns to its long-term level $\theta$)
  • $\theta$: long-term mean level of the hazard rate
  • $\sigma$: volatility parameter controlling the magnitude of random fluctuations
  • $W_t$: standard Brownian motion driving the random shocks
  • $dt$: infinitesimal time increment
  • $dW_t$: infinitesimal Brownian increment
  • $\kappa(\theta - \lambda_t)$: mean reversion term that pulls $\lambda_t$ toward $\theta$ when $\lambda_t < \theta$ and pushes it down when $\lambda_t > \theta$
  • $\sigma\sqrt{\lambda_t}$: square-root diffusion term that ensures the hazard rate remains non-negative (volatility scales with the level)

The CIR specification has several desirable properties that make it well-suited for modeling credit risk. The square-root diffusion $\sigma\sqrt{\lambda_t}$ ensures non-negativity: as $\lambda_t$ approaches zero, the volatility shrinks proportionally, making it mathematically impossible for the process to go negative. This is essential since a negative default probability has no economic meaning. The mean reversion term $\kappa(\theta - \lambda_t)$ pulls the hazard rate toward its long-term mean $\theta$, capturing the empirical observation that credit quality tends to revert over economic cycles. When $\lambda_t < \theta$, the drift is positive (credit quality deteriorates toward normal), and when $\lambda_t > \theta$, the drift is negative (credit quality improves from distressed levels). This mean-reverting behavior reflects the idea that extremely good or extremely bad credit conditions are temporary states that tend to normalize over time.

In[19]:
Code
import numpy as np


def simulate_cir_hazard(lambda0, kappa, theta, sigma, T, n_steps, n_paths):
    """
    Simulate CIR hazard rate paths.

    Parameters
    ----------
    lambda0 : float
        Initial hazard rate
    kappa : float
        Mean reversion speed
    theta : float
        Long-term mean hazard rate
    sigma : float
        Volatility of hazard rate
    T : float
        Time horizon
    n_steps : int
        Number of time steps
    n_paths : int
        Number of simulation paths

    Returns
    -------
    times : array
        Time grid
    paths : array
        Simulated hazard rate paths (n_steps+1, n_paths)
    """
    dt = T / n_steps
    times = np.linspace(0, T, n_steps + 1)
    paths = np.zeros((n_steps + 1, n_paths))
    paths[0] = lambda0

    np.random.seed(42)
    for i in range(n_steps):
        z = np.random.standard_normal(n_paths)
        # Euler discretization with reflection to keep positive
        paths[i + 1] = (
            paths[i]
            + kappa * (theta - paths[i]) * dt
            + sigma * np.sqrt(np.maximum(paths[i], 0)) * np.sqrt(dt) * z
        )
        paths[i + 1] = np.maximum(paths[i + 1], 0)  # Ensure non-negative

    return times, paths
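A run with parameters matching the figure caption below (initial hazard 3%, long-run mean 5%, reversion speed 0.5, volatility 0.1) illustrates the mean-reverting behavior; the path count here is an arbitrary illustrative choice.

# Parameters match the figure caption; path count is an assumption
times_cir, paths_cir = simulate_cir_hazard(
    lambda0=0.03, kappa=0.5, theta=0.05, sigma=0.10, T=5, n_steps=250, n_paths=1000
)

mean_path = paths_cir.mean(axis=1)
print(f"Mean hazard at t=0: {mean_path[0]:.2%}")   # starts at lambda0 = 3%
print(f"Mean hazard at t=5: {mean_path[-1]:.2%}")  # pulled toward theta = 5%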
Out[21]:
Visualization
Line chart showing 20 simulated hazard rate paths with mean path converging from 3% to 5% over 5 years.
Monte Carlo simulation of 100 hazard rate paths under the Cox-Ingersoll-Ross (CIR) model with parameters λ₀=3%, θ=5%, κ=0.5, σ=0.1 over five years. Individual paths disperse from near 0% to over 12%, while the blue mean path converges from initial 3% toward 5% equilibrium, reflecting mean reversion. The expanding fan-shaped spread and square-root diffusion structure (σ√λ) ensure non-negativity while capturing path uncertainty. This illustrates how stochastic intensity models capture cyclical mean reversion in credit spreads during business cycles.

The simulated paths start at 3% hazard rate and exhibit clear mean reversion toward the 5% long-term level specified by theta. Individual paths show substantial volatility, with hazard rates ranging from near 0% to over 12% across scenarios, while the mean path (blue line) converges steadily from the initial 3% toward 5% over the 5-year horizon. The mean reversion speed of 0.5 pulls paths back toward the long-term mean at a moderate rate, preventing extreme persistent deviations while allowing meaningful short-term fluctuations. The fan-like spread of paths illustrates the uncertainty inherent in future credit conditions. This stochastic intensity framework forms the basis for pricing credit derivatives with time-varying default risk, capturing the reality that credit spreads and default probabilities evolve randomly over time.

Credit Scoring and Machine Learning Approaches

While structural and reduced-form models derive from financial theory, credit scoring uses historical data to predict which borrowers will default based on observable characteristics. This approach dominates retail lending (mortgages, credit cards, personal loans) and increasingly complements market-based models for corporate credit. Rather than modeling economic mechanisms as structural approaches do, credit scoring identifies statistical patterns in observable characteristics that distinguish defaulters from non-defaulters.

Logistic Regression for Default Prediction

Logistic regression models the probability of default as a function of borrower characteristics. The model transforms a linear combination of features (financial ratios, firm characteristics) into a probability between 0 and 1 using the logistic function:

P(default)=11+e(β0+β1x1++βkxk)P(\text{default}) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k)}}

where:

  • P(default)P(\text{default}): predicted probability of default (bounded between 0 and 1)
  • kk: number of borrower characteristics (features) in the model
  • xix_i: borrower characteristic ii for i=1,,ki = 1, \ldots, k (e.g., debt-to-equity ratio, profitability, liquidity measures)
  • β0\beta_0: intercept term (baseline log-odds when all features are zero)
  • βi\beta_i: coefficient for feature xix_i, representing the change in log-odds per unit change in xix_i
  • ee: Euler's number (base of natural logarithm, approximately 2.71828)
  • β0+β1x1++βkxk\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k: linear combination forming the log-odds or credit score
  • The logistic function 11+ez\frac{1}{1+e^{-z}} transforms the unbounded log-odds zz into a probability between 0 and 1

The linear combination z=β0+β1x1++βkxkz = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k is called the log-odds or credit score. The logistic function, also known as the sigmoid function, transforms this potentially unbounded score into a proper probability.

The logistic function has several attractive properties for classification problems. It is monotonically increasing, so higher scores always correspond to higher default probability, enabling straightforward ranking of borrowers by risk. It asymptotically approaches 0 and 1 at the extremes, ensuring that predictions are always valid probabilities regardless of how extreme the input features become. It has a simple derivative that facilitates maximum likelihood estimation, making parameter fitting computationally tractable even for large datasets. The coefficients βi\beta_i have a clear interpretation: each coefficient represents the change in log-odds per unit change in the corresponding feature xix_i. A positive coefficient means higher values of that feature increase default probability, while a negative coefficient means higher values decrease default probability.

In[22]:
Code
import numpy as np

## Generate synthetic corporate credit data
np.random.seed(42)
n_firms = 2000

## Features (financial ratios)
debt_to_equity = np.random.exponential(1.5, n_firms)
current_ratio = np.random.lognormal(0.5, 0.4, n_firms)
interest_coverage = np.random.exponential(3, n_firms)
profit_margin = np.random.normal(0.08, 0.15, n_firms)
asset_turnover = np.random.lognormal(-0.3, 0.3, n_firms)

## Generate defaults based on financial ratios
log_odds = (
    -3
    + 0.5 * debt_to_equity
    - 0.8 * current_ratio
    - 0.2 * interest_coverage
    - 5 * profit_margin
    - 0.5 * asset_turnover
)
default_prob = 1 / (1 + np.exp(-log_odds))
defaults = np.random.binomial(1, default_prob)

## Create feature matrix
X = np.column_stack(
    [
        debt_to_equity,
        current_ratio,
        interest_coverage,
        profit_margin,
        asset_turnover,
    ]
)
feature_names = [
    "Debt/Equity",
    "Current Ratio",
    "Interest Coverage",
    "Profit Margin",
    "Asset Turnover",
]
y = defaults
In[23]:
Code
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

## Split data and train model
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

## Standardize features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

## Fit logistic regression
logreg = LogisticRegression(random_state=42)
logreg.fit(X_train_scaled, y_train)

## Predictions
y_pred_proba = logreg.predict_proba(X_test_scaled)[:, 1]

from sklearn.metrics import roc_auc_score

## Evaluate discriminatory power and inspect coefficients
print("Logistic Regression Credit Scoring Model")
print("=" * 50)
print(f"\nModel Performance:")
print(f"  AUC-ROC Score: {roc_auc_score(y_test, y_pred_proba):.4f}")
print(f"  Default Rate (Test Set): {y_test.mean():.1%}")
print()
print("Feature Coefficients (Standardized):")
for name, coef in zip(feature_names, logreg.coef_[0]):
    direction = "↑ risk" if coef > 0 else "↓ risk"
    print(f"  {name:20s}: {coef:+.4f} ({direction})")

The model demonstrates strong discriminatory power with an AUC-ROC score above 0.85, indicating it effectively separates defaulters from non-defaulters. The coefficients reveal intuitive relationships consistent with financial theory: higher debt-to-equity increases default risk (positive coefficient) because more leverage means less cushion to absorb losses. Better profitability (negative coefficient on profit margin) reduces default risk because profitable firms generate cash to service debt. Stronger liquidity (negative coefficients on current ratio and interest coverage) also reduces default risk because liquid firms can meet short-term obligations. The test set default rate matches the training distribution, suggesting good model calibration without overfitting.

Out[24]:
Visualization
Horizontal bar chart showing five feature coefficients, with debt/equity positive and other features negative.
Standardized logistic regression coefficients from a credit scoring model ordered by magnitude to reveal feature importance for default prediction. Debt-to-equity ratio exhibits the strongest positive coefficient while profit margin shows the strongest negative coefficient, confirming that solvency and profitability are the primary drivers of creditworthiness. Standardization makes coefficients directly comparable across features with different scales, enabling lenders to guide underwriting policies and automated credit decisioning based on feature rankings.

The coefficient visualization provides immediate insight into which factors drive default risk in the model. Debt-to-equity ratio has the largest positive coefficient, indicating high leverage is the strongest predictor of default. All other features have negative coefficients, meaning higher values reduce default probability. Profit margin shows the strongest protective effect: profitable firms generate cash flow to service debt and weather temporary difficulties. The current ratio and interest coverage also contribute materially to lowering predicted default probability, reflecting the importance of liquidity. Asset turnover has a smaller but still negative effect, suggesting operationally efficient firms face lower default risk.

Altman Z-Score: A Classic Scoring Model

Edward Altman's 1968 Z-score remains widely used for bankruptcy prediction. It demonstrates that simple models with clear economic interpretation can be highly effective. The model combines five financial ratios that collectively capture liquidity, profitability, leverage, solvency, and activity:

Z=1.2X1+1.4X2+3.3X3+0.6X4+1.0X5Z = 1.2X_1 + 1.4X_2 + 3.3X_3 + 0.6X_4 + 1.0X_5

where:

  • ZZ: composite bankruptcy risk score
  • X1X_1: Working Capital / Total Assets (measures liquidity and short-term financial health)
  • X2X_2: Retained Earnings / Total Assets (measures cumulative profitability and firm age)
  • X3X_3: EBIT / Total Assets (measures operating efficiency and profitability before interest and taxes)
  • X4X_4: Market Value of Equity / Book Value of Debt (measures leverage and market confidence in the firm)
  • X5X_5: Sales / Total Assets (measures asset turnover and revenue generation efficiency)
  • The coefficients 1.2, 1.4, 3.3, 0.6, and 1.0 were determined empirically through discriminant analysis on historical bankruptcy data
  • Higher ZZ-scores indicate lower bankruptcy risk, lower scores indicate higher risk

Firms with Z<1.81Z < 1.81 are classified as distressed, indicating high default risk. Firms with Z>2.99Z > 2.99 are classified as safe, with low default risk. Scores between 1.81 and 2.99 fall in the gray zone where classification is uncertain.

The Z-score's enduring popularity stems from its simplicity and interpretability. Each ratio captures a different dimension of financial health: X1X_1 measures liquidity, the firm's ability to meet short-term obligations. X2X_2 captures cumulative profitability and implicitly firm age, since retained earnings accumulate over time. X3X_3 measures operating performance, the core earnings power of the business before financing decisions. X4X_4 combines market information with book values, capturing both leverage and the market's confidence in the firm's prospects. X5X_5 measures asset efficiency, how effectively the firm uses its assets to generate revenue. The weights reflect their relative importance in predicting bankruptcy, with operating profitability (X3X_3) receiving the highest weight of 3.3, indicating it is the strongest discriminator between healthy and distressed firms.

In[25]:
Code
def altman_z_score(
    working_capital,
    retained_earnings,
    ebit,
    market_equity,
    book_debt,
    sales,
    total_assets,
):
    """
    Calculate Altman Z-Score for bankruptcy prediction.

    Returns
    -------
    z_score : float
        Altman Z-Score
    classification : str
        Risk classification
    """
    X1 = working_capital / total_assets
    X2 = retained_earnings / total_assets
    X3 = ebit / total_assets
    X4 = market_equity / book_debt
    X5 = sales / total_assets

    z = 1.2 * X1 + 1.4 * X2 + 3.3 * X3 + 0.6 * X4 + 1.0 * X5

    if z < 1.81:
        classification = "Distressed (High Default Risk)"
    elif z > 2.99:
        classification = "Safe (Low Default Risk)"
    else:
        classification = "Gray Zone (Uncertain)"

    return z, classification
In[26]:
Code
## Example: Two companies
## Company A: Strong financials
z_a, class_a = altman_z_score(
    working_capital=50,
    retained_earnings=80,
    ebit=30,
    market_equity=200,
    book_debt=50,
    sales=300,
    total_assets=250,
)

## Company B: Weak financials
z_b, class_b = altman_z_score(
    working_capital=-10,
    retained_earnings=20,
    ebit=5,
    market_equity=30,
    book_debt=100,
    sales=150,
    total_assets=180,
)
Out[27]:
Console
Altman Z-Score Analysis
========================================

Company A:
  Z-Score: 4.68
  Classification: Safe (Low Default Risk)

Company B:
  Z-Score: 1.19
  Classification: Distressed (High Default Risk)

Company A demonstrates financial health with a Z-score above 3, placing it firmly in the safe zone with low default risk. The strong working capital position indicates liquidity to meet short-term obligations. High retained earnings relative to assets suggest mature profitability and financial stability accumulated over time. Healthy EBIT margins demonstrate operating efficiency, while the favorable ratio of market equity to book debt reflects both low leverage and market confidence in the firm's prospects. In contrast, Company B shows financial distress with a Z-score below 1.81, indicating high default risk. Negative working capital is a serious warning sign, suggesting the firm may struggle to pay current obligations. Weak profitability ratios indicate the core business is not generating adequate returns. High leverage relative to market equity suggests the firm is overleveraged, with debt exceeding what the market believes the equity is worth. These results show how the Z-score effectively synthesizes multiple financial dimensions into a single risk metric that can be easily communicated and compared across firms.

ROC Analysis and Model Discrimination

The Receiver Operating Characteristic (ROC) curve visualizes how well a credit scoring model separates defaulters from non-defaulters across different probability thresholds. This diagnostic tool is essential for evaluating and comparing credit models.

In[28]:
Code
from sklearn.metrics import roc_curve, roc_auc_score

## Calculate ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
auc_score = roc_auc_score(y_test, y_pred_proba)
Out[29]:
Visualization
ROC curve showing logistic regression model performance with AUC above 0.85, well above the diagonal random classifier line.
Receiver Operating Characteristic (ROC) curve for the logistic regression credit scoring model, plotting true positive rate against false positive rate across classification thresholds. The blue curve rises steeply above the diagonal random classifier line, with AUC of 0.85 indicating the model assigns higher default probability to actual defaulters 85% of the time when comparing random defaulter-nondefaulter pairs. This comfortably exceeds the 0.70 level often cited as a practical benchmark for internal ratings-based models, supporting the model's suitability for credit risk management.

The ROC curve shows strong model performance, with the blue curve rising steeply above the diagonal random classifier line. The diagonal represents a model with no discriminatory power: one that assigns default probabilities randomly and thus performs no better than chance. The area under the ROC curve (AUC) quantifies overall discriminatory power. An AUC of 0.5 represents random guessing, while 1.0 represents perfect discrimination where all defaulters are assigned higher risk scores than all non-defaulters. The logistic regression model achieves an AUC above 0.85, at the upper end of the typical range for credit scoring models (0.70-0.85). This level of discriminatory power meets most regulatory expectations for internal risk models and indicates the model effectively ranks borrowers by risk. The curve also reveals the tradeoff faced at any threshold: moving left along the curve reduces false positives (healthy firms incorrectly flagged) but also reduces true positives (actual defaults caught).
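
To make this tradeoff concrete, a small sketch classifies the test set at a fixed threshold and reports the resulting rates. The 10% probability cutoff is an arbitrary illustrative choice, not a recommended policy.

from sklearn.metrics import confusion_matrix

## Classify test-set firms at a hypothetical 10% probability cutoff
threshold = 0.10
y_pred = (y_pred_proba >= threshold).astype(int)

## Unpack the binary confusion matrix: tn, fp, fn, tp
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"Threshold: {threshold:.0%}")
print(f"  True positive rate (defaults caught):        {tp / (tp + fn):.1%}")
print(f"  False positive rate (healthy firms flagged): {fp / (fp + tn):.1%}")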

Portfolio Credit Risk Models

Individual default probabilities tell only part of the story of portfolio risk. When managing a portfolio of loans or bonds, the key risk driver is default correlation: the tendency for multiple obligors to default together. A portfolio of 1,000 independent loans with 1% default probability each has predictable losses around the mean, with the law of large numbers ensuring relatively modest deviations. But if defaults are correlated, as during economic downturns, the portfolio can experience catastrophic losses far exceeding expectations. Understanding and modeling this correlation is the central challenge of portfolio credit risk management.

Default Correlation and Joint Default Probability

Consider two obligors with individual default probabilities p1p_1 and p2p_2. If defaults were independent, the joint probability would simply be the product p1×p2p_1 \times p_2. But in reality defaults are positively correlated: economic conditions that cause one firm to default often affect others as well. Under the Gaussian copula framework, we model each obligor's creditworthiness through a latent variable that follows a standard normal distribution. This latent variable can be thought of as an abstract measure of the firm's financial health, with lower values indicating worse conditions. Obligor ii defaults when this latent variable falls below the threshold Φ1(pi)\Phi^{-1}(p_i).

When the two latent variables have correlation ρ\rho, we can calculate the joint default probability using the bivariate normal distribution. The probability that both obligors default is the probability that both latent variables simultaneously fall below their respective thresholds:

P(both default)=Φ2(Φ1(p1),Φ1(p2),ρ)P(\text{both default}) = \Phi_2\left(\Phi^{-1}(p_1), \Phi^{-1}(p_2), \rho\right)

where:

  • P(both default)P(\text{both default}): probability that both obligors 1 and 2 default
  • p1,p2p_1, p_2: individual default probabilities for obligors 1 and 2
  • ρ\rho: correlation coefficient between the latent variables driving defaults (ranges from -1 to 1)
  • Φ1()\Phi^{-1}(\cdot): inverse of the cumulative standard normal distribution function
  • Φ1(pi)\Phi^{-1}(p_i): standard normal threshold corresponding to default probability pip_i (the critical value below which default occurs)
  • Φ2(z1,z2,ρ)\Phi_2(z_1, z_2, \rho): bivariate standard normal cumulative distribution function, giving P(Z1z1,Z2z2)P(Z_1 \leq z_1, Z_2 \leq z_2) when Z1Z_1 and Z2Z_2 are correlated standard normals with correlation ρ\rho
  • This Gaussian copula approach elegantly separates marginal behavior (individual default probabilities) from dependence structure (correlation)

The copula approach gains its flexibility from this separation of concerns. The marginal default probabilities can be estimated or calibrated independently for each obligor using any method (structural models, credit scores, historical data). The copula then combines these marginals with a dependence structure specified by the correlation parameter. This modularity makes the framework highly flexible and practical for large portfolios with heterogeneous obligors.

In[30]:
Code
from scipy.stats import norm, multivariate_normal


def joint_default_probability(p1, p2, correlation):
    """
    Calculate joint default probability using Gaussian copula.

    Parameters
    ----------
    p1, p2 : float
        Individual default probabilities
    correlation : float
        Default correlation

    Returns
    -------
    float
        Joint default probability
    """
    # Transform to standard normal
    z1 = norm.ppf(p1)
    z2 = norm.ppf(p2)

    # Bivariate normal CDF
    mean = [0, 0]
    cov = [[1, correlation], [correlation, 1]]
    mvn = multivariate_normal(mean=mean, cov=cov)

    return mvn.cdf([z1, z2])
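
The console output that follows comes from a cell omitted on the page; a minimal reconstruction, assuming 2% individual PDs as stated in that output, loops over the same correlation levels:

## Reconstruction of the omitted cell behind the console output below,
## assuming 2% individual PDs as stated in that output
p = 0.02
independent_pd = p * p

print("Joint Default Probability vs. Correlation")
print("=" * 50)
print(f"Individual PD: {p:.1%} each")
print(f"Independent joint PD: {independent_pd:.4%}\n")

for rho in [0.0, 0.1, 0.3, 0.5, 0.7]:
    joint_pd = joint_default_probability(p, p, rho)
    print(
        f"ρ = {rho:.1f}: Joint PD = {joint_pd:.4%} "
        f"({joint_pd / independent_pd:.1f}x independent)"
    )
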
Out[32]:
Console
Joint Default Probability vs. Correlation
==================================================
Individual PD: 2.0% each
Independent joint PD: 0.0400%

ρ = 0.0: Joint PD = 0.0400% (1.0x independent)
ρ = 0.1: Joint PD = 0.0688% (1.7x independent)
ρ = 0.3: Joint PD = 0.1664% (4.2x independent)
ρ = 0.5: Joint PD = 0.3387% (8.5x independent)
ρ = 0.7: Joint PD = 0.6276% (15.7x independent)

Correlation dramatically increases joint default probability, far beyond what intuition might suggest. Under independence (ρ=0), the joint probability equals the product of individual probabilities at 0.04%, representing the case where each firm's fate is determined by entirely separate factors. At 50% correlation, however, the joint default probability rises to approximately 0.34%, about 8.5 times the independent level, and at 70% correlation the multiplier approaches 16x. This superlinear increase in joint default risk with correlation illustrates the concentration risk inherent in portfolios of correlated credits, making portfolio credit modeling essential for risk management. The economic interpretation is clear: when times are bad, they tend to be bad for many firms simultaneously, creating clustering of defaults that can devastate portfolios.

Out[33]:
Visualization
Line chart showing joint default probability increasing with correlation for three different individual PD levels.
Joint default probability versus asset correlation for three individual default probability levels (1%, 3%, 5%) under the Gaussian copula framework. All curves exhibit superlinear growth with increasing correlation. The 1% PD case escalates from 1 basis point under independence to approximately 30 basis points at 70% correlation, demonstrating why correlation dominates portfolio tail risk and creates clustered defaults during economic recessions.

The figure illustrates how joint default probability increases with correlation across different credit quality levels. For investment-grade obligors with 1% PD, the joint default probability under independence is just 1 basis point (0.01%), but rises to approximately 30 bps at 70% correlation. For speculative-grade obligors with 5% PD, the increase is even more dramatic: from 25 bps under independence to over 200 bps at high correlation. The dotted horizontal lines show the independent joint probability for reference, highlighting how correlation can multiply joint default risk by factors of 10x or more. This visualization shows why correlation is critical for portfolio credit risk.

The Gaussian Copula Model

The Gaussian copula framework, widely used before the 2008 crisis for CDO pricing (as discussed in Part II Chapter 14), models defaults through a latent factor structure. The key insight is that defaults across a portfolio can be driven by both common factors affecting all firms, and idiosyncratic factors unique to each firm. Each obligor's creditworthiness depends on both systematic (economy-wide) and idiosyncratic (firm-specific) factors:

Ai=ρiM+1ρiεiA_i = \sqrt{\rho_i} M + \sqrt{1 - \rho_i} \varepsilon_i

where:

  • AiA_i: latent "asset return" variable driving obligor ii's default (higher values indicate better creditworthiness)
  • ii: index for the obligor
  • MM: common systematic factor representing economy-wide conditions (e.g., GDP growth, market returns), with MN(0,1)M \sim N(0,1)
  • εi\varepsilon_i: obligor ii's idiosyncratic factor representing firm-specific risk independent of other obligors, with εiN(0,1)\varepsilon_i \sim N(0,1)
  • ρi\rho_i: correlation coefficient between obligor ii's asset return and the systematic factor (determines systematic risk exposure, ranges from 0 to 1)
  • ρi\sqrt{\rho_i}: weight on the systematic factor
  • 1ρi\sqrt{1 - \rho_i}: weight on the idiosyncratic factor
  • The weights ensure AiA_i has unit variance when MM and εi\varepsilon_i are independent standard normal random variables
  • Since MM and εi\varepsilon_i are independent standard normals, AiA_i is also standard normal: AiN(0,1)A_i \sim N(0,1)

Obligor ii defaults if Ai<Φ1(pi)A_i < \Phi^{-1}(p_i), where pip_i is their individual default probability. The threshold is chosen so that the unconditional default probability matches the calibrated value for each obligor.

Default Correlation Mechanism

The factor structure elegantly captures how default correlation emerges from exposure to common economic conditions. When the systematic factor MM is low (representing an economic downturn), all obligors' AiA_i values are pulled down simultaneously through the common term ρiM\sqrt{\rho_i} M, increasing the probability that multiple firms cross their default thresholds together. The correlation parameter ρi\rho_i controls this sensitivity: obligors with high ρi\rho_i are strongly affected by systematic conditions and thus highly correlated with each other, while obligors with low ρi\rho_i are driven primarily by idiosyncratic factors and have weak correlation with others.

This single-factor structure explains why defaults cluster during recessions even among apparently unrelated firms. A retailer and a manufacturer in different industries may seem independent, yet both are affected by consumer confidence, credit availability, and overall economic growth. The systematic factor captures these common influences. During normal times, the idiosyncratic factors dominate and defaults appear somewhat random. But during severe recessions when MM takes extreme negative values, the systematic component pushes many firms toward their default thresholds simultaneously, creating the default clustering that devastates credit portfolios.
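
This clustering can be made quantitative. Conditioning on the systematic factor, the default probability of obligor ii becomes P(\text{default} \mid M = m) = \Phi\left(\frac{\Phi^{-1}(p_i) - \sqrt{\rho_i}\, m}{\sqrt{1 - \rho_i}}\right), a standard result in the single-factor framework. A minimal sketch below evaluates this formula for an assumed 1% unconditional PD and 30% correlation under a few illustrative stress values of MM:

import numpy as np
from scipy.stats import norm


def conditional_pd(p, rho, m):
    """Default probability conditional on the systematic factor M = m."""
    return norm.cdf((norm.ppf(p) - np.sqrt(rho) * m) / np.sqrt(1 - rho))


## Assumed inputs: 1% unconditional PD, 30% asset correlation; the stress
## values of M are illustrative (0 = median, -2.33 ~ a 1-in-100 downturn)
p, rho = 0.01, 0.30
for m in [0.0, -1.0, -2.33]:
    print(f"M = {m:+.2f}: conditional PD = {conditional_pd(p, rho, m):.2%}")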

In[34]:
Code
import numpy as np
from scipy.stats import norm


class GaussianCopulaModel:
    """
    Single-factor Gaussian copula model for portfolio credit risk.
    """

    def __init__(self, n_obligors, default_probs, correlations, exposures):
        """
        Parameters
        ----------
        n_obligors : int
            Number of obligors
        default_probs : array
            Default probability for each obligor
        correlations : array
            Correlation with systematic factor for each obligor
        exposures : array
            Exposure amount for each obligor
        """
        self.n_obligors = n_obligors
        self.pds = np.array(default_probs)
        self.correlations = np.array(correlations)
        self.exposures = np.array(exposures)
        self.default_thresholds = norm.ppf(self.pds)

    def simulate_losses(self, n_sims, recovery_rate=0.4, seed=None):
        """
        Simulate portfolio losses using Monte Carlo.

        Returns
        -------
        losses : array
            Simulated portfolio losses
        """
        if seed is not None:
            np.random.seed(seed)

        # Simulate systematic factor
        M = np.random.standard_normal(n_sims)

        # Simulate idiosyncratic factors
        epsilon = np.random.standard_normal((n_sims, self.n_obligors))

        # Calculate asset returns
        A = (
            np.sqrt(self.correlations) * M[:, np.newaxis]
            + np.sqrt(1 - self.correlations) * epsilon
        )

        # Determine defaults
        defaults = (A < self.default_thresholds).astype(float)

        # Calculate losses (exposure * LGD for each default)
        lgd = 1 - recovery_rate
        losses = np.sum(defaults * self.exposures * lgd, axis=1)

        return losses

    def calculate_credit_var(
        self, n_sims, confidence=0.99, recovery_rate=0.4, seed=None
    ):
        """
        Calculate Credit VaR at given confidence level.
        """
        losses = self.simulate_losses(n_sims, recovery_rate, seed)
        expected_loss = np.mean(losses)
        var = np.percentile(losses, confidence * 100)
        unexpected_loss = var - expected_loss

        return {
            "expected_loss": expected_loss,
            "var": var,
            "unexpected_loss": unexpected_loss,
            "loss_distribution": losses,
        }


## Create a loan portfolio
n_obligors = 500
np.random.seed(42)

## Heterogeneous portfolio
default_probs = np.random.beta(2, 50, n_obligors)  # PDs around 2-5%
correlations = np.random.uniform(0.15, 0.35, n_obligors)  # Correlations 15-35%
exposures = np.random.lognormal(12, 0.8, n_obligors)  # Loan sizes

portfolio = GaussianCopulaModel(
    n_obligors, default_probs, correlations, exposures
)

## Calculate Credit VaR
results = portfolio.calculate_credit_var(n_sims=50000, confidence=0.99, seed=42)

## Summarize portfolio composition and risk metrics
total_exposure = np.sum(exposures)
avg_pd = np.mean(default_probs)
avg_corr = np.mean(correlations)
exp_loss = results["expected_loss"]
var_val = results["var"]
unexp_loss = results["unexpected_loss"]
var_el_ratio = var_val / exp_loss

print("Portfolio Credit Risk Analysis (Gaussian Copula Model)")
print("=" * 55)
print(f"Number of Obligors: {n_obligors}")
print(f"Total Exposure: {total_exposure/1e6:.1f} million")
print(f"Average PD: {avg_pd:.2%}")
print(f"Average Correlation: {avg_corr:.1%}")
print()
print("Risk Metrics:")
print(f"  Expected Loss: {exp_loss/1e6:.2f} million ({exp_loss/total_exposure:.2%} of exposure)")
print(f"  99% Credit VaR: {var_val/1e6:.2f} million ({var_val/total_exposure:.2%} of exposure)")
print(f"  Unexpected Loss: {unexp_loss/1e6:.2f} million")
print(f"  VaR/EL Ratio: {var_el_ratio:.1f}x")

The portfolio exhibits moderate credit risk with expected loss reflecting the average default probability and loss given default assumptions. Expected loss represents the actuarial cost of credit risk, the average loss that would be experienced over many repetitions of the same portfolio. However, the 99% Credit VaR reveals significant tail risk in adverse scenarios. The VaR/EL ratio measures how much worse losses can get beyond expectations. The calculated ratio is typical for diversified portfolios, but concentrated or highly correlated portfolios can have much higher ratios. The unexpected loss component, the difference between VaR and expected loss, represents the buffer needed to absorb losses in stress scenarios. This is the capital that must be held to remain solvent with 99% confidence.
Out[35]:
Visualization
Histogram showing portfolio loss distribution with positive skew, expected loss at $0.4M and 99% VaR at $2M.
Portfolio credit loss distribution from 50,000 Monte Carlo simulations of a 500-obligor loan portfolio under the Gaussian copula framework. Most scenarios cluster near $0.4 million expected loss, but the distribution exhibits pronounced positive skew with a fat tail extending to $3-4 million. The 99% VaR at $2 million represents a 5-fold increase over expected loss, reflecting tail events where adverse systematic conditions trigger correlated defaults across multiple borrowers. This tail thickness illustrates systematic risk that diversification cannot eliminate, motivating regulatory frameworks (Basel III) that mandate capital buffers calibrated to high-confidence VaR levels.

The loss distribution exhibits the characteristic positive skew of credit portfolios. Most simulated scenarios produce losses near the expected value, clustering around the peak of the distribution. But the fat right tail extends significantly beyond, reflecting scenarios where many defaults coincide. The 99% VaR substantially exceeds the expected loss, illustrating that tail risk significantly exceeds typical outcomes. This asymmetry arises because losses are bounded below at zero (you cannot have negative credit losses) but unbounded above when multiple defaults coincide. Extreme losses, while rare (occurring in only 1% of scenarios), can be many times larger than the expected loss. This tail risk is the primary concern for risk managers and regulators because it represents scenarios that can threaten portfolio solvency or even institutional failure.

Correlation Sensitivity

Default correlation is the key driver of portfolio tail risk. Let's examine how correlation affects the loss distribution to develop intuition for this critical relationship:
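
A minimal sketch of that examination reuses the GaussianCopulaModel class and the portfolio defined above with the correlations overridden at a few uniform levels (the levels chosen here are illustrative assumptions). Because unconditional PDs are unchanged, expected loss stays roughly constant while the tail, and hence VaR, grows with correlation:

## Recompute 99% Credit VaR with the same PDs and exposures but uniform
## correlations at several illustrative levels
for rho in [0.1, 0.3, 0.5]:
    stressed = GaussianCopulaModel(
        n_obligors,
        default_probs,
        np.full(n_obligors, rho),
        exposures,
    )
    res = stressed.calculate_credit_var(n_sims=50000, confidence=0.99, seed=42)
    print(
        f"ρ = {rho:.0%}: EL = {res['expected_loss']/1e6:.2f}M, "
        f"99% VaR = {res['var']/1e6:.2f}M, "
        f"VaR/EL = {res['var']/res['expected_loss']:.1f}x"
    )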

