Uncertainty Quantification: Monte Carlo Through the Solver Chain¶

Fusion performance predictions depend on empirical scaling laws whose coefficients carry non-trivial statistical uncertainty. The IPB98(y,2) H-mode confinement scaling, for example, was fitted to a multi-machine database of ~2,800 discharges, and a Bayesian re-analysis (Verdoolaege et al., Nucl. Fusion 61 076006, 2021) showed that the posterior distributions of the exponents are wider than many projection studies assume.

This notebook demonstrates the full-chain Monte Carlo UQ pipeline in scpn_fusion.core.uncertainty, which propagates parameter uncertainties through three stages:

  1. IPB98 scaling-law exponents — sampled from Gaussian posteriors
  2. Equilibrium boundary perturbation — captures shape reconstruction error
  3. Transport model coefficients — gyro-Bohm diffusivity and EPED pedestal height

The result is a statistically grounded confidence envelope on confinement time $\tau_E$, fusion power $P_{\mathrm{fus}}$, energy gain $Q$, and normalised beta $\beta_N$.

Key insight: a point prediction of $Q = 10$ is less informative than a statement like "$Q \in [6.8,\, 14.2]$ at 90% confidence." The latter is what regulators and investors need.
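A percentile statement like this falls straight out of any Monte Carlo ensemble. A minimal standalone sketch, using a hypothetical Gaussian Q ensemble (illustrative numbers only, not the pipeline's actual output):

```python
import numpy as np

# Hypothetical Q ensemble -- illustrative only, not the pipeline's output
rng = np.random.default_rng(0)
q_samples = rng.normal(loc=10.0, scale=2.2, size=50_000)

# 90% confidence interval = [5th, 95th] percentiles of the ensemble
p5, p95 = np.percentile(q_samples, [5, 95])
print(f"Q in [{p5:.1f}, {p95:.1f}] at 90% confidence")
```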


References:

  • ITER Physics Basis, Nucl. Fusion 39 (1999) 2175
  • G. Verdoolaege et al., Nucl. Fusion 61 (2021) 076006
  • SCPN Fusion Core v3.5.0 — uncertainty.py module




In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import json

# Add the repo root to path if running from examples/
import sys, os
sys.path.insert(0, os.path.join(os.path.dirname(os.getcwd()), 'src'))

from scpn_fusion.core.uncertainty import (
    PlasmaScenario,
    UQResult,
    FullChainUQResult,
    EquilibriumUncertainty,
    TransportUncertainty,
    IPB98_CENTRAL,
    IPB98_SIGMA,
    ipb98_tau_e,
    fusion_power_from_tau,
    quantify_uncertainty,
    quantify_full_chain,
    summarize_uq,
)

# Plotting defaults
plt.rcParams.update({
    'figure.dpi': 120,
    'font.size': 11,
    'axes.grid': True,
    'grid.alpha': 0.3,
})
print("Imports OK.")

IPB98(y,2) Scaling Law¶

The energy confinement time in H-mode is empirically described by the IPB98(y,2) scaling:

$$ \tau_E = C \;\cdot\; I_p^{\,\alpha_I} \;\cdot\; B_t^{\,\alpha_B} \;\cdot\; P_{\mathrm{heat}}^{\,\alpha_P} \;\cdot\; \bar{n}_e^{\,\alpha_n} \;\cdot\; R^{\,\alpha_R} \;\cdot\; A^{\,\alpha_A} \;\cdot\; \kappa^{\,\alpha_\kappa} \;\cdot\; M^{\,\alpha_M} $$

where $I_p$ is in MA, $B_t$ in T, $P_{\mathrm{heat}}$ in MW, $\bar{n}_e$ in $10^{19}\,\mathrm{m}^{-3}$, $R$ in m, and $M$ in AMU.

Each exponent has a central value (from the original ITER Physics Basis fit) and a 1-sigma uncertainty (from the Verdoolaege 2021 Bayesian re-analysis). The uncertainties are not negligible: for instance, $\alpha_R = 1.97 \pm 0.08$ means the size-scaling exponent could plausibly range from 1.81 to 2.13 at the 2-sigma level.
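To see why a $\pm 0.08$ exponent uncertainty matters at reactor scale: perturbing only $\alpha_R$ by its $2\sigma$ range rescales $\tau_E$ by a factor $R^{\pm 0.16}$, and at ITER's $R = 6.2$ m that is roughly $+34\%$ / $-25\%$. A back-of-envelope check (standalone arithmetic, no pipeline code):

```python
R = 6.2          # ITER major radius (m)
d_alpha = 0.16   # 2-sigma swing in the size exponent alpha_R

# tau_E multiplier from perturbing only alpha_R:
# R**(alpha_R + d_alpha) / R**alpha_R = R**d_alpha
up = R ** d_alpha      # ~1.34 -> tau_E up ~34%
down = R ** -d_alpha   # ~0.75 -> tau_E down ~25%
print(f"+2 sigma: x{up:.3f}, -2 sigma: x{down:.3f}")
```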

In [ ]:
# Display IPB98 central values and uncertainties as a formatted table
print("IPB98(y,2) Scaling Law Coefficients")
print("=" * 55)
print(f"{'Parameter':<18} {'Central':>10} {'1-sigma':>10} {'Rel. (%)':>10}")
print("-" * 55)
for key in IPB98_CENTRAL:
    central = IPB98_CENTRAL[key]
    sigma = IPB98_SIGMA[key]
    rel_pct = abs(sigma / central) * 100 if abs(central) > 1e-12 else 0.0
    print(f"{key:<18} {central:>10.4f} {sigma:>10.4f} {rel_pct:>9.1f}%")
print("=" * 55)
print("\nSource: Verdoolaege et al., Nucl. Fusion 61 (2021) 076006")

ITER Reference Scenario¶

We use the ITER baseline scenario as our primary test case:

| Parameter | Value | Unit |
|---|---|---|
| $I_p$ | 15.0 | MA |
| $B_t$ | 5.3 | T |
| $P_{\mathrm{heat}}$ | 50.0 | MW |
| $\bar{n}_e$ | 10.1 | $10^{19}\,\mathrm{m}^{-3}$ |
| $R$ | 6.2 | m |
| $A$ | 3.1 | — |
| $\kappa$ | 1.7 | — |
| $M$ | 2.5 | AMU |

The Q = 10 design target corresponds to $P_{\mathrm{fus}} \approx 500$ MW.

In [ ]:
iter_scenario = PlasmaScenario(
    I_p=15.0,
    B_t=5.3,
    P_heat=50.0,
    n_e=10.1,
    R=6.2,
    A=3.1,
    kappa=1.7,
    M=2.5,
)

# Compute the central (point) estimate first
tau_central = ipb98_tau_e(iter_scenario)
pfus_central = fusion_power_from_tau(iter_scenario, tau_central)
q_central = pfus_central / iter_scenario.P_heat

print("ITER Baseline — Central (Point) Estimates")
print("=" * 45)
print(f"  tau_E   = {tau_central:.3f} s")
print(f"  P_fus   = {pfus_central:.1f} MW")
print(f"  Q       = {q_central:.2f}")

Single-Parameter UQ (IPB98 Only)¶

The simplest form of UQ samples only the scaling-law exponents from their Gaussian posteriors, keeping the plasma parameters fixed at their nominal values. This captures uncertainty in the empirical model itself — i.e., how confident we are in the scaling law given the multi-machine database.

For each Monte Carlo draw $k = 1, \ldots, N$:

  1. Sample $\alpha_I^{(k)} \sim \mathcal{N}(0.93,\, 0.04^2)$, etc. for all 9 coefficients
  2. Enforce physical constraints: $C > 0$, $\alpha_P < -0.1$
  3. Compute $\tau_E^{(k)}$, $P_{\mathrm{fus}}^{(k)}$, $Q^{(k)}$

The resulting ensemble gives us percentile-based confidence intervals.

In [ ]:
# Run single-parameter UQ with 5000 MC samples
uq_ipb98 = quantify_uncertainty(iter_scenario, n_samples=5000, seed=42)

print("IPB98-Only UQ Results (ITER Baseline, 5000 samples)")
print("=" * 55)
print(f"  tau_E   = {uq_ipb98.tau_E:.3f} +/- {uq_ipb98.tau_E_sigma:.3f} s")
print(f"  P_fus   = {uq_ipb98.P_fusion:.1f} +/- {uq_ipb98.P_fusion_sigma:.1f} MW")
print(f"  Q       = {uq_ipb98.Q:.2f} +/- {uq_ipb98.Q_sigma:.2f}")
print()
pct_labels = ['p5', 'p25', 'p50', 'p75', 'p95']
print(f"  {'Percentile':<12} {'tau_E (s)':>10} {'P_fus (MW)':>12} {'Q':>8}")
print(f"  {'-'*12:<12} {'-'*10:>10} {'-'*12:>12} {'-'*8:>8}")
for i, label in enumerate(pct_labels):
    print(f"  {label:<12} {uq_ipb98.tau_E_percentiles[i]:>10.3f}"
          f" {uq_ipb98.P_fusion_percentiles[i]:>12.1f}"
          f" {uq_ipb98.Q_percentiles[i]:>8.2f}")

# --- Histogram of tau_E samples ---
# To plot the histogram, we re-run the MC to collect raw samples.
# (quantify_uncertainty returns aggregates; we replicate the loop here.)
rng = np.random.default_rng(42)
tau_samples = np.zeros(5000)
for i in range(5000):
    params = {k: rng.normal(IPB98_CENTRAL[k], IPB98_SIGMA[k]) for k in IPB98_CENTRAL}
    params['C'] = max(params['C'], 1e-4)
    params['alpha_P'] = min(params['alpha_P'], -0.1)
    tau_samples[i] = max(ipb98_tau_e(iter_scenario, params), 1e-6)

fig, ax = plt.subplots(figsize=(9, 5))
ax.hist(tau_samples, bins=60, density=True, color='steelblue', alpha=0.7,
        edgecolor='white', linewidth=0.5, label='MC samples')

# Percentile lines
p5, p50, p95 = np.percentile(tau_samples, [5, 50, 95])
for pval, clr, lbl in [(p5, 'crimson', f'p5 = {p5:.3f} s'),
                        (p50, 'gold', f'p50 = {p50:.3f} s'),
                        (p95, 'crimson', f'p95 = {p95:.3f} s')]:
    ax.axvline(pval, color=clr, linestyle='--', linewidth=2, label=lbl)

ax.set_xlabel(r'Confinement Time $\tau_E$ (s)', fontsize=12)
ax.set_ylabel('Probability Density', fontsize=12)
ax.set_title('IPB98-Only UQ: ITER Confinement Time Distribution\n'
             '(5000 MC samples over scaling-law exponents)', fontsize=13)
ax.legend(fontsize=10)
plt.tight_layout()
plt.show()

Full-Chain UQ: Equilibrium $\rightarrow$ Transport $\rightarrow$ Fusion Power¶

The IPB98-only UQ captures scaling-law uncertainty but ignores two other major error sources:

  1. Equilibrium reconstruction uncertainty — the plasma boundary shape is measured with finite precision. Perturbing the major radius $R$ by its reconstruction uncertainty ($\sigma_R / R \approx 2\%$) propagates into $\tau_E$ through the $R^{1.97}$ dependence.

  2. Transport model uncertainty — the gyro-Bohm diffusivity coefficient and the EPED pedestal height prediction each carry $\sim 20$--$30\%$ fractional uncertainty. Higher transport (larger $\chi_{\mathrm{gB}}$) reduces confinement; a taller pedestal improves it.
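The leverage of the boundary term can be checked directly: through the $R^{1.97}$ dependence, a $2\%$ reconstruction error on $R$ becomes roughly a $4\%$ error on $\tau_E$ (standalone arithmetic, using the IPB98 size exponent):

```python
sigma_R = 0.02    # 2% relative reconstruction error on R
alpha_R = 1.97    # IPB98(y,2) size exponent

# Relative tau_E error from a 1-sigma boundary perturbation:
# (1 + sigma_R)**alpha_R - 1, approximately alpha_R * sigma_R for small sigma_R
rel_err = (1.0 + sigma_R) ** alpha_R - 1.0
print(f"2% error in R -> {rel_err * 100:.1f}% error in tau_E")
```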

The quantify_full_chain function samples all three sources simultaneously and additionally computes the normalised beta $\beta_N$ as a stability metric.

In [ ]:
# Full-chain UQ: equilibrium + transport + scaling-law
fc_result = quantify_full_chain(
    iter_scenario,
    n_samples=1000,
    seed=42,
    chi_gB_sigma=0.3,       # 30% gyro-Bohm uncertainty (log-normal)
    pedestal_sigma=0.2,     # 20% pedestal height uncertainty
    boundary_sigma=0.02,    # 2% major-radius boundary uncertainty
)

print("Full-Chain UQ Results (ITER Baseline, 1000 samples)")
print("=" * 60)
print(f"  tau_E     = {fc_result.tau_E:.3f} +/- {fc_result.tau_E_sigma:.3f} s")
print(f"  P_fus     = {fc_result.P_fusion:.1f} +/- {fc_result.P_fusion_sigma:.1f} MW")
print(f"  Q         = {fc_result.Q:.2f} +/- {fc_result.Q_sigma:.2f}")
print()
print("  90% Confidence Bands [5th, 50th, 95th percentile]:")
print(f"    psi_nrmse : [{fc_result.psi_nrmse_bands[0]:.4f},"
      f" {fc_result.psi_nrmse_bands[1]:.4f},"
      f" {fc_result.psi_nrmse_bands[2]:.4f}]")
print(f"    tau_E (s) : [{fc_result.tau_E_bands[0]:.3f},"
      f" {fc_result.tau_E_bands[1]:.3f},"
      f" {fc_result.tau_E_bands[2]:.3f}]")
print(f"    P_fus (MW): [{fc_result.P_fusion_bands[0]:.1f},"
      f" {fc_result.P_fusion_bands[1]:.1f},"
      f" {fc_result.P_fusion_bands[2]:.1f}]")
print(f"    Q         : [{fc_result.Q_bands[0]:.2f},"
      f" {fc_result.Q_bands[1]:.2f},"
      f" {fc_result.Q_bands[2]:.2f}]")
print(f"    beta_N    : [{fc_result.beta_N_bands[0]:.3f},"
      f" {fc_result.beta_N_bands[1]:.3f},"
      f" {fc_result.beta_N_bands[2]:.3f}]")

# Also show the JSON-serializable summary
print("\n--- JSON-serialisable summary ---")
summary = summarize_uq(fc_result)
print(json.dumps(summary, indent=2))

Benchmark¶

In [ ]:
%%timeit -n1 -r3
quantify_full_chain(iter_scenario, n_samples=1000, seed=42)

Visualizing Uncertainty Propagation¶

The four panels below show how uncertainty fans out through the prediction chain. Notice that the fusion power and Q distributions are significantly wider, in relative terms, than the $\tau_E$ distribution — this is because $P_{\mathrm{fus}} \propto \tau_E^2$, so relative uncertainty is amplified roughly twofold.
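The amplification is easy to verify numerically: squaring a quantity with a small relative spread roughly doubles that spread. A standalone sketch with a hypothetical log-normal $\tau_E$ ensemble (illustrative numbers, not the pipeline's samples):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical tau_E ensemble with ~10% relative spread (log-normal)
tau = rng.lognormal(mean=np.log(3.5), sigma=0.10, size=100_000)
pfus = tau ** 2  # P_fus proportional to tau_E^2; constant factor is irrelevant here

rel_tau = tau.std() / tau.mean()
rel_pfus = pfus.std() / pfus.mean()
print(f"relative spread: tau_E {rel_tau:.1%}, P_fus {rel_pfus:.1%}")  # ~2x wider
```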

Each histogram includes:

  • Red dashed lines: 5th and 95th percentiles (90% confidence interval)
  • Gold dashed line: 50th percentile (median)
In [ ]:
# Re-run full chain to collect raw samples for histograms
n_viz = 5000
rng_viz = np.random.default_rng(42)

tau_arr = np.zeros(n_viz)
pfus_arr = np.zeros(n_viz)
q_arr = np.zeros(n_viz)
beta_n_arr = np.zeros(n_viz)

for i in range(n_viz):
    # IPB98 exponent perturbation
    params = {k: rng_viz.normal(IPB98_CENTRAL[k], IPB98_SIGMA[k]) for k in IPB98_CENTRAL}
    params['C'] = max(params['C'], 1e-4)
    params['alpha_P'] = min(params['alpha_P'], -0.1)

    # Transport perturbation
    chi_factor = rng_viz.lognormal(0.0, 0.3)
    ped_factor = max(rng_viz.normal(1.0, 0.2), 0.1)

    # Equilibrium boundary perturbation
    R_pert = max(iter_scenario.R * (1.0 + rng_viz.normal(0.0, 0.02)), 0.5)
    pert = PlasmaScenario(
        I_p=iter_scenario.I_p, B_t=iter_scenario.B_t,
        P_heat=iter_scenario.P_heat, n_e=iter_scenario.n_e,
        R=R_pert, A=iter_scenario.A,
        kappa=iter_scenario.kappa, M=iter_scenario.M,
    )

    tau = max(ipb98_tau_e(pert, params) * ped_factor / chi_factor, 1e-6)
    pfus = max(fusion_power_from_tau(pert, tau), 0.0)
    q = pfus / iter_scenario.P_heat

    # beta_N
    a_pert = R_pert / iter_scenario.A
    V_approx = 2.0 * np.pi**2 * R_pert * a_pert**2 * iter_scenario.kappa
    p_avg = iter_scenario.P_heat * tau * 1e6 / V_approx
    mu0 = 4.0 * np.pi * 1e-7
    beta_t_pct = 2.0 * mu0 * p_avg / iter_scenario.B_t**2 * 100.0
    I_N = iter_scenario.I_p / (a_pert * iter_scenario.B_t)
    beta_n_arr[i] = beta_t_pct / I_N if I_N > 1e-6 else 0.0

    tau_arr[i] = tau
    pfus_arr[i] = pfus
    q_arr[i] = q

# --- 2x2 subplot ---
fig, axes = plt.subplots(2, 2, figsize=(13, 10))

datasets = [
    (tau_arr, r'$\tau_E$ (s)', 'Confinement Time', 'steelblue'),
    (q_arr, '$Q$', 'Energy Gain', 'darkorange'),
    (pfus_arr, r'$P_{\mathrm{fus}}$ (MW)', 'Fusion Power', 'seagreen'),
    (beta_n_arr, r'$\beta_N$', 'Normalised Beta', 'mediumpurple'),
]

panel_labels = ['(a)', '(b)', '(c)', '(d)']

for ax, (data, xlabel, title, color), label in zip(
    axes.flat, datasets, panel_labels
):
    ax.hist(data, bins=60, density=True, color=color, alpha=0.7,
            edgecolor='white', linewidth=0.5)

    p5, p50, p95 = np.percentile(data, [5, 50, 95])
    ax.axvline(p5, color='crimson', ls='--', lw=1.8, label=f'p5 = {p5:.2f}')
    ax.axvline(p50, color='gold', ls='--', lw=2.0, label=f'p50 = {p50:.2f}')
    ax.axvline(p95, color='crimson', ls='--', lw=1.8, label=f'p95 = {p95:.2f}')

    ax.set_xlabel(xlabel, fontsize=12)
    ax.set_ylabel('Probability Density', fontsize=11)
    ax.set_title(f'{label} {title}', fontsize=13)
    ax.legend(fontsize=9, loc='upper right')

fig.suptitle('Full-Chain UQ: ITER Baseline (5000 MC Samples)',
             fontsize=15, fontweight='bold', y=1.01)
plt.tight_layout()
plt.show()

SPARC Comparison¶

CFS/MIT's SPARC tokamak takes a fundamentally different approach from ITER: high field ($B_t = 12.2$ T via HTS magnets) in a compact device ($R = 1.85$ m). Although the explicit field exponent in IPB98(y,2) is weak ($\tau_E \propto B_t^{0.15}$), high field supports high plasma current at small size, and the current dependence is strong ($\tau_E \propto I_p^{0.93}$); on this basis SPARC aims for $Q \geq 2$ with much smaller absolute uncertainty in the plasma parameters.

Let us run the same UQ pipeline on both machines and compare the confidence envelopes side by side.

| Parameter | ITER | SPARC |
|---|---|---|
| $I_p$ (MA) | 15.0 | 8.7 |
| $B_t$ (T) | 5.3 | 12.2 |
| $P_{\mathrm{heat}}$ (MW) | 50.0 | 25.0 |
| $\bar{n}_e$ ($10^{19}\,\mathrm{m}^{-3}$) | 10.1 | 3.1 |
| $R$ (m) | 6.2 | 1.85 |
| $A$ | 3.1 | 3.1 |
| $\kappa$ | 1.7 | 1.97 |
| $M$ (AMU) | 2.5 | 2.5 |

In [ ]:
sparc_scenario = PlasmaScenario(
    I_p=8.7,
    B_t=12.2,
    P_heat=25.0,
    n_e=3.1,
    R=1.85,
    A=3.1,
    kappa=1.97,
    M=2.5,
)

# Full-chain UQ for both
fc_iter = quantify_full_chain(iter_scenario, n_samples=5000, seed=42)
fc_sparc = quantify_full_chain(sparc_scenario, n_samples=5000, seed=42)

# --- Comparison table ---
print("Full-Chain UQ Comparison: ITER vs SPARC (5000 MC samples each)")
print("=" * 70)
print(f"{'Metric':<20} {'ITER (p5/p50/p95)':>24} {'SPARC (p5/p50/p95)':>24}")
print("-" * 70)

def _band_str(bands, fmt=".2f"):
    return f"[{bands[0]:{fmt}} / {bands[1]:{fmt}} / {bands[2]:{fmt}}]"

print(f"{'tau_E (s)':<20} {_band_str(fc_iter.tau_E_bands, '.3f'):>24}"
      f" {_band_str(fc_sparc.tau_E_bands, '.3f'):>24}")
print(f"{'P_fus (MW)':<20} {_band_str(fc_iter.P_fusion_bands, '.1f'):>24}"
      f" {_band_str(fc_sparc.P_fusion_bands, '.1f'):>24}")
print(f"{'Q':<20} {_band_str(fc_iter.Q_bands):>24}"
      f" {_band_str(fc_sparc.Q_bands):>24}")
print(f"{'beta_N':<20} {_band_str(fc_iter.beta_N_bands, '.3f'):>24}"
      f" {_band_str(fc_sparc.beta_N_bands, '.3f'):>24}")
print("=" * 70)

# --- Side-by-side Q histograms ---
# Collect raw Q samples for both machines
def _collect_q_samples(scenario, n=5000, seed=42):
    rng = np.random.default_rng(seed)
    qs = np.zeros(n)
    for i in range(n):
        params = {k: rng.normal(IPB98_CENTRAL[k], IPB98_SIGMA[k]) for k in IPB98_CENTRAL}
        params['C'] = max(params['C'], 1e-4)
        params['alpha_P'] = min(params['alpha_P'], -0.1)
        chi_f = rng.lognormal(0.0, 0.3)
        ped_f = max(rng.normal(1.0, 0.2), 0.1)
        R_p = max(scenario.R * (1.0 + rng.normal(0.0, 0.02)), 0.5)
        pert = PlasmaScenario(
            I_p=scenario.I_p, B_t=scenario.B_t,
            P_heat=scenario.P_heat, n_e=scenario.n_e,
            R=R_p, A=scenario.A, kappa=scenario.kappa, M=scenario.M)
        tau = max(ipb98_tau_e(pert, params) * ped_f / chi_f, 1e-6)
        pfus = max(fusion_power_from_tau(pert, tau), 0.0)
        qs[i] = pfus / scenario.P_heat
    return qs

q_iter_samples = _collect_q_samples(iter_scenario)
q_sparc_samples = _collect_q_samples(sparc_scenario)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13, 5))

for ax, qs, name, color in [
    (ax1, q_iter_samples, 'ITER', 'steelblue'),
    (ax2, q_sparc_samples, 'SPARC', 'darkorange'),
]:
    ax.hist(qs, bins=60, density=True, color=color, alpha=0.7,
            edgecolor='white', linewidth=0.5)
    p5, p50, p95 = np.percentile(qs, [5, 50, 95])
    ax.axvline(p5, color='crimson', ls='--', lw=1.8, label=f'p5 = {p5:.2f}')
    ax.axvline(p50, color='gold', ls='--', lw=2.0, label=f'p50 = {p50:.2f}')
    ax.axvline(p95, color='crimson', ls='--', lw=1.8, label=f'p95 = {p95:.2f}')
    ax.set_xlabel('Q (Energy Gain)', fontsize=12)
    ax.set_ylabel('Probability Density', fontsize=11)
    ax.set_title(f'{name}: Q Distribution', fontsize=13)
    ax.legend(fontsize=10)

fig.suptitle('ITER vs SPARC — Energy Gain Uncertainty (Full-Chain UQ)',
             fontsize=14, fontweight='bold', y=1.02)
plt.tight_layout()
plt.show()

Sensitivity Analysis¶

Not all scaling-law exponents contribute equally to the overall uncertainty. A one-at-a-time (OAT) sensitivity analysis perturbs each exponent by its $\pm 1\sigma$ while holding all others at their central values. The resulting change in $\tau_E$ tells us which parameters dominate the error budget.

This is complementary to the full MC analysis: OAT identifies the individual drivers, while MC captures the joint effect including correlations (which we have assumed independent here, following Verdoolaege 2021).
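For a pure power law with independent Gaussian exponents, the OAT swings (taken in log-space) add in quadrature to exactly the MC spread of $\log \tau_E$, which is why the two views are mutually consistent. A toy check with two hypothetical exponents (all values here are illustrative, not the IPB98 fit):

```python
import numpy as np

# Toy power law y = x1**a1 * x2**a2 with independent Gaussian exponent errors
x = np.array([6.2, 15.0])      # fixed inputs (illustrative)
sig = np.array([0.08, 0.04])   # 1-sigma exponent uncertainties (illustrative)

# Monte Carlo spread of log(y): log y = sum_k a_k * log x_k
rng = np.random.default_rng(1)
da = rng.normal(0.0, sig, size=(200_000, 2))   # exponent perturbations
mc_sigma = (da * np.log(x)).sum(axis=1).std()

# OAT swings added in quadrature: each swing is sig_k * |log x_k| in log-space
oat_sigma = np.sqrt(((sig * np.log(x)) ** 2).sum())
print(f"MC: {mc_sigma:.4f}, OAT quadrature: {oat_sigma:.4f}")  # agree closely
```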

In [ ]:
# OAT sensitivity: perturb each exponent by +/- 1 sigma
tau_nominal = ipb98_tau_e(iter_scenario)

sensitivities = {}  # param_name -> (delta_tau_minus, delta_tau_plus)
for key in IPB98_CENTRAL:
    # +1 sigma
    params_plus = dict(IPB98_CENTRAL)
    params_plus[key] = IPB98_CENTRAL[key] + IPB98_SIGMA[key]
    tau_plus = ipb98_tau_e(iter_scenario, params_plus)

    # -1 sigma
    params_minus = dict(IPB98_CENTRAL)
    params_minus[key] = IPB98_CENTRAL[key] - IPB98_SIGMA[key]
    tau_minus = ipb98_tau_e(iter_scenario, params_minus)

    delta_plus = (tau_plus - tau_nominal) / tau_nominal * 100  # percent
    delta_minus = (tau_minus - tau_nominal) / tau_nominal * 100
    sensitivities[key] = (delta_minus, delta_plus)

# Sort by total swing (|delta_minus| + |delta_plus|)
sorted_keys = sorted(sensitivities.keys(),
                     key=lambda k: abs(sensitivities[k][0]) + abs(sensitivities[k][1]))

# --- Tornado chart ---
fig, ax = plt.subplots(figsize=(10, 7))

y_pos = np.arange(len(sorted_keys))
delta_neg = [sensitivities[k][0] for k in sorted_keys]
delta_pos = [sensitivities[k][1] for k in sorted_keys]

# Plot negative perturbation bars (going left)
ax.barh(y_pos, delta_neg, height=0.6, color='#4c72b0', alpha=0.8,
        label=r'$-1\sigma$ perturbation')
# Plot positive perturbation bars (going right)
ax.barh(y_pos, delta_pos, height=0.6, color='#dd8452', alpha=0.8,
        label=r'$+1\sigma$ perturbation')

ax.set_yticks(y_pos)
# Format labels nicely
nice_labels = []
for k in sorted_keys:
    sigma_str = f'{IPB98_CENTRAL[k]:.4f} +/- {IPB98_SIGMA[k]:.4f}'
    nice_labels.append(f'{k}  ({sigma_str})')
ax.set_yticklabels(nice_labels, fontsize=10)
ax.set_xlabel(r'Change in $\tau_E$ (%)', fontsize=12)
ax.set_title(r'Tornado Chart: Sensitivity of $\tau_E$ to IPB98 Exponents'
             '\n' r'(ITER Baseline, OAT $\pm 1\sigma$)', fontsize=13)
ax.axvline(0, color='black', linewidth=0.8)
ax.legend(fontsize=10, loc='lower right')
plt.tight_layout()
plt.show()

# Print numeric values
print("\nSensitivity of tau_E to each IPB98 exponent (+/- 1 sigma):")
print(f"{'Parameter':<18} {'delta(-1s) (%)':>15} {'delta(+1s) (%)':>15} {'Total swing':>12}")
print("-" * 62)
for k in reversed(sorted_keys):  # most sensitive at top
    dm, dp = sensitivities[k]
    print(f"{k:<18} {dm:>+14.2f}% {dp:>+14.2f}% {abs(dm)+abs(dp):>11.2f}%")

Summary¶

Key Takeaways¶

  1. Scaling-law uncertainties are not negligible. The IPB98(y,2) exponents carry $\sim 3$--$8\%$ relative uncertainties, which propagate into $\sim 20$--$40\%$ uncertainty on $Q$ depending on the scenario.

  2. Full-chain UQ is essential. Adding transport ($\chi_{\mathrm{gB}}$, pedestal height) and equilibrium (boundary shape) uncertainties widens the confidence interval significantly beyond the scaling-law-only estimate.

  3. $\alpha_R$ and $\alpha_I$ dominate the error budget. The size-scaling exponent ($\alpha_R = 1.97 \pm 0.08$) and the current exponent ($\alpha_I = 0.93 \pm 0.04$) are the leading contributors to $\tau_E$ uncertainty. Future experiments at ITER-relevant parameters will tighten these constraints most effectively.

  4. SPARC's compact design produces lower absolute fusion power but with narrower relative uncertainty bands on $Q$, because the strong-field scaling partially decouples from the most uncertain exponents.

  5. Monte Carlo UQ is computationally cheap. Running 5000 full-chain samples takes only seconds on a laptop, making it practical to include in every prediction workflow.

Confidence Intervals at a Glance¶

| Machine | $Q$ (p5) | $Q$ (p50) | $Q$ (p95) |
|---|---|---|---|
| ITER | *see output above* | | |
| SPARC | *see output above* | | |

References¶

  1. ITER Physics Expert Group, "ITER Physics Basis," Nucl. Fusion 39 (1999) 2175.
  2. G. Verdoolaege et al., "Robust regression of the ITER multi-machine database," Nucl. Fusion 61 (2021) 076006.
  3. P. Rodriguez-Fernandez et al., "SPARC design studies," J. Plasma Phys. 86 (2020) 865860503.
  4. SCPN Fusion Core v3.5.0 — scpn_fusion.core.uncertainty module.

Notebook: 10_uncertainty_quantification.ipynb | SCPN Fusion Core v3.5.0