# Model recipes

This page is a cookbook of common model configurations (API + YAML). It is intentionally example-driven; see the Configuration reference for the full schema.
## 1) Baseline BVAR (NIW + legacy Minnesota-style shrinkage)

YAML:

```yaml
model:
  p: 12
  include_intercept: true
prior:
  family: "niw"
  method: "minnesota_legacy"
sampler:
  draws: 2000
  burn_in: 500
  thin: 2
  seed: 123
```

Good starting points:

- `config/minimal_config.yaml`
- `config/demo_config.yaml`
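For intuition about what Minnesota-style shrinkage does, here is a minimal NumPy sketch of the classic Litterman variance formula. This is an illustration, not srvar's actual implementation; the function name and the `lam` / `cross` / `decay` hyperparameters are made up for the example. The idea: prior variances of lag coefficients shrink with the lag and, for cross-variable coefficients, are scaled by the ratio of residual scales.

```python
import numpy as np

def minnesota_prior_var(sigma, p, lam=0.2, cross=0.5, decay=2.0):
    """Illustrative Minnesota-style prior variances for VAR lag coefficients.

    sigma : per-equation residual scales (length N)
    Returns V[l, i, j]: the prior variance of the coefficient on
    variable j at lag l+1 in equation i.
    """
    sigma = np.asarray(sigma, dtype=float)
    N = sigma.size
    V = np.empty((p, N, N))
    for l in range(1, p + 1):
        for i in range(N):
            for j in range(N):
                # Own lags get full weight; cross lags are shrunk harder
                # and rescaled by the relative residual scales.
                scale = 1.0 if i == j else cross * (sigma[i] / sigma[j]) ** 2
                V[l - 1, i, j] = (lam ** 2 / l ** decay) * scale
    return V

V = minnesota_prior_var([1.0, 2.0], p=2)
# Own-lag-1 variance equals lam^2 = 0.04; deeper lags are shrunk by l^decay.
```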
## 2) Legacy Minnesota-style shrinkage + stochastic volatility (linear SV benchmark)

Use SV when forecast uncertainty changes over time.

If you want equation-specific canonical Minnesota shrinkage instead, switch to:

```yaml
prior:
  family: "niw"
  method: "minnesota_canonical"
```

Use the canonical path only for homoskedastic models and diagonal SV; triangular and factor SV should stay on `method: "minnesota_legacy"`.

For a bounded experimental bridge on diagonal SV, use:

```yaml
prior:
  family: "niw"
  method: "minnesota_tempered"
  minnesota:
    tempered_alpha: 0.25
```
YAML (AR(1) log-vol + triangular covariance):

```yaml
model:
  p: 12
  include_intercept: true
  volatility:
    enabled: true
    dynamics: "ar1"
    covariance: "triangular"
    q_prior_var: 1.0
```

Example:

- `config/carriero2025_backtest_15var_linear_sv.yaml`
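To see what `dynamics: "ar1"` means for the volatility path, here is a small NumPy simulation of an AR(1) log-volatility process. It is a sketch of the model being assumed, not srvar code; `mu`, `phi`, and `q` play the roles of the unconditional log-vol mean, the persistence, and the innovation variance controlled by `q_prior_var`.

```python
import numpy as np

rng = np.random.default_rng(123)

def simulate_ar1_logvol(T, mu=-1.0, phi=0.95, q=0.1):
    """Simulate h_t = mu + phi*(h_{t-1} - mu) + sqrt(q)*eta_t, eta_t ~ N(0, 1).

    exp(h_t / 2) is the time-varying residual standard deviation.
    """
    h = np.empty(T)
    # Initialize from the stationary distribution of the AR(1).
    h[0] = mu + np.sqrt(q / (1 - phi ** 2)) * rng.standard_normal()
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + np.sqrt(q) * rng.standard_normal()
    return h

h = simulate_ar1_logvol(500)
sd = np.exp(h / 2)  # residual standard deviation path, always positive
```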
## 3) Shadow-rate VAR (ELB augmentation)

Enable ELB when an observed rate is censored at an effective lower bound.

```yaml
model:
  p: 12
  include_intercept: true
  elb:
    enabled: true
    bound: 0.25
    applies_to: ["FEDFUNDS"]
```

If you also want SV:

```yaml
model:
  elb: { enabled: true, bound: 0.25, applies_to: ["FEDFUNDS"] }
  volatility: { enabled: true, dynamics: "ar1", covariance: "triangular" }
```

Example:

- `config/carriero2025_backtest_15var_shadow.yaml`
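The censoring that ELB augmentation undoes is just a floor on the observed series: the latent (shadow) rate can go below the bound, but the observed rate cannot. A one-line NumPy illustration (the function name is ours, not srvar's):

```python
import numpy as np

def apply_elb(latent_rate, bound=0.25):
    """Observed rate = latent (shadow) rate floored at the ELB."""
    return np.maximum(latent_rate, bound)

shadow = np.array([1.5, 0.4, -0.8, -1.2, 0.1])
observed = apply_elb(shadow)
# → array([1.5 , 0.4 , 0.25, 0.25, 0.25])
```

The sampler's job is the inverse problem: treating the sub-bound values of the latent rate as missing data to be drawn from the model.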
## 4) Steady-state VAR (SSP)

SSP replaces the intercept with a steady-state mean `mu`.

```yaml
model:
  p: 2
  include_intercept: true
  steady_state:
    mu0: [0.0, 0.0]  # length N
    v0_mu: 0.1       # scalar or length N
```

Example:

- `config/ssp_demo_config.yaml`
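The steady-state parametrization writes the VAR in deviations from `mu`, i.e. `y_t - mu = A1 (y_{t-1} - mu) + ... + Ap (y_{t-p} - mu) + eps_t`, which is equivalent to an intercept-form VAR with implied intercept `c = (I - A1 - ... - Ap) mu`. A small NumPy check of that identity (the matrices are illustrative):

```python
import numpy as np

# Illustrative N=2, p=2 system: lag matrices A1, A2 and steady-state mean mu.
A1 = np.array([[0.5, 0.1],
               [0.0, 0.4]])
A2 = np.array([[0.2, 0.0],
               [0.1, 0.3]])
mu = np.array([2.0, 1.0])

# Implied intercept of the equivalent intercept-form VAR:
# y_t = c + A1 y_{t-1} + A2 y_{t-2} + eps_t, with c = (I - A1 - A2) mu.
c = (np.eye(2) - A1 - A2) @ mu

# Sanity check: setting y = mu on both sides reproduces mu exactly.
assert np.allclose(c + (A1 + A2) @ mu, mu)
```

This is why priors on `mu` (`mu0`, `v0_mu`) are easier to elicit than priors on the raw intercept: `mu` is directly the long-run mean of each variable.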
## 5) Variable selection and shrinkage (SSVS / BLASSO / DL)

SSVS (spike-and-slab selection over predictors):

```yaml
prior:
  family: "ssvs"
  ssvs:
    spike_var: 0.0001
    slab_var: 100.0
    inclusion_prob: 0.5
    fix_intercept: true
```

Bayesian LASSO:

```yaml
prior:
  family: "blasso"
  blasso:
    mode: "global"
    tau_init: 10000
    lambda_init: 2.0
```

Dirichlet–Laplace:

```yaml
prior:
  family: "dl"
  dl:
    abeta: 0.5
    dl_scaler: 0.1
```
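To make the SSVS settings concrete, here is a NumPy sketch of what the spike-and-slab mixture prior looks like when sampled (illustrative code, not srvar's sampler): each coefficient is excluded (spike, variance `spike_var`) or included (slab, variance `slab_var`) according to `inclusion_prob`.

```python
import numpy as np

rng = np.random.default_rng(0)

def ssvs_prior_draw(size, spike_var=1e-4, slab_var=100.0, inclusion_prob=0.5):
    """Draw coefficients from the SSVS spike-and-slab mixture prior:
    gamma ~ Bernoulli(inclusion_prob) selects slab (included) vs spike
    (excluded); beta | gamma ~ N(0, slab_var) if included, else N(0, spike_var).
    """
    gamma = rng.random(size) < inclusion_prob
    sd = np.where(gamma, np.sqrt(slab_var), np.sqrt(spike_var))
    return gamma, sd * rng.standard_normal(size)

gamma, beta = ssvs_prior_draw(10_000)
# Excluded coefficients are pinned near zero (prior sd 0.01);
# included ones are essentially unshrunk (prior sd 10).
```

The posterior sampler updates `gamma` coefficient by coefficient, so the mixture acts as data-driven variable selection.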
## 6) Backtest scaling recipe (memory-friendly)

For large backtests, prefer streaming metrics and disable plots:

```yaml
output:
  save_plots: false
  store_forecasts_in_memory: false
```

This writes `metrics.csv` without retaining per-origin forecast draws in RAM.
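The reason this scales is that accuracy metrics can be accumulated one forecast origin at a time. A minimal stdlib sketch of the streaming pattern (our own illustration of the idea, not srvar's accumulator):

```python
import math

class StreamingRMSE:
    """Accumulate squared forecast errors origin by origin,
    without retaining the forecast draws themselves."""

    def __init__(self):
        self.n = 0
        self.sse = 0.0

    def update(self, forecast, actual):
        self.sse += (forecast - actual) ** 2
        self.n += 1

    def value(self):
        return math.sqrt(self.sse / self.n)

m = StreamingRMSE()
for f, a in [(1.0, 1.5), (2.0, 1.0), (0.0, 0.5)]:
    m.update(f, a)
# sse = 0.25 + 1.0 + 0.25 = 1.5, so rmse = sqrt(0.5) ≈ 0.7071
```

Memory is O(1) per (variable, horizon) cell regardless of how many origins the backtest covers.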
## 7) Structural IRFs (Cholesky)

Compute Cholesky-identified impulse responses from posterior draws:

```python
from srvar.analysis import irf_cholesky

irf = irf_cholesky(
    fit_res,
    horizons=24,           # includes horizon 0
    shock_scale="one_sd",  # or "unit" for unit-impact normalization
)
```

Notes:

- `ordering=[...]` changes the recursive identification ordering.
- For stochastic volatility models, the impact matrix uses the last volatility state in each draw.
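What the computation does per posterior draw can be sketched in NumPy (an illustration of the standard recursion, not srvar's implementation): build the reduced-form MA coefficients from the lag matrices, then post-multiply by the lower Cholesky factor of the residual covariance.

```python
import numpy as np

def irf_cholesky_numpy(A_list, Sigma, horizons):
    """Cholesky-identified IRFs for a VAR(p).

    A_list : list of (N, N) lag coefficient matrices [A1, ..., Ap]
    Sigma  : (N, N) residual covariance; its lower Cholesky factor is
             the one-standard-deviation impact matrix.
    Returns Theta of shape (horizons + 1, N, N); Theta[h, i, j] is the
    response of variable i to shock j, h steps after impact.
    """
    N = Sigma.shape[0]
    p = len(A_list)
    # Reduced-form MA coefficients: Psi_0 = I,
    # Psi_h = sum_{l=1}^{min(h, p)} A_l Psi_{h-l}.
    Psi = [np.eye(N)]
    for h in range(1, horizons + 1):
        Psi.append(sum(A_list[l - 1] @ Psi[h - l]
                       for l in range(1, min(h, p) + 1)))
    P = np.linalg.cholesky(Sigma)
    return np.stack([Psi_h @ P for Psi_h in Psi])

A1 = np.array([[0.5, 0.0], [0.2, 0.3]])
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
Theta = irf_cholesky_numpy([A1], Sigma, horizons=8)
# Theta[0] is lower triangular: under this ordering, shock 2 does not
# move variable 1 on impact.
```

This also makes the `ordering` note concrete: reordering the variables changes which Cholesky factor you take, hence which impact responses are zeroed.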
## 8) Sign-restricted structural IRFs

Compute sign-restricted structural IRFs by drawing random orthonormal rotations and accepting those that satisfy your sign constraints:

```python
from srvar.analysis import irf_sign_restricted

irf_sr = irf_sign_restricted(
    fit_res,
    horizons=24,
    draws=1000,
    max_attempts=5000,
    restrictions={
        # Shock names are in dict insertion order; shocks not listed are unrestricted.
        "mp": {
            "FEDFUNDS": {0: "+", 1: "+", 2: "+"},
            # Object-style spec supports multi-horizon + cumulative restrictions:
            "CPI": {"sign": "-", "horizons": [0, 1, 2], "cumulative": False},
        },
    },
)
```

Notes:

- `restrictions` horizons are IRF horizons (0 = impact response).
- Diagnostics (acceptance rates, attempts) are returned in `irf_sr.metadata`.
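The accept/reject mechanics can be sketched in a few lines of NumPy (illustrative only; srvar's rotation draw and restriction checking may differ in detail): draw a Haar-distributed orthonormal matrix via the QR decomposition of a Gaussian matrix, rotate the Cholesky impact matrix, and keep the draw if the required signs hold.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_rotation(N):
    """Random orthonormal matrix via QR of a Gaussian draw, with the
    column-sign convention that makes the draw Haar-distributed."""
    Q, R = np.linalg.qr(rng.standard_normal((N, N)))
    return Q * np.sign(np.diag(R))

def accept(impact, restrictions):
    """restrictions: list of (row, col, sign) on the impact matrix."""
    return all(np.sign(impact[i, j]) == s for i, j, s in restrictions)

P = np.linalg.cholesky(np.array([[1.0, 0.3], [0.3, 0.5]]))
# Require shock 0 to move variable 0 up and variable 1 down on impact.
restrictions = [(0, 0, +1), (1, 0, -1)]
accepted = []
for _ in range(2000):
    Q = draw_rotation(2)
    impact = P @ Q  # candidate structural impact matrix
    if accept(impact, restrictions):
        accepted.append(impact)
```

The acceptance rate of this loop is exactly the kind of diagnostic reported in `irf_sr.metadata`; very low rates signal restrictions the model struggles to satisfy.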
## 9) FEVD (Cholesky)

Compute forecast error variance decompositions from Cholesky-identified posterior draws:

```python
from srvar.analysis import fevd_cholesky

fevd = fevd_cholesky(
    fit_res,
    horizons=[1, 4, 8, 12],  # steps ahead (1-indexed)
)
```

Notes:

- FEVD at horizon `h` uses IRF horizons `0..h-1` (cumulative sum of squared responses).
- `shock_scale="one_sd"` vs `"unit"` matches `srvar.analysis.irf_cholesky`.
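The cumulative-sum-of-squared-responses rule in the note above is short enough to write out directly. A NumPy sketch (illustrative, not srvar's code) that turns a structural IRF array into variance shares:

```python
import numpy as np

def fevd_from_irf(Theta, h):
    """FEVD at horizon h from structural IRFs Theta of shape (H+1, N, N):

    share[i, j] = sum_{s=0}^{h-1} Theta[s, i, j]^2
                  / sum_k sum_{s=0}^{h-1} Theta[s, i, k]^2

    i.e. the fraction of variable i's h-step forecast error variance
    attributed to shock j. Shares sum to one across shocks.
    """
    num = (Theta[:h] ** 2).sum(axis=0)  # (N, N): variable i, shock j
    return num / num.sum(axis=1, keepdims=True)

# Works with any structural IRF array; tiny illustrative numbers here.
Theta = np.array([[[1.0, 0.0], [0.5, 0.5]],
                  [[0.8, 0.1], [0.4, 0.4]]])
share = fevd_from_irf(Theta, h=2)
```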
## 10) Historical decomposition (Cholesky)

Decompose historical movements into baseline dynamics and structural shock contributions:

```python
from srvar.analysis import historical_decomposition_cholesky

hd = historical_decomposition_cholesky(
    fit_res,
    draws=200,
)
```

Notes:

- The decomposition is computed for dates `t = p..T-1` (the first `p` observations are lag initial conditions).
- For ELB models, this defaults to using the latent dataset (`fit_res.latent_dataset`) unless `use_latent=False`.
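The shock-contribution part of a historical decomposition is a convolution of structural IRFs with the recovered structural shocks. A NumPy sketch of that piece (illustrative; it ignores the baseline/initial-condition term that the full decomposition also carries):

```python
import numpy as np

def shock_contributions(Theta, eps):
    """Contribution of each structural shock to y_t, given structural
    IRFs Theta of shape (H+1, N, N) and shocks eps of shape (T, N):

    contrib[t, :, j] = sum_{s=0}^{min(t, H)} Theta[s, :, j] * eps[t-s, j]
    """
    H1, N, _ = Theta.shape
    T = eps.shape[0]
    contrib = np.zeros((T, N, N))
    for t in range(T):
        for s in range(min(t + 1, H1)):
            # Broadcasting multiplies column j of Theta[s] by eps[t-s, j].
            contrib[t] += Theta[s] * eps[t - s]
    return contrib

rng = np.random.default_rng(7)
H1, N, T = 6, 2, 50
Theta = rng.standard_normal((H1, N, N)) * 0.3
eps = rng.standard_normal((T, N))
contrib = shock_contributions(Theta, eps)

# Summing over shocks recovers the (truncated) MA representation of y_t.
y = np.stack([sum(Theta[s] @ eps[t - s] for s in range(min(t + 1, H1)))
              for t in range(T)])
assert np.allclose(contrib.sum(axis=2), y)
```

That adding-up check is the defining property of a historical decomposition: shock contributions plus the baseline must reproduce the observed path.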
## 11) Conditional / scenario forecasting (hard constraints)

Generate predictive paths conditional on a future path for selected variables:

```python
from srvar.scenario import conditional_forecast

fc_cond = conditional_forecast(
    fit_res,
    horizons=[1, 4, 8, 12],
    constraints={
        # Horizons are 1-indexed steps ahead: 1 means t+1.
        "FEDFUNDS": {1: 0.25, 2: 0.25, 3: 0.25},
    },
    draws=2000,
)
```

Notes:

- This currently supports homoskedastic (time-invariant covariance) VARs.
- When ELB is enabled, constraints are applied to the latent (unfloored) process used for simulation.
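The homoskedasticity requirement has a simple reason: with a time-invariant covariance, the stacked future path is jointly Gaussian, so hard conditioning is just the textbook conditional-Gaussian formula. A NumPy sketch of that core step (illustrative function and numbers, not srvar's internals):

```python
import numpy as np

def condition_gaussian(mean, cov, idx, values):
    """Condition a joint Gaussian N(mean, cov) on x[idx] == values.
    Returns the free indices and the conditional mean/covariance of
    the unconstrained entries."""
    idx = np.asarray(idx)
    free = np.setdiff1d(np.arange(mean.size), idx)
    S_ff = cov[np.ix_(free, free)]
    S_fc = cov[np.ix_(free, idx)]
    S_cc = cov[np.ix_(idx, idx)]
    K = S_fc @ np.linalg.inv(S_cc)  # regression of free on constrained
    cond_mean = mean[free] + K @ (values - mean[idx])
    cond_cov = S_ff - K @ S_fc.T
    return free, cond_mean, cond_cov

# Stacked 2-step-ahead path of a bivariate VAR (illustrative numbers);
# constrain the first variable at t+1 (stacked index 0) to 0.25.
mean = np.array([0.5, 1.0, 0.6, 1.1])
cov = np.eye(4) + 0.2
free, m, C = condition_gaussian(mean, cov, idx=[0], values=np.array([0.25]))
```

Conditional draws are then sampled from `N(m, C)` and reassembled with the constrained values, which is why time-varying (SV) covariances fall outside this machinery.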