Configuration guide¶
This page describes how to configure models in srvar-toolkit.
For a complete list of YAML keys and defaults, see Configuration reference. For backtest scoring conventions, see Evaluation and scoring conventions.
Core objects¶
Most workflows use four objects:
- `Dataset`: data container (values + variable names + time index)
- `ModelSpec`: model structure (lag order, intercept, ELB, stochastic volatility)
- `PriorSpec`: prior family and hyperparameters (NIW, SSVS, BLASSO, or DL)
- `SamplerConfig`: MCMC controls (draws, burn-in, thinning)
YAML configuration (CLI)¶
In addition to the Python API, the CLI supports running a full fit/forecast/plot pipeline from a YAML config:
srvar validate config/demo_config.yaml
srvar run config/demo_config.yaml
# Backtesting (rolling/expanding refit + forecast)
srvar backtest config/backtest_demo_config.yaml
# Fetch FRED data to a cached CSV (then use it in `data.csv_path`)
srvar fetch-fred config/fetch_fred_demo_config.yaml
Where to start¶
- `config/demo_config.yaml`: comment-rich template
- `config/minimal_config.yaml`: minimal runnable config
- `config/fetch_fred_demo_config.yaml`: fetch data from FRED to a cached CSV
Schema overview¶
The top-level keys map directly to the core Python objects:
- `data`: input CSV and variable selection
- `model`: `ModelSpec` (lag order, intercept, optional `elb`, optional `volatility`)
- `prior`: `PriorSpec` (e.g. NIW defaults or legacy Minnesota-style NIW shrinkage)
- `sampler`: `SamplerConfig` (draws/burn-in/thin/seed)
- `forecast` (optional): forecast horizons/draws/quantiles
- `backtest` (optional): rolling/expanding refit settings and forecast horizons
- `evaluation` (optional): backtest evaluation settings (coverage/PIT/CRPS + ELB-censored scoring + metrics export)
- `output`: output directory and which artifacts to save
- `plots` (optional): which variables to plot and quantile bands
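For orientation, a minimal config might look like the sketch below. The top-level keys, `model.p`, `include_intercept`, `prior.family`, `data.csv_path`, and `output.out_dir` all appear elsewhere in this guide; the remaining sub-keys are assumed to mirror the Python constructor arguments, so confirm them against `config/minimal_config.yaml` and the Configuration reference.

```yaml
data:
  csv_path: data/macro.csv   # cached CSV (e.g. from srvar fetch-fred)
model:
  p: 2
  include_intercept: true
prior:
  family: niw
sampler:                      # sub-key names assumed to match SamplerConfig
  draws: 1000
  burn_in: 200
  thin: 2
  seed: 123
output:
  out_dir: out/demo
```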
Output artifacts¶
When you run srvar run, the toolkit writes outputs into output.out_dir (or --out):
- `config.yml` (exact config used)
- `fit_result.npz` (posterior draws)
- `forecast_result.npz` (if forecasting enabled)
- `shadow_rate_*.png`, `volatility_*.png`, `forecast_fan_*.png` (if plot saving enabled)
When you run srvar backtest, the toolkit writes outputs into output.out_dir (or --out):
- `config.yml` (exact config used)
- `metrics.csv` (CRPS/RMSE/MAE + coverage columns)
- `coverage_all.png`, `coverage_<var>.png` (coverage by horizon)
- `pit_<var>_h<h>.png` (PIT histograms for selected variables/horizons)
- `crps_by_horizon.png`
- `backtest_summary.json`
The backtest config is intentionally CLI-first and is designed to be reproducible:
- expanding backtests grow the estimation sample over time
- rolling backtests use a fixed window length (configure `backtest.window`)
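A rolling backtest block might be sketched as follows. Only `backtest.window` is confirmed above; `scheme` and `horizons` are illustrative key names, so check the Configuration reference and `config/backtest_demo_config.yaml` for the authoritative spelling.

```yaml
backtest:
  scheme: rolling   # or "expanding" (key name illustrative)
  window: 120       # fixed estimation window length for rolling refits
  horizons: [1, 4]  # forecast horizons to evaluate (key name illustrative)
```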
Choosing the lag order p¶
- Larger `p` increases the number of regressors `K` and typically increases runtime.
- A common starting point for macro data is `p=4` (quarterly) or `p=12` (monthly), but you should validate the choice using forecast performance.
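The runtime cost is easy to quantify: in an `N`-variable VAR(p), each equation regresses on `N * p` lagged values plus an optional intercept. A quick sanity check:

```python
def num_regressors(n_vars: int, p: int, include_intercept: bool = True) -> int:
    """Number of regressors K per VAR equation: N*p lag terms plus optional intercept."""
    return n_vars * p + (1 if include_intercept else 0)

# A 5-variable quarterly system with p=4 and an intercept:
print(num_regressors(5, 4))  # 21 regressors per equation
```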
Choosing a prior family¶
NIW (conjugate)¶
Use NIW when you want fast, stable inference and do not need variable selection.
- Use `PriorSpec.niw_default(k=..., n=...)` for a simple default prior.
- Use `PriorSpec.niw_minnesota_legacy(p=..., y=..., include_intercept=...)` for the current legacy Minnesota-style NIW shrinkage path.
- Use `PriorSpec.niw_minnesota_canonical(p=..., y=..., include_intercept=...)` when you need equation-specific own-vs-cross Minnesota shrinkage and the model is homoskedastic or diagonal SV.
- Use `PriorSpec.niw_minnesota_tempered(p=..., y=..., include_intercept=..., alpha=0.25)` when you want the experimental legacy-to-canonical bridge on diagonal SV.
`PriorSpec.niw_minnesota(...)` is kept as a backward-compatible alias for `PriorSpec.niw_minnesota_legacy(...)`. `PriorSpec.niw_minnesota_canonical(...)` is an explicit opt-in path; triangular and factor SV remain on the legacy NIW implementation. `PriorSpec.niw_minnesota_tempered(...)` is also opt-in, experimental, and currently restricted to diagonal stochastic volatility.
SSVS¶
Use SSVS when you want posterior inclusion probabilities over predictors.
- Use `PriorSpec.from_ssvs(k=..., n=..., include_intercept=...)`.
- The intercept can be forced to be included (`fix_intercept=True`) when an intercept is present.
BLASSO¶
Use BLASSO when you want continuous shrinkage of coefficient rows.
- Use `PriorSpec.from_blasso(k=..., n=..., include_intercept=..., mode='global'|'adaptive')`.
YAML example:
prior:
family: blasso
blasso:
mode: global
tau_init: 10000
lambda_init: 2.0
DL (Dirichlet–Laplace)¶
Use DL when you want global–local shrinkage over individual VAR coefficients.
- Use `PriorSpec.from_dl(k=..., n=..., include_intercept=...)`.
YAML example:
prior:
family: dl
dl:
abeta: 0.5
dl_scaler: 0.1
Enabling ELB (shadow-rate augmentation)¶
ELB is controlled by ModelSpec(elb=ElbSpec(...)).
Key parameters:
- `ElbSpec.bound`: the bound level
- `ElbSpec.applies_to`: list of variable names to constrain
- `ElbSpec.tol`: tolerance used to decide whether an observation is at the bound
In ELB models, the fitted object may contain:
- `FitResult.latent_dataset`: one latent "shadow" path
- `FitResult.latent_draws`: latent draws across kept MCMC iterations
Forecasting returns both observed and latent predictive draws:
- `ForecastResult.draws`: observed draws (ELB floor applied)
- `ForecastResult.latent_draws`: unconstrained latent draws
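The relationship between the two sets of draws can be sketched with NumPy. This is a stylized illustration of the flooring convention, not the toolkit's internal code; the array shapes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
bound = 0.25

# Hypothetical latent predictive draws for one rate series (draws x horizons)
latent_draws = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))

# Observed draws apply the ELB floor elementwise to the latent draws
observed_draws = np.maximum(latent_draws, bound)

# Above the bound the two coincide; at the bound the latent path may be negative
assert (observed_draws >= bound).all()
```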
Enabling stochastic volatility (SVRW)¶
Stochastic volatility is controlled by ModelSpec(volatility=VolatilitySpec(...)).
The toolkit supports:
- log-volatility dynamics: random walk (`dynamics="rw"`) or AR(1) (`dynamics="ar1"`)
- covariance structure: diagonal residual covariance (`covariance="diagonal"`) or a triangular factorization (`covariance="triangular"`)
In SV models:
`FitResult.h_draws` contains log-volatility state draws.
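The random-walk dynamics behind `dynamics="rw"` can be illustrated with a short simulation. This is a generic sketch of SVRW log-volatility, not the toolkit's sampler; `sigma_eta` and the series length are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
T, sigma_eta = 200, 0.1

# Random-walk log-volatility: h_t = h_{t-1} + eta_t, eta_t ~ N(0, sigma_eta^2)
h = np.cumsum(rng.normal(0.0, sigma_eta, size=T))

# The residual standard deviation at time t is exp(h_t / 2), always positive
vol = np.exp(h / 2)
```

Under `dynamics="ar1"` the recursion would instead mean-revert, e.g. `h_t = mu + phi * (h_{t-1} - mu) + eta_t` with `|phi| < 1`.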
Steady-state VAR parameterization (SSP)¶
SSP replaces the explicit intercept with a steady-state mean vector mu.
Notes:
- SSP requires `include_intercept: true`.
- `model.steady_state.mu0` must have length `N` (number of variables).
- `model.steady_state.v0_mu` can be a scalar or a length-`N` list.
YAML example:
model:
p: 2
include_intercept: true
steady_state:
mu0: [0.02, 0.03] # length N
v0_mu: 0.01 # scalar or list length N
ssvs:
enabled: false
spike_var: 0.0001
slab_var: 0.01
inclusion_prob: 0.5
Sampler configuration¶
SamplerConfig(draws=..., burn_in=..., thin=...) controls MCMC.
Rules of thumb:
- Start with small numbers to smoke-test code (`draws=200`, `burn_in=50`, `thin=2`).
- Increase draws once the model runs and basic diagnostics look stable.
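Under one common convention (not necessarily the toolkit's exact one; check the `SamplerConfig` docs), burn-in iterations are discarded first and every `thin`-th of the remaining draws is stored:

```python
def kept_draws(draws: int, thin: int) -> int:
    """Stored posterior draws if `draws` counts post-burn-in iterations
    and every `thin`-th one is kept (illustrative convention only)."""
    return draws // thin

print(kept_draws(200, 2))  # 100 stored draws from the smoke-test settings
```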
Reproducibility¶
- All user-facing sampling/forecast functions accept `rng: np.random.Generator`.
- Use a fixed seed for reproducibility.
- Prefer passing a dedicated RNG instance rather than relying on global state.
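The standard NumPy seeding pattern looks like this; two generators built from the same seed produce identical draws, which is what makes a fixed-seed run reproducible:

```python
import numpy as np

# Two dedicated Generator instances seeded identically
rng_a = np.random.default_rng(42)
rng_b = np.random.default_rng(42)

draws_a = rng_a.normal(size=5)
draws_b = rng_b.normal(size=5)
assert np.array_equal(draws_a, draws_b)
```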
Evaluation settings (backtest)¶
The evaluation block controls which backtest diagnostics and exports are produced.
Common keys:
- `evaluation.metrics_table`: write `metrics.csv`
- `evaluation.coverage.enabled`: compute/export coverage columns and plots
- `evaluation.crps.enabled`: compute/export CRPS
- `evaluation.pit.enabled`: write PIT histograms
ELB-censored scoring¶
You can optionally apply ELB censoring at evaluation time (typically for interest-rate series), even if you are not using model.elb.
evaluation:
elb_censor:
enabled: true
bound: 0.25
variables: ["FEDFUNDS"]
censor_realized: true
censor_forecasts: false
This floors the selected variables at `bound` when computing metrics/plots.
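The effect of the two toggles in the example above can be sketched with NumPy; the series values here are hypothetical, and this only illustrates the flooring arithmetic, not the toolkit's evaluation code:

```python
import numpy as np

bound = 0.25
realized = np.array([0.10, 0.25, 1.50])       # hypothetical FEDFUNDS realizations
forecast_mean = np.array([0.05, 0.30, 2.00])  # hypothetical point forecasts

# censor_realized: true  -> floor the realized series before scoring
realized_scored = np.maximum(realized, bound)

# censor_forecasts: false -> forecasts enter the metrics untouched
forecast_scored = forecast_mean
```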