Review of concepts and applications covered so far!
- Exploratory Data Analysis and Preprocessing
- Visualization: Plot the raw time-series data to inspect for trends, seasonality, and irregular fluctuations (including rolling means/variances).
- Transformations: Apply appropriate transformations (e.g., logarithmic or Box–Cox) to stabilize variance.
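A minimal sketch of these preprocessing steps using pandas and SciPy; the synthetic monthly series and all variable names are illustrative, not from the original notes:

```python
import numpy as np
import pandas as pd
from scipy import stats

# Synthetic monthly series with trend, seasonality, and growing variance
rng = np.random.default_rng(0)
n = 120
t = np.arange(n)
y = np.exp(0.01 * t + 0.3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.1, n))
s = pd.Series(y, index=pd.date_range("2015-01-01", periods=n, freq="MS"))

# Rolling mean and variance to inspect for trend and heteroskedasticity
roll_mean = s.rolling(window=12).mean()
roll_var = s.rolling(window=12).var()

# Box-Cox transform to stabilize variance (requires strictly positive data);
# lambda is estimated by maximum likelihood
y_bc, lam = stats.boxcox(s.values)
print(f"Estimated Box-Cox lambda: {lam:.3f}")
```

Plotting `s`, `roll_mean`, and `roll_var` (e.g., with `s.plot()`) then makes the trend and the variance growth visible before and after the transform.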
- Decomposition & Stationarity Testing
- Decomposition: Use methods like STL to break the series into trend, seasonal, and remainder components.
- Detrending/Deseasonalizing: Remove or model the trend and seasonal components so that the residual approximates white noise.
- Stationarity Tests: Apply tests (e.g., ADF, KPSS) and difference the series as needed to achieve stationarity.
- Model Identification & Fitting
- Lag Selection: Determine the optimal lag length (using criteria such as AIC or BIC) for an autoregressive or VAR model.
- Dynamic Modeling: Fit an autoregressive model (or a VAR for multivariate data) on the stationary series.
- Granger Causality Testing: Within the VAR framework, test whether past values of one variable significantly improve the prediction of another.
- Impulse Response Analysis
- IRF Computation: After estimating the VAR, compute impulse response functions to trace how shocks to one variable affect the system over time.
- Interpretation: Use IRFs to quantify the duration and magnitude of shock effects, complementing the Granger causality findings.
- Diagnostics & Forecasting
- Residual Analysis: Check that the model’s residuals resemble white noise (e.g., via Ljung–Box tests) to validate the model fit.
- Forecasting: Employ the fitted model to forecast future values, reintroducing trend and seasonal components as needed.