26  Vector Processes

26.1 Definitions

Let \(\boldsymbol y_t\) be an \(n\times 1\) vector. A vector autoregressive process of order \(p\), or VAR(\(p\)), is defined as

\[ \boldsymbol{y_t = \alpha + \Phi_1 y_{t-1} + \Phi_2 y_{t-2} +\cdots + \Phi_p y_{t-p} + \epsilon_t} \]

where \(\boldsymbol\epsilon_t\) is vector white noise with \(\mathbb E(\boldsymbol\epsilon_t)=\boldsymbol 0\) and \(\mathbb E(\boldsymbol{\epsilon_t\epsilon_t'})=\boldsymbol\Omega\). Written out in full,

\[ \begin{bmatrix} y_{1,t} \\ y_{2,t} \\ \vdots \\y_{n,t} \end{bmatrix} = \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{bmatrix} + \sum_{j=1}^{p} \begin{bmatrix} \phi_{j,11} & \phi_{j,12} & \dots & \phi_{j,1n} \\ \phi_{j,21} & \phi_{j,22} & \dots & \phi_{j,2n} \\ \vdots & \vdots & \ddots & \vdots \\ \phi_{j,n1} & \phi_{j,n2} & \dots & \phi_{j,nn} \\ \end{bmatrix} \begin{bmatrix} y_{1,t-j} \\ y_{2,t-j} \\ \vdots \\y_{n,t-j} \end{bmatrix} + \begin{bmatrix} \epsilon_{1,t} \\ \epsilon_{2,t} \\ \vdots \\ \epsilon_{n,t} \end{bmatrix} \]

Each component of \(\boldsymbol y_t\) corresponds to \(T\) observations in the data, so in terms of data a VAR is represented by a dataset with \(T\) rows and \(n\) columns. To unpack the matrix notation, the first row of the vector system is

\[ \begin{aligned} y_{1,t} = \alpha_1 & + \phi_{1,11} y_{1,t-1} + \phi_{1,12} y_{2,t-1} + \cdots + \phi_{1,1n} y_{n,t-1} \\ & + \phi_{2,11} y_{1,t-2} + \phi_{2,12} y_{2,t-2} + \cdots + \phi_{2,1n} y_{n,t-2} \\ &\:\vdots \\ & + \phi_{p,11} y_{1,t-p} + \phi_{p,12} y_{2,t-p} + \cdots + \phi_{p,1n} y_{n,t-p} \\ & + \epsilon_{1,t} \end{aligned} \]

So each variable in a VAR system is a function of the lags of itself and all other variables. In the spirit of Sims, the VAR system is intended to impose minimal restrictions. All variables are treated as endogenous and influencing each other (though we can also include exogenous variables).
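To make this concrete, here is a minimal NumPy sketch (not taken from the text; the parameter values and names such as `Phi1` and `Omega` are illustrative assumptions) that simulates a bivariate VAR(1) as a \(T\times n\) data matrix and then estimates each equation by OLS:

```python
# Minimal sketch: simulate a bivariate VAR(1)  y_t = alpha + Phi_1 y_{t-1} + eps_t.
# All parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, T = 2, 500
alpha = np.array([0.1, -0.2])
Phi1 = np.array([[0.5, 0.1],
                 [0.2, 0.4]])
Omega = np.array([[1.0, 0.3],
                  [0.3, 1.0]])                      # E[eps_t eps_t'] = Omega

eps = rng.multivariate_normal(np.zeros(n), Omega, size=T)
y = np.zeros((T, n))                                # dataset: T rows, n columns
for t in range(1, T):
    y[t] = alpha + Phi1 @ y[t - 1] + eps[t]

# Equation-by-equation OLS: regress each y_{i,t} on a constant and all lagged variables.
X = np.hstack([np.ones((T - 1, 1)), y[:-1]])        # regressors: [1, y_{1,t-1}, y_{2,t-1}]
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)    # column i: estimates for equation i
```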

26.2 VAR and VMA

We can rewrite the VAR more compactly with the lag operator:

\[ (I_n - \Phi_1 L -\Phi_2 L^2 - \cdots -\Phi_p L^p) y_t = \Phi(L) y_t = \alpha + \epsilon_t. \]

Similarly, we can generalize an MA process to the vector form:

\[ y_t = \mu + \epsilon_t + \Theta_1\epsilon_{t-1} + \Theta_2\epsilon_{t-2} + \cdots = \mu + \Theta(L) \epsilon_t. \]

As with scalar processes, when \(y_t\) is stationary, VAR and VMA processes can be converted to each other by inverting the lag polynomial

\[ \Psi(L) = \Phi^{-1}(L) \]

where the inverse is defined as

\[ [I_n - \Phi_1 L -\Phi_2 L^2 - \cdots][I_n +\Psi_1 L + \Psi_2 L^2 + \cdots] = I_n. \]

Computationally, we can expand the product of the lag polynomials:

\[ I_n + (\Psi_1-\Phi_1)L + (\Psi_2-\Phi_2-\Phi_1\Psi_1)L^2 + \cdots = I_n. \]

Matching the coefficients on each power of \(L\), the coefficients of the inverse lag polynomial can be computed recursively:

\[ \begin{aligned} \Psi_1 &= \Phi_1 \\ \Psi_2 &= \Phi_2 + \Phi_1\Psi_1 \\ &\vdots \\ \Psi_s &= \Phi_1\Psi_{s-1} + \Phi_2\Psi_{s-2} + \cdots + \Phi_p\Psi_{s-p} \end{aligned} \]

with \(\Psi_0=I_n\), \(\Psi_s=0\) for \(s<0\).
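A sketch of this recursion in NumPy (the function name `vma_coefficients` and the example matrices are assumptions introduced here for illustration):

```python
import numpy as np

def vma_coefficients(Phis, horizon):
    """Return [Psi_0, ..., Psi_horizon] given the VAR matrices Phis = [Phi_1, ..., Phi_p]."""
    n = Phis[0].shape[0]
    Psis = [np.eye(n)]                          # Psi_0 = I_n
    for s in range(1, horizon + 1):
        Psi_s = np.zeros((n, n))
        for j, Phi_j in enumerate(Phis, start=1):
            if s - j >= 0:                      # Psi_{s-j} = 0 for s - j < 0
                Psi_s = Psi_s + Phi_j @ Psis[s - j]
        Psis.append(Psi_s)
    return Psis

# Example with a VAR(2): Psi_1 = Phi_1 and Psi_2 = Phi_2 + Phi_1 Psi_1
Phi1 = np.array([[0.5, 0.1], [0.2, 0.4]])
Phi2 = np.array([[0.1, 0.0], [0.0, 0.1]])
Psis = vma_coefficients([Phi1, Phi2], horizon=3)
```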

26.3 Stationary Conditions

Proposition 26.1 A VAR(\(p\)) process is covariance-stationary if all roots of

\[ \det(I_n- \Phi_1 z - \Phi_2 z^2 - \cdots - \Phi_p z^p) = 0 \]

lie outside the unit circle (\(z\) is a complex scalar).

The VAR is said to contain at least one unit root if

\[ \det(I_n- \Phi_1 - \Phi_2 - \cdots - \Phi_p) = 0. \]

Any VMA(\(q\)) process is covariance-stationary.
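Numerically, an equivalent check (a standard result, not stated in the text) stacks the \(\Phi_j\) into the companion matrix of the VAR(1) representation; the process is covariance-stationary when every eigenvalue of that matrix lies strictly inside the unit circle, the eigenvalues being the reciprocals of the roots \(z\) above. A NumPy sketch, where `is_stationary` is an illustrative name:

```python
import numpy as np

def is_stationary(Phis, tol=1e-10):
    """Check covariance-stationarity of a VAR via the companion-matrix eigenvalues."""
    n, p = Phis[0].shape[0], len(Phis)
    F = np.zeros((n * p, n * p))
    F[:n, :] = np.hstack(Phis)                  # top block row: [Phi_1, ..., Phi_p]
    if p > 1:
        F[n:, :-n] = np.eye(n * (p - 1))        # identity blocks shift the lags down
    return bool(np.all(np.abs(np.linalg.eigvals(F)) < 1 - tol))
```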

Example

Consider a two-variable VAR(1) process

\[ \begin{bmatrix}y_{1t}\\y_{2t}\end{bmatrix}= \begin{bmatrix}0.7 & 0.1 \\0.3 & 0.9\end{bmatrix} \begin{bmatrix}y_{1,t-1}\\y_{2,t-1}\end{bmatrix}+ \begin{bmatrix}u_{1t}\\u_{2t}\end{bmatrix} \]

We compute the determinant and set it to zero:

\[ \det\left( \begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}- \begin{bmatrix}0.7z & 0.1z \\0.3z & 0.9z\end{bmatrix} \right)=0 \]

\[ (1-0.7z)(1-0.9z)-0.1z\cdot0.3z = 0.6z^2 - 1.6z + 1 = 0, \]

which solves to \(z_1=1\), \(z_2=\frac{5}{3}\). Since \(z_1=1\) lies on the unit circle rather than outside it, the VAR process has a unit root and is not covariance-stationary.
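A quick numerical check of this example (a NumPy sketch, not part of the original text):

```python
import numpy as np

# Roots of the determinant polynomial 0.6 z^2 - 1.6 z + 1 = 0
# (np.roots takes coefficients from the highest power down).
print(np.roots([0.6, -1.6, 1.0]))       # 5/3 and 1 (order may vary)

# Equivalently, the eigenvalues of Phi_1 are the reciprocals of the roots z.
Phi1 = np.array([[0.7, 0.1],
                 [0.3, 0.9]])
print(np.linalg.eigvals(Phi1))          # 0.6 and 1.0: a unit root
```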

If the whole VAR system is stationary, then every individual component is stationary, but not vice versa (this requires proof). As in the univariate case, under stationarity standard OLS estimation and statistical inference apply. Throughout this chapter we assume stationary VARs; we address unit roots in a VAR at the end of the chapter.

26.4 Autocovariance Matrix

The autocovariance matrix for a vector process is defined as

\[ \Gamma_j = \mathbb E [(y_t - \mu)(y_{t-j} - \mu)']. \]

For demeaned \(y_t\), we have

\[ \begin{aligned} \Gamma_j &= \mathbb{E} (y_t y_{t-j}') = \mathbb{E} \begin{bmatrix}y_{1,t}\\y_{2,t}\\\vdots\\y_{n,t}\end{bmatrix} \begin{bmatrix}y_{1,t-j}&y_{2,t-j}&\dots&y_{n,t-j}\end{bmatrix} \\[1em] &=\begin{bmatrix} \mathbb{E}(y_{1,t}y_{1,t-j}) & \mathbb{E}(y_{1,t}y_{2,t-j}) & \dots & \mathbb{E}(y_{1,t}y_{n,t-j}) \\ \mathbb{E}(y_{2,t}y_{1,t-j}) & \mathbb{E}(y_{2,t}y_{2,t-j}) & \dots & \mathbb{E}(y_{2,t}y_{n,t-j}) \\ \vdots & \vdots & \ddots & \vdots \\ \mathbb{E}(y_{n,t}y_{1,t-j}) & \mathbb{E}(y_{n,t}y_{2,t-j}) & \dots & \mathbb{E}(y_{n,t}y_{n,t-j}) \end{bmatrix} \\[1em] &=\begin{bmatrix} \gamma_{1}(j) & \gamma_{12}(j) & \dots & \gamma_{1n}(j) \\ \gamma_{21}(j) & \gamma_{2}(j) & \dots & \gamma_{2n}(j) \\ \vdots & \vdots & \ddots & \vdots \\ \gamma_{n1}(j) & \gamma_{n2}(j) & \dots & \gamma_{n}(j) \\ \end{bmatrix} \end{aligned} \]

Note that, unlike in the scalar case, \(\Gamma_j \neq \Gamma_{-j}\) in general; instead \(\Gamma_j' = \Gamma_{-j}\).
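A sketch of the sample analogue in NumPy (the helper `autocov` is introduced here for illustration; dividing by \(T\) rather than \(T-j\) is one common convention):

```python
import numpy as np

def autocov(y, j):
    """Sample Gamma_j = (1/T) * sum_t (y_t - ybar)(y_{t-j} - ybar)' for a T x n array y."""
    yc = y - y.mean(axis=0)
    T = yc.shape[0]
    if j >= 0:
        return yc[j:].T @ yc[:T - j] / T        # pairs (y_t, y_{t-j})
    return yc[:T + j].T @ yc[-j:] / T           # j < 0: pairs (y_t, y_{t+|j|})

rng = np.random.default_rng(1)
y = rng.standard_normal((1000, 2))
print(np.allclose(autocov(y, 1).T, autocov(y, -1)))   # Gamma_j' = Gamma_{-j}
```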