
Commit

Added week 5 material
wmutschl committed Nov 15, 2024
1 parent 0bb89c1 commit 59de788
Showing 11 changed files with 344 additions and 135 deletions.
14 changes: 13 additions & 1 deletion .github/workflows/dynare-6.2-matlab-r2024b-macos.yml
@@ -83,4 +83,16 @@ jobs:
cd("progs/matlab");
AR4LagSelection;
portmanteauTest;
bootstrapCIAR1;
- name: Run week 6 codes
uses: matlab-actions/run-command@v2
with:
command: |
addpath("Dynare-6.2-arm64/matlab");
cd("progs/matlab");
matrixAlgebraEigenvalues;
matrixAlgebraKroneckerFormula;
matrixAlgebraRotation;
matrixAlgebraCholesky;
matrixAlgebraLyapunovComparison;
12 changes: 12 additions & 0 deletions .github/workflows/dynare-6.2-matlab-r2024b-ubuntu.yml
@@ -113,3 +113,15 @@ jobs:
AR4LagSelection;
portmanteauTest;
bootstrapCIAR1;
- name: Run week 6 codes
uses: matlab-actions/run-command@v2
with:
command: |
addpath("dynare/matlab");
cd("progs/matlab");
matrixAlgebraEigenvalues;
matrixAlgebraKroneckerFormula;
matrixAlgebraRotation;
matrixAlgebraCholesky;
matrixAlgebraLyapunovComparison;
12 changes: 12 additions & 0 deletions .github/workflows/dynare-6.2-matlab-r2024b-windows.yml
@@ -75,3 +75,15 @@ jobs:
AR4LagSelection;
portmanteauTest;
bootstrapCIAR1;
- name: Run week 6 codes
uses: matlab-actions/run-command@v2
with:
command: |
addpath("D:\hostedtoolcache\windows\dynare-6.0\matlab");
cd("progs/matlab");
matrixAlgebraEigenvalues;
matrixAlgebraKroneckerFormula;
matrixAlgebraRotation;
matrixAlgebraCholesky;
matrixAlgebraLyapunovComparison;
10 changes: 6 additions & 4 deletions README.md
@@ -129,7 +129,6 @@ Please feel free to use this for teaching or learning purposes; however, taking
</details>


<!---
<details>
<summary> Week 6: Multivariate Time Series Concepts</summary>

@@ -140,17 +139,20 @@ Familiarize yourself with
* important matrix concepts such as Eigenvalues, Kronecker product, orthogonality, rotation matrices, Cholesky decomposition and Lyapunov equations
* multivariate notation and dimensions of vectors and matrices for VAR(p) models
* autocovariances, stability and covariance-stationarity in multivariate VAR(p) models
* the companion form
* the companion form of a VAR(p) model

### To Do

* [x] Review the solutions of [last week's exercises](https://github.com/wmutschl/Quantitative-Macroeconomics/releases/latest/download/week_5.pdf) (except the bootstrap one) and write down all your questions
* [x] Review the solutions of [last week's exercises](https://github.com/wmutschl/Quantitative-Macroeconomics/releases/latest/download/week_5.pdf) and write down all your questions
* [x] Read Kilian and Lütkepohl (2007, Ch. 2.2) and Lütkepohl (2005, Chapter 2 and Appendix A); make note of all the aspects and concepts that you are not familiar with or that you find difficult to understand
* [x] Do exercise sheet 6 (the solutions are already available)
* [x] Do exercise 1 of week 6
* [x] If you have questions, get in touch with me via email or (better) [schedule a meeting](https://schedule.mutschler.eu)

</details>
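As a concrete illustration of two of the matrix concepts listed for week 6 (orthogonality/rotation matrices and the Cholesky decomposition), here is a minimal MATLAB sketch with arbitrary example values; it is not taken from the course scripts run by the workflows above.

theta = pi/6;                      % arbitrary rotation angle
R = [cos(theta) -sin(theta);
     sin(theta)  cos(theta)];      % 2x2 rotation matrix
disp(R'*R)                         % identity matrix, so R is orthogonal (inv(R) = R')
Sigma = [1.0 0.5;
         0.5 2.0];                 % symmetric positive definite covariance matrix
P = chol(Sigma, 'lower');          % lower-triangular Cholesky factor
disp(P*P' - Sigma)                 % (numerically) zero: P*P' reproduces Sigma
disp(norm(P*R*(P*R)' - Sigma))     % post-multiplying P by any rotation leaves Sigma unchanged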


<!---
<details>
<summary> Week 7: Estimating VAR model with OLS and ML; The identification problem in SVAR models</summary>
17 changes: 9 additions & 8 deletions exercises/dimensions_and_var_one_representation.tex
@@ -1,6 +1,6 @@
\section[Dimensions and VAR(1) representation]{Dimensions and VAR(1) representation\label{ex:DimensionsAndVAROneRepresentation}}
\section[Dimensions and VAR{(1)} representation]{Dimensions and VAR{(1)} representation\label{ex:DimensionsAndVAROneRepresentation}}
Let \(y_t\) be a K-dimensional covariance stationary random vector.
Consider the VAR(p)-process
Consider the VAR{(p)}-process
\begin{align*}
y_t = \nu + \sum_{i=1}^p A_i {y_{t-i}} + u_t
\end{align*}
@@ -10,22 +10,23 @@
\item Assume that \(E(u_t) = 0; E(u_t u_t') = \Sigma_u\) with \(\Sigma_u\) being symmetric and positive definite.
Which additional assumptions do we need to assure that \(u_t\) is a multivariate white noise process?

\item Consider a VAR(2) model with \(K=4\) variables and a constant term.
\item Consider a VAR{(2)} model with \(K=4\) variables and a constant term.
How many parameters do we need to estimate?

\item Show how to represent a VAR(3) model as a VAR(1) model. Hint: stack \(y_t, y_{t-1}\) and \(y_{t-2}\) into a vector.
\item Show how to represent a VAR{(3)} model as a VAR{(1)} model.
Hint: stack \(y_t, y_{t-1}\) and \(y_{t-2}\) into a vector.

\item Write a function to compute the \enquote{companion VAR(1) form} of any VAR(p) model with constant term.
\item Write a function to compute the \enquote{companion VAR{(1)} form} of any VAR{(p)} model with constant term.
\end{enumerate}

\paragraph{Readings}
\begin{itemize}
\item \textcite[Ch.~2.2]{Kilian.Lutkepohl_2017_StructuralVectorAutoregressive}
\item \textcite[Ch.~2]{Lutkepohl_2005_NewIntroductionMultiple}
\item \textcite[Ch.~2.2]{Kilian.Lutkepohl_2017_StructuralVectorAutoregressive}
\item \textcite[Ch.~2]{Lutkepohl_2005_NewIntroductionMultiple}
\end{itemize}

\begin{solution}\textbf{Solution to \nameref{ex:DimensionsAndVAROneRepresentation}}
\ifDisplaySolutions
\ifDisplaySolutions%
\input{exercises/dimensions_and_var_one_representation_solution.tex}
\fi
\newpage
27 changes: 14 additions & 13 deletions exercises/dimensions_and_var_one_representation_solution.tex
@@ -1,34 +1,35 @@
\begin{enumerate}
\item \(\nu\) snd \(u_t\) are both K-dimensional vectors: \(\nu,u_t \in \mathbb{R}^{K \times 1}\).
\item \(\nu \) and \(u_t\) are both K-dimensional vectors: \(\nu,u_t \in \mathbb{R}^{K \times 1}\).
\(A_i\) is a \(K \times K\) matrix.

\item We must have \({\Gamma_u(h)} = Cov(u_t, u_{t-h}) = 0_{K\times K}\) for any \(h\neq0\),
that is all autocovariances are zero
(except for \(h=0\) which is the covariance matrix).
Importantly: we do not need a distributional assumption!

\item General: \(K\) constants + \(K^2\cdot p\) autoregressive coefficients + \(K(K+1)/2\) covariance terms (due to symmetry).
Here: \(4+4^2\cdot2+4\cdot(4+1)/2=46\).
Here: \(4 + 4^2 \cdot 2 + 4 \cdot (4+1)/2 = 46\).
That's a lot!
Therefore we will try to restrict some parameters (e.g.\ set equal to zero or by using Bayesian priors)
or consider only small VAR systems, e.g. \(K=3\) or \(p=1\), etc.
or consider only small VAR systems, e.g.\
\(K=3\) or \(p=1\), etc.

\item VAR(3): \(y_t = \nu + A_1 y_{t-1} + A_2 y_{t-2} + A_3 y_{t-3} + u_t\).
\item VAR{(3)}: \(y_t = \nu + A_1 y_{t-1} + A_2 y_{t-2} + A_3 y_{t-3} + u_t\).
Idea: Stack \(y_t, y_{t-1}\) and \(y_{t-2}\) into a vector and note that \(y_{t-1}=y_{t-1}\) and \(y_{t-2}=y_{t-2}\).
That is
\begin{align*}
\underbrace{\begin{bmatrix} y_t\\ y_{t-1} \\ y_{t-2} \end{bmatrix}}_{\widetilde{y}_t}
\underbrace{\begin{bmatrix} y_t \\ y_{t-1} \\ y_{t-2} \end{bmatrix}}_{\widetilde{y}_t}
= \underbrace{\begin{bmatrix} \nu \\ 0 \\ 0 \end{bmatrix}}_{\widetilde{\nu}_t}
+ \underbrace{\begin{bmatrix} A_1 & A_2 & A_3 \\ I & 0 & 0\\ 0&I&0\end{bmatrix}}_{\widetilde{A}}
\underbrace{\begin{bmatrix} y_{t-1}\\ y_{t-2} \\ y_{t-3} \end{bmatrix}}_{\widetilde{y}_{t-1}}
+ \underbrace{\begin{bmatrix} u_t \\ 0 \\ 0\end{bmatrix}}_{\widetilde{u}_t}
+ \underbrace{\begin{bmatrix} A_1 & A_2 & A_3 \\ I & 0 & 0 \\ 0 & I & 0 \end{bmatrix}}_{\widetilde{A}}
\underbrace{\begin{bmatrix} y_{t-1} \\ y_{t-2} \\ y_{t-3} \end{bmatrix}}_{\widetilde{y}_{t-1}}
+ \underbrace{\begin{bmatrix} u_t \\ 0 \\ 0 \end{bmatrix}}_{\widetilde{u}_t}
\end{align*}
where \(I\) is the \(K\)-dimensional identity matrix and \(0\) the \(K\)-dimensional zero matrix.
Therefore: \(\widetilde{y}_t = \widetilde{A} \widetilde{y}_{t-1} + \widetilde{u}_t\).
This is called the Companion Form.
It is particularly useful, when checking the stability and covariance-stationarity properties of VAR(p) processes
as we can simply compute the Eigenvalues of \(\widetilde{A}\)
and check whether all of them are inside the unit circle, i.e.\ between \(-1\) and \(1\).
This is called the \emph{Companion Form}.
It is particularly useful, when checking the stability and covariance-stationarity properties of VAR{(p)} processes
as we can simply compute the Eigenvalues of \(\widetilde{A}\)
and check whether all of them are inside the unit circle, i.e.\ between \(-1\) and \(1\).
No need to find the roots of the general Lag-polynomials.

\item \lstinputlisting[style=Matlab-editor,basicstyle=\mlttfamily,title=\lstname]{progs/matlab/companionForm.m}
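The listing progs/matlab/companionForm.m referenced above is not shown in this diff. Below is a minimal sketch of what such a function might look like, following the derivation in the solution; the function name companionFormSketch and all example values are illustrative and need not match the repository's actual code.

function [Atilde, nutilde] = companionFormSketch(nu, A)
% Companion VAR(1) form of a VAR(p) model with constant term (illustrative sketch).
%   nu : K x 1 constant term
%   A  : K x (K*p) matrix [A_1, A_2, ..., A_p] of autoregressive coefficient matrices
K = size(A, 1);
p = size(A, 2) / K;
Atilde  = [A; eye(K*(p-1)), zeros(K*(p-1), K)];  % identity blocks encode y_{t-1}, ..., y_{t-p+1}
nutilde = [nu; zeros(K*(p-1), 1)];
end

Saved as its own file, the sketch could then be used to check stability of a (hypothetical) VAR(2) via the eigenvalue moduli of the companion matrix:

A1 = [0.5 0.1; 0.0 0.4];  A2 = [0.2 0.0; 0.1 0.3];  % hypothetical coefficient matrices
[Atilde, ~] = companionFormSketch([1; 2], [A1, A2]);
disp(abs(eig(Atilde)))   % all moduli below 1 => stable and covariance-stationary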
28 changes: 14 additions & 14 deletions exercises/matrix_algebra.tex
@@ -15,7 +15,7 @@
\item Consider the matrices D:\@ \(m\times n\), E:\@ \(n\times p\) and F:\@ \(p\times k\).
Show that
\[vec(DEF)=\left(F'\otimes D\right) vec(E),\]
where \(\otimes\) is the Kronecker product and \(vec\) the vectorization operator
where \(\otimes \) is the Kronecker product and \(vec\) the vectorization operator
either on paper or using a symbolic toolbox.

\item Show that R is an orthogonal matrix. Why is this matrix called a rotation matrix?
@@ -30,28 +30,28 @@
\begin{enumerate}
\item the Kronecker product and vectorization
\item the following iterative algorithm:
\begin{align*}
\Sigma_{y,0} &= I, A_0 = A, \Sigma_{u,0} = \Sigma_{u}\\
\Sigma_{y,i+1} &= A_i \Sigma_{y,i} A_i' + \Sigma_{u,i}\\
\Sigma_{u,i+1} &= A_i \Sigma_{u,i} A_i' + \Sigma_{u,i}\\
A_{i+1} &= A_i A_i
\end{align*}
Write a loop until either a maximal number of iterations (say 500) is reached
or each element of \(\Sigma_{y,i+1}-\Sigma_{y,i}\) is less than \(10^{-25}\) in absolute terms.
\begin{align*}
\Sigma_{y,0} &= I, A_0 = A, \Sigma_{u,0} = \Sigma_{u}\\
\Sigma_{y,i+1} &= A_i \Sigma_{y,i} A_i' + \Sigma_{u,i}\\
\Sigma_{u,i+1} &= A_i \Sigma_{u,i} A_i' + \Sigma_{u,i}\\
A_{i+1} &= A_i A_i
\end{align*}
Write a loop until either a maximal number of iterations (say 500) is reached
or each element of \(\Sigma_{y,i+1}-\Sigma_{y,i}\) is less than \(10^{-25}\) in absolute terms.
\item Compare both approaches for A and \(\Sigma_u\) given above.
\end{enumerate}
\end{enumerate}

\paragraph{Readings:}
\begin{itemize}
\item \textcite[Ch.~4.2]{Anderson.McGrattan.Hansen.EtAl_1996_MechanicsFormingEstimating}
\item \textcite[Ch.~6.7]{Anderson.Moore_1979_OptimalFiltering}
\item \textcite[App.~A]{Lutkepohl_2005_NewIntroductionMultiple}
\item \textcite[Ch.~4.10]{Uribe.Schmitt-Grohe_2017_OpenEconomyMacroeconomics}
\item \textcite[Ch.~4.2]{Anderson.McGrattan.Hansen.EtAl_1996_MechanicsFormingEstimating}
\item \textcite[Ch.~6.7]{Anderson.Moore_1979_OptimalFiltering}
\item \textcite[App.~A]{Lutkepohl_2005_NewIntroductionMultiple}
\item \textcite[Ch.~4.10]{Uribe.Schmitt-Grohe_2017_OpenEconomyMacroeconomics}
\end{itemize}

\begin{solution}\textbf{Solution to \nameref{ex:MatrixAlgebra}}
\ifDisplaySolutions
\ifDisplaySolutions%
\input{exercises/matrix_algebra_solution.tex}
\fi
\newpage
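The matrixAlgebra* scripts executed by the new workflow steps are not shown in this diff. Below is a minimal MATLAB sketch of the two approaches to the Lyapunov equation Sigma_y = A*Sigma_y*A' + Sigma_u described in the exercise (the vec/Kronecker closed form and the iterative algorithm), preceded by a quick numerical check of the vec identity; the values for A and Sigma_u are illustrative and not taken from the repository.

% Numerical check of vec(D*E*F) = (F' kron D) * vec(E) with arbitrary random matrices
D = randn(3,2);  E = randn(2,4);  F = randn(4,5);
disp(norm(reshape(D*E*F, [], 1) - kron(F', D) * E(:)))   % approximately zero

% Hypothetical stable VAR(1) coefficient matrix and innovation covariance
A       = [0.5 0.1; 0.4 0.5];
Sigma_u = [0.9 0.2; 0.2 0.5];
K = size(A, 1);

% Approach 1: closed form via vec(A*Sigma_y*A') = (A kron A)*vec(Sigma_y)
Sigma_y1 = reshape((eye(K^2) - kron(A, A)) \ Sigma_u(:), K, K);

% Approach 2: iterative algorithm from the exercise
Sigma_y = eye(K);  A_i = A;  Sigma_u_i = Sigma_u;
for i = 1:500
    Sigma_y_next = A_i * Sigma_y * A_i' + Sigma_u_i;
    if max(abs(Sigma_y_next(:) - Sigma_y(:))) < 1e-25
        Sigma_y = Sigma_y_next;
        break
    end
    Sigma_u_i = A_i * Sigma_u_i * A_i' + Sigma_u_i;
    A_i       = A_i * A_i;
    Sigma_y   = Sigma_y_next;
end

disp(Sigma_y1 - Sigma_y)   % the two approaches agree up to numerical precision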