Commit

fix more equations (#758)
jmoralez authored Jan 12, 2024
1 parent ddbf943 commit 03ecd00
Showing 3 changed files with 72 additions and 14 deletions.
23 changes: 23 additions & 0 deletions .github/workflows/mintlify-update.yaml
@@ -0,0 +1,23 @@
name: Trigger Mintlify Update

on:
  push:
    branches: docs
  workflow_dispatch:

jobs:
  trigger-mintlify:
    runs-on: ubuntu-latest
    name: Trigger mintlify workflow
    steps:
      - name: Trigger mintlify workflow
        uses: actions/github-script@v7
        with:
          github-token: ${{ secrets.DOCS_WORKFLOW_TOKEN }}
          script: |
            await github.rest.actions.createWorkflowDispatch({
              owner: 'nixtla',
              repo: 'docs',
              workflow_id: 'mintlify-action.yml',
              ref: 'main',
            });
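The dispatch this `github-script` step issues corresponds to the GitHub REST endpoint `POST /repos/{owner}/{repo}/actions/workflows/{workflow_id}/dispatches`. Below is a minimal Python sketch of the same call, purely for illustration; it assumes a token with workflow permissions is available in a `GITHUB_TOKEN` environment variable (standing in for the `DOCS_WORKFLOW_TOKEN` secret above) and is not part of the workflow itself.

```python
import os

import requests

# Same dispatch as the github-script step above, via the REST API.
# GITHUB_TOKEN is an assumed environment variable holding a token that is
# allowed to run workflows in nixtla/docs.
resp = requests.post(
    "https://api.github.com/repos/nixtla/docs/actions/workflows/mintlify-action.yml/dispatches",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    },
    json={"ref": "main"},
    timeout=30,
)
resp.raise_for_status()  # GitHub answers 204 No Content on success
```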
51 changes: 43 additions & 8 deletions nbs/docs/models/DynamicOptimizedTheta.ipynb
Original file line number Diff line number Diff line change
@@ -75,16 +75,49 @@
"\n",
"So far, we have set $A_n$ and $B_n$ as fixed coefficients for all $t$. We will now consider these coefficients as dynamic functions; i.e., for updating the state $t$ to $t+1$ we will only consider the prior information $Y_1, \\cdots, Y_t$ when computing $A_t$ and $B_t$. Hence, We replace $A_n$ and $B_n$ in equations (3) and (4) of the notebook of the `optimized theta model` with $A_t$ and $B_t$. Then, after applying the new Eq. (4) to the new Eq. (3) and rewriting the result at time $t$ with $h=1$, we have\n",
"\n",
"$$\\hat Y_{t+1|t}=\\ell_{t}+(1-\\frac{1}{\\theta}) \\{(1-\\alpha)^t A_t +[\\frac{1-(1-\\alpha)^{t+1}}{\\alpha}] B_t \\tag{1} \\}$$\n",
"$$\n",
"\\begin{equation}\n",
" \\hat Y_{t+1|t}=\\ell_{t} + \\left(1 - \\frac{1}{\\theta} \\right) \\left( (1-\\alpha)^t A_t + \\left[ \\frac{1 - ( 1 - \\alpha)^{t+1}}{\\alpha} \\right] B_t \\tag 1 \\right)\n",
"\\end{equation}\n",
"$$\n",
"\n",
"Then, assuming additive one-step-ahead errors and rewriting Eqs. (3) (see AutoTheta Model), (1), we obtain\n",
"\n",
"$$Y_t=\\mu_t +\\varepsilon_t \\tag{2}$$\n",
"$$\\mu_t=\\ell_{t-1}+(1-\\frac{1}{\\theta}) [(1-\\alpha)^{t-1} A_{t-1} +(\\frac{1-(1-\\alpha)^{t}}{\\alpha} ) B_{t-1} \\tag{3} ]$$\n",
"$$\\ell_{t}=\\alpha Y_t+ (1-\\alpha) \\ell_{t-1} \\tag {4}$$\n",
"$$A_t=\\bar Y_t - \\frac{t+1}{2} B_t \\tag {5}$$\n",
"$$B_t=\\frac{1}{t+1} [(t-2) B_{t-1} +\\frac{6}{t} (Y_t - \\bar Y_{t-1}) ] \\tag {6}$$\n",
"$$\\bar Y_t=\\frac{1}{t} [(t-1) \\bar Y_{t-1} + Y_t ] \\tag {7}$$\n",
"$$\n",
"\\begin{equation}\n",
" Y_t=\\mu_t +\\varepsilon_t \\tag 2\n",
"\\end{equation}\n",
"$$\n",
"\n",
"$$\n",
"\\begin{equation}\n",
" \\mu_t=\\ell_{t-1}+ \\left(1-\\frac{1}{\\theta}\\right) \\left( \\left(1-\\alpha\\right)^{t-1} A_{t-1} + \\left(\\frac{1-(1-\\alpha)^{t}}{\\alpha}\\right) B_{t-1} \\tag 3 \\right)\n",
"\\end{equation}\n",
"$$\n",
"\n",
"$$\n",
"\\begin{equation}\n",
" \\ell_{t}=\\alpha Y_t+ (1-\\alpha) \\ell_{t-1} \\tag 4\n",
"\\end{equation}\n",
"$$\n",
"\n",
"$$\n",
"\\begin{equation}\n",
" A_t=\\bar Y_t - \\frac{t+1}{2} B_t \\tag 5\n",
"\\end{equation}\n",
"$$\n",
"\n",
"$$\n",
"\\begin{equation}\n",
" B_t=\\frac{1}{t+1} \\left((t-2) B_{t-1} +\\frac{6}{t} (Y_t - \\bar Y_{t-1}) \\right) \\tag 6\n",
"\\end{equation}\n",
"$$\n",
"\n",
"$$\n",
"\\begin{equation}\n",
" \\bar Y_t=\\frac{1}{t} \\left((t-1) \\bar Y_{t-1} + Y_t \\right) \\tag 7\n",
"\\end{equation}\n",
"$$\n",
"\n",
"for $t=1, \\cdots ,n$. Eqs. (2), (3), (4), (5), (6), (7) configure a state space model with parameters $\\ell_{0} \\in \\mathbb{R}, \\alpha \\in (0,1)$, and $\\theta \\in [1,\\infty )$. The initialisation of the states is performed assuming $A_0 =B_0=B_1=\\bar Y_0 =0$. From here on, we will refer to this model as the dynamic optimised Theta model (DOTM).\n",
"\n",
@@ -95,7 +128,9 @@
"\n",
"The out-of-sample one-step-ahead forecasts produced by DOTM at origin are given by\n",
"\n",
"$$\\hat Y_{n+1|n}=E[Y_{n+1|Y_1, \\cdots, Y_n} ]=\\ell_{n} +(1-\\frac{1}{\\theta}) \\{(1-\\alpha)^n A_n + [\\frac{1-(1-\\alpha)^{n+1}}{\\alpha}] B_n \\} \\tag{8}$$\n",
"\\begin{equation}\n",
" \\hat Y_{n+1|n}=E[Y_{n+1|Y_1, \\cdots, Y_n} ]=\\ell_{n} + \\left(1-\\frac{1}{\\theta}\\right) \\left( (1-\\alpha)^n A_n + \\left(\\frac{1-(1-\\alpha)^{n+1}}{\\alpha}\\right) B_n \\right) \\tag 8\n",
"\\end{equation}\n",
"\n",
"for a horizon $h \\geq 2$, the forecast $\\hat Y_{n+2|n}, \\cdots , \\hat Y_{n+h|n}$ are computed recursively using Eqs. (3), (4), (5), (6), (7), (8) by replacing the non-observed values $Y_{n+1}, \\cdots , Y_{n+h-1}$ with their expected values $\\hat Y_{n+1|n}, \\cdots , \\hat Y_{n+h-1|n}$. The conditional variance $Var[Y_{n+h}|Y_{1}, \\cdots, Y_n ]$ is hard to write analytically. However, the variance and prediction intervals for $Y_{n+h}$ can be estimated using the bootstrapping technique, where a (usually large) sample of possible values of $Y_{n+h}$ is simulated from the estimated model.\n",
"\n",
12 changes: 6 additions & 6 deletions nbs/docs/models/GARCH.ipynb
@@ -70,14 +70,14 @@
"\n",
"**Definition 1.** A $\\text{GARCH}(p,q)$ model with order $(p≥1,q≥0)$ is of the form\n",
"\n",
"$$\n",
"\\begin{equation}\n",
" \\left\\{\n",
"\t \\begin{array}{ll}\n",
"\t\t X_t =\\sigma_t \\varepsilon_t \\\\\n",
"\t\t \\sigma_{t}^2 =\\omega+ \\sum_{i=1}^{p} \\alpha_i X_{t-i}^2 + \\sum_{j=1}^{q} \\beta_j \\sigma_{t-j}^2 \\\\\n",
"\t \\end{array}\n",
"\t\\right.\n",
" \\begin{cases}\n",
" X_t = \\sigma_t \\varepsilon_t\\\\\n",
" \\sigma_{t}^2 = \\omega + \\sum_{i=1}^{p} \\alpha_i X_{t-i}^2 + \\sum_{j=1}^{q} \\beta_j \\sigma_{t-j}^2\n",
" \\end{cases}\n",
"\\end{equation}\n",
"$$\n",
"\n",
"where $\\omega ≥0,\\alpha_i ≥0,\\beta_j ≥0,\\alpha_p >0$ ,and $\\beta_q >0$ are constants,$\\varepsilon_t \\sim iid(0,1)$, and $\\varepsilon_t$ is independent of $\\{X_k;k ≤ t − 1 \\}$. A stochastic process $X_t$ is called a $\\text{GARCH}(p, q )$ process if it satisfies Eq. (1).\n",
"\n",
