Main approaches
Often our models involve complex sets of equations that we have to differentiate.
There are several approaches:
- Manual
- Finite Differences
- Symbolic Differentiation
- Automatic Differentiation
\[ \begin{aligned} f''(x) & \approx & \frac{f'(x)-f'(x-\epsilon)}{\epsilon} \approx \frac{(f(x+\epsilon)-f(x))-(f(x)-f(x-\epsilon))}{\epsilon^2} \\ & = & \frac{f(x+\epsilon)-2f(x)+f(x-\epsilon)}{\epsilon^2} \end{aligned} \]
- precision: \(o(\epsilon)\)
- Generalizes to higher orders but becomes more and more inaccurate
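As an illustration (in Python rather than Julia, and with step sizes chosen for this example), the two formulas above can be coded directly:

```python
# Finite-difference approximations of f'(x) and f''(x).
# The step size eps trades off truncation error against round-off error.
def fprime(f, x, eps=1e-6):
    # forward difference, accurate to O(eps)
    return (f(x + eps) - f(x)) / eps

def fsecond(f, x, eps=1e-4):
    # second difference, as in the formula above
    return (f(x + eps) - 2 * f(x) + f(x - eps)) / eps**2

f = lambda x: x**3
print(fprime(f, 2.0))   # exact value: 12
print(fsecond(f, 2.0))  # exact value: 12
```

Note that `eps` cannot be taken arbitrarily small: below a threshold, floating-point cancellation in the numerator dominates and the approximation degrades.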
Two main libraries:
Example using Symbolics:
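The Symbolics.jl code itself is not reproduced here; as a language-agnostic sketch of what symbolic differentiation does under the hood (hand-rolled expression trees, not Symbolics' actual API), consider:

```python
# Minimal symbolic differentiation on tiny expression trees (illustration only;
# real libraries like Symbolics.jl are far more complete).
# An expression is 'x', a number, or a tuple ('+'|'*', left, right).
def diff(e):
    if e == 'x':
        return 1
    if isinstance(e, (int, float)):
        return 0
    op, a, b = e
    if op == '+':  # sum rule
        return ('+', diff(a), diff(b))
    if op == '*':  # product rule
        return ('+', ('*', diff(a), b), ('*', a, diff(b)))
    raise ValueError(op)

def evaluate(e, x):
    if e == 'x':
        return x
    if isinstance(e, (int, float)):
        return e
    op, a, b = e
    va, vb = evaluate(a, x), evaluate(b, x)
    return va + vb if op == '+' else va * vb

expr = ('*', 'x', ('*', 'x', 'x'))   # x^3
print(evaluate(diff(expr), 2.0))     # 3x^2 at x=2 -> 12.0
```

The output of `diff` is itself an expression, which is what gives symbolic differentiation its mathematical insight, but repeated application can make expressions grow quickly.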
- Does not provide mathematical insight, but solves the other problems
- Can differentiate any piece of code
- Two flavours: forward-mode and reverse-mode
Consider this simple function
Instead of rewriting source code, we can use dual numbers to perform exactly the same calculations.
Try it on function f. What do we miss?
This approach (and automatic differentiation in general) is compatible with control flow operations (if, while, …)
Let’s see it with the dual numbers defined by ForwardDiff library:
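A minimal sketch of the idea behind ForwardDiff's dual numbers (in Python, with only `+` and `*` implemented) shows why control flow poses no problem — branches simply execute on dual values:

```python
# Forward-mode AD with dual numbers: each value carries (v, d) =
# (value, derivative), and every operation propagates both by the chain rule.
class Dual:
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v * o.v, self.d * o.v + self.v * o.d)
    __rmul__ = __mul__

def derivative(f, x):
    # seed the derivative part with 1 and read it off the result
    return f(Dual(x, 1.0)).d

def f(x):
    # an if-branch: AD differentiates the branch actually taken
    if (x.v if isinstance(x, Dual) else x) > 0:
        return x * x
    return 3 * x

print(derivative(f, 2.0))   # 2x at x=2 -> 4.0
print(derivative(f, -1.0))  # -> 3.0
```

Each call differentiates exactly the code path taken for that input; the derivative is exact (up to floating point), with no step-size parameter.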
There are many flavours of automatic differentiation (check JuliaDiff.org)
General formulation of a linearized model: \[ \begin{eqnarray} A s_t + B x_t + C s_{t+1} + D x_{t+1} & = & 0_{n_x} \\ s_{t+1} & = & E s_t + F x_t \end{eqnarray}\] where:
Remark:
In the neoclassical model: \[\begin{eqnarray} s_t & = & (\Delta z_t, \Delta k_t) \\ x_t & = & (\Delta i_t, \Delta c_t) \end{eqnarray}\]
The linearized system is: \[\begin{eqnarray} A & = & ...\\ B & = & ...\\ C & = & ...\\ D & = & ...\\ E & = & ...\\ F & = & ... \end{eqnarray}\]
What is the solution of our problem?
In the neoclassical model
The states are \(k_t\) and \(z_t\)
The controls \(i_t\) and \(c_t\) must be a function of the states
In the linearized model: \[\Delta i_t =i_z \Delta z_t + i_k \Delta k_t\] \[\Delta c_t =c_z \Delta z_t + c_k \Delta k_t\]
Substituting into the system:
\[\Delta x_t = X \Delta s_t\] \[\Delta s_{t+1} = E \Delta s_t + F X \Delta s_t\] \[\Delta x_{t+1} = X \Delta s_{t+1}\] \[A \Delta s_t + B \Delta x_t + C \Delta s_{t+1} + D \Delta x_{t+1} = 0\]
If we make the full substitution:
\[( (A + B X) + ( D X + C) ( E + F X ) ) \Delta s_t = 0\]
This must be true for any \(\Delta s_t\). We get the special Riccati equation:
\[(A + B {\color{red}{X}} ) + ( D {\color{red}{X}} + C) ( E + F {\color{red}X} ) = 0\]
This is a quadratic matrix equation (\(X\) is 2 by 2):
- it requires a special solution method
- there are multiple solutions: which should we choose?
- today: linear time iteration selects only one solution
The time iteration operator associated with the linearized model maps any function \(\tilde{\varphi}\) to the function \(\varphi\) such that:
\[D \tilde{\varphi}( E s + F \varphi(s) ) + C ( E s + F \varphi(s)) + B \varphi(s) + A s = 0\]
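With linear guesses \(\varphi(s)=Xs\) and \(\tilde{\varphi}(s)=\tilde{X}s\), the operator equation becomes linear in \(X\) and can be iterated numerically. Here is a scalar sketch in Python with made-up coefficients (the model's actual \(A,\dots,F\) are matrices):

```python
# Linear time iteration in the scalar case (illustrative coefficients only).
# With phi(s) = X*s and phi~(s) = Xt*s, the operator equation
#   D*Xt*(E + F*X)*s + C*(E + F*X)*s + B*X*s + A*s = 0
# solves for the update X = -((D*Xt + C)*E + A) / ((D*Xt + C)*F + B).
A, B, C, D, E, F = 0.5, 1.0, -0.3, 0.2, 0.9, 0.1

def T(Xt):
    return -((D * Xt + C) * E + A) / ((D * Xt + C) * F + B)

X = 0.0
for _ in range(100):
    X = T(X)

# At the fixed point, X solves the quadratic (Riccati) equation:
residual = (A + B * X) + (D * X + C) * (E + F * X)
print(X, residual)
```

With these coefficients the quadratic has two roots, and the iteration converges to only one of them — this is how linear time iteration selects a solution among the multiple candidates.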
To find the solution we have iterated this operator in the space of linear functions (with \(\varphi(\overline{s})=\overline{x}\)), but time-iteration can be defined in a more general space.
In particular, we can study it in the space of affine functions \(\varphi(s)=x_0 + x_1 (s - \overline{s})\).
In this space of functions one can show that:
\(\implies\) forward and backward stability are equivalent to Blanchard-Kahn conditions