Consider a function \(f: \mathbb{R}^d\rightarrow \mathbb{R}^d\) and a recursive sequence \((x_n)\) defined by \(x_0\in \mathbb{R}^d\) and \(x_n = f(x_{n-1})\).
We want to compute a fixed point of \(f\) and study its properties.
Today: some methods for the case \(d=1\).
Another day:

Solow growth model:
| | |
|----|----|
| capital accumulation | \(k_t = (1-\delta)k_{t-1} + i_{t-1}\) |
| production | \(y_t = k_t^\alpha\) |
| consumption | \(c_t = (1-{\color{red}s})y_t\) |
| investment | \(i_t = s y_t\) |
For a given value of \({\color{red} s}\in(0,1)\) ( \({\color{red} s}\) is a decision rule), capital follows \[k_{t+1} = f(k_t, {\color{red} s}) = (1-\delta)k_t + {\color{red} s}\, k_t^{\alpha}\]
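The law of motion above can be iterated directly to find the steady-state capital stock. A minimal sketch (the parameter values `alpha`, `delta`, `s` and the initial capital `k0` are illustrative choices, not from the text):

```python
# Iterating the Solow map k_{t+1} = (1-delta)*k_t + s*k_t**alpha.
# Parameter values below are illustrative, not from the lecture.
alpha, delta, s = 0.3, 0.1, 0.2

def f(k):
    """One step of the Solow map: undepreciated capital plus investment s*y."""
    return (1 - delta) * k + s * k**alpha

k = 0.5  # arbitrary initial capital stock
for _ in range(1000):
    k = f(k)

# Closed-form steady state: delta*k = s*k**alpha  =>  k* = (s/delta)**(1/(1-alpha))
k_star = (s / delta) ** (1 / (1 - alpha))
print(k, k_star)  # the iterates converge to k_star
```

The closed form is available here only because the model is so simple; in richer models the same iteration is used without an analytical benchmark.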
Questions:
Basic New Keynesian model (full derivation if curious)
Solving the system: \(\begin{bmatrix}\pi_t \\ y_t \end{bmatrix} = {\color{red} c} z_t\)
The system is forward looking:
\(\mathcal{T}: \underbrace{c_{n}}_{t+1: \; \text{tomorrow}} \rightarrow \underbrace{c_{n+1}}_{t: \text{today}}\) is the time-iteration operator (a.k.a. Coleman operator)
Questions:
:::
🤔 Wait: does a fixed point exist?
In theoretical work, there are many fixed-point theorems to choose from.
For example:
Brouwer fixed-point theorem: assume \(f\) is continuous and there is an interval \([a,b]\) such that \(f([a,b])\subset[a,b]\). Then there exists \(x\in[a,b]\) such that \(f(x)=x\). (There can be many such points.)
Multiple equilibria in the growth model
In the growth model, if we replace the production function \(y=k^{\alpha}\) with a nonconvex/nonmonotonic one, we can get multiple fixed points.
Given \(f: \mathbb{R} \rightarrow \mathbb{R}\) consider the sequence \(x_n = f(x_{n-1})\)
Stability criterion:
Think of the graph of \(y=f(x)\) and the 45-degree line \(y=x\).
To get the intuition about local convergence, assume you have an iterate \(x_n\) close to a fixed point \(x\) and consider the first-order expansion:
\[x_{n+1} - x = f(x_n) - f(x) = f^{\prime}(x) (x_n-x) + o(x_n-x)\]
If one sets aside the error term (a step that can be made fully rigorous), the dynamics of very small perturbations are given by:
\[|x_{n+1} - x| \approx |f^{\prime}(x)| |x_n-x|\]
When \(|f^{\prime}(x)|<1\), the distance to the target decreases at each iteration and we have convergence.
When \(|f^{\prime}(x)|>1\) there is local divergence.
This matches the cobweb-diagram intuition.
When \(|f'(x^*)|=1\), the linear approximation does not tell you whether you converge.
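The criterion is easy to check numerically. A sketch using the standard textbook example \(f(x)=\cos x\) (an illustrative choice, not from the text), whose unique fixed point \(x^\star\approx 0.739\) satisfies \(|f'(x^\star)| = \sin x^\star < 1\):

```python
import math

def f(x):
    return math.cos(x)

# Locate the fixed point by brute-force iteration (converges since |f'| < 1 there).
x_star = 1.0
for _ in range(500):
    x_star = f(x_star)

# The ratio of successive distances to x* approximates |f'(x*)| = sin(x*).
x = 1.0
for _ in range(20):
    x_new = f(x)
    ratio = abs(x_new - x_star) / abs(x - x_star)
    x = x_new
print(ratio, math.sin(x_star))  # the two numbers nearly coincide
```

Measured this way, the per-step contraction factor recovers the derivative at the fixed point, which is exactly the content of the stability criterion.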
Sometimes, we are interested in tweaking the convergence speed: \[x_{n+1} = (1-\lambda) x_n + \lambda f(x_n)\]
Suppose there exists an interval \(I=[a,b]\) such that \(f(I)\subseteq I\) and \[\sup_{x\in I} |f'(x)| \le L < 1.\]
This is the one-dimensional version of the contraction logic used for many “time-iteration” operators in macro.
How do we derive an error bound? Suppose there is \(\overline{\lambda}<1\) such that \(|f^{\prime}|\leq\overline{\lambda}\) on an interval containing all iterates \(x_k\) for \(k\geq k_0\):
\[|x_t - x| \leq |x_t - x_{t+1}| + |x_{t+1} - x_{t+2}| + |x_{t+2} - x_{t+3}| + ... \]
\[|x_t - x| \leq |x_t - x_{t+1}| + |f(x_{t}) - f(x_{t+1})| + |f(x_{t+1}) - f(x_{t+2})| + ... \]
\[|x_t - x| \leq |x_t - x_{t+1}| + \overline{\lambda} |x_t - x_{t+1}| + \overline{\lambda}^2 |x_t - x_{t+1}| + ... \]
\[|x_t - x| \leq \frac{1} {1-\overline{\lambda}} | x_t - x_{t+1} |\]
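This a-posteriori bound can be verified numerically. A sketch, again with \(f(x)=\cos x\) (an illustrative choice): on the interval traversed by the iterates, \(|f'(x)|=|\sin x|\) is bounded by \(\sin(1)\), which plays the role of \(\overline{\lambda}\).

```python
import math

f = math.cos
# Iterates starting from 1.0 stay in [cos(1), 1], where |f'| = |sin| <= sin(1).
lam_bar = math.sin(1.0)

# Accurate fixed point for comparison.
x_star = 1.0
for _ in range(500):
    x_star = f(x_star)

# A few iterations only, then apply the bound |x_t - x| <= |x_t - x_{t+1}| / (1 - lam_bar).
x = 1.0
for _ in range(10):
    x = f(x)

bound = abs(x - f(x)) / (1 - lam_bar)
print(abs(x - x_star), bound)  # the actual error respects the bound
```

The point of the bound is that it involves only observable quantities (two successive iterates and \(\overline{\lambda}\)), so it yields a practical stopping criterion even when \(x\) is unknown.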
\[\frac{|x_t - x_{t-1}|} {|x_{t-1} - x_{t-2}|} \sim |f^{\prime}(x_{t-1})| \]
This corresponds to the case of linear convergence (kind of slow).
When convergence is geometric, we have: \[ \lim_{t\rightarrow \infty}\frac{ x_{t+1}-x}{x_t-x} = \lambda, \qquad 0<|\lambda|<1\]
Which implies:
\[\frac{ x_{t+1}-x}{x_t-x} \sim \frac{ x_{t}-x}{x_{t-1}-x}\]
Take \(x_{t-1}, x_t\) and \(x_{t+1}\) as given and solve for \(x\):
\[x = \frac{x_{t+1}x_{t-1} - x_{t}^2}{x_{t+1}-2x_{t} + x_{t-1}}\]
or after some reordering
\[x = x_{t-1} - \frac{(x_t-x_{t-1})^2}{x_{t+1}-2 x_t + x_{t-1}}\]
It can be shown that the sequence generated by Steffensen’s method converges quadratically, that is,
\(\limsup_{t\rightarrow\infty} \frac{|x_{t+1}-x^{\star}|}{|x_t-x^{\star}|^2} \leq M\) for some \(M\in\mathbb{R}^{+}\)
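The extrapolation formula is one line of code. A sketch of a single Aitken \(\Delta^2\) step on the iterates of \(f(x)=\cos x\) (an illustrative choice, not from the text):

```python
import math

f = math.cos

# Three successive iterates x_{t-1}, x_t, x_{t+1}.
x0 = 1.0
x1 = f(x0)
x2 = f(x1)

# Aitken extrapolation: x = x_{t-1} - (x_t - x_{t-1})^2 / (x_{t+1} - 2 x_t + x_{t-1}).
x_acc = x0 - (x1 - x0) ** 2 / (x2 - 2 * x1 + x0)

# Reference fixed point from plain iteration.
x_star = 1.0
for _ in range(500):
    x_star = f(x_star)

print(abs(x2 - x_star), abs(x_acc - x_star))  # the extrapolated guess is much closer
```

Steffensen’s method simply repeats this step: take the extrapolated point as the new \(x_{t-1}\), generate two fresh iterates, and extrapolate again.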
Rate of convergence of the sequence \(x_t\) towards \(x^{\star}\) is:
Remark: in the case of linear convergence with \(\mu\in(0,1)\),
\[\mu = {\lim}_{t\rightarrow\infty} \frac{|x_{t+1}-x^{\star}|}{|x_{t}-x^{\star}|} = {\lim}_{t\rightarrow\infty} \frac{|x_{t+1}-x_t|}{|x_t-x_{t-1}|}.\]
Moreover, asymptotically one has the useful heuristic bound \[|x_t-x^{\star}| \approx \frac{1}{1-\mu}\,|x_{t+1}-x_t|.\]
Consider the relaxed iteration \[x_{n+1}=g(x_n)=(1-\lambda)x_n+\lambda f(x_n).\]
Locally around a fixed point \(x^*\), \[g'(x^*) = 1-\lambda+\lambda f'(x^*).\]
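This formula shows how damping can stabilize a divergent iteration. A sketch with the hypothetical map \(f(x) = 3-2x\) (an illustrative choice): its fixed point is \(x^*=1\), but \(f'(x^*)=-2\), so plain iteration diverges; with \(\lambda=0.5\) we get \(g'(x^*) = 1-\lambda+\lambda f'(x^*) = -0.5\), and the relaxed iteration converges.

```python
def f(x):
    # Illustrative map with fixed point x* = 1 and f'(x*) = -2 (plain iteration diverges).
    return 3 - 2 * x

lam = 0.5  # damping parameter chosen so that |1 - lam + lam*f'(x*)| = 0.5 < 1

def g(x):
    return (1 - lam) * x + lam * f(x)

x = 0.0
for _ in range(60):
    x = g(x)
print(x)  # close to 1.0, the fixed point
```

More generally, whenever \(f'(x^*)<1\), one can pick \(\lambda\) small enough that \(|1-\lambda+\lambda f'(x^*)|<1\), trading speed for stability.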