## Tuesday, March 29, 2016

### Comment for Roger

This post is an expanded follow-on comment for Roger Sparks in this thread.

In particular, for an LTI system $A_d = e^{A_c T_s} \approx I + A_c T_s$ (the approximation holding when the matrix norm $\lVert A_c T_s \rVert = \lVert A_c \rVert T_s \ll 1$, ... a generalization of the approximation you may recall from this comment I left you on John Handley's blog). The subscripts $d$ and $c$ denote discrete and continuous time, respectively. Note that $A_d$ is different for every sample period $T_s$ used. $B_d$ is generally more complicated, but can be approximated as $B_d \approx A^{-1}_c (A_d - I) B_c$. This approximation is exact when $\mathbf{u}$ is a "zero order hold" (ZOH) function, meaning it stays constant over each sample period. More generally, for LTI systems with an arbitrary input $\mathbf{u}$ you must integrate:
$$\mathbf{x}_{n+1} = e^{A_c T_s} \mathbf{x}_{n} + \int\limits_{0}^{T_s} e^{A_c (T_s - \tau)} B_c\, \mathbf{u}(\tau + n T_s) d \tau \tag 9$$
and substituting for time-invariant $A_d$ from above:
$$\mathbf{x}_{n+1} = A_d \mathbf{x}_{n} + \int\limits_{0}^{T_s} e^{A_c (T_s - \tau)} B_c\, \mathbf{u}(\tau + n T_s) d \tau \tag {9a}$$
or more generally still, for a time-varying linear system:
$$\mathbf{x}_{n+1} = \Phi_{(n+1) T_s, n T_s} \mathbf{x}_{n} + \int\limits_{n T_s}^{(n+1) T_s} \Phi_{(n+1) T_s, \tau} B_c (\tau) \mathbf{u}(\tau) d \tau \tag {10}$$
and now substituting for a time varying version of the $A_d$ matrix (indexed by the sample period number $n$):
$$\mathbf{x}_{n+1} = A_{dn} \mathbf{x}_{n} + \int\limits_{n T_s}^{(n+1) T_s} \Phi_{(n+1) T_s, \tau} B_c (\tau) \mathbf{u}(\tau) d \tau \tag {10a}$$
where $\Phi_{t_2, t_1}$ is the state transition matrix from time $t_1$ to time $t_2$. Or you can, in principle, avoid the integration by doing a Taylor expansion on $\mathbf{u}$ and finding a separate $B_{di}$ for each $i^{th}$ term. As Brian has previously pointed out, a numerical integration accomplishes much the same thing, only instead of Taylor terms sampled at one time, it uses $\mathbf{u}$ sampled at several times over a single nominal sample period.
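As a concrete sketch of the discretization above: the system matrices below are hypothetical (just for illustration), and the matrix exponential is computed with a truncated Taylor series to keep the sketch self-contained.

```python
import numpy as np

def expm_taylor(M, terms=30):
    # Truncated Taylor series for e^M; adequate when ||M|| is modest
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

# Hypothetical continuous-time LTI system (a damped oscillator)
Ac = np.array([[0.0, 1.0],
               [-1.0, -0.5]])
Bc = np.array([[0.0],
               [1.0]])
Ts = 0.1  # sample period, small enough that ||Ac|| * Ts << 1

Ad = expm_taylor(Ac * Ts)                       # exact: A_d = e^(Ac Ts)
Ad_approx = np.eye(2) + Ac * Ts                 # two-term approximation
Bd = np.linalg.inv(Ac) @ (Ad - np.eye(2)) @ Bc  # exact for a ZOH input
```

With $\lVert A_c \rVert T_s$ this small the two-term approximation tracks the exact $A_d$ closely, and the $B_d$ formula is the exact ZOH result described above.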

Additionally, as I mentioned in a comment above, some non-linear dynamic systems can be approximated over each sample period as a linear system. For example, you may have a non-linear system such as $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x},\mathbf{u},t)$, where $\mathbf{f}, \mathbf{x}$ and $\mathbf{u}$ are generally vectors. One way to accomplish this is to approximate $\mathbf{f}$ with $A_c$ and $B_c$, where these matrices are the Jacobians of $\mathbf{f}$ wrt $\mathbf{x}$ (i.e. $A_c = \partial \mathbf{f}/\partial\mathbf{x}$) and $\mathbf{u}$ (i.e. $B_c = \partial \mathbf{f}/\partial\mathbf{u}$) respectively, and to rewrite the differential equation in terms of a small perturbation of $\mathbf{x}$ (call it $\delta{\mathbf{x}}$) about a nominal solution $\mathbf{x}^*$, i.e. $\delta{\mathbf{x}} \equiv \mathbf{x} - \mathbf{x}^*$. Similarly define $\delta{\mathbf{u}} \equiv \mathbf{u} - \mathbf{u}^*$. Then we can write an approximate perturbation version of this non-linear differential equation: $\delta{\dot{\mathbf{x}}} = A_c \delta\mathbf{x} + B_c \delta\mathbf{u}$. The discrete time approximation I gave above ($A_d \approx I + A_c T_s$) can be seen in this light as a two term vector Taylor expansion of $\mathbf{f}$ wrt $\mathbf{x}$, i.e. $A_d \approx I + (\partial\mathbf{f}/\partial\mathbf{x}) T_s$. Note that the "region of validity" for $A_c$ and $B_c$ may be small, and typically they are recalculated around different nominal values $\mathbf{x}^*$, $\mathbf{u}^*$ and time $t$ as the system evolves in time. An application might be, for example, a model of perturbations of a rocket about a nominal desired trajectory for purposes of implementing a feedback control system to keep it on course. As long as it is never allowed to wander too far off course, the approximation may be adequate, provided the system is "re-linearized" on a regular basis.
This might suggest an economics application of a central bank "feedback control" rule for setting interest rates or determining the "high powered" money supply to "keep the economy on course" about some non-linear pre-determined solution to a set of non-linear differential equations, based on feedback from other measured macro variables, such as the inflation rate, the NGDP level or the unemployment rate.
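As a sketch of the linearization recipe, here is a hypothetical pendulum-with-torque system of my own (not from the post), whose Jacobians happen to be available in closed form:

```python
import numpy as np

# Hypothetical nonlinear system: xdot = f(x, u), with x = [angle, rate]
def f(x, u):
    return np.array([x[1], -np.sin(x[0]) + u])

def linearize(x_star, u_star):
    # Ac = df/dx and Bc = df/du, evaluated at the nominal point
    Ac = np.array([[0.0, 1.0],
                   [-np.cos(x_star[0]), 0.0]])
    Bc = np.array([[0.0],
                   [1.0]])
    return Ac, Bc

# Relinearize about a nominal (x*, u*); perturbations then follow
# d(dx)/dt ~ Ac dx + Bc du, and in discrete time A_d ~ I + Ac*Ts
x_star, u_star = np.array([0.0, 0.0]), 0.0
Ac, Bc = linearize(x_star, u_star)
Ts = 0.01
Ad = np.eye(2) + Ac * Ts  # two-term approximation from the text
```

In a real application `linearize` would be re-evaluated at each new nominal point as the trajectory evolves, exactly as described above.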

Another huge advantage of linear systems is the ability to analyze and manipulate them in the "frequency domain" rather than the time domain analysis I give above. This means the ability to use continuous and discrete time Fourier transforms, Laplace transforms and Z-transforms. They also have advantages in terms of being able to analyze their response to noise, modeling errors and other uncertainties, and to design optimal "state estimators" (e.g. Kalman filters) perhaps designed to work in conjunction with optimal feedback or open loop control laws, which can effectively change the natural modes of the system to more desirable ones: for example, to stabilize an unstable system by moving its unstable eigenvalues out of the "right half plane" (i.e. move modes with positive real components such that they instead have negative real components... in what's known with regard to Laplace transforms as the "S-plane"). The discrete time equivalent is to move unstable eigenvalues of $A_d$ from outside the "unit circle" to inside the unit circle (this unit circle drawn in the "Z-plane" associated with the Z-transform, which is the discrete time counterpart to the Laplace transform). Sometimes you hear economists speak of "unit roots." What they're referring to are "roots" (in the sense of roots of the "characteristic polynomial" of $A_d$, which are the eigenvalues, AKA "poles," of $A_d$) on the unit circle (corresponding to the imaginary axis of the S-plane in the continuous time world), which produce "marginally stable" outputs, such as non-rising, non-decaying step-like impulse responses (the response of a perfect "integrator") or constant amplitude oscillations (sin and cos waves, or more generally "complex exponentials," i.e. $c\, e^{i \omega n T_s}$ with $i \equiv \sqrt{-1}$, $c$ a constant and $\omega$ the unitless frequency of oscillation (which you can practically take to be radians per sample)).
Everything in quotes can be Googled for more information (those are all common terms).
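To make the "unit root" idea concrete, here's a toy first-order discrete time system of my own (not from the post): a pole at $a = 1$ acts as a perfect integrator, a pole inside the unit circle decays, and one outside blows up.

```python
def impulse_response(a, n):
    # Response of x[k+1] = a*x[k] + u[k] to a unit impulse at k = 0
    x, out = 0.0, []
    u = 1.0
    for _ in range(n):
        x = a * x + u
        u = 0.0
        out.append(x)
    return out

flat = impulse_response(1.0, 5)    # unit root: neither grows nor decays
decay = impulse_response(0.5, 5)   # stable pole inside the unit circle
blowup = impulse_response(1.5, 5)  # unstable pole outside the unit circle
```

The unit-root case holds at 1 forever: the non-rising, non-decaying "integrator" response described above.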

I think you can probably see how important the $A$ matrix ($A_c$ or $A_d$) is to a linear system. It characterizes all the dynamics of the system, such as whether or not it's naturally stable or likely to oscillate, how fast it will respond to inputs (the time constants), and which modes are dominant (i.e. which modes are weighted with the largest coefficients). Of course the other matrices ($B$, $C$ and $D$) are also important, and you need them, for example, to analyze steady state responses or to determine if a system is "controllable" (can we control the states?) or "observable" (do we have enough sensors to observe all the states?). But $A$ is really the heart of the system.

-----------------------------------------------------------------------------------------------------------------

Appendix A: Details about one way to calculate discrete time matrices for non-linear systems

For the non-linear approximate discrete time case, it's sometimes beneficial to numerically calculate $A_d$ and $B_d$ from numerical solutions to the non-linear equations directly, rather than via the partial differential Jacobian technique discussed above, especially if you have some idea of the covariance of $\delta \mathbf{x}$ (call it $C_\mathbf{x}$) and $\delta \mathbf{u}$ (call it $C_\mathbf{u}$). For example, say the eigenvalues (variances) of $C_\mathbf{x}$ are $\{\sigma^2_{\mathbf{x}, i}\}$, for $i = 1,...,n$ (assuming $\mathbf{x}$ is $n$-dimensional), and the corresponding orthonormal matrix of eigenvectors is $V_\mathbf{x} = [\mathbf{v}_{\mathbf{x}, i}]$. These eigenvectors can be chosen orthonormal since covariance matrices are symmetric; covariances are also always positive semi-definite, and we'll assume positive definite (i.e. a full $n$ degrees of freedom (equivalently all variances are > 0)) for this exercise. Then a good way to calculate $A_{d n}$ that captures some of the effects of the higher order terms of the Taylor expansion, as well as the dependence of $\mathbf{f}$ on $t$ (which we've so far assumed to be negligible over a sample period), is the following sum of outer products:
$$A_{d n} = \sum_{i=1}^n [\mathbf{x}_i (t_{n+1}) - \mathbf{x}^* (t_{n+1})] \cdot \mathbf{v}^T_{\mathbf{x}, i} /\sigma_{\mathbf{x}, i} \tag {11}$$
where $t_{m} \equiv m T_s$ and $\mathbf{x}_i (t_{n+1})$ is the numerical solution at time $t_{n+1}$ to the non-linear differential equation after perturbing the nominal solution $\mathbf{x}^* (t_{n})$ by $\mathbf{v}_{\mathbf{x}, i} \sigma_{\mathbf{x}, i}$ and propagating it forward by one sample period $T_s$ to time $t_{n+1}$. In other words we form $\mathbf{x}_i (t_n) = \mathbf{x}^* (t_{n}) + \delta \mathbf{x}_{i, n}$ where $\delta \mathbf{x}_{i, n} = \mathbf{v}_{\mathbf{x}, i} \sigma_{\mathbf{x}, i}$ and propagate that forward one time step to get $\mathbf{x}_i (t_{n+1})$. In general $C_\mathbf{x}$ and $C_\mathbf{u}$ will also be functions of time, and thus so will their eigenvalues and eigenvectors (though I've neglected a time index for them in the above). An analogous procedure exists for finding $B_{d n}$ using a formula similar to $(11)$ but with the eigenvalues and eigenvectors of $C_\mathbf{u}$ rather than $C_\mathbf{x}$. This procedure (for both $A_d$ and $B_d$) scales the non-linear effects by the size of the standard deviations ($\sigma$) of each degree of freedom in each covariance, thus in some sense capturing the expected size of the effects of higher order terms / non-linearities. Note that $(11)$ is a one-sided difference, whereas a more robust method would use both positive and negative perturbations ($\pm \delta$).

One way to picture what's going on in $(11)$ is to imagine that $C_\mathbf{x}$ describes a multivariate spatial Gaussian probability distribution that's long and narrow (i.e. its 1-$\sigma$ ellipsoidal iso-probability density contour is cigar shaped). Then a perturbation in $\mathbf{x}$ from the nominal is most likely to be located ~$1 \sigma$ one direction or the other along the major axis of this distribution (with $\mathbf{x}^*$ right in the middle of course). For the sake of argument, say it's exactly $1 \sigma$ along the positive major axis (it actually works for any principal axis of the ellipsoid, but imagine the major one for the sake of argument), and it has no perturbation components along the other axes. Then the inner product of this perturbation and the row vector $\mathbf{v}^T_{\mathbf{x}, j}/\sigma_{\mathbf{x}, j}$ will be $1$ and it will be $0$ for $\mathbf{v}^T_{\mathbf{x}, i}/\sigma_{\mathbf{x}, i},\; i \ne j$ where we're assuming the major axis is the $j^{th}$. That's because $V_\mathbf{x}$ is orthonormal (i.e. its columns are unit length and mutually orthogonal). Such a perturbation will give an exact result for the resultant perturbation propagated to time $t_{n+1}$ (assuming $\delta \mathbf{u} = 0$). In other words, such a perturbation at time $t_n$ (call it $\delta \mathbf{x}_j (t_n)$) will give an exact result for $\delta \mathbf{x}_j (t_{n+1}) = A_{d n} \delta \mathbf{x}_j (t_n)$. This is because, substituting $(11)$ for $A_{d n}$ we have:
\begin{align} A_{d n} \delta \mathbf{x}_j (t_n) & = \left( \sum_{i=1}^n [\mathbf{x}_i (t_{n+1}) - \mathbf{x}^* (t_{n+1})] \cdot \mathbf{v}^T_{\mathbf{x}, i, n} /\sigma_{\mathbf{x}, i, n} \right) \cdot \delta \mathbf{x}_j (t_n)\\ & = \left( \sum_{i=1}^n [\mathbf{x}_i (t_{n+1}) - \mathbf{x}^* (t_{n+1})] \cdot \mathbf{v}^T_{\mathbf{x}, i, n} /\sigma_{\mathbf{x}, i, n} \right) \cdot \mathbf{v}_{\mathbf{x}, j, n} \sigma_{\mathbf{x}, j, n}\\ & = \sum_{i=1}^n [\mathbf{x}_i (t_{n+1}) - \mathbf{x}^* (t_{n+1})] (\mathbf{v}^T_{\mathbf{x}, i, n} \cdot \mathbf{v}_{\mathbf{x}, j, n}) ( \sigma_{\mathbf{x}, j, n} /\sigma_{\mathbf{x}, i, n}) \\ & = [\mathbf{x}_j (t_{n+1}) - \mathbf{x}^* (t_{n+1})] (\mathbf{v}^T_{\mathbf{x}, j, n} \cdot \mathbf{v}_{\mathbf{x}, j, n}) ( \sigma_{\mathbf{x}, j, n}/\sigma_{\mathbf{x}, j, n})\\ & = \mathbf{x}_j (t_{n+1}) - \mathbf{x}^* (t_{n+1})\\ & = \delta \mathbf{x}_j (t_{n+1}) \tag {12} \end{align}
where I've added time indices where appropriate. Thus, in some sense, a "likely" perturbation gives exact results, non-linearities and all. The Jacobian method does not capture this, and may be equally difficult to calculate (although when closed form expressions are available, it's likely easier) [1]. Of course, this exact "likely" perturbation still has probability = 0, so we don't expect to actually avoid non-linear distortions altogether, but in some sense we can expect to account for them in a way not captured by the Jacobian method. Doing so is perhaps a bit conservative, but if our purpose is (for example) to keep track of how errors propagate through the system (e.g. with a Kalman filter), it's generally better to err on the conservative side, lest we fool ourselves into thinking we have a better state estimate than we actually do (which can cause us to weight new information ("innovations") with smaller gains than we should at future times). You can think of this problem as wrongly adopting overly strong "priors." If a more conservative result is desired, the perturbations can be made to be greater than 1-$\sigma$ in size.

[1]  In fact, you can think of this method as a crude numerical estimate of $e^{(\partial\mathbf{f}/\partial\mathbf{x}) T_s}$, where the $\partial \mathbf{x}$ deviations (and thus the implicit $\partial \mathbf{f}$ deviations)  aren't very infinitesimal at all, and where $T_s$ is implicitly folded into the differencing estimate directly rather than as a separate multiplier. But the point isn't to approximate the matrix exponential, but to find something better. However, if the system were actually LTI, it would be an exact representation of $e^{(\partial\mathbf{f}/\partial\mathbf{x}) T_s}$. And if the system were linear but time varying, it would produce exactly $\Phi_{n+1, n}$ where I've simplified the time indices from $(10)$ above.
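This footnote's LTI claim is easy to check numerically. The sketch below (all numbers assumed) applies $(11)$ to a linear $\mathbf{f}(\mathbf{x}) = A_c \mathbf{x}$: the sum of outer products reconstructs the one-step transition matrix exactly (up to rounding), regardless of the covariance chosen.

```python
import numpy as np

def propagate(f, x0, Ts, steps=2000):
    # Simple fixed-step Euler integration of xdot = f(x) over one sample period
    x = np.array(x0, dtype=float)
    dt = Ts / steps
    for _ in range(steps):
        x = x + dt * f(x)
    return x

# Linear test case f(x) = Ac x, so (11) should reproduce the one-step
# transition matrix implied by the integrator
Ac = np.array([[0.0, 1.0],
               [-2.0, -0.3]])
f = lambda x: Ac @ x
Ts = 0.05
x_star = np.array([1.0, -1.0])   # nominal state at t_n
Cx = np.array([[2.0, 0.5],       # assumed perturbation covariance
               [0.5, 1.0]])

variances, V = np.linalg.eigh(Cx)  # orthonormal eigenvectors in columns of V
sigmas = np.sqrt(variances)

x_star_next = propagate(f, x_star, Ts)
Ad_n = np.zeros((2, 2))
for i in range(2):
    x_i_next = propagate(f, x_star + sigmas[i] * V[:, i], Ts)      # 1-sigma perturbation
    Ad_n += np.outer(x_i_next - x_star_next, V[:, i]) / sigmas[i]  # equation (11)
```

For a genuinely non-linear $\mathbf{f}$ the same code gives the covariance-weighted $A_{d n}$ the appendix describes, rather than the Jacobian-based one.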

-----------------------------------------------------------------

See comment below:

Below: Example of error propagation (falling objects with initial horizontal velocity): blue curve includes air resistance, and the red curve doesn't. Blue trajectory is non-linear and red is linear. Error ellipses are 1-sigma.

## Tuesday, March 22, 2016

### SIM7: determining both the time constant (Tc) and the steady state values with the SIM parameters (α1, α2 and θ)

This post was inspired by this post by Jason Smith.

In the case of Godley & Lavoie's SIM model, given an equivalent continuous time system with $h$ the equivalent of $H$ and $\dot{g}$ the equivalent of $G$ we have:
$$(1)\:\dot{h} = ah + b\dot{g}$$
Defining
$$X \equiv 1 - \alpha_{1}(1 - \theta)$$
We have
$$a = \frac{1}{T_{s}} \log \left(1 - \frac{\alpha_{2}\theta}{X}\right)$$
$$b = \frac{a(\theta - X)}{\alpha_{2}\theta}$$
Where $T_{s}$ is the sample period of SIM (specified by G&L as 1 "period"). For the outputs (continuous time equivalents to $Y$, $T$, $Y_{D}$ and $C$) we have:
$$\dot{y} = \frac{\alpha_{2}}{X}h + \frac{1}{X}\dot{g}$$
$$\dot{t} = \frac{\alpha_{2}\theta}{X}h + \frac{\theta}{X}\dot{g}$$
$$\dot{y}_{D} = \frac{\alpha_{2}(1 - \theta)}{X}h + \frac{1 - \theta}{X}\dot{g}$$
$$\dot{c} = \frac{\alpha_{2}}{X}h + (\frac{1}{X} - 1)\dot{g}$$
Where $\dot{h}$, $\dot{g}$, $\dot{y}$, $\dot{t}$, $\dot{y}_{D}$ and $\dot{c}$ are all rates expressed in dollars per sample period $T_{s}$ and $h$ is expressed in dollars. Then solving for (steady state $h$) $\equiv h_{ss}$ by setting $\dot{h} = 0$ in (1), we have:
$$(2)\:h_{ss} = K_{h}\dot{g}_{ss}$$
Where the steady state gain $K_{h}$ is:
$$(3)\:K_{h} \equiv \frac{h_{ss}}{\dot{g}_{ss}} = -\frac{b}{a} = \frac{X - \theta}{\alpha_{2}\theta} = \frac{1 - \alpha_{1}(1 - \theta) - \theta}{\alpha_{2}\theta}$$
And $\dot{g}_{ss} \equiv$ (the steady state value of exogenous input $\dot{g}$). Substituting $h_{ss}$ for $h$ and $\dot{g}_{ss}$ for $\dot{g}$ in the above equations for $\dot{y}$, $\dot{t}$, $\dot{y}_{D}$ and $\dot{c}$ to find the corresponding steady state values, and then solving for the corresponding steady state system gains we have:
$$(4)\:K_{\dot{y}} \equiv \frac{\dot{y}_{ss}}{\dot{g}_{ss}} = \frac{1}{\theta}$$
$$(5)\:K_{\dot{t}} \equiv \frac{\dot{t}_{ss}}{\dot{g}_{ss}} = 1$$
$$(6)\:K_{\dot{y}_{D}} \equiv \frac{\dot{y}_{Dss}}{\dot{g}_{ss}} = \frac{1 - \theta}{\theta}$$
$$(7)\:K_{\dot{c}} \equiv \frac{\dot{c}_{ss}}{\dot{g}_{ss}} = \frac{1 - \theta}{\theta}$$
And finally, expressing the system time constant $T_{c}$ as a function of $\alpha_{1}$, $\alpha_{2}$, $\theta$ and sample period $T_{s}$, we have:
$$(8)\:T_{c} = -\frac{1}{a} = -\frac{T_{s}}{\log(1 - \frac{\alpha_{2}\theta}{X})} = -\frac{T_{s}}{\log(1 - \frac{\alpha_{2}\theta}{1 - \alpha_{1}(1 - \theta)})}$$
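A quick numerical check of $(1)$ through $(8)$, plugging in G&L's default parameter values ($\theta = 0.2$, $\alpha_{1} = 0.6$, $\alpha_{2} = 0.4$) with $T_{s} = 1$:

```python
import math

# G&L's default SIM parameters
theta, alpha1, alpha2 = 0.2, 0.6, 0.4
Ts = 1.0

X = 1 - alpha1 * (1 - theta)
a = (1 / Ts) * math.log(1 - alpha2 * theta / X)
b = a * (theta - X) / (alpha2 * theta)

K_h = -b / a   # steady-state gain, equation (3)
Tc = -1 / a    # time constant, equation (8)

# Steady-state output gains follow by substituting h_ss = K_h * gdot_ss
K_y = (alpha2 / X) * K_h + 1 / X              # should equal 1/theta, eq. (4)
K_t = (alpha2 * theta / X) * K_h + theta / X  # should equal 1, eq. (5)
```

This recovers $K_{h} = 4$, $K_{\dot{y}} = 5 = 1/\theta$, $K_{\dot{t}} = 1$ and $T_{c} \approx 6$ periods, consistent with the gain and time constant expressions above.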
Now culling the above equations for the interesting bits, and expressing them in the more familiar discrete time notation, we have only four interesting equations for our desired values of $K_{H}$, $K_{Y}$, $K_{Y_{D}}$ and $T_{c}$, with three unknown parameters ($\alpha_{1}$, $\alpha_{2}$, and $\theta$), where (8) has been solved for the ratio of the system time constant to the sample period, $T_{c}/T_{s}$:
$$(3A)\:K_{H} = \frac{1 - \alpha_{1}(1 - \theta) - \theta}{\alpha_{2}\theta}$$
$$(4A)\:K_{Y} = \frac{1}{\theta}$$
$$(6A)\:K_{Y_{D}} = K_{C} = \frac{1 - \theta}{\theta}$$
$$(8A)\:\frac{T_{c}}{T_{s}} = \frac{-1}{log(1 - \frac{\alpha_{2}\theta}{1 - \alpha_{1}(1 - \theta)})}$$
Clearly we can choose to determine $\theta$ with either (4A) or (6A) (but not both) and then we can solve (3A) and (8A) for $\alpha_{1}$ and $\alpha_{2}$, thus determining both the steady state and the time constant of the system.

Without loss of generality, we can set $T_{s} = 1$ and choose to let (4A) determine $\theta$. Thus if we select ahead of time $K_{Y}$, $K_{H}$, and $T_{c}$ we can determine $\alpha_{1}$, $\alpha_{2}$, and $\theta$ as follows:
$$(9)\:\theta = \frac{1}{K_{Y}}$$
$$(10)\:\alpha_{1} = \frac{K_{H}(e^{-\frac{1}{T_{c}}} - 1) + 1 - \theta}{(K_{H}(e^{-\frac{1}{T_{c}}} - 1) + 1)(1 - \theta)}$$
$$(11)\:\alpha_{2} = \frac{1 - \alpha_{1}(1 - \theta) - \theta}{K_{H}\theta}$$
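A sketch of these three equations in code, with a round-trip check (the target values $K_{Y} = 5$, $K_{H} = 4$, $T_{c} = 6$ are just assumed for illustration):

```python
import math

def sim_params(K_Y, K_H, Tc, Ts=1.0):
    # Equations (9), (10), (11): solve for theta, alpha1, alpha2
    theta = 1.0 / K_Y                           # (9)
    A = math.exp(-Ts / Tc)                      # desired discrete-time pole
    D = K_H * (A - 1.0) + 1.0
    alpha1 = (D - theta) / (D * (1.0 - theta))  # (10)
    alpha2 = (1.0 - alpha1 * (1.0 - theta) - theta) / (K_H * theta)  # (11)
    return theta, alpha1, alpha2

theta, alpha1, alpha2 = sim_params(K_Y=5.0, K_H=4.0, Tc=6.0)

# Round trip through (3A), (4A), (8A) should recover the chosen targets
X = 1.0 - alpha1 * (1.0 - theta)
A = 1.0 - alpha2 * theta / X
```

With these particular targets the solution also happens to satisfy G&L's constraint $0 < \alpha_{2} < \alpha_{1} < 1$; as noted below, not every choice of targets will.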

To see a demonstration of how this works, I've updated SIM6 with the ability to set these parameters via these equations (look for the green cells under the results table). You'll note that not all combinations of $K_{Y}$, $K_{H}$, and $T_{c}$ are valid or work very well. For example, setting $K_{Y} = 1$ causes $\theta = 1$ which makes (10) blow up. I discuss more of these problems in this comment.

## Monday, March 21, 2016

### Japan Phillips Curve Regression 2003.12 to 2013.12

I created this version of this post to address the 2014 VAT tax Jason mentioned here (by sliding the time frame back to 2003.12 to 2013.12 to avoid 2014 altogether).

## Sunday, March 20, 2016

### Japan Phillips Curve Regression 2005.02 to 2015.02

This is me trying to recreate either Jason's or John's results from this post, and in particular, this comment.

The data I do the regression on is copied up to the top (yellow cells) under the right side of the chart in columns H and I (starting at row 30), labeled ΔEMP and ΔCPI. ΔEMP represents a 12 month difference. These cells were copied from the big table to the left, down at the bottom (stopping at row 380), and are also colored yellow. My results don't seem to match either Jason's or John's. Not a surprise, since I'm probably doing it wrong.

Instructions for repeating what I did:

First download the data from FRED (following John's link) in Excel format. Or just download my spreadsheet (icon in lower right corner of sheet). From here on I'll assume you've downloaded my sheet (so the differences have been computed and the data copied to look like mine). Select the "Data Analysis" option on the "Data" tab and then the "Regression" sub-option. You may need to download and install the Data Analysis add-in first. For the y values select the column of yellow cells including the ΔCPI label at the top, starting at I30. For the x values select the column of yellow cells immediately to the left of that, including the ΔEMP label at the top (starting at H30). I selected the option check-box to output the analysis on a new tab (which I called "Regression"). That's all.

For the plot (which I did separately) I used the linear regression ("show trend?") built directly into the scatter plot options. Another option there lets you display the equation for the linear trend and the R squared value. I found that after making the plot, by looking at the options associated with the data series. The answers should match the Data Analysis results!
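If you'd rather double-check the Excel output with a script, the same slope, intercept and R squared can be computed directly; the data below are hypothetical stand-ins, not the FRED series:

```python
import numpy as np

# Hypothetical stand-ins for the ΔEMP (x) and ΔCPI (y) columns
x = np.array([0.1, -0.3, 0.4, 0.2, -0.1, 0.5])
y = np.array([0.07, -0.22, 0.31, 0.12, -0.08, 0.36])

slope, intercept = np.polyfit(x, y, 1)  # least-squares line y ~ slope*x + intercept
r = np.corrcoef(x, y)[0, 1]
r_squared = r ** 2                      # same R squared Excel's trendline reports
```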

## Tuesday, March 15, 2016

### SIM6: updated SIM to preserve time constant (Tc) and G&L's accounting equations

This post updates SIM to preserve the system time constant (thus making the dynamics invariant to the sample period). And like SIM, all G&L's expressions are satisfied. This is accomplished by using the same mechanism as was used in SIM, but replacing not just alpha2 when Ts is changed, but alpha1 as well.

I updated the sheet so that as an option you can edit the "master" green cells under the results table to set Ky, Tc and Kh, which in turn determine the green cells in the upper left (alpha1_0, alpha2_0 and theta). This is to demonstrate the equations in SIM7. Or you can just edit the cells in the upper left directly (thus breaking their link to the lower "master" green cells -- to reestablish the link, reload the page).

Scroll to the right to see the columns which do the calculations. The columns immediately under the chart are what's plotted. Note that Γ here is the one Jason defined as Γ := ΔH/(G - T), not the Γ I define in SIM4. The goal is to keep Γ = 1 here (to satisfy G&L's expression). As T approaches G (as t -> inf), this ratio becomes numerically unstable (0/0), so it's really only valid before that happens (at the early times).

The only curve that truly represents an accumulated value is the single state variable H (money supply or cash in the economy). The curves G, Y, T, YD and C are all scaled by 1/T_ratio when plotted. You may notice the curve C and some of the others seem to change slightly (other than just the interpolation between more widely separated sample points). I'm not sure why that is [Update: see paragraph below], but it may be because there's one more degree of freedom I haven't yet tried to adjust: namely the balance between alpha1 and theta. [Update: I don't think so because it works with theta the way it is now (see below)]. But the main dynamics captured by curve H are truly invariant [Update: as are the others, it's just that they represent average rates over each interval, so they are fundamentally different than H and dependent on the sample times: see below] to the sample period: the points sampled are always from the same underlying (and unchanging) continuous curve, with only the linear interpolation between them (on the plot) showing any difference (which is entirely an artifact of the plotting procedure).

"I'm not sure why that is,..." At least part of the reason is Y, T, YD and C are still plotted at rates (so is G, but it won't have this problem as explained above): rates effectively estimated from differencing underlying and uncalculated integrals of each of them between sample times. Thus even if those underlying integrated curves were exactly the same for different sample periods, estimated rates calculated in that manner would still differ from one another if Y, T, YD and C are anything except constants (G is a constant, and thus does not have this problem) for t >= 0, assuming they are all 0 for t < 0. As an example, consider the function y(t) = t^2. If sampled with period = 2 at t=0 and t=2, the answers would be y(0) = 0 and y(2) = 4. If I calculated a rate based on differencing and plotted that at the end of the sample period I'd have at t=2, ydot(2) = 4/2 = 2. Now imagine the sample period is 1. Then I'd get y(1) = 1, y(2) = 4. An estimated rate based on those samples plotted at t=2 gives me ydot(2) = (4-1)/1 = 3. The only fair way to compare them is in terms of y, not ydot. The ydot's represent accurate average rates over each sample period. So if I just had access to the ydots from each set of samples, I could use them to reconstruct y in both cases, and those should match (assuming ydot(0) = 0 in both cases).

Sample period = 2:
y(2) = ydot(2)*2 = 4

Sample period = 1:
y(2) = ydot(2)*1 + ydot(1)*1 = 3 + 1 = 4
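This bookkeeping is easy to script; a minimal sketch (function names mine) sampling y(t) = t^2 at different rates and reconstructing y from the average rates:

```python
def avg_rates(Ts, t_end=2.0):
    # Average rate over each sample period, from differencing y(t) = t**2
    n = int(round(t_end / Ts))
    y = [(k * Ts) ** 2 for k in range(n + 1)]
    return [(y[k + 1] - y[k]) / Ts for k in range(n)]

def reconstruct(rates, Ts):
    # Integrate the average rates back up to y at each sample time
    total, out = 0.0, []
    for r in rates:
        total += r * Ts
        out.append(total)
    return out
```

Here avg_rates(2.0) gives [2.0] and avg_rates(1.0) gives [1.0, 3.0], the two different rates computed above, yet both reconstruct to y(2) = 4.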

I need to do that exercise (calculate integrated Y, T, YD and C curves) for two different sample periods, one an integral multiple of the other, and see if they match at those sample times that they have in common. I still suspect that there may be a problem with my extra degree of freedom (I mention above), but we'll see. [UPDATE: 2016.04.22: I don't think there's a problem with this so-called "extra" degree of freedom: see equations 4, 4A, 6, 6A and 9 in SIM7. They demonstrate that theta, and theta alone determines the steady state gain of Y, YD and C (at a fixed sample period). Scaling the sample period should scale those steady state gains by the same value, thus there's no room for us to fool with theta, else it would change the steady state gain for Y, YD and C to a wrong value if we did. So the answer is there is no extra degree of freedom! In fact, one way to think about what's going on in this post (SIM6) is in terms of  SIM7: here in SIM6, we're effectively using equation 10 from SIM7 to set alpha1 (given we don't want to change any steady state gains, and thus we can't change theta). We do that by changing the time constant Tc in equation 10. You might object saying "I thought the goal of this post was to keep the time constant fixed!" Yes, that's true, but you can think of changing Tc in eq. 10 such that it's the value of the old (fixed) time constant but expressed in terms of our new sample period. For example, say originally Tc = 6 original sample periods, and we want to change the sample period from 1 to 2, then in terms of the NEW sample period, Tc looks like just 3 (3 new sample periods, which are twice as big as the old sample periods). So after figuring out what Tc will be expressed in new sample periods (our goal sample period), then use SIM7 equation 10 to find alpha1, and then use SIM7 equation 11 to find alpha2. I *think* that's the same as what I'm doing here (but I haven't actually done the math to verify it)].

UPDATE: Just did that exercise. You can find the results by scrolling out further to the extreme right on the spreadsheet, where I compare Y, T, YD and C integrated and then sampled with sample period Ts against the next table to its left, showing a copy of the values integrated with the default Ts = Ts_0 = 1 (the yellow cells in both tables). They are the same at identical sample times (choose Ts as an integral multiple or fraction of Ts_0 so that some sample times will coincide), so my underlying integrated versions of Y, T, YD and C are invariant with sample period.

Also note that G_Rate in fixed units of dollars/(Ts_0=1 year) can be edited in the green User Input cells in the upper left of the spreadsheet, but you can also just overwrite the values in the green cells in the 2nd table (the results calculation table) immediately to the right of the table of plotted values under the chart. More information about how to do that in a sample period invariant way is in this comment on Brian Romanchuk's blog, repeated here for convenience:

I did not attempt to make this work with any other functions for G other than a scaled unit step. You can choose G&L's \$20/period to match theirs, or any other. More general G functions are addressed here. But even with the spreadsheet above, you could make it work for G at a constant rate per period, by typing over the G values in the calculation table, so long as you matched the function precisely: for example, originally you may have G be \$20/period for period 1 and then \$40/period for period 2. If you halved the sample rate to 0.5 periods, then you'd leave G at \$20/period (= \$10/half-period in the spreadsheet) for the first 2 sample periods, and then switch it to \$40/period (= \$20/half-period in the spreadsheet) for the next two, etc.
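That recipe for rescaling a stepwise G when shrinking the sample period can be sketched as follows (function name mine; assumes the new period divides the old one evenly):

```python
def resample_G(G_per_period, T_ratio):
    # Spread each original period's spending evenly over the finer periods,
    # e.g. $20/period becomes $10/half-period twice when T_ratio = 0.5
    n_sub = int(round(1 / T_ratio))
    return [g * T_ratio for g in G_per_period for _ in range(n_sub)]
```

For instance resample_G([20, 40], 0.5) returns [10.0, 10.0, 20.0, 20.0], matching the worked example above, and total spending is unchanged.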

--------------------------------------------------------------------------------------------------------

Note: in a comment here I describe Jason's formulation of ΔH = Γ*(G - T) and how Γ is equivalent to the approximate adjustment for a new sample period (Ts2) in my original SIM formulation, except without changing the sample times on the plot (thus approximately scaling the time constant). In this case Γ plays the role of the scalar Ts2/Ts1 = T_ratio applied to alpha2 and B.

--------------------------------------------------------------------------------------------------------

Expressions from the spreadsheet's table repeated below for convenience:

| Matrix | Element | Name | Expression |
|---|---|---|---|
| A | (1,1) | A_0 | 1-theta*alpha2_0/(1-alpha1_0*(1-theta)) |
| B | (1,1) | B_0 | 1-theta/(1-alpha1_0*(1-theta)) |
| Cm | (1,1) | CY | alpha2/(1-alpha1*(1-theta)) |
| | (2,1) | CT | theta*alpha2/(1-alpha1*(1-theta)) |
| | (3,1) | CYD | (1-theta)*alpha2/(1-alpha1*(1-theta)) |
| | (4,1) | CC | alpha2/(1-alpha1*(1-theta)) |
| D | (1,1) | DY | 1/(1-alpha1*(1-theta)) |
| | (2,1) | DT | theta/(1-alpha1*(1-theta)) |
| | (3,1) | DYD | (1-theta)/(1-alpha1*(1-theta)) |
| | (4,1) | DC | 1/(1-alpha1*(1-theta)) - 1 |

| Description | Name | Expression |
|---|---|---|
| sample prd ratio | T_ratio | Ts/Ts_0 |
| A(T_ratio) | A | A_0^T_ratio |
| B(T_ratio) | B | (B_0*(A-1)/(A_0-1))/T_ratio |
| alpha1(T_ratio) | alpha1 | ((1-B)-theta)/((1-B)*(1-theta)) |
| alpha2(T_ratio) | alpha2 | (1-A)*(1-alpha1*(1-theta))/theta |
| orig. time const. | Tc_0 | -Ts_0/LN(A_0) |
| Tc(T_ratio) | Tc | -Ts/LN(A) [note: you should have Tc = Tc_0] |
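The rescaling expressions in the spreadsheet can be checked in code; a sketch using G&L's default parameters (assumed), showing that Tc is preserved for any T_ratio:

```python
import math

theta, alpha1_0, alpha2_0 = 0.2, 0.6, 0.4  # G&L's default SIM parameters
A_0 = 1 - theta * alpha2_0 / (1 - alpha1_0 * (1 - theta))
B_0 = 1 - theta / (1 - alpha1_0 * (1 - theta))
Tc_0 = -1.0 / math.log(A_0)                # original time constant (Ts_0 = 1)

def rescale(Ts, Ts_0=1.0):
    # Spreadsheet expressions: new A, B, alpha1, alpha2 for sample period Ts
    T_ratio = Ts / Ts_0
    A = A_0 ** T_ratio
    B = (B_0 * (A - 1) / (A_0 - 1)) / T_ratio
    alpha1 = ((1 - B) - theta) / ((1 - B) * (1 - theta))
    alpha2 = (1 - A) * (1 - alpha1 * (1 - theta)) / theta
    return A, B, alpha1, alpha2
```

For Ts = 1 this returns the original alpha1 and alpha2, and for any Ts the implied time constant -Ts/ln(A) equals Tc_0, which is the whole point of SIM6.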

## Sunday, March 13, 2016

### SIM5: summary and to-do

This post is meant to be a stand-in for an eventual fully fleshed out post tying my previous posts on this subject together: namely SIM, SIM2, Answer for Henry #1, SIM3 and SIM4. A brief summary follows:

SIM: 1st attempt to code up G&L's SIM in a spreadsheet in a discrete time state space format. It worked the 1st time with the sample period (Ts) set to 1. H is the sole state variable producing the system dynamics, and at first I interpreted the exogenous input G and the outputs (system "measurements") all as rates in dollars/original-period (and I named the original period (Ts1) a "year" for convenience). The outputs are Y, T, YD and C. Ramanan, commenting on Jason's blog, suggested scaling the parameter alpha2 by Ts2/Ts1 to change sample periods, where Ts1 = 1 and Ts2 is the new desired sample period. This didn't work for me at first because I was interpreting G, Y, T, YD and C as rates and I wasn't changing their units to match the new sample period. Naturally Ramanan assumed I would think of these as quantities, not rates, so that their values would scale with the sample period (which amounts to keeping them as rates and changing their units to dollars per Ts2). Once I sorted that out, I redid the spreadsheet and found that scaling alpha2 by Ts2/Ts1 was an OK approximation, but not perfect: the system time constant (Tc) does change, and the reason is G&L are doing one of the following:
1. Assuming the "compounding times" always precisely coincide with the sample times, so Tc does change when the sample period changes (because it means a different compounding rate).
2. Approximating continuous compounding by approximating exp(-t/Tc) as 1 - t/Tc
Another problem with SIM the way G&L formulate it is that you cannot select a Ts2 >= Ts1*alpha1/alpha2_orig (where alpha2_orig is the original value of alpha2 before scaling it by Ts2/Ts1). You can set it to what you like of course, but they specify in their text that 0 < theta < 1 and that 0 < alpha2 < alpha1 < 1. This means that starting with their default values for the parameters, Ts2 must be < 1.5*Ts1 = 1.5 periods. You can actually make it bigger (of course) and it works OK up until about Ts2 = 6.5... then the resulting A parameter in the difference equation goes to 0 and then negative for Ts2 > 6.5. This results in wild oscillations in the curves as Ts2 gets up to values like 10 or 20. But following G&L's restrictions, varying Ts2 over (0, 1.5) brings a variation in time constant (Tc) of about 14% (from a low of about 5.7 at Ts2 = 1.5 to a high of 6.5 as Ts2 -> 0). The default value of Tc with Ts = 1 is about 6 "periods."
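The oscillation described above is just what a first-order difference equation does once its A coefficient goes negative; a toy demonstration with assumed values (A = -0.5, B = 1, not the actual SIM parameters):

```python
def step_response(A, B, n, g=20.0):
    # H[k+1] = A*H[k] + B*g with constant g; for A < 0 the response
    # alternates around its steady state B*g/(1 - A) instead of
    # approaching it smoothly
    h, out = 0.0, []
    for _ in range(n):
        h = A * h + B * g
        out.append(h)
    return out
```

With A = -0.5 the response overshoots and undershoots its steady state on alternate samples, while with 0 < A < 1 it rises monotonically, which is the qualitative difference between Ts2 beyond ~6.5 and Ts2 within G&L's restrictions.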

SIM2: I started this one when I 1st apparently failed to get SIM to be sample period invariant by using Ramanan's method of scaling alpha2 by Ts2/Ts1. Essentially what I did here was find the equivalent continuous time system and use the time constant from that to find the discrete time state space parameters A and B for Ts2. This amounted to replacing the approximation for exp(-t/Tc) with the exact value. This version did not require changing my interpretation of G, Y, T, YD and C as rates expressed in dollars/(period=Ts1), so naturally I was excited to see that the curves were indeed invariant to the sample period. However, looking at it later and calculating Jason's Γ  = ΔH/(G - T) I realized that Γ = 1 only when Ts=1. So essentially, SIM2 violates stock-flow consistency (SFC), even though H, G, Y, T, YD and C are all sample period invariant. However, I did add an alternative means of calculating the constants CT and DT which are used to produce T: namely T[n] = CT*H[n-1] + DT*G[n]. Using these alternative expressions produces a different T curve, which I added to the plot. It is precisely the same as the original T curve when Ts=1, but differs slightly when Ts ≠ 1. In addition SIM2 has no upper bound on Ts2 (no oscillation problem as in SIM1).

Answer for Henry #1: Not much to say here, except that this is where I introduce block diagrams of both the discrete and continuous time system equivalents. Note that my naming convention for G and T (and I suppose for the other outputs Y, YD and C) is a bit confusing on the plots up through this one, and in my block diagram as well. That's because of the confusion about whether G is in dollars, or dollars per period=Ts1, or dollars per period=Ts2, etc.

SIM3: Here I was determined to get the best of both SIM and SIM2: namely, match G&L's SIM with Ts=1, have sample period invariance, and satisfy all of G&L's equations over any sample period (or more generally, between any two arbitrary sample times). I accomplished that goal with some qualifications: 1st I restrict Gdot (a rate of government spending measured in constant units of dollars/(period=Ts1)) to be a scaled unit step function: 0 for t < 0, and constant for all t >= 0. Then H takes the form of Hss*(1 - exp(-t/Tc)), where Hss = steady state H = Gdot*B/(1-A). G (now interpreted as the sum from -inf to t = sum from 0 to t of all government spending) is 0 for t < 0 and Gdot*t for t >= 0. Thus I define T (the sum from 0 to t of all taxes collected) as T = G - H. This has the effect of making the system sample period invariant, forcing it to satisfy G&L's accounting equation (i.e. Γ = 1, once you use the right version of G and T (the confusion with symbols in this post continues)), and (of course) forcing it to agree with G&L when Ts=1. However, I didn't check Y, YD or C, only H, G and T.
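The closed form above can be sketched numerically (my own sketch, using G&L's default parameters and their example spending rate of 20; Hss comes out to the familiar 80):

```python
import math

# A sketch (mine) of SIM3's closed form with G&L's default parameters and
# their example spending rate Gdot = 20: H(t) = Hss*(1 - exp(-t/Tc)), with
# T *defined* as G - H so the accounting identity holds at every t.
alpha1, alpha2, theta = 0.6, 0.4, 0.2
den = 1 - alpha1*(1 - theta)
A = 1 - theta*alpha2/den         # discrete A at Ts = 1
B = 1 - theta/den                # discrete B at Ts = 1
Gdot = 20.0
Hss = Gdot*B/(1 - A)             # steady-state money stock
Tc = -1/math.log(A)              # time constant, in "periods"

def H(t): return Hss*(1 - math.exp(-t/Tc)) if t >= 0 else 0.0
def G(t): return Gdot*t if t >= 0 else 0.0    # cumulative spending
def T(t): return G(t) - H(t)                  # cumulative taxes, by definition

print(round(Hss, 1))             # 80.0, G&L's steady-state H
```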

SIM4: This post does not include a spreadsheet, but it does document the math required to extend SIM3 to accept more general G functions and to turn it back into an iterative state space discrete time difference equation. In this post I explicitly show the connection between the continuous time formulas and discrete time formulas for a broad class of G functions (really g functions). Also, I make an attempt to provide consistent variable names: both internally consistent and consistent with G&L. Namely, G and T become quantities again, measured in dollars, and thus dependent on the sample period length. The math is mostly done in relation to g = the sum total of all dollars spent by the government, and its derivative gdot (or g') (and a lower and upper case gamma (γ and Γ respectively), and some Dirac delta components of g', which I refer to by gm, giving us the ability to introduce step functions into g). Like SIM3, this model does not attempt to sort out or check what Y, YD or C are, or ensure I have the correct expressions for them so that they always satisfy G&L's accounting equations.

---------------------------------------------------------------------------------------------------------

SIM5: Which brings us to the point of this post. The bit I'm missing in all but the non-sample-period-invariant SIM above is ensuring that ALL of G&L's accounting equations are always satisfied between any arbitrary sample times. I can get ΔH = G - T to be satisfied in all of them (using T-alt in SIM2), but not necessarily the rest. This got me thinking about a few strategies:
1.  Starting with SIM, which guarantees that all G&L's accounting equations will always be satisfied, it occurred to me that I can adjust alpha2 for a new sample period (Ts2) from the original sample period (Ts1=1) in a way other than scaling it by Ts2/Ts1: Given that A1 is the original feedback parameter (see SIM), I can calculate a new alpha2 like this: alpha2 = (1 - A1^(Ts2/Ts1))*(1 - alpha1*(1 - theta))/theta (where alpha1 and theta are the original values). This preserves the time constant (Tc), but probably messes up the steady state value, so I'd need to adjust B1 as well (this change to alpha2 will produce a new A, which I'll call A2). But this may lead to having to change all the "C" and "D" parameters (the former used to scale H[n-1] and the latter used to scale G[n] such that their sum produces the set of outputs Y, T, YD and C). Thus really, this will probably lead to having to adjust alpha1 and theta as well, so that all the accounting equations are always satisfied. [Update 2016.04.22: Just alpha1 actually. See the update of this date in SIM6]
2. Starting with the alternate variables I added to the SIM2 spreadsheet (alpha2alt, T-alt, CTalt and DTalt), I can actually solve for an alpha1alt and thetaalt [Update 2016.04.22: No, this doesn't actually work for solving for alpha1alt and thetaalt]. I have a feeling doing so will result in the same solution as for strategy 1 above. Note that when Ts=1 these alternative values are the same as the regular values. The original intent was to see if I could find a new CT and a new DT when Ts = Ts2 ≠ 1 that would always satisfy ΔH/(G - T) := Jason's Γ = 1. (Note that Jason's Γ ≠ my Γ in SIM4). I could, by starting with T = G - ΔH and solving for CTalt and DTalt. Then I can use the fact that CTalt = alpha2*DTalt when Ts=1 (since when Ts=1, CT = alpha2*DT) to find a new alpha2alt = CTalt/DTalt when Ts ≠ 1. I can then use other expressions in the table of expressions for coefficients in the lower left of the spreadsheet in SIM to solve for thetaalt in terms of alpha1alt, then solve for alpha1alt, and back-substitute to find a value for thetaalt. Essentially I'll be finding a new set of parameters to go with the new sample period (Ts2), and thus all G&L's accounting formulas should continue to be satisfied.
3. It occurred to me that I could adjust theta, alpha1 and alpha2 for a new sample period Ts2 by assuming they currently represent the effective rate for continuously compounding over the default sample period Ts1 = 1. G&L's (Ramanan's) procedure of only scaling alpha2 by Ts2/Ts1 is an approximation to this, assuming continuous compounding. They only scale alpha2 because it's the only one multiplying the total accumulated cash (H) in the economy, while both alpha1 and theta scale quantities which only apply to that particular sample period. Thus for example, if Ts2 = 1/4, then they scale alpha2 by (1/4)/1 = 1/4. But if we assume that exp(x*Ts1) = 1 + alpha2, then x = log(1 + alpha2)/Ts1, and scaling to Ts2 would give alpha2_new = alpha2(Ts2) = (1 + alpha2)^(Ts2/Ts1) - 1. Scaling theta and alpha1 would be slightly different (they'd maintain values much closer to their starting values), but I'm not sure what that would be yet. I don't have a lot of hope that this strategy (strategy 3) will actually work. [Update 2016.04.22: a lack of hope for this strategy is probably a good thing. See SIM6, SIM7, SIM9 and SIM10]
[Update: it's strategy 1. that actually worked. See Update #1 below for more details and a new post demonstrating this]
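Here's a quick numeric check of strategy 1 (my own sketch, not from any of the spreadsheets; I write the exponent as Ts2/Ts1, which is the form that actually preserves Tc):

```python
import math

# Numeric check of strategy 1 (my own sketch; exponent written as Ts2/Ts1,
# the form that preserves the time constant). alpha1 and theta keep their
# original values; only alpha2 is recomputed per Ts2.
alpha1, alpha2, theta = 0.6, 0.4, 0.2
den = 1 - alpha1*(1 - theta)
A1 = 1 - theta*alpha2/den                       # original A at Ts1 = 1

def Tc_with_new_alpha2(Ts2):
    alpha2_new = (1 - A1**Ts2)*den/theta        # strategy 1, with Ts1 = 1
    A2 = 1 - theta*alpha2_new/den               # new feedback parameter
    return -Ts2/math.log(A2)                    # implied time constant

for Ts2 in (0.25, 1.0, 4.0):
    print(round(Tc_with_new_alpha2(Ts2), 6))    # identical every time
```

Algebraically this works because A2 reduces to A1^Ts2, so -Ts2/ln(A2) = -1/ln(A1) regardless of Ts2.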

I did a little more preliminary work on Strategy 2 above and came up with the following:

Using the equation for the original B, I can put thetaalt in terms of alpha1alt:
thetaalt = [(1-Balt) + alpha1alt*(1-Balt)]/(1 + alpha1alt*(1-Balt))

Also this is useful:
1-thetaalt = Balt/(1 + alpha1alt*(1-Balt))

Then using the equation for the original A, I already have alpha2alt (see Strategy 2 above), and replacing thetaalt with the above expression, I get an expression only in alpha1alt, which looks like a quadratic, which I can then solve for alpha1alt and back-substitute into the above to find thetaalt:

(1-Aalt)*(1+alpha1alt)*(1+alpha1alt*(1-Balt)) = alpha2alt*(1-Balt+alpha1alt*(1-Balt))*(1+alpha1alt*(1-Balt))

Then I should have a complete new set of parameters which give me the same time constant Tc (and thus sample period invariance) and because they are a complete new set, simultaneously cause all variables (H, ΔH, G, Y, T, YD and C) to satisfy all G&L's equations. [Update 2016.04.22: Nope]

I'll leave that for another day!

Note that Brian Romanchuk has been doing some posts on related issues, here and here.

UPDATE #1: March 15, 2016 10:17 PM
Another day has arrived!
Note that what I crossed out above I think had an error in it (or at least it didn't reduce to a quadratic or any other equation that would uniquely determine the parameters, as I'd anticipated). Also I found that theta and alpha1 were not uniquely determined by the equations. But leaving theta at its original value seemed to work best, so I just adjust alpha1 and alpha2 now [Update 2016.04.22: See the SIM6 update of this date about why "leaving theta as the original" seemed to work best]. The resulting spreadsheet is a direct modification of SIM called SIM6, and it seems to be successful in being sample period invariant and satisfying all G&L's expressions (which I verified by calculating integrated values for Y, T, YD and C for an arbitrary Ts and comparing those with integrated values for Ts=1).
END UPDATE #1

------------------------------------------------------------------------------------------------------------

UPDATE #2: March 17, 2016:

SIM6: A modified version of my SIM post which adjusts alpha1 and alpha2 to keep the time constant fixed when the sample period (Ts) changes. Because SIM obeys all G&L's expressions by its construction, so does SIM6. Also it matches G&L's results with their default parameters (alpha1=0.6, alpha2=0.4, theta=0.2, Ts=1). Tables (which you must scroll all the way to the right to see, using the scroll bar along the bottom of the embedded spreadsheet) integrate Y, T, YD and C so that you can compare these integrated (accumulated) values with a static table for which this was done with G&L's default parameters. This allows you to see that indeed all of G&L's expressions must be satisfied.
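Here's a hedged sketch of my reading of what SIM6 does (the SIM6 spreadsheet itself is the authority; the inversion algebra below is my own reconstruction): keep theta fixed, pick the A and B that preserve the time constant and the steady states at the new Ts, then back out alpha1 and alpha2.

```python
import math

# A hedged sketch of my reading of SIM6 (the spreadsheet is the authority;
# this algebra is my own reconstruction): keep theta, pick A and B for the
# new Ts so the time constant and steady states survive, then invert
# A and B for alpha1 and alpha2.
alpha1, alpha2, theta = 0.6, 0.4, 0.2
den = 1 - alpha1*(1 - theta)
A1 = 1 - theta*alpha2/den        # A at Ts = 1
B1 = 1 - theta/den               # B at Ts = 1

def new_params(Ts):
    A = A1**Ts                       # preserves the time constant
    B = B1*(1 - A)/(1 - A1)/Ts       # preserves steady-state H when G
                                     # scales as a quantity (dollars per Ts)
    a1 = (1 - theta/(1 - B))/(1 - theta)   # invert B for alpha1
    a2 = (1 - A)/(1 - B)                   # invert A for alpha2
    return a1, a2

print(new_params(1.0))           # recovers (0.6, 0.4) at the original Ts
```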

To-do: I list some things that could be on a to-do list (for SIM7?) in this comment on Brian Romanchuk's blog: (repeated here for convenience):

"Or you could just keep the continuous time model without any extra terms in b. Then you can do an observability and controllability analysis (easy in this case), do pole and zero placement with a fixed feedback law, add noise to the analysis, design a Kalman filter, design a time varying optimal feedback control law for various objective functions or design a robust control law to achieve a minimum performance level over a whole set of plants that could exist due to our uncertainty about the plant. These things could be done in either discrete or continuous time, but the model is simpler in continuous time"

## Saturday, March 12, 2016

### SIM4: the math for an updated SIM3 for a broader class of government spending functions.

The below model fleshes out a pair of comments made to Henry here as well as one made to Bill here. If it looks fuzzy, zoom in with your browser (CTRL+) and then refresh the page (it worked for me). It's essentially an update of SIM3.

To do: I'm positive this will match G&L's SIM at Ts = 1 with dg/dt = a constant = G per period. And I'm sure that it will be sample period invariant if we calculate A and B as shown, while we continue to satisfy ΔH = G - T over any pair of sample times. (I know, because I can force T = G - ΔH). What I don't know is if the other accounting formulas for Y, YD and C will continue to hold at different sample periods (given the way they are all expressed now as a linear combination of H and G, with coefficients which are only functions of theta, alpha1 and alpha2) [Update: see new post here]. See here for a listing of what these coefficients are at Ts=1. I suspect I may need to change theta, alpha1, alpha2, or the whole expressions to match different sample periods. This would be akin to interpreting their current values as the result of continuous compounding over Ts=1.

Note that incorporating steps in g(t) (equivalent to scaled and shifted Dirac deltas in g'(t)) in the discrete time version is a bit of a pain in the ass because you have to keep track of when those steps occur in time, and constantly check to see if the last sample period (i*Ts to (i+1)*Ts) included any... and then you have to adjust the state transition function (exp(a*((i+1)*Ts-tm))) for each one in the interval (assuming they can happen anywhere (t=tm) within the sample period). The corresponding element of row vector B for the mth step in g (called gm) would be b*exp(a*((i+1)*Ts-tm)). So if there were a finite number (M) of such steps, you could augment B and Γ each with M more entries. The M extra entries in Γ would always be 0 except when a step occurred over the sample period, in which case the entries corresponding to the steps happening in that sample period would take a value of gm, as per the expressions above (m being the index to gm and tm, ranging from 0 to M-1). So it's not impossible, but messy. At least it's a linear system and the effects from all that stuff just add together. Allowing a single step at t=0 might be a good compromise (for simulations restricted to t >= 0): it'd be easy to accommodate, and it would allow for easily maintaining sample period invariance. (You could fairly easily allow them at each sample time, but then it wouldn't be so fun if you changed to a new arbitrary sample period).

Also note that I've used a confusing array of notation for variables related to G&L's G (and T) over the past few posts. I think the above document is my most consistent take on it. But just to be explicit:

G[i] = integral of g'(t) over one sample period = g((i+1)*Ts) - g(i*Ts)

T will have a similar definition, and unfortunately lower case t is going to be a mess. So instead I'll define  (ΔH)[i] = H[i] - H[i-1], and from that set T[i] = G[i] - (ΔH)[i]. I didn't do any of that above, but based on SIM3, I know that doing so will satisfy ΔH = G - T (obviously) for all sample periods, but also match G&L's T at Ts=1. I have everything I need above and in the expressions for Y, T, YD and C here to find an expression for T and see if theta, alpha1 and alpha2 need adjustment when Ts changes. I'll leave that as an exercise for another day (SIM5?). Ultimately I want to be able to say what G&L should do to preserve sample period invariance while still satisfying all their equations. If that means the parameters have to be adjusted somehow, then so be it.
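The definitions G[i] = g((i+1)*Ts) - g(i*Ts) and T[i] = G[i] - (ΔH)[i] can be cross-checked against G&L's SIM at Ts=1 (a sketch of mine, using the closed-form H from SIM3 and G&L's default parameters with spending of 20 per period):

```python
# Cross-check (my own sketch, not in the SIM4 document): define
# T[i] = G[i] - (H[i] - H[i-1]) from the closed-form H of SIM3 and compare
# with T from iterating G&L's SIM directly at Ts = 1 (default parameters,
# government spending 20 per period).
alpha1, alpha2, theta = 0.6, 0.4, 0.2
den = 1 - alpha1*(1 - theta)
A = 1 - theta*alpha2/den            # SIM's A at Ts = 1
B = 1 - theta/den                   # SIM's B at Ts = 1
Gdot = 20.0
Hss = Gdot*B/(1 - A)

def H_closed(t):
    return Hss*(1 - A**t)           # A**t == exp(-t/Tc) at Ts = 1

# T from the closed form, with G[i] = Gdot over each unit period:
T_closed = [Gdot - (H_closed(i) - H_closed(i - 1)) for i in range(1, 6)]

# T from iterating G&L's SIM equations:
H, T_iter = 0.0, []
for _ in range(5):
    Y = (Gdot + alpha2*H)/den       # Y = G + alpha1*YD + alpha2*H[-1], solved
    T = theta*Y
    H = H + Gdot - T
    T_iter.append(T)

print(all(abs(x - y) < 1e-9 for x, y in zip(T_closed, T_iter)))  # True
```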

If you spot an error, do me a favor and let me know in the comments. Or better yet, download the document (in the bar at the bottom), correct it and email it back to me at brown.tom5@gmail.com. Or if you see a way to improve my notation, let me know. I struggled with allowing step functions in g(t) while  also doing a Taylor expansion of g'(t)... thus I introduced big and little gamma. Perhaps there's a much cleaner way to state the whole thing, while preserving G as G&L define it.

UPDATE 1:
Jason Smith put a post up with an electrical circuit analogy. I solve it with particular values for R and L. Also Brian Romanchuk has another post specifically on G&L's SIM model.
END UPDATE 1.

Note to self: The embedded document above has height set to 1370 and width at 1060 (in case I accidentally erase it again!)

## Wednesday, March 9, 2016

### SIM3: 1) Matches G&L's SIM @ Ts=1; 2) Invariant to Ts; 3) Always satisfies ΔH = G - T

1. Matches G&L's SIM with the sample period Ts = 1 (year)

2. Sample period (Ts) invariant

3. Satisfies ΔH = ΔG - ΔT over all intervals [1]

To do: calculate Y, YD and C and check that they also satisfy G&L's equations over any pair of sample times.

Limitations: Gdot := dG/dt is restricted to a scaled step function with the step at t=0. Note that this is not a fundamental limitation of the approach; it just represents more work to add generality, and this was just a quick and easy check to make sure I could do numbers 1, 2 and 3 above.

UPDATE 1: March 12, 2016, 4:54 PM
I generalized the approach below in a new post called SIM4 to handle a much broader class of government spending functions. I just did the math at this point. If you find an error, let me know (see the note under the document of equations).
END UPDATE 1

Notes:
[1] My ΔG here is called G in G&L, and my ΔT is called T in G&L. I reserve G for the integrated total of all government spending since t=-inf, and T as the integrated total taxes collected since t=-inf.

## Sunday, March 6, 2016

Below is my answer to a question posed by Henry on Jason's blog.

"My question is do these terms make sense"

Examining those terms in isolation is misleading. I'm NOT saying the whole expression makes sense (as per Jason's point, I think), but looking at those terms in isolation and examining their sensical-ness (new word) is the wrong approach. For example, what if I told you that x(t) -> inf as t -> 0 and y(t) -> inf as t->0, is it possible that x(t) - y(t) could approach something meaningful as t -> 0? Of course it is, because I'm interested in their difference, not the individual terms.

Allow me to take a stab at explaining the issue in a way that perhaps isn't as general as Jason's approach, but which I nonetheless find to be clearer. Going back to his definition:

γ := rate of government spending, in dollars/year (say)

But instead of making ξ := rate of taxation in dollars/year, let's leave that out of it. Now say

ΔH[n+1,n] = H[n+1] - H[n] = γ*Δt - T (abandoning your <> notation in favor of [] because of technical difficulties)

Now let's say (as is the case in the SIM model if you do the algebra) T = r*H, with r a constant fraction (to keep it simple), say on (0,1), defined as a rate per unit of time τ, which initially happens to be τ = Δt (but this of course will not hold if Δt changes), or putting indices on it:

T[n+1,n] = r*H[n]

Now we have:

ΔH[n+1,n] = H[n+1] - H[n] = γ*Δt - T = γ*Δt - r*H[n]

To simplify further, set γ = 0 for now, so we have:

H[n+1] = (1-r)*H[n] = A*H[n] where A = 1-r

This has a well known solution:

H[k] = H0*(A^k), for all k >= 0, and where H0 = H[0] (i.e. H0 is the initial allotment of money in the economy).

And because r is on (0,1) then so is A, so this will be an exponentially decaying function (a sampled exponential).

Now what happens if we change our sample period Δt? Well r doesn't change, nor does the τ it's defined over, but

(1) H[n+1] = A*H[n] = (1-r)*H[n]

No longer holds. Why? Imagine that it did hold, and that say r = A = 0.5. So in one time step Δt, half the money disappears from the economy through taxation. If Δt = τ = 1 year to start out, then in 1 year half the money is gone. Now say Δt is reduced to 1 day... then if equation (1) still held, in one day half the money would be gone. Thus (1) would NOT be describing the same system. We're sampling an underlying continuous time system, so we have to account for that. The real rate of decay (time constant) of the system should be independent of the sample period. The answer is we need to adjust A as Δt changes, because A was always an implicit function of Δt (i.e. A in (1) is A(τ), and so is only valid when Δt = τ). We can write a valid version of (1) if we adjust A for when Δt ≠ τ as follows:

(2) A(Δt) = A(τ)^(Δt/τ)

So then using (2) our valid version of (1) for any Δt is as follows:

(3) H[n+1] = A(Δt)*H[n]

Now in this simple case, it turns out that 1 - r*Δt/τ ≈ A(Δt), but it's not exact. Also, in the limit as Δt -> 0 we have A(Δt) -> 1, so our difference equation (1) breaks down. We need to replace it with a differential equation:

dh/dt = (ln(A)/τ)*h = a*h, where ln() is natural log

and where a = ln(A)/τ, and H[n] = h(n*Δt). Note that a < 0 in this example because A is on (0,1)

The solution to this equation is:

h(t) = exp(a*t)*H0

That's the underlying continuous time system. Since a < 0, it's a decaying exponential, as before. We can sample it at any Δt we choose:

H[n] = (exp(a*Δt)^n)*H0

or in difference formula form:

H[n+1] = exp(a*Δt)*H[n]

Which suggests an alternative formulation of equation (2) when the continuous-time time constant -1/a is known [1]:

(4) A(Δt) = exp(a*Δt)

I leave as an exercise to the reader the case where γ > 0. :D

[Hint, try this]

Or if you want to see it worked out for the SIM example, look at how I calculate B here at the bottom.
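For a concrete instance of equations (2) and (4), here's the half-life example from above in code (my own illustration):

```python
import math

# Numeric check of equations (2) and (4) above: with A(tau) = 0.5 over
# tau = 1 year, daily steps using A(1/365) = 0.5**(1/365) must still lose
# half the money over a year.
A_year = 0.5
A_day = A_year**(1/365)          # equation (2): A(dt) = A(tau)**(dt/tau)

H = 100.0
for _ in range(365):             # one year of daily steps
    H = A_day*H
print(round(H, 6))               # 50.0: same decay as one annual step

a = math.log(A_year)             # continuous-time rate, with tau = 1
print(math.isclose(A_day, math.exp(a/365)))   # equation (4) agrees: True
```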

What G&L are doing amounts to assuming that (2) below (the first two terms of (1) (also below)) is a good approximation, and I think they try to restrict alpha1, alpha2 and theta to accomplish that. But even with those restrictions, (2) can sometimes be a terrible approximation for (1). Either that or they are assuming the rate of compounding always matches the sample rate. Note that in the more general case where we substitute a square matrix A for a, (1) and (2) still hold, provided we take A raised to n to represent A matrix-multiplied by itself n times in (1), and substitute the identity matrix I for 1 in (2).

 Discrete and continuous time system equivalents, where H[n] = h(n*Ts), etc. Ts being the sample period

NOTES:

[1] -1/a is defined as the time constant of the decaying exponential. It's the time t > 0 by which the exponential has decreased from its value at t=0 (unity) by 63%, or equivalently, decayed down to 37% of 1: since if t = -1/a, then exp(a*t) = exp(-1) ≈ 0.37.

## Saturday, March 5, 2016

### SIM2: Interactive spreadsheet implementing Godley & Lavoie's SIM model (now w/ sample period invariance! [Hint: the secret is expm(A*Ts) & its integral have Ts parameters!])

tl;dr version: Everything you need to know is right here (except when A is singular, in which case you'll need this too to find discrete time B) which tells you about converting back and forth from continuous to discrete time in a problem like this, namely, given the system:

dx/dt = A∙x + B∙u

for vectors x(t) and u(t), constant matrices A and B, and u(t) held constant over each sample period T, we have the discrete time equivalent:

x[n+1] = Ad∙x[n] + Bd∙u[n], where Ad = expm(A∙T) and Bd = A^-1∙(Ad - I)∙B

Notice that the discrete time constant matrix coefficients Ad and Bd are dependent on the sample period T.
end tl;dr

Success!! (I think). I changed my previous spreadsheet to make it truly invariant to the sample period (Ts in the green box below). Please refer to the previous version and the description underneath the spreadsheet for what the formulas are doing and the discrete time state space representation of the system, and to links to Jason and Ramanan's blogs (from whence this whole exercise originated). Below the spreadsheet on this page I describe what I'm doing differently. Feel free to play with the values: I only planned on people editing the green cells (in this embedded version), and if you get it irrevocably screwed up, just refresh the page (you can't hurt my original copy). Also you can download your own copy using the tool on the right side of the black toolbar running along the bottom of the sheet. Note that setting Ts < 0.1 will limit the data to less than 100 years (so the plot won't be filled out) because the number of time samples in my table is fixed at 1000.

Note the names and values with the modifier "alt" in them below don't affect anything: they're part of an experiment to get SIM2 to always satisfy all of G&L's accounting expressions. I've had success with the ΔH = G - T expression by using T-alt (in columns to the right, you have to scroll to see). I did add T-alt to the plot so you can see how it differs slightly from T. More in Update 5 below and here.

NOTE (update of 2016.04.22): in the above sheet, to calculate the so-called experimental "alt" version of values, use CTalt = (1-A)/Ts, DTalt = (Ts-B)/Ts and alpha2alt = CTalt/DTalt, where A and B are a function of the new sample period Ts (assuming the old one Ts0 = 1). These expressions come from T as a function of Ts := T(Ts) = T*Ts = G(Ts) - ΔH(Ts) = G[n]*Ts - H[n](Ts) + H[n-1] = G[n]*Ts - A*H[n-1] - B*G[n] + H[n-1] = (1-A)*H[n-1] + (Ts-B)*G[n] = CTalt*H[n-1] + DTalt*G[n], assuming H[n-1] didn't change with Ts, that A and B are functions of Ts, and that B requires multiplication by G expressed in old units (i.e. per Ts0). Then our new T expressed in old units as a rate (i.e. dollars per Ts0) is just T = T(Ts)/Ts = ((1-A)/Ts)*H[n-1] + ((Ts-B)/Ts)*G[n]; thus, matching terms, we have CTalt = (1-A)/Ts and DTalt = (Ts-B)/Ts, and because CT/DT = alpha2 (where CT is the coefficient on H[n-1] for T, not SIM's consumption output C), then alpha2alt = CTalt/DTalt. I think the idea came from SIM3. Also see SIM5.

Talt is calculated in terms of CTalt and DTalt in the columns you need to scroll to the right to see. The alpha2alt isn't actually used for anything (I don't think). It's a given that this experiment will "work" and that Jason's Γ = 1 for all Ts (the purpose of the "experiment"), given that we could just directly calculate Talt from the other columns: namely Talt = G - ΔH/Ts, where the Talt and G columns are expressed as dollars per Ts0 (so ΔH/Ts is just ΔH expressed in those same units). But ultimately the alpha2alt I calculate here doesn't lead anywhere.

In contrast, SIM6 is a successful approach: it adjusts both alpha1 and alpha2 (but not theta) to change sample periods without affecting steady state values (expressed in dollars per Ts0, and in dollars for the case of H) or the time constant Tc (expressed in Ts0). An update there of the date of this update explains why you adjust alpha1 and alpha2 but not theta to accomplish that.
Given that Ts is changed by changing these parameters, it's a given that Jason's Γ = 1 for all Ts in SIM6 as well. That's not really the case here in SIM2, because the resultant alpha2alt is not used (because there's probably not a way to make it useful). Thus really Jason's Γ ≠ 1 for all Ts here (i.e. using the method here to change sample periods). And no, alpha2alt here is not equal to the alpha2 from SIM6. However, that's not to say that the SIM3 idea didn't ultimately lead to a greater success: see SIM9 and SIM10 for example.
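For what it's worth, here's the "alt" calculation above in code form (my sketch; at Ts = 1 it reduces to the original alpha2, as claimed):

```python
import math

# The "alt" expressions above in code (my sketch): A and B re-discretized
# at sample period Ts (Ts0 = 1), then CTalt = (1-A)/Ts, DTalt = (Ts-B)/Ts,
# alpha2alt = CTalt/DTalt. At Ts = 1 this reduces to the original alpha2.
alpha1, alpha2, theta = 0.6, 0.4, 0.2
den = 1 - alpha1*(1 - theta)
A1 = 1 - theta*alpha2/den            # A at Ts0 = 1
B1 = 1 - theta/den                   # B at Ts0 = 1
a = math.log(A1)                     # continuous-time parameters
b = a*B1/(A1 - 1)

def alt(Ts):
    A = math.exp(a*Ts)               # re-discretize at Ts
    B = (A - 1)*b/a
    CTalt = (1 - A)/Ts
    DTalt = (Ts - B)/Ts
    return CTalt, DTalt, CTalt/DTalt

CT1, DT1, a2alt1 = alt(1.0)
print(round(a2alt1, 6))              # 0.4, the original alpha2
```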

The problem with my previous sheet was that the H curve changed when I changed the sample period (Ts). This was in spite of modifying the alpha2 parameter as per Ramanan's instructions.

I've changed tactics here: instead of scaling alpha2 (from a user provided alpha2_R parameter), I instead just use the User Parameters (in the green cells) directly without doing any modifications, (e.g. modifications such as scaling alpha2 by the ratio of change in the sample period (Ts/Ts_orig)).

What I do instead is record the original Ts setting (Ts_orig = 1 year). Then from the alpha1, alpha2 and theta parameters I derive continuous time parameters for the system. In particular I do the following:

Starting with the original discrete time system with A_orig and B_orig based on alpha1, alpha2 and theta, we have:

H[n+1] = A_orig*H[n] + B_orig*G[n+1]

From this I calculate the parameters for an equivalent continuous time system assuming G is held constant over each sample period:

dH(t)/dt = H'(t) = a_cont*H(t) + b_cont*G(t)

From this continuous time system, I re-descretize it with a new sample period (the user input sample period Ts in the green cells in the upper left). This gives me a new A and a new B, which I simply call A and B like in the old spreadsheet, and use them as follows in the results table:

H[n+1] = A*H[n] + B*G[n+1]

Here's how I do the calculations. First I assume that A_orig and B_orig resulted from a discretization of a continuous time system with an original sample period Ts_orig = 1 year. Thus:

A_orig = expm(a_cont*Ts_orig) = exp(a_cont*Ts_orig)  (matrix exponentiation expm() reduces to scalar exponentiation exp() in this case, since the system has a single state).

B_orig = (A_orig - 1)*b_cont/a_cont  (to see why this is, look here).

I solve for a_cont and b_cont from the above, and then use these same formulas with a new sample period (Ts) to calculate A and B.
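That recipe can be sketched in a few lines (my own illustration, scalar case; the two-half-steps check is exactly what sample period invariance means here):

```python
import math

# The SIM2 recipe above in a few lines (scalar case, my illustration):
# recover a_cont and b_cont from (A_orig, B_orig) at Ts_orig = 1, then
# re-discretize at a new Ts. Check: with G held constant, two Ts = 0.5
# steps must land exactly where one Ts = 1 step does.
alpha1, alpha2, theta = 0.6, 0.4, 0.2
den = 1 - alpha1*(1 - theta)
A_orig = 1 - theta*alpha2/den
B_orig = 1 - theta/den

a_cont = math.log(A_orig)                # from A_orig = exp(a_cont*1)
b_cont = a_cont*B_orig/(A_orig - 1)      # from B_orig = (A_orig - 1)*b/a

def rediscretize(Ts):
    A = math.exp(a_cont*Ts)
    B = (A - 1)*b_cont/a_cont
    return A, B

G, H0 = 20.0, 0.0
Af, Bf = rediscretize(1.0)               # full step
Ah, Bh = rediscretize(0.5)               # half step

H_one = Af*H0 + Bf*G                     # one Ts = 1 step
H_two = Ah*(Ah*H0 + Bh*G) + Bh*G         # two Ts = 0.5 steps
print(abs(H_one - H_two) < 1e-9)         # True: sample period invariant
```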

Some of these calculations are performed in the cells to the right of the chart, and the rest are in the cells for the A and B matrices respectively.

Matrices C and D (used for calculating the measurement vector [Y T YD C]' at sample n+1 by scaling H[n] and G[n+1] respectively) were not changed.

Now (you may notice) instead of H changing with Ts, ΔH changes with Ts. Is that a problem? I don't think so, because it SHOULD change with Ts: each Ts period of time now adds less to the total stock of money in the economy. H should NOT be changing with Ts, because the sample period should not affect the total stock of money accumulated in the economy at a fixed time in years.

---------------------------------------------------------------------------------------------------------------------------
Further information about calculating A and B as a function of Ts/Ts_orig:
Starting with A_orig and B_orig and doing a little algebra reveals:

(1) A_orig = A(Ts_orig) = 1 - theta * alpha2 / (1 - alpha1 * (1 - theta))

(2) B_orig = B(Ts_orig) = 1 - theta / (1 - alpha1 * (1 - theta))

I write A(Ts_orig) and B(Ts_orig) because they are implicit functions of Ts_orig! Equations (1) and (2) were our starting point for A and B on the previous spreadsheet. Now what happens when we change Ts_orig to Ts?:

(3) A(Ts) = A_orig^(Ts/Ts_orig)

(4) B(Ts) = (B_orig/(A_orig - 1)) * (A(Ts) - 1) = -[(1 - alpha1*(1 - theta) - theta)/(theta*alpha2)] * (A(Ts) - 1)

Note: I didn't bother with that algebra in the spreadsheet above: I just used exp() and ln() according to the development above equation (1).

Enjoy!

UPDATE 1: March 6, 2016 1:02 AM
I made a few changes to the plot to eliminate apparent sensitivity to Ts. Basically two things: I start the plot at t=0 years, but I start the table at -Ts. Secondly, instead of plotting H (which I leave in the main table), I plot H[n-1] (H delayed by one). H[n-1] takes the place of x[n] in a normal (canonical) discrete time state space formulation, so I suspected that I was plotting the wrong sequence against the others (off by one), and sure enough, once I changed that it helped quite a bit. The plot now shows near complete Ts invariance, as it should.
UPDATE 1: END

UPDATE 2: March 7, 2016 9:06 PM
Note that I resolved my problem with the old method of changing sample periods suggested by Ramanan on that older post, and I've updated it accordingly (read Update #2 there at the top). However that doesn't mean that the alpha2 scaling method is the same as this method: it is for small values of Ts, but try Ts >= 10 on this sheet and that one to see how they still differ.
UPDATE 2: END

UPDATE 3: March 8, 2016 12:33 AM
Note the  Γ below is Jason Smith's Γ from his March 2016 posts on SIM.

UPDATE 3: END

UPDATE 4: March 9, 2016, 5:21 PM
I made a new version called SIM3 posted here which combines the best of the version above (SIM2) with the old version (SIM), for the price of restricting the functional form of G.
UPDATE 4: END

UPDATE 5: March 13, 2016, 2:50 PM
I added some "alternative" parameters and variables to the spreadsheet. They don't affect any of the previous parameters or the plot, except that I've added T-alt to the plot. It differs slightly from T when Ts is changed. T-alt satisfies G&L's ΔH = G - T (actually ΔH = Ts*(G - T), since G and T are always rates expressed in dollars/(Ts_orig = Ts1 = 1 period = 1 year) in SIM2). Eventually there will be alternate versions of theta (thetaalt) and alpha1 (alpha1alt) as well. The idea is to eventually satisfy ALL of G&L's accounting equations, while maintaining sample period invariance, and matching G&L at Ts=1 (at least with G functions restricted to the form of their example). I have more information about it under the heading "SIM5" on a new post called SIM5.
UPDATE 5: END

UPDATE 6: March 17, 2016, 10:17 AM
As I explain in a new post (SIM6), and in a comment on Jason's blog, I think Jason's Γ (above) represents two things. First, an approximate change to a new sample time (without actually changing the sample time values in the table), and thus a time constant change without a change in steady states (what I explain in the comment). Second, as I explain in the text of SIM6 below the spreadsheet, Jason's Γ as calculated above represents the difference between a sampled rate variable (like I treat G, Y, T, YD and C in the spreadsheet in this post) and an average rate calculated across a sample period interval (how they are treated in SIM and SIM6). As Ts gets larger, that difference grows.
UPDATE 6: END