## Tuesday, March 29, 2016

### Comment for Roger

This post is an expanded follow-on comment for Roger Sparks in this thread.

In particular, for an LTI system $A_d = e^{A_c T_s} \approx I + A_c T_s$ (the approximation holding when the matrix norm $\lVert A_c T_s \rVert = \lVert A_c \rVert T_s \ll 1$, ... a generalization of the approximation you may recall from this comment I left you on John Handley's blog). The subscripts $d$ and $c$ denote discrete and continuous time, respectively. Note that $A_d$ is different for every sample period $T_s$ used. $B_d$ is generally more complicated, but can be approximated as $B_d \approx A^{-1}_c (A_d - I) B_c$. This approximation is exact when $\mathbf{u}$ is a "zero order hold" (ZOH) function, meaning it stays constant over each sample period. More generally, for LTI systems with an arbitrary input $\mathbf{u}$ you must integrate:
$$\mathbf{x}_{n+1} = e^{A_c T_s} \mathbf{x}_{n} + \int\limits_{0}^{T_s} e^{A_c (T_s - \tau)} B_c\, \mathbf{u}(\tau + n T_s) d \tau \tag 9$$
and substituting for time-invariant $A_d$ from above:
$$\mathbf{x}_{n+1} = A_d \mathbf{x}_{n} + \int\limits_{0}^{T_s} e^{A_c (T_s - \tau)} B_c\, \mathbf{u}(\tau + n T_s) d \tau \tag {9a}$$
or more generally still, for a time-varying linear system:
$$\mathbf{x}_{n+1} = \Phi_{(n+1) T_s, n T_s} \mathbf{x}_{n} + \int\limits_{n T_s}^{(n+1) T_s} \Phi_{(n+1) T_s, \tau} B_c (\tau) \mathbf{u}(\tau) d \tau \tag {10}$$
and now substituting for a time varying version of the $A_d$ matrix (indexed by the sample period number $n$):
$$\mathbf{x}_{n+1} = A_{dn} \mathbf{x}_{n} + \int\limits_{n T_s}^{(n+1) T_s} \Phi_{(n+1) T_s, \tau} B_c (\tau) \mathbf{u}(\tau) d \tau \tag {10a}$$
where $\Phi_{t_2, t_1}$ is the state transition matrix from time $t_1$ to time $t_2$. Or you can, in principle, avoid the integration by doing a Taylor expansion on $\mathbf{u}$ and finding a separate $B_{di}$ for each $i^{th}$ term. As Brian has previously pointed out, a numerical integration accomplishes much the same thing, only instead of Taylor terms sampled at one time, it uses $\mathbf{u}$ sampled at several times over a single nominal sample period.
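As a concrete sketch of the discretization above (the matrices, input, and sample period here are my own toy values, and I'm assuming scipy's standard matrix exponential), here's a minimal check that $A_d = e^{A_c T_s}$ is well approximated by $I + A_c T_s$ when $\lVert A_c \rVert T_s \ll 1$, along with the ZOH formula for $B_d$:

```python
# Sketch: ZOH discretization of a toy continuous-time LTI system.
# The matrices and Ts are illustrative values, not from the post.
import numpy as np
from scipy.linalg import expm

Ts = 0.1                                   # sample period
Ac = np.array([[0.0, 1.0],
               [-2.0, -0.5]])              # continuous-time dynamics
Bc = np.array([[0.0],
               [1.0]])

Ad = expm(Ac * Ts)                         # exact: Ad = e^(Ac Ts)
Bd = np.linalg.inv(Ac) @ (Ad - np.eye(2)) @ Bc  # exact under a ZOH input

Ad_approx = np.eye(2) + Ac * Ts            # two-term approximation, ||Ac|| Ts << 1
err = np.linalg.norm(Ad - Ad_approx)
```

With these values $\lVert A_c \rVert T_s$ is around 0.2, so the two-term approximation is close but not exact; shrinking $T_s$ shrinks `err` roughly quadratically.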

Additionally, as I mentioned in a comment above, some non-linear dynamic systems can be approximated over each sample period as a linear system. For example, you may have a non-linear system such as $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x},\mathbf{u},t)$, where $\mathbf{f}, \mathbf{x}$ and $\mathbf{u}$ are generally vectors. One way to accomplish this is to approximate $\mathbf{f}$ with $A_c$ and $B_c$, where these matrices are the Jacobians of $\mathbf{f}$ wrt $\mathbf{x}$ (i.e. $A_c = \partial \mathbf{f}/\partial\mathbf{x}$) and $\mathbf{u}$ (i.e. $B_c = \partial \mathbf{f}/\partial\mathbf{u}$) respectively, and to rewrite the differential equation in terms of a small perturbation of $\mathbf{x}$ (call it $\delta{\mathbf{x}}$) about a nominal solution $\mathbf{x}^*$, i.e. $\delta{\mathbf{x}} \equiv \mathbf{x} - \mathbf{x}^*$. Similarly define $\delta{\mathbf{u}} \equiv \mathbf{u} - \mathbf{u}^*$. Then we can write an approximate perturbation equation for this non-linear differential equation: $\delta{\dot{\mathbf{x}}} = A_c \delta\mathbf{x} + B_c \delta\mathbf{u}$. The discrete time approximation I gave above ($A_d \approx I + A_c T_s$) can be seen in this light as a two-term vector Taylor expansion of $\mathbf{f}$ wrt $\mathbf{x}$, i.e. $A_d \approx I + (\partial\mathbf{f}/\partial\mathbf{x}) T_s$. Note that the "region of validity" for $A_c$ and $B_c$ may be small, and typically they are recalculated around different nominal values $\mathbf{x}^*$, $\mathbf{u}^*$ and time $t$ as the system evolves in time. An application might be, for example, a model of perturbations of a rocket about a nominal desired trajectory for purposes of implementing a feedback control system to keep it on course. As long as it is never allowed to wander too far off course, the approximation may be adequate, provided the system is "re-linearized" on a regular basis.
This might suggest an economics application of a central bank "feedback control" rule for setting interest rates or determining the "high powered" money supply to "keep the economy on course" about some non-linear pre-determined solution to a set of non-linear differential equations, based on feedback from other measured macro variables, such as the inflation rate, the NGDP level or the unemployment rate.
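To make the Jacobian linearization concrete, here's a small sketch using the classic pendulum example (the model, nominal state, and finite-difference step are my own illustrative choices, not from the post):

```python
# Sketch: linearizing a nonlinear pendulum about a nominal state via a
# finite-difference Jacobian, then forming the two-term discrete approximation.
import numpy as np

g, l = 9.81, 1.0

def f(x, u):
    # x = [angle, angular rate]; u = scalar torque input
    return np.array([x[1], -(g / l) * np.sin(x[0]) + u])

def jacobian_x(f, x, u, eps=1e-6):
    # central-difference estimate of df/dx at (x, u)
    n = len(x)
    J = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (f(x + dx, u) - f(x - dx, u)) / (2 * eps)
    return J

x_star = np.array([0.0, 0.0])    # linearize about the hanging equilibrium
Ac = jacobian_x(f, x_star, 0.0)  # Ac = df/dx evaluated at the nominal

Ts = 0.01
Ad = np.eye(2) + Ac * Ts         # Ad ~ I + (df/dx) Ts, valid near x_star
```

Re-linearizing about a different $\mathbf{x}^*$ (say a 90 degree swing angle) would give a different $A_c$, which is the "region of validity" point above.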

Another huge advantage of linear systems is the ability to analyze and manipulate them in the "frequency domain" rather than the time domain analysis I give above. This means the ability to use continuous and discrete time Fourier transforms, Laplace transforms and Z-transforms. They also have advantages in terms of being able to analyze their response to noise, modeling errors and other uncertainties, and to design optimal "state estimators" (e.g. Kalman filters) perhaps designed to work in conjunction with optimal feedback or open loop control laws, which can effectively change the natural modes of the system to more desirable ones: for example, to stabilize an unstable system by moving its unstable eigenvalues to the "left half plane" (i.e. move modes with positive real components such that they instead have negative real components... in what's known with regard to Laplace transforms as the "S-plane"). The discrete time equivalent is to move unstable eigenvalues of $A_d$ from outside the "unit circle" to inside the unit circle (this unit circle drawn in the "Z-plane" associated with the Z-transform, which is the discrete time counterpart to the Laplace transform). Sometimes you hear economists speak of "unit roots." What they're referring to are "roots" (in the sense of roots to the "characteristic polynomial" of $A_d$, which are the eigenvalues, AKA "poles" of $A_d$) on the unit circle (corresponding to the imaginary axis of the S-plane in the continuous time world), which produce "marginally stable" outputs, such as a non-rising, non-decaying step response to an impulse (the response of a perfect "integrator") or constant amplitude oscillations (sin and cos waves, or more generally "complex exponentials," i.e. $c\, e^{i \omega n T_s}$ with $i \equiv \sqrt{-1}$, $c$ a constant and $\omega$ the unitless frequency of oscillation (which you can practically take to be radians per sample)).
Everything in quotes can be Googled for more information (those are all common terms).

I think you can probably see how important the $A$ matrix ($A_c$ or $A_d$) is to a linear system. It characterizes all the dynamics of the system, such as whether or not it's naturally stable or likely to oscillate, how fast it will respond to inputs (the time constants), and which modes are dominant (i.e. which modes are weighted with the largest coefficients). Of course the other matrices ($B$, $C$ and $D$) are also important, and you need them, for example, to analyze steady state responses or to determine if a system is "controllable" (can we control the states?) or "observable" (do we have enough sensors to observe all the states?). But $A$ is really the heart of the system.
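As a quick illustration of reading stability off the $A$ matrix (the example matrix and sample period are mine, and I'm again assuming scipy's `expm`):

```python
# Sketch: continuous-time stability = eigenvalues in the left half S-plane;
# discrete-time stability = eigenvalues inside the unit circle of the Z-plane.
import numpy as np
from scipy.linalg import expm

Ac = np.array([[0.0, 1.0],
               [-2.0, -0.5]])          # damped oscillator: should be stable

eig_c = np.linalg.eigvals(Ac)
stable_ct = bool(np.all(eig_c.real < 0))   # left half plane test

Ts = 0.1
Ad = expm(Ac * Ts)
eig_d = np.linalg.eigvals(Ad)
stable_dt = bool(np.all(np.abs(eig_d) < 1))  # unit circle test
```

Note the mapping between the two tests: each discrete eigenvalue is $e^{\lambda T_s}$ for a continuous eigenvalue $\lambda$, so $|e^{\lambda T_s}| = e^{\mathrm{Re}(\lambda) T_s} < 1$ exactly when $\mathrm{Re}(\lambda) < 0$.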

-----------------------------------------------------------------------------------------------------------------

Appendix A: Details about one way to calculate discrete time matrices for non-linear systems

For the non-linear approximate discrete time case, it's sometimes beneficial to numerically calculate $A_d$ and $B_d$ from numerical solutions to the non-linear equations directly rather than via the partial differential Jacobian technique discussed above, especially if you have some idea of the covariance of $\delta \mathbf{x}$ (call it $C_\mathbf{x}$) and $\delta \mathbf{u}$ (call it $C_\mathbf{u}$). For example, say the eigenvalues (variances) of $C_\mathbf{x}$ are $\{\sigma^2_{\mathbf{x}, i}\}$, for $i = 1,...,n$ (assuming $\mathbf{x}$ is $n$-dimensional), and the corresponding orthonormal matrix of eigenvectors is $V_\mathbf{x} = [\mathbf{v}_{\mathbf{x}, i}]$. These eigenvectors can be chosen orthonormal since covariance matrices are symmetric, and we'll assume positive definite (i.e. a full $n$ degrees of freedom (equivalently all variances are > 0)) for this exercise. Then a good way to calculate $A_{d n}$ which captures some of the effects of the higher order terms of the Taylor expansion as well as the dependence of $\mathbf{f}$ on $t$ (which we've so far assumed to be negligible over a sample period) is the following sum of outer products:
$$A_{d n} = \sum_{i=1}^n [\mathbf{x}_i (t_{n+1}) - \mathbf{x}^* (t_{n+1})] \cdot \mathbf{v}^T_{\mathbf{x}, i} /\sigma_{\mathbf{x}, i} \tag {11}$$
where $t_{m} \equiv m T_s$ and $\mathbf{x}_i (t_{n+1})$ is the numerical solution at time $t_{n+1}$ to the non-linear differential equation after perturbing the nominal solution $\mathbf{x}^* (t_{n})$ by $\mathbf{v}_{\mathbf{x}, i} \sigma_{\mathbf{x}, i}$ and propagating it forward by one sample period $T_s$ to time $t_{n+1}$. In other words we form $\mathbf{x}_i (t_n) = \mathbf{x}^* (t_{n}) + \delta \mathbf{x}_{i, n}$ where $\delta \mathbf{x}_{i, n} = \mathbf{v}_{\mathbf{x}, i} \sigma_{\mathbf{x}, i}$ and propagate that forward one time step to get $\mathbf{x}_i (t_{n+1})$. In general $C_\mathbf{x}$ and $C_\mathbf{u}$ will also be functions of time, and thus so will their eigenvalues and eigenvectors (though I've neglected a time index for them in the above). An analogous procedure exists for finding $B_{d n}$ using a formula similar to $(11)$ but with the eigenvalues and eigenvectors of $C_\mathbf{u}$ rather than $C_\mathbf{x}$. This procedure (for both $A_d$ and $B_d$) scales the non-linear effects by the size of the standard deviations ($\sigma$) of each degree of freedom in each covariance, thus in some sense capturing the expected size of the effects of higher order terms / non-linearities. Note that $(11)$ is a one-sided difference, whereas a more robust method would use both positive and negative perturbations ($\pm \delta$).
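Here's a minimal numerical sketch of $(11)$ (the toy nonlinear model, covariance, and Euler propagator are my own assumptions): each eigenvector direction of $C_\mathbf{x}$ is perturbed by $1\sigma$, propagated through the nonlinear dynamics for one sample period, and the resulting differences are assembled into $A_{dn}$ by outer products:

```python
# Sketch of equation (11): build A_dn from sigma-scaled eigenvector
# perturbations propagated through a toy nonlinear model (values are mine).
import numpy as np

def propagate(x, Ts, steps=100):
    # crude Euler integration of a toy nonlinear system x' = [x2, -sin(x1)]
    dt = Ts / steps
    for _ in range(steps):
        x = x + dt * np.array([x[1], -np.sin(x[0])])
    return x

Ts = 0.1
x_star_n = np.array([0.2, 0.0])        # nominal state at t_n
x_star_np1 = propagate(x_star_n, Ts)   # nominal propagated to t_{n+1}

Cx = np.array([[0.04, 0.0],
               [0.0, 0.01]])           # assumed covariance of delta-x
sig2, V = np.linalg.eigh(Cx)           # variances and orthonormal eigenvectors
sig = np.sqrt(sig2)

# Equation (11): sum of outer products over the 1-sigma perturbation directions
A_dn = np.zeros((2, 2))
for i in range(2):
    x_i = propagate(x_star_n + V[:, i] * sig[i], Ts)
    A_dn += np.outer(x_i - x_star_np1, V[:, i] / sig[i])
```

By construction (this is the point of $(12)$ below), applying `A_dn` to any of the $1\sigma$ perturbations reproduces the fully nonlinear propagated perturbation exactly.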

One way to picture what's going on in $(11)$ is to imagine that $C_\mathbf{x}$ describes a multivariate spatial Gaussian probability distribution that's long and narrow (i.e. its 1-$\sigma$ ellipsoidal iso-probability density contour is cigar shaped). Then a perturbation in $\mathbf{x}$ from the nominal is most likely to be located ~$1 \sigma$ one direction or the other along the major axis of this distribution (with $\mathbf{x}^*$ right in the middle of course). For the sake of argument, say it's exactly $1 \sigma$ along the positive major axis (it actually works for any principal axis of the ellipsoid, but imagine the major one for the sake of argument), and it has no perturbation components along the other axes. Then the inner product of this perturbation and the row vector $\mathbf{v}^T_{\mathbf{x}, j}/\sigma_{\mathbf{x}, j}$ will be $1$ and it will be $0$ for $\mathbf{v}^T_{\mathbf{x}, i}/\sigma_{\mathbf{x}, i},\; i \ne j$ where we're assuming the major axis is the $j^{th}$. That's because $V_\mathbf{x}$ is orthonormal (i.e. its columns are unit length and mutually orthogonal). Such a perturbation will give an exact result for the resultant perturbation propagated to time $t_{n+1}$ (assuming $\delta \mathbf{u} = 0$). In other words, such a perturbation at time $t_n$ (call it $\delta \mathbf{x}_j (t_n)$) will give an exact result for $\delta \mathbf{x}_j (t_{n+1}) = A_{d n} \delta \mathbf{x}_j (t_n)$. This is because, substituting $(11)$ for $A_{d n}$ we have:
\begin{align} A_{d n} \delta \mathbf{x}_j (t_n) & = \left( \sum_{i=1}^n [\mathbf{x}_i (t_{n+1}) - \mathbf{x}^* (t_{n+1})] \cdot \mathbf{v}^T_{\mathbf{x}, i, n} /\sigma_{\mathbf{x}, i, n} \right) \cdot \delta \mathbf{x}_j (t_n)\\ & = \left( \sum_{i=1}^n [\mathbf{x}_i (t_{n+1}) - \mathbf{x}^* (t_{n+1})] \cdot \mathbf{v}^T_{\mathbf{x}, i, n} /\sigma_{\mathbf{x}, i, n} \right) \cdot \mathbf{v}_{\mathbf{x}, j, n} \sigma_{\mathbf{x}, j, n}\\ & = \sum_{i=1}^n [\mathbf{x}_i (t_{n+1}) - \mathbf{x}^* (t_{n+1})] (\mathbf{v}^T_{\mathbf{x}, i, n} \cdot \mathbf{v}_{\mathbf{x}, j, n}) ( \sigma_{\mathbf{x}, j, n} /\sigma_{\mathbf{x}, i, n}) \\ & = [\mathbf{x}_j (t_{n+1}) - \mathbf{x}^* (t_{n+1})] (\mathbf{v}^T_{\mathbf{x}, j, n} \cdot \mathbf{v}_{\mathbf{x}, j, n}) ( \sigma_{\mathbf{x}, j, n}/\sigma_{\mathbf{x}, j, n})\\ & = \mathbf{x}_j (t_{n+1}) - \mathbf{x}^* (t_{n+1})\\ & = \delta \mathbf{x}_j (t_{n+1}) \tag {12} \end{align}
where I've added time indices where appropriate. Thus, in some sense, a "likely" perturbation gives exact results, non-linearities and all. The Jacobian method does not capture this, and may be equally difficult to calculate (although when closed form expressions are available, it's likely easier). Of course, this exact "likely" perturbation still has probability = 0, so we don't expect to actually avoid non-linear distortions altogether, but in some sense we can expect to account for them in a way not captured by the Jacobian method. Doing so is perhaps a bit conservative, but if our purpose is (for example) to keep track of how errors propagate through the system (e.g. with a Kalman filter), it's generally better to err on the conservative side, lest we fool ourselves into thinking we have a better state estimate than we actually do (which can cause us to weight new information ("innovations") with smaller gains than we should at future times). You can think of this problem as wrongly adopting overly strong "priors." If a more conservative result is desired, the perturbations can be made to be greater than 1-$\sigma$ in size.

In fact, you can think of this method as a crude numerical estimate of $e^{(\partial\mathbf{f}/\partial\mathbf{x}) T_s}$, where the $\partial \mathbf{x}$ deviations (and thus the implicit $\partial \mathbf{f}$ deviations) aren't infinitesimal at all, and where $T_s$ is implicitly folded into the differencing estimate directly rather than as a separate multiplier. But the point isn't to approximate the matrix exponential: it's to find something better. However, if the system were actually LTI, it would be an exact representation of $e^{(\partial\mathbf{f}/\partial\mathbf{x}) T_s}$. And if the system were linear but time varying, it would produce exactly $\Phi_{n+1, n}$, where I've simplified the time indices from $(10)$ above.

-----------------------------------------------------------------

See comment below:

Below: Example of error propagation (falling objects with initial horizontal velocity): blue curve includes air resistance, and the red curve doesn't. Blue trajectory is non-linear and red is linear. Error ellipses are 1-sigma.

#### 30 comments :

1. I just discovered this as I was trying to understand how you placed Excel spreadsheets on the blog. (I think you are using a Microsoft account and the online app?)

This will take some study! Thank you very much for placing it here.

With a quick glance, I notice that you relate linear systems to the "frequency domain". I spent some time with filters and the Fourier Transform a couple of years back. I never attained a grasp of the math behind the transform but was successful in embedding prepackaged code into a C++ program that I still use. I had a lot of fun with the source code of Amaseis which is an earthquake recording software package written by Alan Jones. This was (and is) in conjunction with an amateur seismic station that I built a few years back. Too much. I will stop here.

Thanks again, for all of this.

1. Hi Roger. I responded to your latest on Brian's blog telling you all the gory details of how I embedded the Excel spreadsheets. It's actually shockingly easy to do, and last I checked (a couple years ago?), way easier and better than the Google Drive version.

Yes, being able to analyze systems in the frequency domain is a big plus for linear systems!

I've gone way overboard with this post... clearly it's for my own amusement more than anything else at this point. But I do hope it helps you out. I added a huge section in the middle about a method I like for finding discrete time approximations to non-linear continuous time systems... which you probably don't care about at all. I think I'll move that down to a footnote. Lol.

2. Good Morning Tom, I am beginning to get a glimmer of understanding here. But just a glimmer.

I am back at your very first comment on Brian's post. Here you define the terms and in particular

X = 1 - α1(1-θ)

Here X is a constant. Am I correct in thinking that A and B will also be constants? Or are they variables that act within X space in the sense that α1 and θ move in a relationship such that X is constant?

Does this get to the heart of the difference between linear and non-linear?

I know these are dumb questions but I must build on the fragments already within my grasp.

Your comment on Excel technique has not yet shown up when I refresh Brian's blog. I am looking forward to seeing that. Thanks.

Back to bed here. Too early to stay up with a busy day forthcoming.

This is all fascinating to me. I thank you so much for presenting it.

1. I don't know what happened to that comment: here's the gist: I save the Excel file on MS OneDrive. You *might* be able to get a free account (I did in the past, but have since purchased Excel, which comes with it). Find your file on OneDrive with your browser (I could not figure a way to do this from within Excel). Select the file (put a check on it). Then choose "embed" from the menu at the top. It generates a special HTML line of code for you with the word "iframe" in it. Simply paste the result into your HTML on your blog (using the HTML editor). It's very easy and it works great!

2. A and B are constants as long as you don't change the parameters (alpha1, alpha2, theta).

"Does this get to the heart of the difference between linear and non-linear?"

No, even if A, B, C and D changed every time step, the system would still be linear (just not time invariant).

Non-linear means that the derivative of H (in the case of SIM, or little x (the vector) in the literature) is dependent on something other than a linear combination of x terms each step. For example it could be x^2 terms.

x' = x^2

is a simple scalar non-linear example (where ' mean derivative).

x' = a*x

is linear, even if a is a non-linear function of t.
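A quick numerical check of that distinction (my own toy Euler integration, not from the thread): doubling the initial condition doubles the solution of a linear system, but not of x' = x^2:

```python
# Sketch: scaling the initial condition scales the solution for a linear
# system, but not for the nonlinear system x' = x^2 (toy values are mine).

def solve(f, x0, t_end=0.25, steps=10000):
    # simple Euler integration of the scalar ODE x' = f(x)
    dt = t_end / steps
    x = x0
    for _ in range(steps):
        x = x + dt * f(x)
    return x

nonlinear = lambda x: x ** 2
a = solve(nonlinear, 1.0)      # x(0) = 1
b = solve(nonlinear, 2.0)      # x(0) = 2: NOT twice the first solution

linear = lambda x: -0.5 * x    # x' = a*x with a = -0.5
c = solve(linear, 1.0)
d = solve(linear, 2.0)         # exactly twice the first solution
```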

3. X = 1 - α1(1-θ)

Was just a convenience. I probably shouldn't have called it "X." But because SIM stole a bunch of my usual names (Y, C, H) I figured X would be a safe constant name, but that's all it is. To me it has no special meaning, but maybe it shows up in SFC models all the time: I really don't know.

You *could* make a model where α1, α2 and θ are indexed by time (i.e. are functions of time) and then you'd have a linear time-varying system.

Like I mentioned before, the big advantage of linear systems (time varying, or time-invariant (which is a reference to the coefficient matrices BTW, not the variables x, y and u)), is superposition. Here's what I mean by that. Suppose you have the following simple scalar linear system (where x' := dx/dt)

x'(t) = a(t)x(t) + b(t)u(t)

And say you find the following four specific solutions based on different x(0) values and u(t) curves:

Two homogeneous solutions:
x1(t) = solution with x(0) = x01, u(t) = 0
x2(t) = solution with x(0) = x02, u(t) = 0

Two inhomogeneous (forced) solutions:
x3(t) = solution with x(0) = 0, u(t) = u1(t)
x4(t) = solution with x(0) = 0, u(t) = u2(t)

Then the solution when x(0) = c1*x01 + c2*x02 and u(t) = c3*u1(t) + c4*u2(t), when all the c are constants is:

x(t) = c1*x1(t) + c2*x2(t) + c3*x3(t) + c4*x4(t)

Just a superposition of all four! One solution doesn't affect the others. This is not generally true of non-linear differential equations. What I describe above is a means of linearizing non-linear equations near a particular nominal state. It's an approximation that only holds in the vicinity of that particular state. The classic example is linearizing the equations of motion for a pendulum for small oscillations. Google "linearizing the motion of a pendulum" and you'll see tons of examples. If you want to linearize the equations of motion for the pendulum out where the swing angle is 90 degrees rather than near 0, you'll have a different result. Thus if you were trying to make a linear model of a pendulum swinging by +/- 90 degrees, you'd need a whole series of different linearizations (a time-varying linear system) for different angles, whereas if it stays near 0, you can get away with just 1 (an LTI system).
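Here's a small numerical sketch of that superposition property (the coefficient functions, inputs, initial conditions, and Euler integrator are my own toy choices):

```python
# Sketch: superposition in a time-varying linear system
# x'(t) = a(t) x(t) + b(t) u(t), integrated with simple Euler steps.
import numpy as np

def solve(x0, u, t_end=1.0, steps=2000):
    a = lambda t: -1.0 + 0.5 * np.sin(t)   # time-varying coefficients
    b = lambda t: 1.0 + 0.1 * t
    dt = t_end / steps
    x, t = x0, 0.0
    for _ in range(steps):
        x = x + dt * (a(t) * x + b(t) * u(t))
        t += dt
    return x

u1 = lambda t: np.cos(3 * t)
u2 = lambda t: 1.0
zero = lambda t: 0.0

x1 = solve(1.0, zero)    # homogeneous, x(0) = x01 = 1
x2 = solve(-0.5, zero)   # homogeneous, x(0) = x02 = -0.5
x3 = solve(0.0, u1)      # forced, zero initial condition
x4 = solve(0.0, u2)

c1, c2, c3, c4 = 2.0, 3.0, -1.5, 0.7
combined = solve(c1 * 1.0 + c2 * (-0.5),
                 lambda t: c3 * u1(t) + c4 * u2(t))
```

The combined run matches `c1*x1 + c2*x2 + c3*x3 + c4*x4` to floating-point accuracy; swapping in a nonlinear term for `a(t) * x` breaks the match.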

Now what are the advantages of an LTI system over a linear time-varying system?

1) Fixed A, B, C and D (simplicity)
2) Easy to calculate the transition matrix (matrix exponential)
3) Impulse response (i.e. to a Dirac delta function) completely characterizes the system
4) Ability to do simple steady state analysis
5) Ability to use frequency domain analysis

But superposition trumps all those, and both LTI and linear time-varying have it. Also, for discrete time systems, there's no difference between linear time-varying and LTI when you're analyzing a single time step.

4. Non-linear systems are a WHOLE other beast. You can't often say much about them, and you're forced to linearize them (over small regions of validity) if you want to be able to use most of the tools available. I'm sure there are particular cases that are different. One example is the time-varying continuous time Kalman filter gain equations... a classic time-varying non-linear differential equation (Riccati equation). However I've never heard of anyone actually trying to solve that... they usually just solve the steady state (difficult enough, and also called a Riccati equation), or they use discrete time, which has a straightforward solution (even in the time-varying case).

BTW, you mention Fourier transforms above: those come in both continuous and discrete time varieties. The "fast Fourier transform" (FFT) is a popular example (discrete time). Here's the connection to the Laplace and Z-transform I mentioned above:

The continuous time Fourier transform is just the Laplace transform restricted to the imaginary axis of the S-plane.

The discrete time Fourier transform is just the Z-transform restricted to the unit circle of the Z-plane. The unit circle is just a unit radius circle centered at 0.

You might guess that transitioning between discrete and continuous time involves a mapping of the imaginary axis to the unit circle (and vice versa), and you'd be correct.

The S-plane and the Z-plane are both just complex planes: real values on the horizontal axis and imaginary values on the vertical. Take the S-plane for example: you can think of it as a bunch of Fourier transforms all parallel to the vertical imaginary axis. Those to the right are weighted by a decaying real exponential (they need to be, else they won't converge). Those to the left are weighted by a rising exponential (they can tolerate such a weighting, and still converge). An analogous weighting exists for concentric circles both outside and inside the unit circle. The function plotted over these planes is complex valued, and represents a generalized frequency response of the LTI system under consideration, which is almost always a ratio of two polynomials. The denominator polynomial is the "characteristic polynomial" of A, and is (in the continuous time case):

det(A - I*s) = characteristic polynomial in the complex valued variable s.

Where "det" stands for determinant, s is the value at any one point in the S-plane, and I is the identity matrix. Where that polynomial goes to zero (its roots) are where the ratio of polynomials plotted over the S-plane goes to infinity. That's why they're called "poles" ... think of a tent pole, with the polynomial describing the surface of the tent. The locations of those poles on the S-plane characterize the modes of the system. Your weighted Fourier transform cannot include a pole anywhere to its right (in the case of the S-plane) or anywhere outside it (in the case of the Z-plane), else it won't converge (i.e. it cannot be calculated). As you may know, Fourier transforms exist for sample data (with no specific reference to a dynamic system). The corresponding sample data in the case of LTI systems is its impulse response (which also completely characterizes the system).

The case for the Z-plane and discrete time LTI systems is exactly analogous.
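To tie "poles," the characteristic polynomial, and the eigenvalues of A together numerically (the example matrix is my own):

```python
# Sketch: the poles of an LTI system are the roots of the characteristic
# polynomial of A, which are the eigenvalues of A (toy matrix is mine).
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

poles = np.linalg.eigvals(A)   # eigenvalues = poles of the system

# Same thing via the characteristic polynomial: det(sI - A) = s^2 + 3s + 2
coeffs = np.poly(A)            # polynomial coefficients [1, 3, 2]
roots = np.roots(coeffs)       # its roots match the eigenvalues
```

Both routes give poles at s = -1 and s = -2: in the left half of the S-plane, so this example system is stable.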

5. BTW, here's one way to think of Fourier transforms that might make them easier to understand. Are you familiar with a coordinate transform? Think of measuring the location of every object in a room wrt some set of perpendicular axes, say centered in one corner of a room (East, North and Up, say). Every object will then have three coordinates associated with it (x, y and z) (assume the objects are small compared to the room, and can be represented by points). Now imagine you want to take all those coordinates and express them in a new reference frame. The new frame also originates in the same corner of the room, and its axes are also orthogonal, but the new x-axis (call it x') is in a different direction (say up 5 degrees and to the South 37 degrees), and similarly for y' and z'. If you take unit vectors along x', y' and z' and express them in the old reference frame, and stack them in a matrix like this R = [x'|y'|z'], then you can take any point p represented in the old frame and translate it to coordinates in the new frame like this: p' = R'*p, where the ' in this case represents the transpose (so R' is R transposed). R is a square matrix obviously.

A discrete Fourier transform (DFT) is exactly this: it's a particular coordinate transform, and it can be represented by such a matrix R. An FFT is just a DFT whose dimensions are powers of 2 (e.g. R is 512x512). So say you had a time domain set of samples 512 long called x (a 512x1 vector). Then you can calculate the FFT (inefficiently) by first calculating R and then finding fx = R'*x (where fx is also a 512x1 vector). The inverse FFT is just multiplication by R instead, so x = R*fx. Like all such coordinate transforms, R'*R = R*R' = I (for complex R, read ' as the conjugate transpose). Rather than doing an FFT this way, it's much more efficient to use an FFT algorithm, but conceptually that's all that's going on. For DFTs and FFTs, the elements of R are in general complex, which is why a real valued vector x results in complex values in fx.
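Here's that DFT-as-coordinate-transform idea as a runnable sketch (the small N, unitary normalization, and random seed are my choices; with complex R, the transpose becomes a conjugate transpose):

```python
# Sketch: the DFT as an explicit unitary coordinate-transform matrix R,
# checked against numpy's FFT (normalization convention is mine).
import numpy as np

N = 8
n = np.arange(N)
# Unitary DFT matrix: R[k, m] = exp(-2*pi*i*k*m/N) / sqrt(N)
R = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

np.random.seed(0)
x = np.random.randn(N)       # a real time-domain sample vector

fx = R @ x                   # "rotate" into the frequency coordinates
x_back = R.conj().T @ fx     # inverse transform: multiply by R-conjugate-transposed
```

`R.conj().T @ R` comes out as the identity (R is unitary), `x_back` matches `x`, and `fx` matches `np.fft.fft(x)` up to the `1/sqrt(N)` normalization I chose.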

Fourier transforms for both continuous and discrete time systems are in general continuous functions (I didn't make that clear above), so a DFT is a bit of a different beast (having a finite number of values). But you can think of the continuous valued Fourier transform (in either case) as just being the result of a really really big R matrix.

What's useful about this particular coordinate transform is that the new coordinates are expressed in terms of orthogonal sinusoids (complex exponentials actually, but it amounts to the same thing). Complex exponentials are what are called "eigenfunctions" of LTI systems: if you stimulate an LTI system with a complex exponential (i.e. u = complex exponential), you'll get that same frequency sinusoid out, just scaled by a constant (in the steady state). Here's the relation between complex exponentials and sin and cos: real(exp(i*w*t)) = cos(w*t). imag(exp(i*w*t)) = sin(w*t). So exp(i*w*t) = cos(w*t) + i*sin(w*t). The complex exponential just goes around and around the unit circle at the rate w, and a cos is just the projection of that onto the real axis and the sin is the projection of it onto the imaginary axis.

If you stimulate a system with a sinusoid and get anything out other than a scaled and phase shifted version of that sinusoid (in the steady state) at that same frequency w, you're likely dealing with a non-linear system (I don't think you can rule out a linear time-varying system). Unfortunately it turns out almost all systems are non-linear, but many have large regions of validity where a linear approximation is fine (i.e. the "spurious" outputs at frequencies other than w are greatly attenuated).

3. Roger, I may be leading you astray with all this linear system theory! I don't really know. If your interest is primarily in economic modeling, I really am no expert on that. Where it overlaps with things I know I can go on at length (as evidenced above!). I'm not sure how much of what I wrote here applies to what you're interested in.

You can take my series of posts (starting with SIM and going through SIM7 and this one) as a cautionary tale of what can go wrong when an electrical engineer takes a cursory glance at SFC models and thinks he knows all about them based on his knowledge of linear systems. I do think I understand SFC models a bit better now, and I understand why I had miscommunications with people like Ramanan better now, but I'm FAR from an expert.

From my engineer's viewpoint, actually staying stock-flow consistent doesn't seem that important, as long as we're approximately stock-flow consistent. I eventually figured out how to stay SFC while changing sample periods (for example), but big-picture, I think I had it basically correct the 1st time (or the 2nd time: SIM2). But maybe there's something important I'm missing. I definitely have not read all of G&L's book (I've really only absorbed a portion of Chapter 3).

Of course my other purpose here was to understand Jason (the physicist's) complaint ... something I have not fully accomplished, but I'm OK w/ that.

1. Tom, No, not leading astray, just leading. And I thank you profusely for that. I have read it all, some of it many times. Now begins the slow process of integrating the concepts, some of which are still obscure to me. I love to keep books on the shelf that have knowledge past my present understanding. Your comments constitute another one of those books. Thanks for making it available.

As a (nearly) civil engineer and amateur radio hobbyist, I understand your electrical engineering background but don't have complete overlapping competency. I am more like an aspiring electrical engineer.

That said, I like the SFC models for their emphasis on the mechanical aspects of economics. I think this helps keep the models better "tuned" to the actual economy. I am sure that you noticed how the theta parameter has a ubiquitous presence in the SIM equations. My life experience has brought me to the belief that government mirrors a father-child relationship, building (forcing?) an economic framework. SFC encourages assignment into groups of decision makers.

I try to follow Jason's work but don't understand the nuances. I earlier thought he was trying to measure the economy using a second, universal, scale. Having read your linear system theory material, I still think that is correct but now I add linear system theory as (potentially) representing the second scale. I think he is making a valuable contribution despite my inability to completely comprehend his effort.

My SIM model now has the ability of choice between beginning at zero or steady state, and choice of a jump at year 10. The jump can be from tax rate change or government spending change. My goal is to add more choices.

Yesterday was a busy day. Today is mostly at my pace.

2. Well, I'm glad you're enjoying it. I majored in electrical engineering, but I don't really do circuit design or RF work. I too had a HAM license in Junior High, but I never actually used it (we had a class at our Junior High on radio electronics, geared toward getting your license). My skills are more along the lines of system engineering, digital signal processing, FPGA and software design. Most of the above is applicable to many engineering disciplines where there's some kind of dynamics happening (space structures vibrating, aircraft, chemical flows, robotics etc). My dad is a civil engineer (retired), who did surveying work as well on the side. I spent many hours helping him survey using his 19th century surveying tools. Ha!

I'm really kind of giving you a haphazard presentation though, and not filling in all the blanks. Sometimes that can be helpful I think, but maybe also confusing.

My particular emphasis in school was control systems, which generally meant feedback control systems. I've only rarely actually designed control systems in my career, but the background has still served me well, especially the state estimation half of control systems (you have to estimate the states before you can control them). That's why I keep referring to Kalman filters: they are a kind of optimal state estimator for linear systems corrupted by (potentially spatially correlated) additive white Gaussian noise, both in the input channels (i.e. "plant noise" or "process noise") and in the measurements ("measurement noise"). That's the part of your GPS that estimates how big the error is (if you've ever seen that). In general that would be a 3-D error ellipsoid, but it's usually just displayed as a circle (a "circular error probable") projected on the ground. It turns out that state estimation is the "dual" of the feedback control problem.
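Since I keep mentioning Kalman filters, here's a minimal sketch of the scalar case, in Python rather than Matlab so it stays self-contained. The noise variances q and r and the true value are made-up numbers, just for illustration: estimate a constant from noisy measurements.

```python
import random

def kalman_1d(measurements, q=0.01, r=1.0, x0=0.0, p0=100.0):
    """Scalar Kalman filter estimating a (nearly) constant state
    observed in additive white Gaussian noise.
    q = process noise variance, r = measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q              # predict: state model is x[n+1] = x[n] + process noise
        k = p / (p + r)        # Kalman gain: how much to trust this measurement
        x = x + k * (z - x)    # update the estimate toward the measurement
        p = (1.0 - k) * p      # the error variance shrinks after the update
        estimates.append(x)
    return estimates

random.seed(0)
truth = 5.0
zs = [truth + random.gauss(0.0, 1.0) for _ in range(500)]
est = kalman_1d(zs)
print(est[-1])   # settles near the true value of 5.0
```

The full matrix version is the same predict/update recursion with Ad, a measurement matrix, and covariance matrices in place of the scalars.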

Most people with my educational background (e.g. Brian Romanchuk) have had the experience of "balancing an inverted pendulum" in their lab classes using many of the concepts I discuss above regarding finding a linearized model of a system (an unstable system in this case), and then designing a feedback controller to stabilize it. Once you wrap feedback around the loop (and potentially have a controller with modes of its own), you change the modes of the overall system. I never did anything *THIS* advanced, but perhaps you've seen this demonstration of a controlled triple inverted pendulum (I think you can assume the joints between the segments are just passive pivot points, else it'd be cheating!). Inverted pendulums are now a common product of course (Segways and related "hover boards").

Linear algebra is the underpinning concept for a lot of this stuff. Once you understand vector spaces, linear operators (e.g. matrices), inner products and inner product spaces, basis vectors, rank, null spaces, singularity, determinants, invertibility, orthogonality, eigenvalues, eigenvectors, singular values, norms, quadratic surfaces (conics), different kinds of linear operators (symmetric, anti-symmetric, Hermitian symmetric, projections, rotations, reflections, unitary, etc.) and factorizations (Gaussian elimination, QR, Cholesky, SVD), the rest comes relatively easy. =)

If you don't already have something like Matlab, you can get a free package like Octave that uses almost the same command set, and play around with this stuff. I'm not really a math whiz, and a lot of times I use the questionable method of "proof by Matlab" to "prove" things to myself. Trying to do it in Excel is possible (with the version you pay for), but not very convenient. Even if I work the math out, I never trust it until I've "proven" it in Matlab.

3. Here's a fun little exercise you can do to "prove" some linear algebra concepts to yourself regarding covariances (that I bring up in the Appendix to the post): generate spatially correlated white Gaussian random noise vectors, with something like Matlab or Octave:

A = randn(3,3); % A a 3x3 matrix, each element drawn from a N(0,1) Gaussian distribution (mean=0,var=1).
C = A'*A; % C a valid covariance matrix: symmetric and positive semi-definite by construction (A*A' also works)
G = chol(C)'; % generate a matrix that will map N(0,1) spatially uncorrelated vectors into ones with cov = C
x = G*randn(3,1000); % generate 1000 spatially correlated random 3x1 vectors with cov(x') = C, mean(x') = 0
% check the sample mean and cov (no semicolon means they'll print, and the ' mark just takes a transpose):
mean(x')
[cov(x') C]
% make some scatter plots of one element against the other (the 3D error ellipsoid projected to 2D)
m = ceil(max(abs(x(:)))); % find our plot limits
figure(1),plot(x(1,:),x(2,:),'.'),xlim([-m m]),ylim([-m m]),axis square; % element 1 vs 2
figure(2),plot(x(1,:),x(3,:),'.'),xlim([-m m]),ylim([-m m]),axis square; % element 1 vs 3
figure(3),plot(x(2,:),x(3,:),'.'),xlim([-m m]),ylim([-m m]),axis square; % element 2 vs 3
% Now find the analytic 3-sigma 2D error ellipse of element 1 vs 2 and put on the same plot for comparison:
theta = [0:0.001:1]*2*pi; % 1001 numbers from 0 to 2*pi
circle = [cos(theta);sin(theta)]; % vectors around the circle:
ellipse = G(1:2,1:2)*circle*3; % 3-sigma ellipse of element 1 against element 2
figure(1),hold on,plot(ellipse(1,:),ellipse(2,:),'r'),grid on,hold off;
% Now get your ruler out and compare the major and minor axis lengths (3 sigma) with analytic ones:
3*sqrt(eig(C(1:2,1:2)))

I added figure 1 to the post to show an example of what it looks like (it'll be different every time).

4. Here's the console output:

ans =

-0.0016 0.0210 0.0044

ans =

3.9413 1.7339 -1.2207 3.9677 1.6217 -1.2932
1.7339 1.3280 -0.3075 1.6217 1.2528 -0.3024
-1.2207 -0.3075 0.7103 -1.2932 -0.3024 0.7417

ans =

2.1116
6.5212

5. Selecting perturbations (to calculate an approximate transition matrix for a linearized non-linear system) as I describe in the appendix amounts to selecting points at the ends of the principal axes of that ellipse (actually, a 1-sigma ellipse, but using 3-sigma [as pictured] is more conservative).

6. Hi Tom,

Wow, you have added many things. That balanced pendulum video is fascinating. I am amazed that the engineers can do that, not coming close to REALLY understanding how linear system theory enters the solution.

I think I will download Octave later today and give your code a try. I see the dot pattern in the parent post here. I also see the two trajectories, which are new this morning. Are these in reference to Jason's gamma factor? At second look, I see not. The red ellipse is the error boundary, the blue line is the probability distribution within that error boundary.

I have wanted to try Octave but never had the urgent reason to try it, recognizing that there is always a learning curve. Hopefully this afternoon.

Thanks yet again.

7. I ran my code on Matlab, so I can't guarantee it will work on Octave, but it should be close to working. If anything, I'd guess the plot instructions may not translate fully. I wanted to keep the proportions on the two axes equal.

And you're right: it has nothing to do with gamma!

I can share the code for that 2nd plot as well. I wanted to work out a simple example of an object falling, but it was more difficult than I thought. My problem is calculating the proper Bd for the air resistance case. Unfortunately Ac is singular. The calculation can still be done (I did it for the non-air-resistance case), but I had to solve the equation symbolically. I thought there'd be an "integrate matrix exponential" function in Matlab for just such cases (Ac singular), but not in basic Matlab unfortunately. Maybe in one of their toolboxes. (Recall my previous expression for Bd involved inverting Ac.) Go back to my SIM2: I added a link at the top to what you do if Ac is singular. Unfortunately it's not a very clean solution (and the straightforward way of computing it is numerically unstable anyway). BTW, Ad = expm(Ac*T) is NOT calculated using the expression it's defined with, because that has numerical problems too. But fortunately Matlab/Octave does that for you.
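For what it's worth, there's a standard trick (I believe due to Van Loan) that gets Bd without inverting Ac: build the augmented matrix [[Ac, Bc],[0, 0]], exponentiate it, and read Ad and Bd out of its blocks. Here's a hedged numpy sketch (Python rather than Matlab, and using the position/velocity example from the next comment, where Ac is singular but nilpotent, so the series form of expm is exact):

```python
import numpy as np

def discretize(Ac, Bc, T, terms=20):
    """Ad = expm(Ac*T) and the ZOH Bd via the augmented-matrix trick:
    expm([[Ac, Bc],[0, 0]]*T) = [[Ad, Bd],[0, I]]. No Ac inverse needed."""
    n, m = Bc.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = Ac * T
    M[:n, n:] = Bc * T
    # truncated series for expm; exact here because M is nilpotent
    # (for a general Ac you'd want scipy.linalg.expm instead)
    E = np.eye(n + m)
    term = np.eye(n + m)
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E[:n, :n], E[:n, n:]

Ac = np.array([[0.0, 1.0],    # dx1/dt = x2 (position/velocity)
               [0.0, 0.0]])   # singular, so Bd = inv(Ac)*(Ad-I)*Bc fails
Bc = np.array([[0.0], [1.0]])
T = 0.5
Ad, Bd = discretize(Ac, Bc, T)
print(Ad)   # [[1, T], [0, 1]]
print(Bd)   # [[T^2/2], [T]] = [[0.125], [0.5]]
```

The blocks match the symbolic answers below, and the same function works for any (Ac, Bc) if you swap in a robust matrix exponential.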

8. Suppose you have a position and velocity as states and call them x1 and x2 respectively, and put them in a state vector x = [x1 x2]' (the ' is transpose here). Then Ac =

0 1
0 0

And Ad = expm(Ac*T) =

1 T
0 1

You see how singular Ac led to non-singular Ad? That's a super simple case, but leads to some fun with Bd... especially after you approximate air resistance with it! Also there are two separate systems: one for horizontal and one for vertical... but the air resistance (if done right) coupled them together, so you need a four state system.

Ac above literally just says that x2 = dx1/dt. Now say this is measured in the vertical direction. Then Bc = [0 1]', and u = -9.8 (just the acceleration of gravity, where I'm assuming the down direction is the negative direction). So you see what this says?: dx/dt = Ac*x + Bc*u

The bottom row now just says that dx2/dt = -9.8. Velocity is x2. Very simple, right? I already showed you Ad, but Bd = [0.5*T^2 T]'. You can easily get that by doing the symbolic integration, from 0 to T, of expm(Ac*(T-tau))*Bc. expm is just the Matlab command for matrix exponentiation. If you use the usual exp() on a matrix, it simply finds the exponent of every element of the matrix, which is NOT what matrix exponentiation means. I think Octave uses the same. In fact, in this simple case you CAN use the definition of the matrix exponential to symbolically calculate expm(Ac*T):

expm(Ac*T) := I + Ac*T + ((Ac*T)^2)/2! + ((Ac*T)^3)/3! + ... (an infinite sum)

And of course by (for example) Ac^2 I mean Ac*Ac, where that's a MATRIX multiply (not an element-by-element multiply).

Ac*T =

0 T
0 0

(Ac*Ac)*T^2 =

0 0
0 0

So naturally Ac^n = 0 for n > 1, and that infinite sum terminates after just the first two terms: expm(Ac*T) = I + Ac*T, in this case.

Now try it for a system where acceleration is a state: x = [x1 x2 x3]' = [p v a]'. You'll find it takes three terms of the series to converge, and you'll get a T^2 entry (0.5*T^2) in the result.
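If it helps, here's a quick numpy check of that claim (Python rather than Matlab): summing the defining series by hand for the three-state [p v a]' system, where Ac^3 = 0, so three terms are exact.

```python
import numpy as np

T = 0.1
Ac = np.array([[0.0, 1.0, 0.0],    # dp/dt = v
               [0.0, 0.0, 1.0],    # dv/dt = a
               [0.0, 0.0, 0.0]])   # da/dt = 0
M = Ac * T
# expm(Ac*T) = I + M + M^2/2! + ... and M^3 = 0, so three terms suffice
Ad = np.eye(3) + M + M @ M / 2.0
print(Ad)   # [[1, T, T^2/2], [0, 1, T], [0, 0, 1]]
```

The T^2/2 entry in the top-right corner is the promised 0.5*T^2 term.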

9. Now to add in drag you have this: dx1/dt = x2 and dx2/dt = -d*sign(x2)*x2^2 + u, where d is the drag coefficient. That's a non-linear system of equations, but you can take the partial of "f" wrt x2 and get this "linearized" system valid in the vicinity of a particular value of x2 (call it x20) only: Ac =

0 1
0 -2*d*sign(x20)*x20

Now it turns out you can fairly easily calculate the symbolic expm(Ac*T) here as well, and the Bd too (but we can't use the simple formula Bd = (Ac^-1)*(Ad - I)*Bc because again, Ac is singular (has no inverse)). The difficulty is that when we add in a horizontal dimension with drag, you'll find the drag formula (the force of drag is proportional to the square of the speed) couples the two dimensions together when the velocity is a combination of horizontal and vertical velocities. Do the coupling terms make the 4x4 Ac invertible? Unfortunately no, so now we have a bigger mess to solve. Lol... my simple example got complicated! In the above figure I ignore the coupling terms (which should be small anyway), and I only calculate Bd for the linear case. Also, for the blue curve I don't do a proper numerical integration, but instead approximate it with a time-varying linear system (calculating a new Ad each time step). The non-drag case is LTI (constant Ad and Bd). To propagate the initial covariance (the small one in black at the top) you do the following:

C(t+T) = Phi*C(t)*Phi', where Phi = the transition matrix, which in this case is Ad. For the non-linear case I do that each time step (since Ad changes), but for the linear non-drag case I just do it once (one big time step).

BTW, assuming you have a zero-mean state x, here's why that's the covariance propagation formula:

C(t) := E{x(t)*x'(t)}, where E is the expectation operator. Thus, since x(t+T) = Phi*x(t):

C(t+T) = E{Phi*x(t)*x'(t)*Phi'} = Phi*E{x(t)*x'(t)}*Phi' = Phi*C(t)*Phi'

Since the mean is subtracted off first anyway to calculate a covariance, this applies to a non-zero mean x as well.
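Here's a hedged numpy sanity check of that formula (the Phi and C numbers are arbitrary, not the figure's): propagate a zero-mean sample cloud through Phi and compare its sample covariance against Phi*C*Phi'.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0
Phi = np.array([[1.0, T], [0.0, 1.0]])     # transition matrix (the non-drag Ad)
C0 = np.array([[1.0, 0.2], [0.2, 0.5]])    # some made-up initial covariance

C1 = Phi @ C0 @ Phi.T                      # analytic: C(t+T) = Phi*C(t)*Phi'

# Monte Carlo check: color unit-variance noise to have covariance C0,
# push every sample through Phi, then take the sample covariance
G = np.linalg.cholesky(C0)
x = G @ rng.standard_normal((2, 200_000))
y = Phi @ x
C1_sample = np.cov(y)
print(C1)
print(C1_sample)   # agrees with C1 to a couple of decimal places
```

This is the same chol-then-multiply recipe as the scatter-plot exercise above, just with the mapped cloud's covariance compared against the formula.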

I don't know if anything similar comes up in SFC models, but I know they're not all linear, so perhaps it does. Regarding the error propagation: again, I don't know, but you'd kind of think that, wouldn't you? Lol.

4. This comment has been removed by the author.

5. Do you visualize the expansion of money as an "object" like I do?

It's a strange object that materializes at the will of government. Government borrows from itself. Where there was no money, now there is money.

This new money disappears if government taxes it back. A constant tax rate based on every money exchange is one way to analyze the collection.

This strange object is measured by money and transactions. The product (money*transactions) forms a hyperbola if plotted on money-transaction axes. The curve is the "size" of the object and is constant at every point on the curve. The object has NO time parameters.

Linear System Theory can be used to predict how taxation might, over time, collect the object.

My method was not Linear System Theory? While the results were the same, I divided the whole object into two parts. Then I assigned one part to government, the second part to the private sector, with the size of the parts being dependent upon a uniform series of time periods.

Is this making any sense at all?

When we divide the object this way, only a small part of the entire object is visible in any time period. The visible part is the expansion related to the government tax collection. Thus we see GDP = G/T.

The privately held part of the object remains un-expanded, being only visible to the extent that wealth is visible.

[I know this is hard to visualize. It is something like complex numbers that are both present and not real.]

In SIM, we go on to add two additional complexities: We add more money (a new object) each period, AND, we draw part of the new money from previously issued money. You are familiar with the resulting trend to the steady state limit.

Concluding, do other people think of an injection of money as creating a GDP object (like I visualize)?

1. Roger, I don't know. That sounds interesting, but I'd have to think about it some. Do you explain more on your blog?

Also regarding Jason's gamma: I kind of gave up on trying to figure out what his main point is there. I figured out WHAT it does, but I still don't understand why he felt the need to introduce it. I showed that you can change the time constant and the steady state to what you want w/o gamma (SIM7) and the sampling period (SIM6). True, only over a limited range in the case of SIM7. He refers to a money multiplier... and I confirmed with him yesterday he means M2 = m*M0 (or something like that), where m is the money multiplier. But SIM doesn't have any M2. You need banks for that. There are other SFC models with banks. I asked Nick Edmonds about it yesterday. Here's his reply.

So even after all the words that have been written on the subject, I don't really see what gamma is for. I think commentator Bill is in the same boat. I left an "exercise" for you on Jason's blog regarding it. :D

BTW, I liked his circuit analogy, but there again, I showed how to use normal SIM parameters (no gamma) to change the R and L values (the 1st comment to the post).

Perhaps it's the limited "range of movement" that the SIM parameters provide that is bothering him. I've decided to drop the issue. Still I have no idea what he's imagining when he's imagining H. To me it's just the physical cash that's been printed and spent into the economy. There are no banks, so there's no M2. Cash is the ONLY money in that universe. If gamma=0.5, it implies half of the cash has been lost (each period) or something (near as I can figure). Ultimately it doesn't matter since not all taxes are collected either I guess (some is lost)... and since the government keeps spending, you eventually reach the same steady state, despite all the lost cash. My interpretation is probably wrong... I just want you to know that you're not alone in being confused.

Also I found it a bit disconcerting that the expression for YD is dependent on gamma. Perhaps that makes sense, but gamma doesn't factor into the other "measurements" (Y, T and C). In the steady state, gamma drops out. And because we know ALL the dynamics are ultimately tied to a linear combination of H[n-1] and G[n] for all the measurements M[n], then the dynamic properties of YD won't be altered much (the initial offset may be). With G a step, we have:

H = Hss(1-exp(-t/Tc))

where Hss is a function of G. For the other measurements (I call them M) we have something more like:

M = Cm*H + Dm*G

So the Dm*G is the "offset" (just a constant)... and the dynamic part is Cm*H. You can see that many combinations of Cm and Dm get you to the same steady state for M (call it Mss). Now figuring out which accounting expressions they satisfy at EVERY time step (not just in steady state) is another matter. That was my exercise for you. AFAIK, Jason's expressions may satisfy all of them (except those involving Delta H of course) at every time step... but I have my suspicions. My SIM2 did not satisfy them (though it was close). I think Jason is arguing that because he satisfies them in steady state, even with gamma, he must satisfy them at every time step.

All in all, I'm not sure it's that important. Check out his post here... he defines B as the bonds sold by the government. I'm wondering what they are purchased with if no money exists yet (you may have to check the comments to see the bonds reference).

2. Another exercise could be to see if you can use Jason's same basic approach, but make the expression for YD entirely independent of gamma. Perhaps that doesn't make sense to do, but I'm curious why just that one. For example, in his code listing, he introduces gamma on the next to last line of the loop in the expression for "DD" (his name for YD). But perhaps you could move that to the last line (the expression for HH = H) instead. If you try it, let me know!

Also, another beautiful thing about linear systems: in my last comment to you on Jason's blog, I discuss his use of an impulse combined with an accumulator on the output (an integrator is the usual name for it). For example, when a=0 in this expression, you have an integrator:

x' = ax + bu = bu

In the discrete time world that's:

x[n+1] = x[n] + b*u[n]

Well with linear systems, the ordering of the blocks in a block diagram doesn't matter. So for example, in a system say you had the following:

x <- F <- G <- u

Where the "<-" are arrows and the F and G are black boxes that each implement a linear system. Well, that's the same as:

x <- G <- F <- u

So what G&L do and what I do (and what you do too) amounts to taking an impulse G=20 at one period, integrating that (a black box) to create a step, and sending that through SIM to form the final output. What Jason does is reverse the order: he feeds the impulse directly into SIM and instead integrates the outputs. The result is the same. This is where a Z-transform comes in handy. Say SIM's Z transform is S(z). The Z transform of a unit step, or an integrator (same thing), is 1/(1-Z^-1). Call that U(z). The Z transform of an impulse is just 1. So if we call our output in the Z-transform domain H(z) we have:

H(z) = S(z)*U(z)*20

(for an impulse of magnitude 20). But that's equal to:

H(z) = U(z)*S(z)*20

Which is what Jason did. But since the way G&L (implicitly) define G(z) is U(z)*20, then the "transfer function" is really just S(z):

H(z)/G(z) = S(z)

Which is true for any G(z)
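You can check the block-swapping claim numerically. A hedged Python sketch (the Ad and Bd values are arbitrary stand-ins, not SIM's): filter an impulse of magnitude 20 and then accumulate, versus accumulate first (making a step) and then filter; the two outputs agree sample for sample.

```python
import numpy as np

Ad, Bd = 0.8, 0.5   # arbitrary one-pole system: H[n] = Ad*H[n-1] + Bd*G[n]
N = 50

def filt(u):
    """Run the one-pole filter over an input sequence."""
    h, out = 0.0, []
    for g in u:
        h = Ad * h + Bd * g
        out.append(h)
    return np.array(out)

impulse = np.zeros(N)
impulse[0] = 20.0                    # impulse of magnitude 20

y1 = np.cumsum(filt(impulse))        # filter first, then integrate the output
y2 = filt(np.cumsum(impulse))        # integrate first (a step), then filter
print(np.allclose(y1, y2))           # linear blocks commute, so: True
```

np.cumsum plays the role of the integrator block U(z) here, so this is H(z) = S(z)*U(z)*20 versus H(z) = U(z)*S(z)*20 in the time domain.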

If you're curious, here's how you calculate S(z): The Z transform of a delay of 1 is Z^-1 (and for an advance of 1 it's Z), so we have from the time domain:

H[n] = Ad*H[n-1] + Bd*G[n]

The following in Z-transforms:

H(z) = Ad*H(z)*Z^-1 + Bd*G(z)

H(z)*(I - Ad*Z^-1) = Bd*G(z)

H(z)/G(z) = Bd/(I-Ad*Z^-1) = S(z)

Easy, right? =)

More here:
https://en.wikipedia.org/wiki/Z-transform

(check out the tables towards the bottom... they give the properties as well as a table of transforms of common functions (or time sequences)... you'll find both the impulse and the unit step in there... as well as something that you'll notice looks a lot like S(z))

3. What's neat is that, looked at in the Z-transform domain, you can calculate the frequency response of S(z) at any frequency (should you stimulate the system with a sinusoid). That's just the phase and magnitude of S(z) with z corresponding to the frequency of interest. So for frequency w, it's S(exp(i*w)). Think of that tent pole analogy. The magnitude around the unit circle (and in particular at exp(i*w)) is the height of the tent. Where is the pole in this case? It's where the denominator of S(z) goes to zero, which is at z = Ad. There's also a "zero" (a root in the numerator) at z = 0. Think of zeros as tent stakes, staking the tent canopy to the ground. We know that 0 < Ad < 1... so as you move around the unit circle, starting at z=1, you'll find that you're at the peak at z=1 (that's exp(i*0), so w=0 there). The magnitude falls off as the magnitude of w increases, symmetrically about w=0. It's at its lowest at w=pi (i.e. z=-1, the highest possible frequency in the discrete-time world... that's 1/2 the sample rate).

So SIM is a 1st order (one pole) low pass filter! (It's also infinite impulse response (IIR), meaning the impulse response is infinitely long... technically it never gets to 0, though with finite-precision arithmetic, it actually does.)
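Here's a hedged numpy illustration of the tent analogy (Ad and Bd are arbitrary stand-ins, not SIM's actual values): evaluate S(z) = Bd/(1 - Ad*z^-1) around the unit circle and watch the magnitude fall monotonically from its peak at w=0 to its minimum at w=pi.

```python
import numpy as np

Ad, Bd = 0.9, 0.1                    # arbitrary one-pole system, 0 < Ad < 1
w = np.linspace(0.0, np.pi, 1000)    # DC up to half the sample rate
z = np.exp(1j * w)                   # walk around the upper unit circle
S = Bd / (1.0 - Ad / z)              # S(z) = Bd / (1 - Ad*z^-1)
mag = np.abs(S)
print(mag[0], mag[-1])               # Bd/(1-Ad) at w=0 down to Bd/(1+Ad) at w=pi
```

The closer the pole at z = Ad sits to the unit circle, the taller and sharper the peak near w=0 — exactly the low-pass shape described above.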

You can also build systems out of nothing but zeros (no poles). An example is a finite impulse response (FIR) filter. It's not actually true that there are no poles... mathematically you could say they're all clustered at 0 (one huge tent pole at 0). Typically what's done is the "stakes" (zeros) are placed strategically on or near the unit circle to make certain frequencies relatively higher than others in their response. Zeros can be outside the unit circle and you can still have a stable system (unlike poles).

The same analogy holds in the continuous-time world, only the unit circle corresponds to the imaginary axis. Inside the unit circle corresponds to the left half plane. Frequency = 0 (z=1 in discrete time) is instead s=0 in continuous time. Maximum frequency (z=-1 or w=pi) is s=i*infinity. Poles at z=0 correspond to poles at -infinity (along the negative real axis), etc. Black boxes with Laplace transforms have many of the same properties, in particular F(s)G(s) = G(s)F(s). And you can find similar tables of Laplace transforms for common functions, and use them in the same way to analyze continuous-time systems.

4. I write above:

"My SIM2 did not satisfy them" ... when I changed sample periods that is. It did with the default sample period. I fixed that in SIM6. SIM3 is kind of an intermediate result (I only check that gamma=1 always).

5. Also, Jason has another follow up to my latest to you in his circuits post:
http://informationtransfereconomics.blogspot.com/2016/03/an-rlc-circuit-with-r-s-and-l-f.html?showComment=1459529420141#c1583487334235972544

I'm ready to move onto other issues! If you have any breakthrough insights on this one though be sure to let me know!

BTW, he refers to Nick Rowe's post about chartalism and velocity of money, etc, several times as an alternate explanation. I thought "Great! Maybe Nick can explain" but Nick didn't get far in Jason's post. He said maybe the capacitor corresponds to something or other, but he wouldn't commit. He didn't understand it unfortunately. The thing is, I *think* I understand Nick's post.

Lol... Jason must be very frustrated!

6. Last thing: If you want to change sample periods while approximately satisfying all the accounting equations, I did that in SIM2 a long time back. There's no explicit gamma in there. You could (I suppose) use it to change time constants as well by simply not relabeling the time axis.

Ironically, Jason's gamma is similar to the approach Ramanan and "A H" described for changing the sample period: namely, scaling alpha2 and B by the same factor (that's the gist of it anyway). They don't alter the expression for YD though. There's some funny business you have to account for in translating the "rate" variables (G, Y, T, YD, C, but not H), which are really "stocks" accumulated over one period. Those stocks change when you change sample periods. Or you can think of it as the units of the rates changing (the denominator changes with the new period). It makes my head hurt a little... it confused me a bit (if you follow the history of my posts and updates you'll see).

7. I am working on a blog post to describe the GDP Object. I have a picture complete (unless I change my mind) and maybe it will help. I will copy some stuff from Brian's post and our comments. Part of the blog post was done in those comments but some of these ideas are very hard to convey in words.

I read Jason's latest comment and think the GDP Object 'might' help there. We shall see.

6. Roger, I couldn't help it: my question to myself in a comment above about "What concrete thing does Jason imagine H is?" prompted me to write a couple of more comments/questions there. Check it out and see if your mental image of the SIM world at all matches mine. I think it allows for some uncertainty about what the velocity of money is, but it doesn't allow for uncertainty about the stock of money.

1. Hi Tom,

I gave up on a "well done" post. Instead I made a draft post just for you (but all can see it). You can follow my comment link to the blog site, and the draft post is on top.

My, but you have been busy! I will read it after dinner tonight.

2. Hi Tom, I added a comment to yours on Jason's blog. I observed that the firms and employees were very honest. They paid x% to the ATM every time they received any money. (Of course that is built into SIM.)

I added an update to my post "For Tom: The GDP Object".

I keep learning new things from you! I wondered how you so easily put links into your comments. I think I figured that out. Thanks again.