Domain Query: Of models and madness

FrancisBegbie had a request regarding my recent post about the manner in which economies conduct seppuku:
Economics is pretty simple in a lot of ways, and cannot be modelled with bullshit models. I've taken looks at some of them econometrics models and forecasting stuff, looking at the maths en aw, and it be all cuckoo clock stuff, or at least to me. The future is too uncertain, too messy to model (unless it be something like bonds or something). Yous lads are using crazy shit like Monte Carlos no, or whats the story? Be interesting to sees, from a nerd perspective, what are the specifications of those kind of models. Possible future post ken?
Happy to oblige, sir.

(On a related note- Lord love the Irish. I certainly do. Their economy may be in the crapper, but you will not find a more musical, more decent, more stalwartly faithful people anywhere on Earth. And their beer ain't half bad either- though their whiskey really leaves something to be desired...)

As FrancisBegbie acknowledges, I have something more than a passing interest in the subject of economic models and econometric forecasting. You may recall that I have some training in this sort of thing. In fact I have rather more than I let on in that particular post, because it so happens that I also have a Master's degree in mathematical finance. (I don't talk about it much, but it's true.) So I do have some understanding of how neoclassical economic theory looks at the world, and how standard financial modelling works- and therefore why so much of both is complete bunk.

(Note: to keep this as readable as possible I'll stay clear of equations as much as I can. There is only so much time that any of us can waste on such nonsense, after all. If someone asks it of me, I might get around to putting up a more technical addendum complete with equations and charts, but I don't see much demand for that.)

The (False) Science of Economic Modelling

The funny thing about most university-level courses in economics (outside of the US) is that you immediately get dumped straight into using mathematics to solve optimisation problems and find solutions for preference curves subject to constraints and all that sort of fun stuff. And actually it is very good fun, provided that you are of a mathematical bent (which I am).

Let me pose a couple of standard economics problems for you so that you can see what I'm on about. Let's start with a basic problem in microeconomics:

Consumer A has at his disposal an income of 100 units. He must choose between Good 1 and Good 2, subject to this maximum income. Assume that his preferences over these two goods are described by a monotonically increasing exponential utility function. What is his optimal point of consumption?

And now let's consider a relatively simple problem in macroeconomics:

Assuming inelastic aggregate supply, prove that an economy following an (approximately) linear aggregate demand function will be unable to boost output through use of demand-side policies.

Both problems require some amount of mathematical skill to solve. The first is basically a straight optimisation problem of a monotonically increasing exponential function subject to a linear constraint, which should not cause any reasonably bright mathematics student any trouble whatsoever once he's mastered university-level first-year calculus.
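
For the curious, here is roughly what that first problem looks like in code- a minimal sketch using SciPy's optimiser, with a made-up exponential utility function, made-up prices, and the 100-unit budget from the problem. None of the numbers are canonical.

```python
# A minimal sketch of the textbook consumer problem: maximise a made-up,
# monotonically increasing exponential utility over two goods, subject to
# the 100-unit budget. Prices and utility parameters are illustrative.
import numpy as np
from scipy.optimize import minimize

p1, p2, income = 2.0, 5.0, 100.0            # hypothetical prices and the budget

def utility(x):
    x1, x2 = x
    # Increasing in both goods, with diminishing marginal utility
    return (1 - np.exp(-0.05 * x1)) + (1 - np.exp(-0.08 * x2))

# SciPy minimises, so we hand it the negative of the utility function
result = minimize(
    lambda x: -utility(x),
    x0=[1.0, 1.0],
    method="SLSQP",
    bounds=[(0, None), (0, None)],
    constraints=[{"type": "ineq", "fun": lambda x: income - p1 * x[0] - p2 * x[1]}],
)

print("Optimal bundle:", result.x)
print("Budget spent:  ", p1 * result.x[0] + p2 * result.x[1])
```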

The second is actually even easier- basically you just draw a couple of graphs and show that if you move a sloped line along a vertical line, only the vertical point of the intersection between the two lines changes, not the horizontal point.
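
If you prefer numbers to graphs, the same argument takes about five lines- a hypothetical vertical supply curve at Y = 100 and an illustrative linear demand curve, where shifting demand moves only the price level:

```python
# The second problem in numbers: aggregate supply is perfectly inelastic
# (vertical) at Y* = 100, aggregate demand is the linear curve Y = a - b*P.
# Raising a (demand-side stimulus) moves the price level, never output.
Y_star = 100.0      # fixed output on the vertical AS curve
b = 1.0             # slope of the hypothetical AD curve

for a in (200.0, 220.0, 250.0):         # successive rounds of demand-side "stimulus"
    P = (a - Y_star) / b                # price level where AD meets the vertical AS
    print(f"AD intercept a = {a:.0f} -> output Y = {Y_star:.0f}, price level P = {P:.0f}")
```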

All of this is very interesting in a classroom setting- I know, I used to do this. It's also completely useless in the real world.

Take that first problem in microeconomics. Do you seriously consider your income in relation to just two goods at any given instant? I sure as hell don't. Nor do my preferences follow anything like a "monotonically increasing exponential function", even at the best of times. And nor is my income a linear constraint- I get a pay cheque every two weeks, but I know that my expenses vary significantly from month to month, so there is no guarantee that I'll be able to satisfy all of my wants and desires in the current time period. Also note that this problem does not take into account the capacity of an individual to borrow beyond his income, and it implicitly assumes that every consumer is completely rational in his wants and desires and that the value that he obtains from consumption of any one good or service can be somehow measured. In reality, such value is completely subjective. The value that you place on a heavy metal concert is likely completely different to the value that I place on it- and let's not even get started on whether you would prefer to see ALESTORM over FINNTROLL if they both happened to be playing on the same day at different venues, for instance.

Now let's look at the second problem. I've noted previously that the idea that you can break up economic activity into aggregate supply and aggregate demand is pure moonbattery. More to the point, how exactly are you going to measure and model every single possible economic activity within a nation? The best you can do is come up with a bunch of half-baked estimates using a lot of very questionable statistical methods. Now, if your starting point is shaky, any conclusions that you draw from running models that assume that your starting point is solid will by definition be shaky also. It's like stacking Jenga blocks- if you have a crappy foundation, you can only build the tower up so far before the next bad idea topples the entire thing.

Financial Modelling for Morons

Things get considerably worse if you start looking at the mathematics underpinning most financial models for complex derivative products. Let's take the (in)famous Black-Scholes-Merton equation, which provides a closed-form solution for the pricing of a European call option. The model uses a lot of very interesting and elegant mathematics to arrive at a formula that, for the first time, gave practitioners the ability to price options cleanly and quickly. Yet the model is based on some very strong assumptions:

  1. Markets are frictionless (i.e. no transaction costs, delays to transactions, "stickiness" in prices, etc.)
  2. The Law of One Price holds (basically this means that in a complete market, i.e. one in which every product either is unique or can be replicated in value by a portfolio of similar products, the price assigned to any given product is unique)
  3. There is no opportunity for arbitrage (risk-free profit)
  4. Every market participant has risk-neutral preferences
  5. The risk-free interest rate is constant
  6. The volatility of the underlying asset is constant
  7. Asset prices follow a "smooth" underlying distribution (I'll go into this in more detail shortly)
And so on and so forth. It becomes very obvious even to the layman that most of these conditions are ridiculously stringent and cannot be met by any financial market anywhere in the world. And we're talking about options on equities here (or something similar). Just wait until you start dealing with options on commodities, where you have to worry about physical delivery of the underlying asset, or options on interest rates, where you have to figure out how to fit your future yields to the current yield curve, or options on electricity, where the underlying asset is extremely illiquid and subjected to insane intraday volatility, or... Well you get the idea.
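
For reference, here is roughly what that closed-form solution looks like when coded up- a sketch of the standard BSM call formula, with purely illustrative parameter values, and valid only under the assumptions listed above:

```python
# The closed-form BSM price of a European call, valid only under the
# assumptions above (constant r and sigma, frictionless markets, no
# arbitrage, lognormal prices). Parameter values are purely illustrative.
from math import exp, log, sqrt
from scipy.stats import norm

def bsm_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

print(bsm_call(S=100, K=105, T=1.0, r=0.02, sigma=0.20))
```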

Every single financial model, whether it be a closed-form model like BSM or a Monte Carlo/Partial Differential Equation approach like the more advanced multifactor models used for pricing highly complex path-dependent options on interest rates, relies on certain basic assumptions. Without those assumptions, you literally would not be able to price anything. The biggest assumption, by far, has always been that asset prices are "smooth". In layman's terms, this means that the returns on an asset can be modelled using a simple statistical distribution, preferably the Gaussian distribution. The reason that we practitioners want this is not because we don't have other distributions to work with- there are many. It's just that trying to put those distributions into concrete mathematical terms becomes so difficult that we often just give up and say, "this is the best we can do".
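
To make the "smooth" assumption concrete, here is a bare-bones Monte Carlo pricer for the same sort of European call- the entire thing hinges on one line drawing Gaussian random numbers, and the parameters are again illustrative:

```python
# A bare-bones Monte Carlo pricer for the same call, resting entirely on the
# "smooth" assumption: log-returns are Gaussian, so terminal prices are
# lognormal. All parameters are illustrative.
import numpy as np

S0, K, T, r, sigma = 100.0, 105.0, 1.0, 0.02, 0.20
n_paths = 1_000_000

rng = np.random.default_rng(seed=42)     # pseudorandom, hence reproducible
Z = rng.standard_normal(n_paths)         # the Gaussian assumption, in one line
ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * Z)
payoff = np.maximum(ST - K, 0.0)
price = np.exp(-r * T) * payoff.mean()

print(f"Monte Carlo call price: {price:.4f}")   # converges towards the closed-form value
```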

In practice, option pricing relies much more on good judgement and a sound understanding of the market than on any model. I have had the privilege of working for two outstanding traders in my career- one at my old firm who traded (and still trades) Bermudan options on interest rates, and one at my current firm who trades inflation products (and is unquestionably the best on the entire street at doing it). Both of them knew perfectly well where the models worked and where they didn't, and applied haircuts and premiums to deal with risks in their books that they knew simply couldn't be hedged under the paradigms of the models that they were using. And both have managed to perform extremely well under very challenging market conditions because they were very careful with managing their risks and their cash flows over time, without taking inordinate risks in their trading portfolios.

Next Stop, Monte Carlo

To address FrancisBegbie's very specific question about the use of Monte Carlo simulation, I have this to say. Monte Carlo simulation is a tremendously useful tool. The basic idea is that you simulate out as many future outcomes as possible, given a starting point, a certain set of assumptions about the probabilities attached to every possible outcome, and some kind of random number generator.

To give you a very simple example, let's say that I want to measure the probability of getting k heads in N coin tosses of a perfectly fair coin. Now it just so happens that you can do this using a binomial distribution, but let's say that I'm completely ignorant of anything other than the most basic concepts of probability. If I wanted to measure this objectively and empirically, I would essentially set up a very large number of trials in which I would toss my coin N times in each trial, count the number of heads that I got in each trial, and then plot a graph of those counts. Eventually this would take the shape of a probability distribution- though it would take a long time to do this to the point where I could confidently argue that the probability of k heads in N coin tosses is

P(k) = C(N, k) * p^k * (1 - p)^(N - k),

where of course p = 0.5.

If I had a computer handy, of course, I could dramatically cut down the amount of time I had to spend on doing this by simply assigning a probability of success on any given coin toss of 0.5, and then have my computer flip my coin thousands of times for me and graph the outcomes.
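
A rough sketch of what that looks like in practice- the trial count and the choice of k values are arbitrary:

```python
# The coin-toss experiment described above: toss a fair coin N times per
# trial, repeat over many trials, and compare the simulated frequency of
# k heads against the exact binomial probability.
from math import comb
import numpy as np

N, p, n_trials = 20, 0.5, 100_000
rng = np.random.default_rng(seed=1)

tosses = rng.integers(0, 2, size=(n_trials, N))    # 0 = tails, 1 = heads
heads_per_trial = tosses.sum(axis=1)

for k in (8, 10, 12):
    simulated = np.mean(heads_per_trial == k)
    exact = comb(N, k) * p ** k * (1 - p) ** (N - k)
    print(f"k = {k:2d}   simulated = {simulated:.4f}   exact = {exact:.4f}")
```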

This is the core of Monte Carlo simulation, and it should be immediately obvious why this is such a useful tool. Using MCS, one can simulate out a huge number of possible outcomes and then figure out what the most likely event really is. But as with everything else, MCS depends heavily on what you put into the process. As always, GIGO.

MCS and VaR and to Hell with Alphabet Soup

Aside from option pricing, one major useful application of Monte Carlo simulation lies in Value at Risk (VaR). VaR is a tool first developed by JPMorgan back in the late 80s and early 90s to measure the complete risk of an entire portfolio of diverse assets and reduce that risk down to a single number. The idea is tremendously elegant- but relying on it is incredibly dangerous, for a variety of complicated technical reasons that I won't get into here.

Consider the problem faced by any bank's risk manager: his portfolio of assets is incredibly complex. It includes equity products, interest rate products, mortgage products, and FX products- and each product line has its own vanilla and structured businesses, and each such sub-group has its own book-level complexities. There is no way in hell that a risk manager is going to be able to aggregate and process all of that information fast enough to figure out what the true risk to the bank's capital is going to be by looking at every single individual position.

So what he does instead is simulate out the sub-portfolios of the bank's holdings using his big-ass Monte Carlo simulation engine, subject to his usual assumption that asset prices behave "nicely", or "within historical norms". (These are notoriously elastic terms, which is why I've put them in quotes.) He then comes up with a VaR number which says that, with probability Z, the firm should lose no more than X million over the next Y days.
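
A toy version of that exercise, with a two-book "bank", made-up positions and volatilities, and the usual jointly-normal assumption standing in for "within historical norms":

```python
# A toy risk-manager's VaR run: two sub-portfolios, returns assumed jointly
# normal ("within historical norms"), simulate one-day P&L many times, and
# read off the loss exceeded only 1% of the time. Every number is made up.
import numpy as np

positions = np.array([50e6, 30e6])       # market value of each book
daily_vol = np.array([0.015, 0.025])     # assumed daily return volatilities
corr = np.array([[1.0, 0.3],
                 [0.3, 1.0]])
cov = np.outer(daily_vol, daily_vol) * corr

rng = np.random.default_rng(seed=7)
returns = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=500_000)
pnl = returns @ positions                # simulated one-day P&L of the whole book
var_99 = -np.percentile(pnl, 1)          # one-day VaR at 99% confidence

print(f"1-day 99% VaR: {var_99:,.0f}")
```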

It all sounds brilliant, in theory, but this entire approach has huge and glaring weaknesses. The first is due to a basic property of VaR- it isn't sub-additive. In simple terms this means that VaR(A + B) is not guaranteed to be less than or equal to VaR(A) + VaR(B), so combining two portfolios can produce a bigger measured risk than the sum of its parts- which is exactly what a sensible risk measure should never do. The second is due to the nature of the MCS engine itself. One thing that no one tells you about MCS is that it is never truly random. Every single Monte Carlo generator used in real-life businesses uses pseudorandom number generation. What this means is that the string of random numbers generated by the computer isn't really random. If you were to use truly random data to generate future outcomes, your simulations would be free of whatever artefacts the generator introduces- but it would also be impossible to replicate your results with any degree of certainty. That is precisely why practitioners stick to pseudorandom numbers- the most sophisticated PRNGs, such as the Mersenne Twister, are capable of producing sequences that are "good enough" for practical use without causing severe problems when it comes time to replicate the results for the next business day.
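
On the sub-additivity point, the standard textbook counterexample is easy to simulate- two independent loans, each with a small chance of a total loss (the 4% figure and the loss sizes are purely illustrative). Note also that the generator is seeded, which is exactly the pseudorandom replicability discussed above:

```python
# The standard counterexample to sub-additivity: two independent loans, each
# losing 100 with probability 4% and nothing otherwise (figures are purely
# illustrative). At the 95% level each loan alone has a VaR of zero, yet the
# combined book does not. The seed makes the pseudorandom run repeatable.
import numpy as np

rng = np.random.default_rng(seed=2024)
n = 1_000_000

loss_a = np.where(rng.random(n) < 0.04, 100.0, 0.0)   # losses on loan A
loss_b = np.where(rng.random(n) < 0.04, 100.0, 0.0)   # losses on loan B, independent

def var_95(losses):
    return np.percentile(losses, 95)                   # 95% VaR of a loss sample

print("VaR(A):    ", var_95(loss_a))                   # 0
print("VaR(B):    ", var_95(loss_b))                   # 0
print("VaR(A + B):", var_95(loss_a + loss_b))          # 100, i.e. > VaR(A) + VaR(B)
```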

Two Econometricians Crash a Car...

Now that I've outlined the problems inherent in both neoclassical economic modelling and MCS models, let's take a look at the intersection between the two- the arcane science we know of as econometrics.

Econometrics is mathematical economics taken to its most extreme. It's a haven for Poindexters who really like running massive regression models on even more massive data sets. And here is where, if you're not careful, you can end up piling bad assumptions on top of worse assumptions to end up with a completely horrid mess- imagine what would happen if you tried to combine French onion soup, Szechuan chicken stir-fry, and a key-lime pie into the same dish, and you've got the basic concept.

Let's take time series modelling as an example. This is an extremely important part of economic forecasting. If you can successfully fit a model to an existing data set with very little "noise" between what the model predicts and what the data set actually says, then- so the reasoning goes- future data will follow the same pattern that past data did, and the model will therefore be able to forecast the future with some reasonable degree of accuracy. This is of particular use when trying to forecast things like, say, the unemployment rate or the inflation rate. A time series model, such as an ARMA or ARIMA model, is essentially a model that says that the past is a good predictor of the future, subject to some noise term that we seek to minimise to the greatest extent possible.
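
A minimal sketch of that workflow, assuming you have statsmodels to hand- fit an ARIMA to a synthetic AR(1) series whose noise really is well-behaved by construction, then forecast forward:

```python
# A minimal forecasting workflow: simulate an AR(1) series whose noise really
# is well-behaved by construction, fit an ARIMA(1,0,0) with statsmodels, and
# forecast forward. Real economic series are rarely this cooperative.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(seed=0)
n, phi = 400, 0.7                        # series length and the true AR(1) coefficient
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal() # Gaussian noise, exactly as the model assumes

model = ARIMA(y, order=(1, 0, 0))        # an AR(1) is just ARIMA(1, 0, 0)
fitted = model.fit()
print(fitted.params)                     # estimated constant, AR coefficient, noise variance
print(fitted.forecast(steps=12))         # 12-step-ahead point forecasts
```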

Just one problem: the noise term is never as well-behaved in real life as an elegant model like an ARIMA or GARCH approach would like. And if one were to accept the conclusions of the model without first making allowance for the basic assumptions behind the model and making note of where those assumptions are violated, then one is left with precisely the inedible, stinking mess that I described above.

When Models Fail

None of these things would be overly problematic in isolation if it were not for the fact that we have let Very Important People in Charge of Very Important Things get away with a lot of statistical handwaving and mummery.

For instance, as adherents of Austrian economics have been pointing out for decades, neo-Keynesian economics does not account for, and has never accounted for, debt and the dangers of credit-based economic expansion. This is why neoclassical economics is so very bad at predicting how and why recessions occur- the models are highly sophisticated in mathematical terms and yet completely retarded in terms of common sense.

Or let's take Ireland's very specific case as an example. The economic forecasters of the EU were convinced by their extremely sophisticated mathematical models that economic harmonisation between various member states would lead to stable, non-inflationary growth as each economy essentially tied itself to German-style monetary policy, because of course the main monetary models of the time said that if you have met X, Y, and Z preconditions for growth, then outcomes A, B, and C are most likely. The problem, of course, is that the "natural" interest rate of any country depends heavily on that country's infrastructure, tax environment, and productivity- and Ireland's specific situation demanded a much higher natural interest rate than the official interest rate of the EMU. So naturally a giant bubble formed as lower-than-market interest rates forced huge malinvestments within what was at one point unquestionably one of Europe's highest-performing economies.

This is also why banks and businesses repeatedly fail- because they ignore the devil in the details. For instance, the highly complex CDO and CDS models that got several big banks (and AIG) into a world of trouble 6 years ago were built on an embedded assumption that correlations between different tranches and different assets were relatively stable- i.e. that the joint default behaviour of a CDO's underlying assets could be modelled using a relatively tractable copula model, typically a Gaussian one. (I'm massively oversimplifying, but that's the gist of it.) The problem was, of course, that when everything went pear-shaped, the correlations between different assets shifted dramatically. None of the models were ever built to capture this- because they couldn't be and still remain useful and practical.
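
To give a flavour of what such a model looks like, here is a massively simplified one-factor Gaussian copula sketch- not any bank's actual implementation, and every number in it is made up. The point is what happens to the chance of mass defaults when the single correlation parameter jumps:

```python
# A massively simplified one-factor Gaussian copula, in the spirit of the
# models above (not any bank's actual implementation; every number is made
# up). Each name defaults when its latent normal variable falls below a
# threshold; a single correlation rho ties all names to one common factor.
import numpy as np
from scipy.stats import norm

n_names, p_default, n_sims = 100, 0.02, 50_000
threshold = norm.ppf(p_default)          # default if the latent variable < threshold
rng = np.random.default_rng(seed=3)

def defaults_per_scenario(rho):
    M = rng.standard_normal((n_sims, 1))            # common "market" factor
    Z = rng.standard_normal((n_sims, n_names))      # idiosyncratic factors
    X = np.sqrt(rho) * M + np.sqrt(1 - rho) * Z     # correlated latent variables
    return (X < threshold).sum(axis=1)              # number of defaults per scenario

for rho in (0.10, 0.60):                 # "calm markets" vs "everything goes pear-shaped"
    d = defaults_per_scenario(rho)
    print(f"rho = {rho:.2f}   mean defaults = {d.mean():.2f}   "
          f"P(20+ defaults) = {np.mean(d >= 20):.4f}")
```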

It gets worse. AIG Financial Products- the "hedge fund buried at the heart of the world's biggest insurer", as I've seen it put repeatedly- was convinced that the likelihood of its credit default swaps actually paying out on a given tranche of illiquid mortgage-backed or asset-backed securities in a CMO, CLO, or other collateralised product was minuscule- almost too small to measure. This is because their models had a built-in assumption that something like the events of 2008 could only happen maybe once or twice in the entire lifetime of the Universe. Of course, we know, or should know, that financial crises happen a damn sight more frequently than that. As a result, AIG- a company with a trillion-dollar balance sheet- had to be propped up with a massive government bailout to stave off collapse, even though subsequent investigations showed clearly that AIG itself was not a systemic threat to the entire financial system and that the collapse of those credit derivatives could have been absorbed by the global banking industry- albeit with severe losses and pain.

Common Sense is Anything But

In conclusion- and it's been a long journey through some esoteric topics here- the key thing to keep in mind is that it generally takes a very intelligent person to believe very stupid things. I can't begin to tell you the number of times I've interacted with fellow students or faculty at the universities that I attended and found my interlocutor incredibly cocksure about this economic theory or that mathematical model. The reality is that models are at best an approximation of reality. They are tremendously useful for shedding light on a very complex and inconceivably tangled world, but they should NEVER be accepted as Gospel truth. They are not, and can never be, because they are at best poor imitations of the underlying processes that they seek to recreate. Always remember that a model is only as good as its assumptions- and if those assumptions are easily violated, then the model itself is of extremely limited usefulness.
