
How to Beat the Machines Before They Beat You

The resurgence of “trend-following” algorithms in the aftermath of the financial crisis, combined with the ease of implementing such algorithms in practice, has changed the landscape of futures trading. Today, a kid with a computer and access to a broker account can patch together a few lines of Python code and implement a trading algorithm in a few days. Seriously. These new designers of machines work as follows: get some historical data, test a trading idea with a “back-test”, compute some risk-reward statistics like the “Sharpe ratio”, and implement in real-time on real markets.
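
To make that workflow concrete, here is a minimal sketch in Python, with a hypothetical data file and a simple moving-average rule standing in for whatever signal the designer dreams up:

    import numpy as np
    import pandas as pd

    # Hypothetical daily closing prices; any price series will do.
    prices = pd.read_csv("bond_futures.csv", index_col=0, parse_dates=True)["close"]

    # Toy trend signal: long when price sits above its 50-day moving average.
    signal = (prices > prices.rolling(50).mean()).astype(int)

    # Back-test: apply yesterday's signal to today's return to avoid look-ahead bias.
    strategy_returns = signal.shift(1) * prices.pct_change()

    # Annualized Sharpe ratio (risk-free rate assumed zero for simplicity).
    sharpe = np.sqrt(252) * strategy_returns.mean() / strategy_returns.std()
    print(f"Sharpe ratio: {sharpe:.2f}")

If the back-tested Sharpe ratio looks good, the machine goes live. That is the entire pipeline.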

Now anyone familiar with financial history knows that the last 30 years have seen a massive bull market in bonds, as yields have fallen to new lows. But the budding algorithmic designer’s back-test only sees that buying bonds (if the signals say so) is likely to be profitable. Of course they have received ample help from the central banks of the world, who are doing the same thing, and this reinforces the model’s predictability and the designer’s reliance on the algorithm. Since everyone has the same historical data and comes to the same conclusions, the algorithms of many participants, acting at different time scales and in different sizes, take the buying of bonds to its logical extreme, i.e. buying them beyond zero and even into negative yields!

You can almost hear the algorithm laughing at the shorts who sold bonds because they thought yields could not go so low or negative – it’s clear who the winner has been so far. Technically there is nothing irrational about this, or about winning, and you can expect the rule to work until it doesn’t. Like it did not work last month, or around the 2013 “taper tantrum”.

The important thing to remember is that at the extremes, algorithms will likely do the opposite of what humans are expected to do. Algorithms will buy more when yields fall, and sell when yields rise. Humans will bet on mean-reversion – it’s just how humans think. But if every trend follower’s machine is doing the opposite of the humans, the machines are likely to win out in the short run, forcing the humans to close shop unless they can pick their fights carefully.
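
As a sketch of that divergence, here is how the two styles map the same yield move into opposite positions (signs only; sizing, execution, and risk limits omitted):

    # Hypothetical recent change in yields; negative means yields fell
    # (and bond prices rose).
    yield_change = -0.10

    # Trend follower: extrapolate the move. Falling yields -> buy bonds.
    trend_position = +1 if yield_change < 0 else -1

    # Mean-reverter: fade the move. Falling yields -> expect a bounce, sell bonds.
    mean_reversion_position = -trend_position

    print(trend_position, mean_reversion_position)  # 1 -1: exact opposites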

A second example, not entirely unrelated, comes from “volatility-based” strategies. A lot has been written on this, so I will not repeat it all, but let’s just summarize by saying that volatility-targeting algorithms sell when implied or realized volatility rises, and buy when volatility falls. This is also a form of trend following, and it increases the overall momentum in markets, which machines are faster to exploit than humans.
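
The mechanics are simple enough to sketch. Assuming a 10% volatility target and a rolling realized-volatility estimate (both numbers are illustrative, not anyone’s actual parameters), position size moves inversely with volatility:

    import numpy as np
    import pandas as pd

    def vol_target_weight(returns: pd.Series,
                          target_vol: float = 0.10,
                          window: int = 20) -> pd.Series:
        # Annualized realized volatility over a rolling window of daily returns.
        realized_vol = returns.rolling(window).std() * np.sqrt(252)
        # Exposure scales inversely with volatility: vol up -> sell, vol down -> buy.
        return (target_vol / realized_vol).clip(upper=2.0)  # cap leverage at 2x

Note the feedback loop: a sharp price drop raises realized volatility, which cuts the weight, which forces selling into the decline, i.e. the momentum-amplifying behavior described above.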

What we have seen over the decade since the financial crisis is (1) yields falling, and (2) volatility falling. So what is a well-trained machine going to do? Buy! Certainly there have been short spurts of rising yields and rising volatility, and in most of these cases the markets have dropped into air pockets as all the machines sell simultaneously. But markets recover quickly, with the machines front-running the humans as soon as the central banks step in to provide support.

So how does one beat the machines?

First, you have to anticipate what the machines are likely to do, get there first, and then get out first. Easier said than done if you are competing on the same time scale, but not impossible if your time scale is different. If you can come up with prescient indicators of what the algorithms’ most likely action is, then you can get there first, since the designer is not likely to change the core algorithm that makes the machine tick. For example, watch levels of implied volatility, e.g. the VIX, and especially sharp rises in it, which can trigger algorithmic selling.
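
As one hedged illustration, a crude spike detector might flag days when the VIX jumps well above its recent average; the 25% threshold and 20-day window below are arbitrary assumptions, not calibrated levels:

    import pandas as pd

    def vix_spike_alert(vix: pd.Series,
                        lookback: int = 20,
                        threshold: float = 1.25) -> pd.Series:
        # True on days when the VIX closes more than 25% above its trailing
        # 20-day average, a rough proxy for conditions that can trip
        # volatility-triggered selling.
        trailing_mean = vix.rolling(lookback).mean()
        return vix > threshold * trailing_mean

The point is not the specific rule but having your own indicator ready before the machines act, on your time scale rather than theirs.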

Second, pick your battles and be disciplined. In the liquid markets where the barriers to entry are low and machines can out-trade humans, it’s better to change your time horizon and invest, rather than trade against the machines. Sell the noise where noise is expensive, and buy the tails where machines have no data with which to calibrate themselves.
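
One hedged reading of that last line, with made-up strikes and premia purely for illustration: sell a near-the-money straddle (the expensive day-to-day noise) and use the premium to buy far out-of-the-money options (the tails), sized so the position profits in calm markets and in violent ones, and loses only on moderate moves:

    import numpy as np

    # All strikes and premia below are hypothetical.
    spot = np.linspace(50, 150, 201)          # terminal prices to evaluate
    K_atm, K_put, K_call = 100, 80, 120       # straddle and wing strikes
    straddle_premium, wing_premium = 8.0, 1.0

    payoff = (
        straddle_premium - np.abs(spot - K_atm)              # short 1 ATM straddle
        + 2 * (np.maximum(K_put - spot, 0) - wing_premium)   # long 2 OTM puts
        + 2 * (np.maximum(spot - K_call, 0) - wing_premium)  # long 2 OTM calls
    )
    # Positive near K_atm (collecting the noise premium) and in the far tails
    # (where the extra wings take over); negative for moderate moves between.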