Underfitting, misfitting and understanding alpha’s drivers

While overfitting is certainly a challenge, falling into the opposite extreme is also possible.
Here is part of an interview with William Eckhardt from Futures magazine (which I would recommend reading in full here):

“I can talk a little more about over-fitting, if not my personal proprietary techniques. First of all I like the [term] over-fitting rather than curve-fitting because curve-fitting is a term from non-linear regression analysis. It is where you have a lot of data and you are fitting the data points to some curve. Well, you are not doing that with futures. Technically there is no curve-fitting here; the term does not apply. But what you can do is you can over-fit. The reason I like the term over-fit rather than curve-fit is that over-fit shows that you also can under-fit. The people who do not optimize are under-fitting.”


Underfitting and Misfitting

If we are using an insufficient number of degrees of freedom, so that our system doesn’t differentiate between some key changes in the market’s behaviour, then what we are doing is underfitting. A trivial example of underfitting would be buying a random stock from the stock universe at a random point in time and holding it for a random period.
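As a toy sketch of this baseline (everything below is simulated; the universe size, drift and volatility are hypothetical numbers, not real market data), such a random buy-and-hold would look like:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical universe: simulated daily returns for 50 stocks over 500 days.
returns = rng.normal(0.0003, 0.02, size=(500, 50))

def random_buy_and_hold(returns, rng):
    """Buy one random stock at a random time, hold for a random period."""
    n_days, n_stocks = returns.shape
    stock = rng.integers(n_stocks)
    entry = rng.integers(n_days - 1)
    exit_ = rng.integers(entry + 1, n_days)
    # Compound the stock's returns over the holding period.
    return np.prod(1 + returns[entry:exit_, stock]) - 1

pnls = [random_buy_and_hold(returns, rng) for _ in range(1000)]
print(f"mean P&L: {np.mean(pnls):.4%}")
```

With zero degrees of freedom conditioned on market behaviour, the average P&L of such a strategy can only reflect the unconditional drift of the universe: there is no model at all, hence no edge to speak of.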

Another possibility is that we are not using the right variables (or we have the right variables but are using them poorly) – let’s call this misfitting. Imagine a model on Italian BTPs that looks at Crude Oil prices and totally ignores the spread with German bonds (admittedly, there could even be some exploitable relationship between BTPs and Crude Oil; I am just trying to make a point).

Clearly, what makes a variable “right” for a given model and a given asset is highly arguable.
Similarly to what was said about overfitting, I don’t think we can easily tell in absolute terms whether a model suffers from underfitting or misfitting (except in very obvious cases). Rather, I like to reason in terms of the possible existence of a better model specification that we are ignoring: e.g. there could be a key factor that our model is particularly sensitive to and that we are not accounting for (either in terms of the specific asset we apply the model to, or in terms of the market’s current dynamics). Or it could be that we are using variables that are merely linked to the real factor, but are not the actual alpha driver.

Techniques to perform this kind of analysis include PCA and factor analysis, but depending on what exactly one is doing, many other quantitative techniques can be applied (at a portfolio level, something like the market clustering presented by David Varadi seems promising).
Of course (and unfortunately), we have to keep in mind that the more we perform this kind of a posteriori analysis, the more likely we are to go from one extreme (underfitting/misfitting) to the other (overfitting).
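As a minimal sketch of the PCA idea (on simulated returns driven by a single hidden common factor; all sizes and parameters are hypothetical), one can check how much of a universe’s variance a single component captures – a hint that there is one dominant driver the model should be aware of:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated example: 10 instruments driven by one common factor plus noise.
n_days, n_assets = 1000, 10
factor = rng.normal(0, 0.01, n_days)             # hidden common driver
loadings = rng.uniform(0.5, 1.5, n_assets)       # each asset's sensitivity
noise = rng.normal(0, 0.005, (n_days, n_assets)) # idiosyncratic component
returns = np.outer(factor, loadings) + noise

# PCA via eigendecomposition of the sample covariance matrix.
cov = np.cov(returns, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]          # sorted descending
explained = eigvals / eigvals.sum()

print(f"variance explained by first component: {explained[0]:.1%}")
```

If the first component explains most of the variance while our model conditions on none of it, that is a red flag for misfitting; of course, in real data the components still need an economic interpretation before they can be trusted.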


Fat tails and changing market dynamics

In another part of the interview mentioned above, Mr Eckhardt strictly relates the number of degrees of freedom to the number of trades in our backtest, arguing that one needs more trades than would be expected in a “Gaussian world” because of the fat tails of market returns. While I agree with the qualitative relationship between degrees of freedom and number of trades, I am not sure I agree with the strict quantitative relationship between the two.
The reason for this is twofold:

1) It’s not always possible to quantify exactly the actual number of degrees of freedom being used, or how much hindsight we are pouring into our modelling (as discussed in my previous post);

2) I think fat tails are only part of the story. Another big part is the continuous change that markets go through (in the shape of heteroskedasticity, but not only).

Imagine you test a model over 2 years of data, and that because the model is relatively high-frequency (and so produces a very high number of trades) you think you are guarding yourself against overfitting. What you might be ignoring is that, having tested the model over a relatively short time window, you may not have tested it against different market conditions. It might well be that 2.5 years ago markets were somewhat different and your model was useless, which implies that as soon as markets change again you will lose your edge. An example could be a model that unknowingly takes advantage of some market behaviour arising from the Fed being on hold over such a long period.
This is another form of overfitting, if you want, but one which can’t be accounted for simply by looking at the number of trades versus the number of model parameters.
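A toy simulation of this point (entirely hypothetical dynamics and parameters, not taken from the post): a reversal strategy calibrated on a mean-reverting regime produces hundreds of trades and looks robust in-sample, yet loses as soon as the autocorrelation of returns flips sign:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_returns(phi, n, rng, sigma=0.01):
    """AR(1) returns: phi < 0 gives mean reversion, phi > 0 gives momentum."""
    r = np.zeros(n)
    for t in range(1, n):
        r[t] = phi * r[t - 1] + rng.normal(0, sigma)
    return r

def fade_last_move(returns):
    """One trade per bar: bet against the previous return."""
    positions = -np.sign(returns[:-1])
    return positions * returns[1:]

in_sample = simulate_returns(-0.3, 500, rng)   # reverting regime: ~500 trades
out_sample = simulate_returns(+0.3, 500, rng)  # regime flips to momentum

print(f"in-sample mean P&L per trade:     {fade_last_move(in_sample).mean():+.5f}")
print(f"out-of-sample mean P&L per trade: {fade_last_move(out_sample).mean():+.5f}")
```

The trade count is identical in both windows, so no trades-vs-parameters heuristic would flag the problem: the only defence is testing across genuinely different market conditions.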

Because of this, I’d always like to test any new strategy on as much historical data as possible. In this regard, I am in partial disagreement with Dr Chan, who states that he seldom tests strategies on data older than 2007 (read more here: The Pseudo-science of Hypothesis Testing). All other things being equal, I find a strategy that worked well for a long time more likely to keep working in the near future than a strategy that worked well only over a short history (which is not to say that something that started working only recently can’t keep working). Also, even if you have something that started working only recently, looking at how it behaved when it didn’t perform can certainly offer some interesting insights, especially if you are not sure what the driver behind your alpha really is.


Alpha’s drivers

This leads me to the final point before concluding this long post: do we really have to understand what our model is doing and what kind of inefficiency we are exploiting?

Personally, I think that understanding the underlying driver of our alpha is certainly a big plus, as it lets you directly monitor the behaviour of the prime driver, which in turn can give you some practical insights in troubled times. However, this is not always enough: think of the quant funds during the 2007-08 meltdown. They were fully aware of the drivers behind their equity stat-arb strategies, but they still got trapped in order flows and forced liquidations. Another example could well be the blow-up of LTCM.

The moral of the story is that there can always be an additional layer of complexity we are not considering, so that (partly) understanding our alpha’s driver might not offer any additional upside.
Therefore, although nice to have, I don’t deem it necessary to understand the real driver behind our alpha, provided that our statistical analysis gives us enough confidence to trade the strategy.

Schwager’s Market Wizards series presents supporters of both sides, in D.E. Shaw and Jaffray Woodriff. You can read more about their views in William Hua’s post on Adaptive Trader, Ensemble Methods with Jaffray Woodriff, or have a look at this QUSMA post for a more in-depth example of Woodriff’s approach: Doing the Jaffray Woodriff Thing (Kinda).

Andrea


About mathtrading

My name is Andrea La Rosa and I am a quant trader based in the UK. In the past I worked as a quant in the prop desk of an investment bank, before deciding to fully dedicate myself to quantitative trading.
This entry was posted in On backtesting, Trading Strategies Design. Bookmark the permalink.

