Forecasting models and optimal parameters

I was participating in a conference yesterday, and one of the panellists observed that, in order for a model to have good predictive power, its parameters have to be stable over time.
The reason is that any forecasting method bases its forecasts and its optimal parameters on past data. Depending on the model, we could say that the parameters represent our model’s expectation of the future (conditional on the past). If we have to change the parameter values very often, all we are doing is adapting the model to some new dynamics. Taken to the extreme, this is simply overfitting: trying to use a model to capture dynamics that it is not able to capture. An example is Moving Average models used to forecast “trends”: if we run them over different historical time periods and different look-back windows, we will always find different “optimal” parameters.
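The instability of the Moving Average example can be illustrated with a small sketch. This assumes a simulated random-walk price series and a simple one-step-ahead SMA forecast; the function names and the window grid are my own choices, not anything prescribed by a particular method:

```python
import numpy as np

# Hypothetical price series: a random walk, so no "optimal" window truly exists
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 2000)) + 100.0

def sma_rmse(p, window):
    """RMSE of using today's simple moving average as tomorrow's forecast."""
    sma = np.convolve(p, np.ones(window) / window, mode="valid")
    forecasts = sma[:-1]        # SMA up to time t forecasts p[t + 1]
    actuals = p[window:]
    return np.sqrt(np.mean((forecasts - actuals) ** 2))

def best_window(p, windows=range(2, 60)):
    """The look-back window with the lowest one-step-ahead RMSE on this sample."""
    return min(windows, key=lambda w: sma_rmse(p, w))

# Re-optimise over different historical periods: the "optimal" window keeps moving
for start in range(0, 2000, 500):
    segment = prices[start:start + 500]
    print(f"period starting at {start}: best window = {best_window(segment)}")
```

Running this on different sub-periods typically prints different "best" windows, which is exactly the re-fitting-to-new-dynamics problem described above.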

This observation can provide us with a simple (and to some degree mechanical) rule to help decide whether a forecasting model is feasible or not, without delving into more complex techniques (e.g. machine learning).
One way to do this is to measure how changes in the optimal parameter values affect the model’s performance.
If the model is a pure forecasting model (i.e. we are trying to forecast something that is later observable, like economic figures or volatility, in contrast to something like Moving Averages, where we are trying to forecast something not well defined like “trends”), then we can calculate the increase in Root Mean Square Error (or in its % version) due to the change in the optimal parameter values.
The RMSE (Root Mean Square Error) is simply the square root of the mean of the squared errors, where each error is the difference between a predicted value and the actual value:

RMSE = sqrt( (1/n) · Σ (ŷ_t − y_t)² )

I would like to specify here that I sometimes find it more useful to define a % RMSE (same as above, but with each difference divided by the actual value).
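The two error measures above are straightforward to write down. A minimal sketch (the function names are my own, and the % RMSE follows the definition just given, with each error divided by the actual value):

```python
import numpy as np

def rmse(actual, predicted):
    """Square root of the mean squared forecast error."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.sqrt(np.mean((predicted - actual) ** 2))

def pct_rmse(actual, predicted):
    """Same as RMSE, but each error is divided by the actual value."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.sqrt(np.mean(((predicted - actual) / actual) ** 2))

print(rmse([100, 102, 101], [101, 101, 100]))  # → 1.0
```

The % version is scale-free, which makes it easier to compare errors across series of different magnitudes.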
To calculate the increase in %RMSE (or RMSE) caused by changes in the optimal parameter values, we need to re-estimate the optimal parameters and their forecast error at each time step, and take the difference with the %RMSE obtained using the “old” parameters at each time step.
Once this is done, we can plot the time series of this “performance deterioration” indicator, or simply take the sum of the increases in %RMSE over a certain period (which will depend on the holding period of our strategy).
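The procedure described in the last two paragraphs can be sketched end to end. This is only an illustration under stated assumptions: I use simple exponential smoothing with a single parameter alpha as a stand-in forecasting model, a grid search as the "optimiser", and a hypothetical simulated series; `ses_forecasts` and `optimal_alpha` are names of my own invention:

```python
import numpy as np

rng = np.random.default_rng(1)
y = 10 + np.cumsum(rng.normal(0, 0.2, 600))   # hypothetical series to forecast

def ses_forecasts(y, alpha):
    """One-step-ahead simple exponential smoothing forecasts."""
    f = np.empty(len(y))
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

def pct_rmse(actual, forecast):
    """% RMSE: each error divided by the actual value."""
    return np.sqrt(np.mean(((forecast - actual) / actual) ** 2))

def optimal_alpha(y, grid=np.linspace(0.05, 0.95, 19)):
    """Grid-search the alpha with the lowest %RMSE on this sample."""
    return min(grid, key=lambda a: pct_rmse(y, ses_forecasts(y, a)))

window = 100
deterioration = []
prev_alpha = None
for end in range(window, len(y), 20):          # re-optimise every 20 steps
    segment = y[end - window:end]
    new_alpha = optimal_alpha(segment)
    if prev_alpha is not None:
        # extra %RMSE incurred by sticking with the "old" parameters
        deterioration.append(
            pct_rmse(segment, ses_forecasts(segment, prev_alpha))
            - pct_rmse(segment, ses_forecasts(segment, new_alpha))
        )
    prev_alpha = new_alpha

# Summary indicator: total %RMSE increase over the period
print(sum(deterioration))
```

The `deterioration` list is the "performance deterioration" time series mentioned above, and its sum is the aggregate indicator; large values suggest the parameters are unstable.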
If the effects of the parameter changes are “big”, we may want to be careful about investing our money based on this forecasting model.

What’s striking is that, if you think about it, doing something similar with a non-pure forecasting model means evaluating how the change in parameters affects the trading model’s performance, which is effectively the definition of out-of-sample testing. This suggests that, for pure forecasting models, the above procedure lets us clearly isolate the deficiencies of our forecasting model from those of the trading model and, depending on the results, may give us some hints on how to make the trading model “smart(er)”.



About mathtrading

My name is Andrea La Rosa and I am a quant trader based in the UK. In the past I worked as a quant in the prop desk of an investment bank, before deciding to fully dedicate myself to quantitative trading.