When trying to analyse market data, it is common practice to borrow techniques from different fields to transform the data. Examples include probability distributions of price returns, moving averages, autocorrelations, rolling volatility estimates, data-mining techniques, and pretty much any other operation used to process the data or build an indicator.
There is nothing wrong per se with using any of these methods, but I see a problem arising when we try to define market behaviour by means of them alone. In general, most of these methods cause a great loss of information about the actual market structure. Again, this is not necessarily wrong; in fact I find it a necessary step in the analysis: markets are so complex that filtering out some “noise” is unavoidable, but one has to be aware of what is being treated as noise.
Take empirical probability distributions (pdfs) of daily returns: they are useful in that they tell you, more or less, the range of possible daily moves, but really nothing more. A market could trend up for some time and then sell off right back to its original level, or it could trade in a range for the whole same period, and both scenarios would produce exactly the same distribution of returns. An important piece of information that gets lost here (among others) is the time evolution of returns (and hence, of course, of their moments).
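To make this concrete, here is a minimal sketch (with synthetic returns, not real market data) of two price paths that share exactly the same empirical distribution of daily returns but have completely different structure: one trends up and sells right back off, the other oscillates in a range the whole time.

```python
# Two synthetic price paths with the SAME empirical distribution of
# daily returns but very different structure (illustrative data only).

up, down = 0.01, -0.01

returns_a = [up] * 50 + [down] * 50   # trend up, then sell off
returns_b = [up, down] * 50           # range-bound: alternate every day

def price_path(returns, start=100.0):
    """Build a cumulative price series from simple daily returns."""
    prices = [start]
    for r in returns:
        prices.append(prices[-1] * (1 + r))
    return prices

path_a = price_path(returns_a)
path_b = price_path(returns_b)

# Identical return distributions (same multiset of daily returns)...
assert sorted(returns_a) == sorted(returns_b)

# ...but path A travels far above the starting level, while path B
# never leaves a narrow band around it.
print(max(path_a), max(path_b))
```

Any histogram or fitted pdf built from `returns_a` and `returns_b` would be identical, even though the two markets behaved nothing alike.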
In this case, combining knowledge of the return distribution with an analysis of autocorrelations could clarify the picture, as could the use of a rolling probability distribution over a smaller time window. These expedients would let us use more of the information, though of course we would still suffer some loss (which is also part of our goal).
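As a sketch of how autocorrelation recovers some of that lost time structure, the two synthetic regimes above can be told apart by the lag-1 autocorrelation of their returns, something the return distribution alone cannot do (the `autocorr` helper here is a hand-rolled illustration, not a standard library function):

```python
# Lag-1 autocorrelation of returns separates two regimes that share
# the same return distribution (synthetic, illustrative data).

def autocorr(xs, lag=1):
    """Sample autocorrelation of the sequence xs at the given lag."""
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[i] - m) * (xs[i + lag] - m) for i in range(n - lag))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

trend = [0.01] * 50 + [-0.01] * 50   # trend up, then sell off
chop  = [0.01, -0.01] * 50           # range-bound oscillation

print(autocorr(trend))  # strongly positive: returns cluster by sign
print(autocorr(chop))   # strongly negative: returns alternate sign
```

The same distribution of returns yields an autocorrelation near +1 in the trending case and near −1 in the range-bound case, so the combined view is strictly more informative than the pdf alone.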
A similar argument goes for moving averages: all they tell us is how the average price has moved over the last X days. On their own, they say nothing about the actual range of the price movements, nor about how the average price will move in the future.
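For instance, in the hypothetical series below, a flat market and a wildly swinging one produce identical moving averages, showing that the average alone carries no range information:

```python
from statistics import mean

def sma(prices, window):
    """Simple moving average: one value per complete window."""
    return [mean(prices[i - window + 1 : i + 1])
            for i in range(window - 1, len(prices))]

calm   = [100.0] * 20           # price never moves
choppy = [90.0, 110.0] * 10     # price swings 20 points every day

# The two series have identical 2-day moving averages...
assert sma(calm, 2) == sma(choppy, 2)
# ...even though their daily ranges are completely different.
print(sma(choppy, 2)[:3])
```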
What I am trying to say here is that markets have a very fine and complex structure and trying to fully define them is not only almost impossible, but also not needed from a trading point of view.
If we consider markets as an ocean of information, trading consists in finding a small but meaningful and recurrent wave (or better, many waves) in this ocean and riding it until it changes direction. Spotting the wave is only half of the game; building an algorithm to ride it is the other half.
Likewise, my approach so far has been to look for small inefficiencies characterizing a market and try to take advantage of them while limiting the impact of other factors emerging from the inherent limitations of the data transformation technique in use.
PS: Happy New Year everyone!