To understand whether a strategy can perform in the future, the first question to ask is probably whether it genuinely showed strong performance in the historical back-test, or whether all it was doing was describing past data accurately.
In this regard, something I recently started doing when searching the strategy parameter space (i.e. “optimizing the strategy”) is to evaluate only the returns up to their 98% quantile, that is, leaving out the top 2% of trades/days.
This approach has two main advantages:
1- It reduces the chances that the search algorithm gets stuck on particular parameter values that merely catch one-off/rare patterns in market history (e.g. the 1987 crash, the 2010 flash crash, or just particular squeezes).
2- If the search has a positive outcome, what you find is a strategy that is profitable even if the top 2% of its trades had never occurred, which means the strategy is relatively robust.
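As a minimal sketch of the idea, the trimming step can be implemented in a few lines of NumPy. The helper name `trimmed_returns` and the simulated data below are illustrative assumptions, not part of any particular backtesting library:

```python
import numpy as np

def trimmed_returns(returns, q=0.98):
    """Drop trades above the q-quantile (hypothetical helper).

    Keeps only returns at or below the q-quantile, so the fitness
    function is evaluated without the top (1 - q) share of trades.
    """
    returns = np.asarray(returns, dtype=float)
    cutoff = np.quantile(returns, q)
    return returns[returns <= cutoff]

# Illustrative data: 1000 simulated trade returns with a few large outliers
rng = np.random.default_rng(42)
r = rng.normal(0.001, 0.01, size=1000)
r[:5] += 0.20  # inject a handful of "one-off" windfall trades

trimmed = trimmed_returns(r, q=0.98)
print(len(r), len(trimmed))      # roughly 2% of trades removed
print(r.mean(), trimmed.mean())  # mean drops once the windfalls are gone
```

Any fitness function you already use in the parameter search can then be computed on `trimmed` instead of the raw returns.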
Of course, depending on the trading strategy in use, you may want to change the actual percentage value (if you are using a mean-reversion (MR) strategy, which usually has a high win %, you might want to use a higher threshold), but you get the idea.
To some extent, this is similar to using the Omega ratio with a variable threshold level (given by the 98% quantile). However, I find this quantile normalization approach somewhat more powerful, as it allows you to calculate the full range of strategy statistics (including your chosen fitness function) across these “normalised” returns, in contrast with the single number produced by the Omega ratio.
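The contrast can be made concrete. Below is a hedged sketch (the `omega_ratio` function is a standard textbook definition, written here from scratch, and the data is simulated) comparing the single number the Omega ratio gives at a 98%-quantile threshold with the full set of statistics available on the trimmed returns:

```python
import numpy as np

def omega_ratio(returns, threshold):
    """Omega ratio: sum of gains above the threshold divided by
    the sum of losses below it (one number per threshold)."""
    excess = np.asarray(returns, dtype=float) - threshold
    gains = excess[excess > 0].sum()
    losses = -excess[excess < 0].sum()
    return gains / losses

rng = np.random.default_rng(0)
r = rng.normal(0.001, 0.01, size=500)  # illustrative trade returns

# Omega with the threshold set at the 98% quantile: a single number
omega = omega_ratio(r, np.quantile(r, 0.98))

# Versus: the full range of statistics on the trimmed returns
trimmed = r[r <= np.quantile(r, 0.98)]
mean, std = trimmed.mean(), trimmed.std()
sharpe_like = mean / std  # or whatever fitness function you prefer
print(omega, mean, std, sharpe_like)
```

The point is not the particular numbers, but that `trimmed` remains a full return series on which any statistic can be computed.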