@tod I totally understand you wanting to keep that alpha you found private.
Taylor's theorem is the one you are referring to, and the problem is called overfitting. This is why those using complex deep-learning algorithms use regularization to deliberately dumb down what the neural network learns, so it works out of sample. This is also why I am wary of optimization, Monte Carlo, and walk-forward analysis when they are used to tune a strategy up.

What I like to use optimization for is to find the range of parameters where the strategy works best and worst: not to tune it up, but to understand its limitations. I also like to try to break it, running it on tickers and timeframes whose behavior I know, to see how the strategy behaves in those situations, as a way to know what to expect and how to size it. If I can break the strategy easily, I can dismiss it and move on to something else. If it only works with a very specific single parameter or a narrow range, then I dismiss it because it will not work out of sample. If it is hard to break and the parameter range is wide, then we are onto something.
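Here's a minimal sketch of what I mean by mapping the parameter range instead of tuning it. The strategy (a long-only moving-average crossover) and the synthetic price series are stand-ins, not anything I actually trade; the point is the grid:

```python
import numpy as np
import pandas as pd

def crossover_returns(closes: pd.Series, fast: int, slow: int) -> pd.Series:
    """Daily returns of a long-only MA crossover -- a deliberately simple toy strategy."""
    signal = (closes.rolling(fast).mean() > closes.rolling(slow).mean()).astype(int)
    return closes.pct_change() * signal.shift(1)  # trade on the next bar, no look-ahead

def sharpe(rets: pd.Series) -> float:
    return float(np.sqrt(252) * rets.mean() / rets.std())

# Synthetic prices stand in for real data; swap in your own closes here.
rng = np.random.default_rng(0)
closes = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 2000))))

# Sweep the whole grid: the goal is to see WHERE it works, not to pick the best cell.
grid = pd.DataFrame(
    [(f, s, sharpe(crossover_returns(closes, f, s)))
     for f in range(5, 55, 5) for s in range(20, 220, 20) if f < s],
    columns=["fast", "slow", "sharpe"],
)
print(grid.pivot(index="fast", columns="slow", values="sharpe").round(2))
# A broad plateau of decent Sharpe suggests robustness; one isolated spike
# suggests curve-fitting, and per the logic above the strategy gets dismissed.
```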
For instance, I know that a ticker like GRPN is going to produce a massive loss, NVDA or TSLA are going to be rockets, /BTC is going to have massive swings, GLD is going to chop a lot and then make a big move, etc. And I know when the market has had flash crashes, prolonged bear markets, and so on. If I know the drawdown in a very bad scenario, I can size the allocation to that strategy. If I have a strategy that loses very little or even makes some on GRPN, but still makes money on the famous ones and others like them, then I know it is robust and how much I can allocate to it.
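Sizing off the worst case is simple arithmetic. A hedged sketch, with made-up drawdown numbers for illustration:

```python
def allocation_for_risk_budget(worst_strategy_dd: float, portfolio_risk_budget: float) -> float:
    """Cap the allocation so the strategy's worst stress-tested drawdown
    costs no more than a fixed fraction of total portfolio equity.
    worst_strategy_dd: worst drawdown on the strategy's own capital (0.45 = -45%).
    portfolio_risk_budget: max portfolio hit accepted from this strategy (0.05 = -5%).
    Returns the maximum fraction of the portfolio to allocate."""
    return min(1.0, portfolio_risk_budget / worst_strategy_dd)

# Example: the strategy drew down 45% on a GRPN-like ticker in the stress test,
# and I only accept a 5% portfolio hit from any single strategy:
print(allocation_for_risk_budget(0.45, 0.05))  # ~0.111, i.e. allocate at most ~11%
```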
For trend momentum strategies, there's a lot written about selecting stocks with a history of big moves from the leading pack within the leading sector. You will miss an early entry, and with it the bragging rights, but you jump into a confirmed trend that may continue maybe 35% of the time, and when it does, you make enough to compensate for the small losses of the other 65% of the time, when the trend dies.
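The arithmetic behind that 35% figure: with win probability p, average win W, and average loss L, the expectancy per trade is p*W - (1-p)*L, so at p = 0.35 you need W/L above (1-p)/p, roughly 1.86, just to break even. A quick sketch with illustrative numbers:

```python
def expectancy(p_win: float, avg_win: float, avg_loss: float) -> float:
    """Expected profit per trade: p*W - (1-p)*L."""
    return p_win * avg_win - (1.0 - p_win) * avg_loss

p = 0.35
breakeven_ratio = (1 - p) / p          # ~1.86: wins must average ~1.86x the losses
print(f"break-even W/L at p={p}: {breakeven_ratio:.2f}")
# e.g. riding a confirmed trend for +3R while cutting failed entries at -1R:
print(f"expectancy: {expectancy(p, 3.0, 1.0):.2f}R per trade")  # 0.35*3 - 0.65*1 = 0.40R
```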