On-line parameter optimization for automated Strategies

Most strategies have parameters to set, and depending on the situation, a certain parameter value works best. The result is that most strategies work for a while and then stop being valid. But the entire strategy might not be obsolete; the parameters may just need fine-tuning.

Here I’ll show you a way to optimize the parameters while the strategy is running, and I’ll compare it to a strategy whose parameters are set once and for all at the beginning of the test period, using classical optimization over a period in the past.

When is on-line parameter optimization applicable?

Just for the sake of having a real-life example, the method is applied to the automated strategy I discussed in my previous article this month. However, the method is applicable to any strategy with optimizable parameters. The strategy in question outputs a probability that the next movement will be positive or negative:

If you have a strategy that works in the way of the drawing above, you’ll soon notice that keeping a position open at all times isn’t the best idea. A better plan might be to open a position only when the probability of a rise is either very high or very low. Then you will notice that the optimal threshold probability at which to start investing isn’t constant but changes with market conditions.
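As a minimal sketch of that threshold rule, in the spirit of the article's Java-style code: the method name and the threshold values below are illustrative assumptions, not taken from the strategy itself.

```java
// Hypothetical sketch: trade only when the predicted probability of a rise
// is extreme enough; stay out of the market in the uncertain middle zone.
public class ThresholdSignal {

    // Returns +1 (go long), -1 (go short), or 0 (stay out).
    static int signal(double pUp, double lower, double upper) {
        if (pUp >= upper) return 1;   // very likely rise: open a long position
        if (pUp <= lower) return -1;  // very likely fall: open a short position
        return 0;                     // uncertain: don't trade
    }

    public static void main(String[] args) {
        System.out.println(signal(0.85, 0.3, 0.7)); // 1
        System.out.println(signal(0.50, 0.3, 0.7)); // 0
        System.out.println(signal(0.10, 0.3, 0.7)); // -1
    }
}
```

The `lower` and `upper` thresholds are exactly the kind of parameters the rest of the article proposes to adapt on-line.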

Your own strategy will most likely have more, and very different, parameters, and they can all be adapted "on-line" (while the strategy is running).

A genetic algorithm to choose optimal parameters

The optimal parameters aren’t constant, so we’re going to try to approach them with a genetic algorithm.

This kind of algorithm is very simple to implement, which is one of its great strengths. The idea itself is very simple: evolution, where the strong survive and the weak die.

We start with a population of individuals, each holding a list of parameters (equivalent to genes), and we simulate the profit each would have generated if its parameters had been used. Since the latest profits are the ones we’re interested in, we use an exponential moving average over the profits.
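A sketch of such a profit EMA, so that recent performance dominates older performance: the class name and the smoothing factor `alpha` are my own hypothetical choices, not values from the article.

```java
// Exponentially weighted average of an individual's simulated profits.
// A higher alpha weights the newest profit more heavily.
public class ProfitEma {
    private final double alpha;        // weight of the newest profit, e.g. 0.1
    private double ema;
    private boolean initialized = false;

    ProfitEma(double alpha) { this.alpha = alpha; }

    // Update with the profit this parameter set would have made on the last step.
    double update(double profit) {
        ema = initialized ? alpha * profit + (1 - alpha) * ema : profit;
        initialized = true;
        return ema;
    }

    double value() { return ema; }

    public static void main(String[] args) {
        ProfitEma ema = new ProfitEma(0.5);
        System.out.println(ema.update(10.0)); // 10.0 (first value seeds the EMA)
        System.out.println(ema.update(0.0));  // 5.0
    }
}
```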

Every few steps, we choose three random individuals and replace the one with the worst profit EMA by a child of the other two. This keeps the population size stable.
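That selection-and-replacement step could be sketched as follows; the `Individual` fields and method names are hypothetical, not from the article.

```java
import java.util.Arrays;
import java.util.Random;

public class Tournament {
    static class Individual {
        double[] params;
        double profitEma;
        Individual(double[] params, double profitEma) {
            this.params = params;
            this.profitEma = profitEma;
        }
    }

    static final Random RNG = new Random();

    // Child of a and b: each parameter is a random blend of the parents',
    // plus a small random mutation to maintain diversity.
    static Individual crossover(Individual a, Individual b, double mutation) {
        double[] child = new double[a.params.length];
        for (int i = 0; i < child.length; i++) {
            double share = RNG.nextDouble(); // share inherited from parent a
            child[i] = a.params[i] * share + b.params[i] * (1 - share)
                     + (RNG.nextDouble() - 0.5) * mutation;
        }
        return new Individual(child, 0.0); // the child's profit EMA starts fresh
    }

    // Pick three distinct individuals; replace the one with the worst
    // profit EMA by a child of the other two. Population size is unchanged.
    static void replaceWorstOfThree(Individual[] pop, double mutation) {
        Integer[] idx = new Integer[3];
        idx[0] = RNG.nextInt(pop.length);
        do { idx[1] = RNG.nextInt(pop.length); } while (idx[1].equals(idx[0]));
        do { idx[2] = RNG.nextInt(pop.length); } while (idx[2].equals(idx[0]) || idx[2].equals(idx[1]));
        // Sort descending by profit EMA, so idx[2] points at the worst.
        Arrays.sort(idx, (x, y) -> Double.compare(pop[y].profitEma, pop[x].profitEma));
        pop[idx[2]] = crossover(pop[idx[0]], pop[idx[1]], mutation);
    }

    public static void main(String[] args) {
        Individual[] pop = {
            new Individual(new double[]{1.0}, 5.0),
            new Individual(new double[]{2.0}, -1.0),
            new Individual(new double[]{3.0}, 3.0)
        };
        replaceWorstOfThree(pop, 0.0);
        // With a population of three, all three are picked, so the worst
        // (EMA -1.0) is always the one replaced by a fresh child.
        System.out.println(pop[1].profitEma); // 0.0
    }
}
```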

Finally, on every step, the actual trading uses the parameters of the individual with the best profit EMA so far.

Children are generated from two individuals, A and B: for each parameter, a random share is inherited from A and the rest from B. We also add a certain amount of random mutation, which is necessary to maintain diversity within the population.

Reproduction in Java-style code would look something like this:

for (int i = 0; i < numParameters; i++) {
    double fatherShare = Math.random(); // random number between 0 and 1
    child.setParameter(i, indivA.getParameter(i) * fatherShare
                        + indivB.getParameter(i) * (1 - fatherShare) + µ);
    // µ stands for the random mutation
}

One may wonder whether the genetic algorithm is really necessary: couldn’t we simply generate a wide and varied population of parameter sets, never change it, and on each step choose the one with the best profit EMA? The answer is no, because the “best” parameter set is often just a lucky one, in which case its results are unlikely to repeat.

Our population of parameter sets evolves slowly with most individuals very similar to one another but with enough diversity for some flexibility.

Results with the LSTM algorithm

As in my last article, I compare the strategy to one that hedges every position: it invests both ways on every step and is therefore only subject to trading costs, spread and commission.

I then compare this strategy with the LSTM strategy from my last article:

Not only does the use of evolutionary parameters improve the results, it does so while increasing the number of trades. Adapting strategy parameters live is a definite improvement, and it could (and should) be used with any strategy.


There are a couple of minor drawbacks to the evolutionary-parameters method: the parameters take a while to settle in, and the method has a few parameters of its own, such as population size and mutation rate.

The first problem requires that the strategy be pre-trained before it is run live: the strategy is run in the historical tester for a couple of months up to the most recent data, everything is saved, and it can then be run live. This is already the case for the strategy that uses an LSTM network.

The population size can be chosen by running the strategy in the historical tester with different values and selecting the best one.
Alternatively, one may simulate population dynamics, but this is very difficult to control because the population size can fall to zero or grow exponentially and then require too much computation. Take my word for it: it’s not worth the trouble. Choose an acceptable population size, not too big, and be done with it.

The mutation rate can simply be a parameter held by the individuals themselves. The mutation rate then evolves with a certain delay, but it works well enough.
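A sketch of this self-adaptive mutation rate, treating the rate as one more gene that gets blended and mutated like any other parameter: the blending rule and the ±10% drift constant below are my own assumptions.

```java
import java.util.Random;

// Hypothetical self-adaptive mutation rate: each individual carries its own
// rate, and a child's rate is a blend of its parents' rates, itself mutated.
public class AdaptiveMutation {
    static final Random RNG = new Random();

    static double childMutationRate(double rateA, double rateB) {
        double share = RNG.nextDouble();                 // share from parent A
        double rate = rateA * share + rateB * (1 - share);
        rate *= 1.0 + (RNG.nextDouble() - 0.5) * 0.2;    // up to +/-10% random drift
        return Math.max(rate, 1e-6);                     // keep it strictly positive
    }

    public static void main(String[] args) {
        double rate = childMutationRate(0.1, 0.2);
        System.out.println(rate > 0); // true
    }
}
```

Keeping the rate strictly positive matters: if it ever reached zero, the population could no longer generate new diversity.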


The evolutionary/genetic algorithm is a very elegant piece of code, but the truth is that there are many other ways to do what it does. The main point is that strategy parameters shouldn’t be fixed, because market conditions change all the time depending on the hour of the day, the day of the week, news events, etc.

It seems very complicated to design a strategy that would be prepared for every eventuality. A much better solution is to adapt the strategy on the fly with the latest data available.

And as a follow-up to my last article, this method makes the LSTM strategy theoretically profitable, albeit barely, when used with the commissions that go with the largest accounts on Dukascopy. That’s obviously not good enough, but I remain confident that further improvements can be made.

Thanks for reading,

Emeric Beaufays