I'm almost sure this rumor is wrong, at least for the current implementation of the BaseTarget-changing algorithm. The reason is that the (normalized) BaseTarget can approach 0 rather easily. Then, for an index *i* where this happens, the term *1/baseTarget*_{i}^{2} would be so large that it over-weights all other terms. That is, a "bad" alternative chain would still have a decent chance to win over the "good" one.
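To illustrate the point, here is a minimal sketch with made-up numbers (the chains and baseTarget values are hypothetical, not taken from the real algorithm): a single near-zero baseTarget on an otherwise unremarkable chain dominates the whole *sum(1/baseTarget_i^2)* weight.

```python
# Hypothetical illustration: one near-zero baseTarget term dominates
# the cumulative weight sum(1/baseTarget_i^2). All values are made up.
good_chain = [100.0] * 100            # steady "good" baseTargets
bad_chain = [100.0] * 99 + [0.01]     # one near-zero baseTarget on the "bad" chain

def weight(chain):
    # Proposed weight: sum of 1/baseTarget^2 over the chain's blocks.
    return sum(1.0 / bt ** 2 for bt in chain)

# The single 1/0.01^2 = 10^4 term outweighs the good chain's total
# (100 * 1/100^2 = 0.01) by a factor of about a million.
print(weight(good_chain))
print(weight(bad_chain))
```

The same effect occurs, less sharply, with the exponent 1 instead of 2.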

**By the way, the current algo has the same deficiency, although to a lesser extent.** Imposing a reasonable lower limit on BaseTarget would solve this issue.

Shouldn't we get rid of this deficiency by using *1/SQRT(baseTarget)* instead of *1/baseTarget* then? If the answer is positive, then we could "inductively" arrive at *1/baseTarget*^{0} (aka 1/1).

Well, if one starts using *1/baseTarget*^{a} with *a < 1*, then you decrease this effect, but, in compensation, the expected values of *1/baseTarget*^{a} would be closer for "good" and "bad" branches, which makes them more difficult to distinguish (as you pointed out, in the limit *a → 0* they become indistinguishable). Yes, one should be able to optimize in *a*, but why do that if one can get rid of that (former) effect for good, just by introducing a reasonable lower limit for the baseTarget?

EDIT: this "optimization in *a*" would not work well, since the "optimal" value of *a* will depend on other parameters (the stake of the "bad guy", ...) and so there is no "universal" optimal value for *a*. Well, one more reason to introduce a lower limit for the baseTarget...