Well, I think it's really time to solve this problem. What I propose is to adopt a (strongly) modified version of the algorithm described here:

https://nxtforum.org/proof-of-stake-algorithm/basetarget-adjustment-algorithm/. Namely:

1. Introduce the interval of values the BaseTarget may assume. Say, [90%; 3000%].

2. BaseTarget changes only at blocks that are multiple of 10.

3. Let T be the mean blocktime of last 10 blocks (in minutes). Then

3.1 If T < 0.9, set New_BaseTarget = Old_BaseTarget * 0.93

3.2 If T > 1.1, set New_BaseTarget = Old_BaseTarget * 1.1

3.3 If 1 <= T <= 1.1, set New_BaseTarget = Old_BaseTarget * T

3.4 If 0.9 <= T < 1, set New_BaseTarget = Old_BaseTarget * (1 - 0.7 * (1 - T))

Of course, if New_BaseTarget would fall outside the above interval, just clamp it to the nearest limit.
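To make the rules above concrete, here is a minimal sketch in Python. The function name, the representation of BaseTarget as a fraction of the initial target, and the height-based trigger are my assumptions for illustration; the multipliers, interval, and 10-block cadence are taken from the proposal.

```python
# Sketch of the proposed BaseTarget adjustment (names are assumptions,
# constants are from the proposal). BaseTarget is expressed as a
# fraction of the genesis target, so 90% = 0.90 and 3000% = 30.0.

MIN_BASE_TARGET = 0.90
MAX_BASE_TARGET = 30.0

def adjust_base_target(old_base_target, mean_blocktime, block_height):
    """Return the new BaseTarget given the mean blocktime T (in minutes)
    of the last 10 blocks. Adjusts only at heights that are multiples of 10."""
    if block_height % 10 != 0:
        return old_base_target
    T = mean_blocktime
    if T < 0.9:
        new = old_base_target * 0.93                  # much too fast: cut target
    elif T > 1.1:
        new = old_base_target * 1.1                   # much too slow: raise target
    elif T >= 1.0:
        new = old_base_target * T                     # slightly slow: raise gently
    else:
        new = old_base_target * (1 - 0.7 * (1 - T))   # slightly fast: cut gently
    # Clamp to the allowed interval [90%, 3000%].
    return max(MIN_BASE_TARGET, min(MAX_BASE_TARGET, new))
```

For example, with BaseTarget at 300% (3.0) and a mean blocktime of 1.05 minutes, the target at height 20 moves to 3.0 * 1.05 = 3.15; at height 21 nothing happens.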

The above values can be tuned, of course, but I think the design should stay along these lines: we shouldn't allow the BaseTarget to change very quickly, while still allowing it to adjust. When doing the hard fork, set Initial_BaseTarget = 300%, say.

An algorithm like this should solve the problem of large blocktimes for good. It is also more secure than the current one, precisely because the blocktime will become more "concentrated" (i.e., its variance will decrease).