
Author Topic: Fixing the blocktimes  (Read 12254 times)

mthcl

  • Hero Member
  • *****
  • Karma: +96/-8
  • Offline Offline
  • Posts: 562
    • View Profile
Fixing the blocktimes
« on: August 29, 2015, 10:37:07 pm »

Well, I think it's really time to solve this problem. What I propose is to adopt a (strongly) modified version of the algorithm described here: https://nxtforum.org/proof-of-stake-algorithm/basetarget-adjustment-algorithm/. Namely:

1. Introduce the interval of values the BaseTarget may assume. Say, [90%; 3000%].

2. BaseTarget changes only at blocks that are multiple of 10.

3. Let T be the mean blocktime of the last 10 blocks (in minutes). Then:

 3.1 If T < 0.9, set New_BaseTarget = Old_BaseTarget * 0.93
 3.2 If T > 1.1, set New_BaseTarget = Old_BaseTarget * 1.1
 3.3 If 1 < T <= 1.1, set New_BaseTarget = Old_BaseTarget * T
 3.4 If 0.9 <= T < 1, set New_BaseTarget = Old_BaseTarget * (1 - 0.7*(1-T))

(If T = 1 exactly, the BaseTarget stays unchanged. Note the rule is continuous at the boundaries: at T = 0.9 both 3.1 and 3.4 give the factor 0.93, and at T = 1.1 both 3.2 and 3.3 give 1.1.)

Of course, if New_BaseTarget would fall outside the above interval, just clamp it to the nearest limiting value.

The exact values can be changed, of course, but I think it should stay along these lines: we shouldn't allow the BaseTarget to change very quickly, while still allowing it to adjust. When doing the hard fork, set Initial_BaseTarget = 300%, say.

An algorithm like this should solve the problem of large blocktimes for good. It is also more secure than the current one, precisely because the blocktimes will become more "concentrated" (i.e., their variance will decrease).
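To make the rule concrete, here is a minimal C sketch of the update (my own illustration, untested; the constants, the [90%; 3000%] clamp, and the 300% starting value come from the proposal above, while the function name and the percent representation of the BaseTarget are assumptions):
Code: [Select]
#include <stdio.h>

/* BaseTarget expressed as a percentage of the genesis value. */
#define BT_MIN    90.0   /* lower limit:   90% */
#define BT_MAX  3000.0   /* upper limit: 3000% */

/* T = mean blocktime of the last 10 blocks, in minutes. */
static double adjust_base_target(double old_bt, double T)
{
    double new_bt;

    if (T < 0.9)
        new_bt = old_bt * 0.93;                      /* blocks too fast: shrink */
    else if (T > 1.1)
        new_bt = old_bt * 1.1;                       /* blocks too slow: capped growth */
    else if (T > 1.0)
        new_bt = old_bt * T;                         /* slightly slow: mild growth */
    else
        new_bt = old_bt * (1.0 - 0.7 * (1.0 - T));   /* slightly fast: damped shrink */

    /* Clamp to the allowed interval. */
    if (new_bt < BT_MIN) new_bt = BT_MIN;
    if (new_bt > BT_MAX) new_bt = BT_MAX;
    return new_bt;
}

int main(void)
{
    double bt = 300.0;                  /* proposed post-fork starting value */
    bt = adjust_base_target(bt, 1.35);  /* slow stretch: 300 * 1.1   = 330.00 */
    printf("%.2f\n", bt);
    bt = adjust_base_target(bt, 0.95);  /* fast stretch: 330 * 0.965 = 318.45 */
    printf("%.2f\n", bt);
    return 0;
}
Note that T = 1 leaves the BaseTarget unchanged, and the final clamp guarantees the value never leaves the allowed interval.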
Logged

jones

  • Hero Member
  • *****
  • Karma: +310/-8
  • Offline Offline
  • Posts: 1043
  • write code not war
    • View Profile
    • jNxt
Re: Fixing the blocktimes
« Reply #1 on: August 30, 2015, 07:40:31 am »

Quote from: mthcl on August 29, 2015, 10:37:07 pm

Sounds good to me; a change like this is overdue. But I still have a few remarks/questions.

1. In rule 3.4, why do we multiply (1-T) by 0.7? That seems like a rather arbitrary value. Is there any particular reason for that number, or are we just generally making the BaseTarget scale more slowly when it is being lowered?

2. So this new algorithm would make blockchain transactions more secure, because more blocks are able to pile on top of them in a much more time-efficient manner. But should we be worried at all about increased forking due to a higher block concentration?
Logged
-- Jones NXT-RJU8-JSNR-H9J4-2KWKY

mthcl

  • Hero Member
  • *****
  • Karma: +96/-8
  • Offline Offline
  • Posts: 562
    • View Profile
Re: Fixing the blocktimes
« Reply #2 on: August 30, 2015, 12:09:29 pm »

Quote from: jones on August 30, 2015, 07:40:31 am
Sounds good to me; a change like this is overdue. But I still have a few remarks/questions.

1. In rule 3.4, why do we multiply (1-T) by 0.7? That seems like a rather arbitrary value. Is there any particular reason for that number, or are we just generally making the BaseTarget scale more slowly when it is being lowered?

2. So this new algorithm would make blockchain transactions more secure, because more blocks are able to pile on top of them in a much more time-efficient manner. But should we be worried at all about increased forking due to a higher block concentration?

1. That's because there is generally an asymmetry between increasing and decreasing the BT; it is explained better in the topic I cited in the first post. But we could change 0.7 to 0.85, why not?.. :) I think the algorithm will work nicely for any reasonable choice.

2. Very fast blocks were also happening because of the overly large fluctuations of the BT. Of course, with this algorithm they would still occur occasionally (the Exponential distribution is asymmetric), but "series of fast blocks" (which, I guess, are what cause forking) will become much more unlikely.
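To make that "concentration" claim testable before any hard fork, here is a rough Monte Carlo sketch (entirely my own construction, not part of the proposal): it draws each blocktime from an Exponential distribution whose mean is governed by the current BaseTarget (assuming, purely for illustration, that 100% corresponds to 1-minute blocks), retargets every 10 blocks with the rule from the first post, and prints the mean and standard deviation of the simulated blocktimes, so that different rule variants can be compared:
Code: [Select]
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Hypothetical model: blocktime ~ Exponential with mean (100 / bt) minutes. */
static double exp_blocktime(double bt)
{
    double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);  /* u in (0,1) */
    return -log(u) * (100.0 / bt);
}

/* The retargeting rule from the first post, with the [90; 3000] clamp. */
static double adjust_base_target(double old_bt, double T)
{
    double nb;
    if      (T < 0.9) nb = old_bt * 0.93;
    else if (T > 1.1) nb = old_bt * 1.1;
    else if (T > 1.0) nb = old_bt * T;
    else              nb = old_bt * (1.0 - 0.7 * (1.0 - T));
    if (nb <   90.0) nb =   90.0;
    if (nb > 3000.0) nb = 3000.0;
    return nb;
}

int main(void)
{
    const int N = 100000;
    double bt = 300.0, sum = 0.0, sumsq = 0.0, window = 0.0;
    srand(12345);
    for (int i = 1; i <= N; i++) {
        double t = exp_blocktime(bt);
        sum += t; sumsq += t * t; window += t;
        if (i % 10 == 0) {                    /* retarget every 10 blocks */
            bt = adjust_base_target(bt, window / 10.0);
            window = 0.0;
        }
    }
    double mean = sum / N;
    printf("mean %.3f min, stddev %.3f min\n", mean, sqrt(sumsq / N - mean * mean));
    return 0;
}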
« Last Edit: August 30, 2015, 03:56:18 pm by mthcl »
Logged

LibertyNow

  • Sr. Member
  • ****
  • Karma: +78/-33
  • Offline Offline
  • Posts: 361
    • View Profile
    • Liquid Tech
Re: Fixing the blocktimes
« Reply #3 on: September 04, 2015, 08:45:52 pm »

@mthcl

Thanks for starting the new thread here and putting some details on this change!  Which programmers do we bug now?  What's the approval process, or does Jean-Luc just put it into the next release?

I'm sure this is posted somewhere, but a refresher would be great for myself and others who aren't aware of how the development process works.

LibertyNow
Logged
LQD NXT:17750387231635486778, EDGE NXT: 10713749908351947210
LQD: http://www.liquidtech.info, EDGE: https://www.bustabit.com/user/EDGE_NXTAE

Tosch110

  • Ex-Staff Member
  • Hero Member
  • *****
  • Karma: +211/-18
  • Offline Offline
  • Posts: 2365
    • View Profile
Re: Fixing the blocktimes
« Reply #4 on: September 04, 2015, 11:19:54 pm »

Good to tackle this again! I would like to see those changes implemented.

Would this also influence who is chosen to generate new blocks?
For example:
- Would blocks go more frequently to big accounts, or to small accounts?
- Or could accounts that freshly join the network no longer be taken into account; or maybe the opposite, could they create the next block more easily?

Just curious what might happen with the change of the baseTarget.

Jack Needles

  • Full Member
  • ***
  • Karma: +25/-2
  • Offline Offline
  • Posts: 149
    • View Profile
Re: Fixing the blocktimes
« Reply #5 on: September 05, 2015, 12:51:14 am »

.
« Last Edit: January 16, 2016, 12:33:37 am by Jack Needles »
Logged

mthcl

  • Hero Member
  • *****
  • Karma: +96/-8
  • Offline Offline
  • Posts: 562
    • View Profile
Re: Fixing the blocktimes
« Reply #6 on: September 07, 2015, 03:58:05 pm »

Quote from: Tosch110 on September 04, 2015, 11:19:54 pm
Would this also influence who is chosen to generate new blocks?
No, changes in the value of the baseTarget don't influence who is chosen to generate new blocks at all; only the blocktimes change.
Logged

mthcl

  • Hero Member
  • *****
  • Karma: +96/-8
  • Offline Offline
  • Posts: 562
    • View Profile
Re: Fixing the blocktimes
« Reply #7 on: September 07, 2015, 04:04:53 pm »

Quote
Nice initiative!

Though I'm not sure about the seemingly complex and arbitrary logic when something much simpler would suffice.

I'd give serious consideration to an Exponential Weighted Moving Average (EWMA), also known as an Infinite Impulse Response (IIR) filter in EE-speak.

In a nutshell, each new input is "weighted" with a 1/2^N value, while the previous average has a (2^N-1)/2^N weight.  This particular algorithm is very CPU friendly, since only one computation is made for each new input, and since everything is a power of 2, the multiplications can be replaced with bit shifts.

N is chosen to provide a suitable time constant to balance 'response quickness' vs. 'signal stability'.

As a first try for this purpose, I'd look at the 1/16 (N=4), 1/32 (N=5), and 1/64 (N=6) factors for study.

Example C implementation from Linux source:
Code: [Select]
/*
 * lib/average.c
 *
 * This source code is licensed under the GNU General Public License,
 * Version 2.  See the file COPYING for more details.
 */

#include <linux/export.h>
#include <linux/average.h>
#include <linux/kernel.h>
#include <linux/bug.h>
#include <linux/log2.h>

/**
 * DOC: Exponentially Weighted Moving Average (EWMA)
 *
 * These are generic functions for calculating Exponentially Weighted Moving
 * Averages (EWMA). We keep a structure with the EWMA parameters and a scaled
 * up internal representation of the average value to prevent rounding errors.
 * The factor for scaling up and the exponential weight (or decay rate) have to
 * be specified through the init function. The structure should not be accessed
 * directly but only through the helper functions.
 */

/**
 * ewma_init() - Initialize EWMA parameters
 * @avg: Average structure
 * @factor: Factor to use for the scaled up internal value. The maximum value
 *      of averages can be ULONG_MAX/(factor*weight). For performance reasons
 *      factor has to be a power of 2.
 * @weight: Exponential weight, or decay rate. This defines how fast the
 *      influence of older values decreases. For performance reasons weight has
 *      to be a power of 2.
 *
 * Initialize the EWMA parameters for a given struct ewma @avg.
 */
void ewma_init(struct ewma *avg, unsigned long factor, unsigned long weight)
{
        WARN_ON(!is_power_of_2(weight) || !is_power_of_2(factor));

        avg->weight = ilog2(weight);
        avg->factor = ilog2(factor);
        avg->internal = 0;
}
EXPORT_SYMBOL(ewma_init);

/**
 * ewma_add() - Exponentially weighted moving average (EWMA)
 * @avg: Average structure
 * @val: Current value
 *
 * Add a sample to the average.
 */
struct ewma *ewma_add(struct ewma *avg, unsigned long val)
{
        unsigned long internal = ACCESS_ONCE(avg->internal);

        ACCESS_ONCE(avg->internal) = internal ?
                (((internal << avg->weight) - internal) +
                        (val << avg->factor)) >> avg->weight :
                (val << avg->factor);
        return avg;
}
EXPORT_SYMBOL(ewma_add);

Sure, this can be used as well (I agree that it's better than just taking the simple average). What is really important are the following points:

1. There are upper and (most importantly) lower limits for the value of BT.

2. BT is allowed to change, to adapt to changes in active balance.

3. BT is only allowed to change SLOWLY!  This will greatly reduce the fluctuations of the blocktimes, and therefore improve the security.
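To tie this back to the EWMA code quoted above, here is a self-contained userspace sketch (my own illustration in the spirit of lib/average.c, untested; the N=5 weight, the fixed-point scaling, and all names are assumptions, not Nxt code) that smooths observed blocktimes with bit-shift arithmetic and clamps the resulting BT to hard limits:
Code: [Select]
#include <stdio.h>

#define BT_MIN        90UL  /* lower BaseTarget limit, in % */
#define BT_MAX      3000UL  /* upper BaseTarget limit, in % */
#define WEIGHT_SHIFT 5      /* N = 5: each new sample has weight 1/32 */
#define SCALE_SHIFT  10     /* fixed-point scaling to limit rounding error */

/* EWMA state of the mean blocktime (seconds), scaled by 2^SCALE_SHIFT
 * and seeded at the 60-second target. */
static unsigned long avg_blocktime = 60UL << SCALE_SHIFT;

/* Fold one observed blocktime into the average:
 * avg = (31*avg + sample) / 32, done with shifts as in lib/average.c. */
static void ewma_add_blocktime(unsigned long seconds)
{
    avg_blocktime = ((avg_blocktime << WEIGHT_SHIFT) - avg_blocktime
                     + (seconds << SCALE_SHIFT)) >> WEIGHT_SHIFT;
}

/* Retarget: scale the BT by (smoothed blocktime / 60 s), then clamp. */
static unsigned long retarget(unsigned long old_bt)
{
    unsigned long avg_s  = avg_blocktime >> SCALE_SHIFT;
    unsigned long new_bt = old_bt * avg_s / 60;
    if (new_bt < BT_MIN) new_bt = BT_MIN;
    if (new_bt > BT_MAX) new_bt = BT_MAX;
    return new_bt;
}

int main(void)
{
    unsigned long bt = 300;  /* proposed post-fork starting value, in % */
    for (int i = 0; i < 10; i++)
        ewma_add_blocktime(90);       /* a run of slow (90 s) blocks */
    bt = retarget(bt);
    printf("BaseTarget after 10 slow blocks: %lu%%\n", bt);
    return 0;
}
With a 1/32 weight, a single outlier blocktime moves the average by only about 3%, which is exactly the "change SLOWLY" property of point 3.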
Logged

KarlKarlsson

  • Hero Member
  • *****
  • Karma: +79/-25
  • Offline Offline
  • Posts: 711
    • View Profile
Re: Fixing the blocktimes
« Reply #8 on: September 09, 2015, 07:20:39 am »

So, what did Jean-Luc say about the implementation of this? With the release of 1.6?
Logged
NXTinfo.org - Your toolbox to become an Asset Expert! | Twitter | Facebook | ZapChain

mthcl

  • Hero Member
  • *****
  • Karma: +96/-8
  • Offline Offline
  • Posts: 562
    • View Profile
Re: Fixing the blocktimes
« Reply #9 on: September 09, 2015, 08:37:05 pm »

Quote from: KarlKarlsson on September 09, 2015, 07:20:39 am
So, what did Jean-Luc say about the implementation of this? With the release of 1.6?
He said that he plans to include this in 1.7, but that is at least a few months from now. In the meantime, it would be very good if someone did the simulations to see how such a system would work.

So. Are there people interested in doing simulations? Are there whales willing to offer bounties for that?
Logged

LibertyNow

  • Sr. Member
  • ****
  • Karma: +78/-33
  • Offline Offline
  • Posts: 361
    • View Profile
    • Liquid Tech
Re: Fixing the blocktimes
« Reply #10 on: September 09, 2015, 09:38:10 pm »

Quote from: mthcl on September 09, 2015, 08:37:05 pm
Are there people interested in doing simulations? Are there whales willing to offer bounties for that?

@jl777 Are there any Destroyer funds left?  This seems like a worthwhile use for them.
Logged
LQD NXT:17750387231635486778, EDGE NXT: 10713749908351947210
LQD: http://www.liquidtech.info, EDGE: https://www.bustabit.com/user/EDGE_NXTAE

jl777

  • Hero Member
  • *****
  • Karma: +718/-123
  • Offline Offline
  • Posts: 6170
    • View Profile
Re: Fixing the blocktimes
« Reply #11 on: September 10, 2015, 05:55:55 am »

Quote from: LibertyNow on September 09, 2015, 09:38:10 pm
@jl777 Are there any Destroyer funds left?  This seems like a worthwhile use for them.
~80,000 NXT left, let me know how much is needed.
Logged
There are over 1000 people in SuperNET slack! http://slackinvite.supernet.org/ automatically sends you an invite

I am just a simple C programmer

Jack Needles

  • Full Member
  • ***
  • Karma: +25/-2
  • Offline Offline
  • Posts: 149
    • View Profile
Re: Fixing the blocktimes
« Reply #12 on: September 10, 2015, 07:14:13 pm »

.
« Last Edit: January 16, 2016, 12:33:19 am by Jack Needles »
Logged

mthcl

  • Hero Member
  • *****
  • Karma: +96/-8
  • Offline Offline
  • Posts: 562
    • View Profile
Re: Fixing the blocktimes
« Reply #13 on: September 10, 2015, 07:51:14 pm »

Quote
I'm no expert on fork selection, but having limits on Base Target might open the door to a lower-forging-balance fork having more weight than a higher-forging-balance (ie: main) fork...

Could you please elaborate? How can such a thing happen, and what do the limits on the BaseTarget have to do with it?
Logged

Jack Needles

  • Full Member
  • ***
  • Karma: +25/-2
  • Offline Offline
  • Posts: 149
    • View Profile
Re: Fixing the blocktimes
« Reply #14 on: September 10, 2015, 08:23:47 pm »

.
« Last Edit: January 16, 2016, 12:32:59 am by Jack Needles »
Logged

mthcl

  • Hero Member
  • *****
  • Karma: +96/-8
  • Offline Offline
  • Posts: 562
    • View Profile
Re: Fixing the blocktimes
« Reply #15 on: September 10, 2015, 08:52:46 pm »

Quote
I'm no expert on fork selection, but having limits on Base Target might open the door to a lower-forging-balance fork having more weight than a higher-forging-balance (ie: main) fork...

Could you please elaborate? How can such a thing happen, and what do the limits on the BaseTarget have to do with it?

It is my understanding that Base Target feeds into difficulty, which is used in branch selection.

From https://wiki.nxtcrypto.org/wiki/Whitepaper:Nxt#Base_Target_Value
Quote
The cumulative difficulty value is derived from the base target value

If limits are implemented, it seems far more likely that the limit(s) would be reached on undesirable forks instead of the main chain.  How this would ultimately affect cumulative difficulty and branch selection is unknown to me.  But I do think it is an effect that should be considered.
Well, this has to do with the upper limit on the BT, not the lower one. That upper limit would be "critical" only if the active balance is very low. I remember CfB told me some reason to have an upper limit, but I confess I don't remember it anymore (it had something to do with boundary conditions and discretization).

Anyhow, those long blocktimes have to do with the (absence of a) lower limit on the BT. There is a natural difference between PoW and PoS: the inverse hashing power has no limit on either side, while the inverse active balance is not limited from above, but is limited from below. So it's very natural to have a lower limit on the BT - since (ideally) it's proportional to the inverse active balance.
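For readers following the fork-weight concern: if I recall the whitepaper formula correctly, each block adds 2^64 / BaseTarget to the cumulative difficulty, so a quick sketch (my own, with purely illustrative numbers, treating 100% as the genesis BaseTarget) shows why a weak fork pinned at the upper BT limit accumulates chain weight much more slowly than the main chain:
Code: [Select]
#include <stdio.h>

/* Assumed whitepaper rule: each block contributes 2^64 / baseTarget
 * to the chain's cumulative difficulty (doubles used for readability;
 * a real implementation would need big-integer arithmetic). */
static double difficulty_gain(double base_target_pct)
{
    double base_target = base_target_pct / 100.0;   /* 100% = genesis value */
    return 18446744073709551616.0 /* 2^64 */ / base_target;
}

int main(void)
{
    /* A main chain near 100% gains 30x more weight per block than a
     * weak fork stuck at the 3000% cap. */
    printf("main chain, BT  100%%: +%.3e per block\n", difficulty_gain(100.0));
    printf("weak fork,  BT 3000%%: +%.3e per block\n", difficulty_gain(3000.0));
    return 0;
}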
« Last Edit: September 10, 2015, 09:32:31 pm by mthcl »
Logged

LibertyNow

  • Sr. Member
  • ****
  • Karma: +78/-33
  • Offline Offline
  • Posts: 361
    • View Profile
    • Liquid Tech
Re: Fixing the blocktimes
« Reply #16 on: September 21, 2015, 01:47:48 pm »

Quote from: mthcl on September 10, 2015, 08:52:46 pm
So it's very natural to have a lower limit on the BT - since (ideally) it's proportional to the inverse active balance.

@Jean-Luc Is this cool? Can we add a lower limit in the next release?  Is anybody willing to do the simulation for some NXT from the Destroyer fund? Say, 25k?
Logged
LQD NXT:17750387231635486778, EDGE NXT: 10713749908351947210
LQD: http://www.liquidtech.info, EDGE: https://www.bustabit.com/user/EDGE_NXTAE

Jean-Luc

  • Core Dev
  • Hero Member
  • *****
  • Karma: +816/-81
  • Offline Offline
  • Posts: 1610
    • View Profile
Re: Fixing the blocktimes
« Reply #17 on: September 21, 2015, 03:59:20 pm »

Yes, we will do something about limiting the range of block times in the next hard fork.
Logged
GPG key fingerprint: 263A 9EB0 29CF C77A 3D06  FD13 811D 6940 E1E4 240C
NXT-X4LF-9A4G-WN9Z-2R322

KarlKarlsson

  • Hero Member
  • *****
  • Karma: +79/-25
  • Offline Offline
  • Posts: 711
    • View Profile
Re: Fixing the blocktimes
« Reply #18 on: September 21, 2015, 04:42:11 pm »


Quote from: Jean-Luc on September 21, 2015, 03:59:20 pm
Yes, we will do something about limiting the range of block times in the next hard fork.
Very nice. Appreciated, as always!


Sent from my iPhone using Tapatalk
Logged
NXTinfo.org - Your toolbox to become an Asset Expert! | Twitter | Facebook | ZapChain

abctc

  • Hero Member
  • *****
  • Karma: +148/-13
  • Offline Offline
  • Posts: 1396
    • View Profile
Re: Fixing the blocktimes
« Reply #19 on: September 21, 2015, 07:24:15 pm »

Quote from: Jean-Luc on September 21, 2015, 03:59:20 pm
Yes, we will do something about limiting the range of block times in the next hard fork.
- thank you so much!
Logged
Welcome to the Nxt generation of crypto!   Magis quam Moneta (More than a Coin)
"Do not worry, it is an attack" (c) Jean-Luc