Show Posts - mthcl

Messages - mthcl

21
Transparent Forging / Re: Transparent Forging - the latest info?
« on: November 09, 2015, 09:56:01 pm »
The remaining pieces of transparent forging, such as prediction of the next forger and blacklisting of those who miss their turns, will be implemented, if really necessary, after the issue of full blockchain pruning is solved, which will be after 2.0.

For now, the improvement in making the block times more regular and avoiding long blocks, which is already implemented via the improved base target adjustment algorithm and will take effect at the 1.7 hardfork, is sufficient; with the number of transactions we have, there is room to grow before an increase in possible transaction throughput is really needed.

I second what JLP said here.

Now, I would say that there is no such thing as "the" transparent forging. Of course, the idea is that the next forgers (at least, several of them) are known in advance, which would make several things (such as instant transactions) possible; but this is partially true even for the current algorithm. The details of transparent forging are still awaiting discussion; anyhow, as stated above, that's not urgent. One possible proposal is in my "Math of Nxt forging" paper. Also, CfB's very old posts on this are probably outdated; I remember we discussed one TF proposal with him, and that proposal turned out to be unsafe.

22
Nxt General Discussion / Re: IOTA + Nxt
« on: October 29, 2015, 02:39:44 pm »
It seems we have a different opinion here:  https://simtalk.org:444/index.php?topic=134.0     :o

24
No one knows who owns the account with 50M?

I would be surprised if it is not BCNext.

Edit: Who would hold that much NXT for so long, if not him?
It is not BCNext, almost surely.
Why do you think that?

Because I have some information supporting this claim   :)

25
No one knows who owns the account with 50M?

I would be surprised if it is not BCNext.

Edit: Who would hold that much NXT for so long, if not him?
It is not BCNext, almost surely.

26
Nxt Improvement Proposals / Re: Fixing the blocktimes
« on: October 12, 2015, 05:09:56 pm »
Curious to know the longest block of an average day.

So am I.

Also, by making 1000 NXT required, doesn't that make any N@S attack cost 1000x more? It seems that this will create some trolling that it is unfair that small NXT accounts can't forge. Requiring ~$10 to forge discriminates against the people who can't afford that much.
I think this doesn't have to do with N@S, at least for now (while there is no penalty for non-forging). But any unfairness argument of that sort would be just ridiculous: how many BTC can one mine even with a $1,000 investment, without joining a mining pool?..

27
Nxt Improvement Proposals / Re: Fixing the blocktimes
« on: October 05, 2015, 11:43:32 pm »
If someone can provide an executable that takes the blockchain path and other parameters as string input, I can use an optimizer that will try to tune the parameters for minimal variance and so on.

The problem is that I don't know Java, and all I need for this is an executable that takes the various parameters as input; the program would also need to write the results to a text file whose name is given as an input parameter. No fancy output is required.

In other words, the JL program would need to be modified to act as a function that takes input and gives output.

If you have any questions, please ask.

I don't think there is really a need for optimization; the algorithm with JLP's parameters already works very well, and, in any case, life is different from math models.  Also, there is not much room for optimization, because of the following: if the stake is constant and the BT does not change at all, then the standard deviation of the blocktime would be 60 seconds as well (this is just a property of the Exponential distribution: its expectation equals its standard deviation). The standard deviation can probably be forced down just a little more (because the BT adjustment algorithm provides negative feedback), but this won't much influence the local picture of the sequence of blocktimes.
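For the record, here is the property being invoked, written out (a minimal derivation, assuming the blocktime $T$ is Exponentially distributed with rate $\lambda$):

$$\mathbb{E}[T]=\frac{1}{\lambda},\qquad \mathrm{Var}[T]=\frac{1}{\lambda^{2}},\qquad \mathrm{SD}[T]=\frac{1}{\lambda}=\mathbb{E}[T].$$

So with a 60-second mean blocktime, the standard deviation is also 60 seconds, even with perfectly constant stake and BT.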

So, I would vote for implementation of the algo with JLP's parameters.

28
Nxt Community News and Announcements / Re: [ANN] Jinn
« on: September 17, 2015, 10:16:21 pm »
...
Thoughts? Comments? CFB? Triangle?


29
General / Re: Preventing spam by forgers
« on: September 13, 2015, 01:34:54 pm »
...
and since it has 7 for the days of the week, 12 for the months in a year, and 16, which is a power of 2, along with 15, it has all the magic numbers needed for good luck.
...
Protest. All numbers are magic  :)

30
General / Re: Preventing spam by forgers
« on: September 13, 2015, 12:34:17 pm »

...

But also could:

- register 256 three-letter MS currencies (saving 6.4 million NXT in the process  :D)
  or
- register 256 assets (saving 256,000 NXT - not bad either)
  or
- create a fake (same-named) asset and start creating buy and sell orders, causing thousands of trades and making the asset appear to be the real deal (since most fake assets have only a small number of trades)

That's a valid point as well.  So, I vote for implementation of bcdev's proposal.

31
General / Re: Preventing spam by forgers
« on: September 12, 2015, 11:46:57 pm »
Give each forger the transaction fees belonging to the block 1440 blocks ago.
1) This scheme is bad because it removes the incentive to include transactions in blocks. A forger has to get some percentage of what they put in the block, preferably more than what they'd get from the next forger [a 60/40 scheme is much better than 50/50].
Also, it opens the way for all sorts of "forging games", e.g. of the following kind. Assume that I'm supposed to forge block N, block N-1440 is empty, and block N-1439 is full of transactions. I can calculate that if I forge N, then N+1 will not be mine; it will belong to account X.  Why should I forge block N at all, then? Let me not forge it, and see what happens.  Maybe I should blackmail X into paying me half of the fees of N-1439?..

32
General / Re: Preventing spam by forgers
« on: September 12, 2015, 03:04:42 pm »
I'd say go one step further. Give each forger the transaction fees belonging to the block 1440 blocks ago.
Simpler to implement it that way.
I think that it's a bad idea.

33
General / Re: Preventing spam by forgers
« on: September 12, 2015, 02:25:07 pm »
Right now a forger can create free transactions if he puts them in a block that he forged.
Since the max tx size is 1 kB, that means a forger can spam 250 kB in one block for free.

Solution:
Split block rewards between the last X forgers.
For example:
60/40 scheme - A forger forges block 1000; 60% of the tx reward goes to him, 40% to the forger of block 999.
After that, the forger of block 1001 gives 40% of his reward to the forger of 1000... And so on.
40/30/30 scheme - A forger forges block 1000; 40% of the tx reward goes to him, 30% to the forger of block 999, and 30% to the forger of block 998.
After that, the forger of block 1001 gives 30% of his reward to the forger of 1000 and 30% to the forger of 999... And so on.

Overall this scheme would disincentivize transaction spam, since even if you forge a block, you'd still pay a fraction of the tx fee [60/40: 0.4 NXT per tx; 40/30/30: 0.6 NXT per tx].

What do you think of this scheme?
I think that it's a good idea.
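A minimal sketch of the proposed split (illustrative Java; the split fractions are the ones from the quoted post, all names are hypothetical):

Code: [Select]
import java.util.Arrays;

public class FeeSplit {

    // Split fractions for the current forger and the previous forger(s):
    // {0.6, 0.4} for the 60/40 scheme, {0.4, 0.3, 0.3} for 40/30/30.
    static double[] payouts(double blockFees, double[] split) {
        double[] out = new double[split.length];
        for (int i = 0; i < split.length; i++) {
            out[i] = blockFees * split[i]; // out[0] = current forger, out[1] = previous, ...
        }
        return out;
    }

    public static void main(String[] args) {
        // A forger spamming his own block with a 1 NXT fee only gets his own share back,
        // so the net cost per spam tx is 1 - split[0]: 0.4 NXT (60/40) or 0.6 NXT (40/30/30).
        System.out.println(Arrays.toString(payouts(1.0, new double[]{0.6, 0.4})));
        System.out.println(Arrays.toString(payouts(1.0, new double[]{0.4, 0.3, 0.3})));
    }
}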

34
Nxt Improvement Proposals / Re: Fixing the blocktimes
« on: September 10, 2015, 08:52:46 pm »
Quote
I'm no expert on fork selection, but having limits on Base Target might open the door to a lower-forging-balance fork having more weight than a higher-forging-balance (i.e. main) fork...

Could you please elaborate? How can such a thing happen, and what do the limits on BaseTarget have to do with it?

It is my understanding that Base Target feeds into difficulty, which is used in branch selection.

From https://wiki.nxtcrypto.org/wiki/Whitepaper:Nxt#Base_Target_Value
Quote
The cumulative difficulty value is derived from the base target value

If limits are implemented, it seems far more likely that the limit(s) would be reached on undesirable forks rather than on the main chain.  How this would ultimately affect cumulative difficulty and branch selection is unknown to me.  But I do think it is an effect which should be considered.
Well, this has to do with the upper limit on the BT, not the lower one. That upper limit would be "critical" only if the active balance is very low. I remember CfB told me some reason to have an upper limit, but I confess I don't remember it anymore (it had something to do with boundary conditions and discretization).

Anyhow, those long blocktimes have to do with the (absence of a) lower limit on the BT. There is a natural difference between PoW and PoS here: the inverse hashing power is unbounded on both sides, while the inverse active balance is unbounded from above but bounded from below (the active balance cannot exceed the total supply). So, it's very natural to have a lower limit on the BT, since (ideally) it's proportional to the inverse active balance.
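For context on how the BT feeds into branch selection: in the Nxt 1.x client, the cumulative difficulty is accumulated per block essentially as below (a sketch from memory of the source; treat the exact constant and names as assumptions). A chain of blocks forged at lower base targets contributes more weight per block.

Code: [Select]
import java.math.BigInteger;

public class CumulativeDifficulty {

    private static final BigInteger TWO_64 = BigInteger.ONE.shiftLeft(64); // 2^64

    // Each block adds 2^64 / baseTarget, so lower base targets ("harder" blocks)
    // increase the chain's cumulative difficulty faster.
    static BigInteger next(BigInteger previous, long baseTarget) {
        return previous.add(TWO_64.divide(BigInteger.valueOf(baseTarget)));
    }

    public static void main(String[] args) {
        BigInteger cd = BigInteger.ZERO;
        cd = next(cd, 153722867L); // Nxt's initial base target (assumed value)
        System.out.println(cd);
    }
}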

35
Nxt Improvement Proposals / Re: Fixing the blocktimes
« on: September 10, 2015, 07:51:14 pm »
Quote
I'm no expert on fork selection, but having limits on Base Target might open the door to a lower-forging-balance fork having more weight than a higher-forging-balance (i.e. main) fork...

Could you please elaborate? How can such a thing happen, and what do the limits on BaseTarget have to do with it?

36
Nxt Improvement Proposals / Re: Fixing the blocktimes
« on: September 09, 2015, 08:37:05 pm »
So, what did Jean-Luc say about the implementation of this? With the release of 1.6?
He said that he plans to include this in 1.7, but that is at least a few months from now. In the meantime, it would be very good if someone ran simulations to see how such a system would work.

So: are there people interested in doing simulations? Are there whales willing to offer bounties for that?

37
Nxt Improvement Proposals / Re: Fixing the blocktimes
« on: September 07, 2015, 04:04:53 pm »
Nice initiative!

Though I'm not sure about the seemingly complex and arbitrary logic when something much simpler would suffice.

I'd give serious consideration to an Exponential Weighted Moving Average (EWMA), also known as an Infinite Impulse Response (IIR) filter in EE-speak.

In a nutshell, each new input is weighted with a 1/2^N value, while the previous average has a (2^N-1)/2^N weight.  This particular algorithm is very CPU friendly, since only one computation is made per new input, and since everything is a power of 2, the multiplications can be replaced with bit shifts.

N is chosen to provide a suitable time constant to balance 'response quickness' vs. 'signal stability'.

As a first try for this purpose, I'd look at the 1/16 (N=4), 1/32 (N=5), and 1/64 (N=6) factors for study.

Example C implementation from Linux source:
Code: [Select]
/*
 * lib/average.c
 *
 * This source code is licensed under the GNU General Public License,
 * Version 2.  See the file COPYING for more details.
 */

#include <linux/export.h>
#include <linux/average.h>
#include <linux/kernel.h>
#include <linux/bug.h>
#include <linux/log2.h>

/**
 * DOC: Exponentially Weighted Moving Average (EWMA)
 *
 * These are generic functions for calculating Exponentially Weighted Moving
 * Averages (EWMA). We keep a structure with the EWMA parameters and a scaled
 * up internal representation of the average value to prevent rounding errors.
 * The factor for scaling up and the exponential weight (or decay rate) have to
 * be specified through the init function. The structure should not be accessed
 * directly but only through the helper functions.
 */

/**
 * ewma_init() - Initialize EWMA parameters
 * @avg: Average structure
 * @factor: Factor to use for the scaled up internal value. The maximum value
 *      of averages can be ULONG_MAX/(factor*weight). For performance reasons
 *      factor has to be a power of 2.
 * @weight: Exponential weight, or decay rate. This defines how fast the
 *      influence of older values decreases. For performance reasons weight has
 *      to be a power of 2.
 *
 * Initialize the EWMA parameters for a given struct ewma @avg.
 */
void ewma_init(struct ewma *avg, unsigned long factor, unsigned long weight)
{
        WARN_ON(!is_power_of_2(weight) || !is_power_of_2(factor));

        avg->weight = ilog2(weight);
        avg->factor = ilog2(factor);
        avg->internal = 0;
}
EXPORT_SYMBOL(ewma_init);

/**
 * ewma_add() - Exponentially weighted moving average (EWMA)
 * @avg: Average structure
 * @val: Current value
 *
 * Add a sample to the average.
 */
struct ewma *ewma_add(struct ewma *avg, unsigned long val)
{
        unsigned long internal = ACCESS_ONCE(avg->internal);

        ACCESS_ONCE(avg->internal) = internal ?
                (((internal << avg->weight) - internal) +
                        (val << avg->factor)) >> avg->weight :
                (val << avg->factor);
        return avg;
}
EXPORT_SYMBOL(ewma_add);
 
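A minimal sketch of how the same shift-based EWMA (the ewma_init/ewma_add logic above) could be applied to blocktimes, as an illustrative Java translation; the class name and parameter choices are assumptions:

Code: [Select]
public class BlocktimeEwma {
    private long internal = 0;  // scaled-up running average (0 = uninitialized)
    private final int weight;   // log2 of the EWMA weight, e.g. 4 for a 1/16 factor
    private final int factor;   // log2 of the internal scaling factor

    BlocktimeEwma(int weightLog2, int factorLog2) {
        this.weight = weightLog2;
        this.factor = factorLog2;
    }

    // Same arithmetic as ewma_add(): avg' = avg*(2^w - 1)/2^w + val/2^w, via shifts.
    void add(long blocktimeSeconds) {
        internal = internal != 0
                ? (((internal << weight) - internal) + (blocktimeSeconds << factor)) >> weight
                : (blocktimeSeconds << factor);
    }

    long average() {
        return internal >> factor; // scale back down to seconds
    }

    public static void main(String[] args) {
        BlocktimeEwma avg = new BlocktimeEwma(4, 10); // 1/16 weight, 2^10 internal scaling
        for (long t : new long[]{45, 120, 30, 75, 60}) {
            avg.add(t);
        }
        System.out.println(avg.average());
    }
}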
Sure, this can be used as well (I agree that it's better than just taking the simple average). What is really important are the following points:

1. There are upper and (most importantly) lower limits for the value of the BT.

2. The BT is allowed to change, to adapt to changes in the active balance.

3. The BT is only allowed to change SLOWLY!  This will greatly reduce the fluctuations of the blocktimes, and therefore improve security.

38
Nxt Improvement Proposals / Re: Fixing the blocktimes
« on: September 07, 2015, 03:58:05 pm »
Good to tackle this again! I would like to see those changes implemented.

Would this also influence who is chosen to generate new blocks?
For example:
- Would blocks go more frequently to big accounts, or to small accounts?
- Or could accounts that freshly join the network no longer be taken into account, or maybe the opposite, could they create the next block more easily?

Just curious what might happen with the change of the baseTarget.
No, changes in the value of the baseTarget don't influence at all who is chosen to generate new blocks. Only the blocktimes change.
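One way to see why (using the forging condition as described in the Nxt whitepaper; treat the exact form as an assumption): an account with a given hit value may forge at elapsed time $t$ once

$$\mathrm{hit} < \mathrm{baseTarget} \times \mathrm{effectiveBalance} \times t,$$

i.e. at $t^{*} = \mathrm{hit}/(\mathrm{baseTarget} \times \mathrm{effectiveBalance})$. The first account to forge is the one minimizing $\mathrm{hit}/\mathrm{effectiveBalance}$, which does not depend on the baseTarget; the common factor $1/\mathrm{baseTarget}$ only rescales everyone's waiting time by the same amount.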

39
Nxt Improvement Proposals / Re: Fixing the blocktimes
« on: August 30, 2015, 12:09:29 pm »
Well, I think it's really time to solve this problem. What I propose is to adopt a (strongly) modified version of the algorithm described here: https://nxtforum.org/proof-of-stake-algorithm/basetarget-adjustment-algorithm/. Namely:

1. Introduce an interval of values that the BaseTarget may assume; say, [90%; 3000%].

2. The BaseTarget changes only at blocks whose height is a multiple of 10.

3. Let T be the mean blocktime of the last 10 blocks (in minutes). Then:

 3.1 If T<0.9, set New_BaseTarget = Old_BaseTarget*0.93
 3.2 If T>1.1, set New_BaseTarget = Old_BaseTarget*1.1
 3.3 If 1<T<1.1, set New_BaseTarget = Old_BaseTarget*T
 3.4 If 0.9<T<1, set New_BaseTarget = Old_BaseTarget*(1-0.7*(1-T))

Of course, if the New_BaseTarget tries to go outside the above interval, just set it to the limiting value.

The above values can be changed, of course, but I think it should be along these lines: we shouldn't allow the BaseTarget to change very quickly, while still allowing it to be adjusted. When doing the hard fork, set Initial_BaseTarget = 300%, say.

An algorithm like this should solve the problem of large blocktimes for good.  Also, it is more secure than the current one, precisely because the blocktime will become more "concentrated" (i.e., its variance will decrease).

Sounds good to me; a change like this is overdue. But I still have a few remarks/questions.

1. In rule 3.4, why do we multiply (1-T) by 0.7? That seems like a rather arbitrary value. Is there any particular reason for that number, or are we just generally making the base target scale more slowly when lowering it?

2. So this new algorithm would make blockchain transactions more secure, because more blocks can pile on top of the transactions in a much more time-efficient manner. But should we be worried at all about increased forking due to a higher block concentration?

1. That's because there is generally an asymmetry between increasing and decreasing the BT. See the topic I cited in the first post, where it's explained better. But we could change 0.7 to 0.85, why not?..  :)   I think the algorithm will work nicely for any reasonable choice.

2. Very fast blocks were also happening because of the too-big fluctuations of the BT. Of course, with this algorithm they would still occur occasionally (due to the fact that the Exponential distribution is asymmetric), but "series of fast blocks" (which, I guess, cause forking) will be much less likely.
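A minimal sketch of the adjustment rule quoted above (illustrative Java; the constants are those given in the proposal, everything else is an assumption):

Code: [Select]
public class BaseTargetAdjustment {

    // Interval limits from point 1, relative to the initial base target (1.0 = 100%).
    static final double MIN_BT = 0.9, MAX_BT = 30.0;

    // Points 3.1-3.4, applied every 10 blocks; t is the mean blocktime
    // of the last 10 blocks, in minutes.
    static double adjust(double oldBt, double t) {
        double newBt;
        if (t < 0.9) {
            newBt = oldBt * 0.93;                 // blocks too fast: lower BT (capped step)
        } else if (t > 1.1) {
            newBt = oldBt * 1.1;                  // blocks too slow: raise BT (capped step)
        } else if (t > 1.0) {
            newBt = oldBt * t;                    // slightly slow: raise proportionally
        } else {
            newBt = oldBt * (1 - 0.7 * (1 - t));  // slightly fast: lower, but more gently
        }
        return Math.max(MIN_BT, Math.min(MAX_BT, newBt)); // clamp to the interval
    }

    public static void main(String[] args) {
        System.out.println(adjust(3.0, 1.4));  // slow blocks: BT rises to 3.3
        System.out.println(adjust(3.0, 0.95)); // slightly fast: BT eases down to ~2.9
    }
}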

40
Goods / Re: NXT bracelet
« on: August 29, 2015, 10:53:48 pm »

Sorry, what is QR?  Anyhow, if it can be made of beads, she can make it!

Hi! Sorry for expressing myself badly: a QR code.
The idea was to add it as a supplement  ;D, but that would require an enormous bracelet... or a lot of small beads ::)


Shipping where?
Spain
She's saying that the shipping would be around $5 if we pretend that there is only paper inside the envelope (well, the bracelet is sufficiently flat  :) ). Something resembling the QR code could also be made.

Could you write to her directly about that, at marinabeadingart@gmail.com?  Her name is, obviously, Marina   :)
