Just an update, and some clarification about my SuperNET work. I can't guarantee a 'mathy whitepaper' as James described in an earlier post; it depends on whether I discover anything worth communicating in such a format. It's more of a 'see what I can do' arrangement; this post describes it in more detail. But at the very least, there are, and will be, further forum posts on supernet.org about why the Telepathy protocol does some things the way it does (which I can polish into articles, or compile into a paper, if those turn out to be more suitable formats).
Also, to clarify: this isn't a formal contract. It's not even a part-time thing; it's really a 'contribute as much as I'm comfortable with' arrangement. It may turn out that what I can do isn't worth 1000 SuperNET, in which case we should discuss the bounty.
Same here.
What I have built so far is something I can be proud of:
- a multiheaded Python/Twisted proxy that implements the FULL SuperNET API
- proper daemonization
- internal server/client structure: the server receives a request, turns around, lets a client instance do the job, gets the result from the client instance, and hands it back to the requester
- MULTIHEADED: depending on the GET request it receives, it can issue different backend requests. Meaning: it takes the content of the request, turns around, and EITHER:
  - fires an XML query to somewhere on the web, PARSES the XML, and returns the parsed result
  - fires a POST request to a SuperNET server somewhere on the web, performs a SuperNET API call, and returns that
  - does a lookup of locally cached data that rarely changes and is kept in a local cache
- extendable by simply modding the existing classes
- SCRIPTED:
  - schedulers that run scheduled tasks, e.g. periodically refreshing locally cached data
  - scripting facilities to run scripts, e.g. TESTS or business logic
- when new API calls are added to SuperNET, it is a matter of MINUTES to add them to the class structure. I already did that when passthrough was added and something else was renamed. Can't wait to put in MGW!
This is quite an achievement, and it will have many uses in the future (though it is still rough in a few aspects; e.g. the XML parsing is rudimentary, since it only provides the facility to parse; after all, the specific use case has to know what info is wanted).
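To give an idea of the multiheaded dispatch structure: not the actual code, just a rough stdlib-only sketch (no Twisted, network calls stubbed out, and all class/parameter names invented for illustration). The server side picks a backend "head" by request path; each head is a class that can be swapped or subclassed:

```python
import xml.etree.ElementTree as ET

class Backend:
    """Base class: subclass and override handle() to add a new head."""
    def handle(self, params):
        raise NotImplementedError

class CacheBackend(Backend):
    """Serves rarely-changing data from a local cache."""
    def __init__(self):
        self.cache = {}
    def refresh(self, key, value):
        # in the real proxy a scheduler would call this periodically
        self.cache[key] = value
    def handle(self, params):
        return self.cache.get(params["key"])

class XmlBackend(Backend):
    """Fetches XML from somewhere on the web and returns a parsed field."""
    def fetch(self, url):
        # real code would issue an HTTP GET; stubbed here for illustration
        return "<rates><rate currency='BTC'>123</rate></rates>"
    def handle(self, params):
        root = ET.fromstring(self.fetch(params["url"]))
        return root.find("rate").text

class Dispatcher:
    """Server side: routes an incoming request to the matching head."""
    def __init__(self):
        self.heads = {}
    def register(self, path, backend):
        self.heads[path] = backend
    def dispatch(self, path, params):
        return self.heads[path].handle(params)
```

Adding a new SuperNET API call would then just be another `Backend` subclass plus a `register()` line, which is roughly why new calls only take minutes to wire in.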
But what about testing? That is a slightly different matter.
What I already do is PING the whitelist on a timer, i.e. fire a volley of pings at whatever interval I set the timer to (1 second or whatever), and it pings away like a supercharged homing beacon (and let's see what happens if we really crank up the speed!).
Doing 'findnode' and 'sendmessage' the same way may also work.
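The timed volley could be sketched like this: a minimal threading-based version (the real proxy would use Twisted's scheduling instead, and `ping` here is a stub where the actual SuperNET 'ping' API call would go; names are made up):

```python
import threading
import time

def ping(node):
    # real code would issue the SuperNET 'ping' API call; stubbed here
    return {"node": node, "result": "pong"}

class PingVolley:
    """Fires a volley of pings at every whitelisted node on a timer."""
    def __init__(self, whitelist, interval=1.0):
        self.whitelist = whitelist
        self.interval = interval  # seconds between volleys; crank it down to stress-test
        self.results = []
        self._stop = threading.Event()
    def _run(self):
        while not self._stop.is_set():
            for node in self.whitelist:
                self.results.append(ping(node))
            self._stop.wait(self.interval)
    def start(self):
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()
    def stop(self):
        self._stop.set()
        self._thread.join()
```

Swapping `ping` for a 'findnode' or 'sendmessage' call would turn the same loop into those other tests.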
But I am not sure about internally simulating Kademlia DHT topologies and their possible failure modes.
That will take time, if it is possible for me at all.
In any case, testing is a bitch, and setting up an orthodox testing suite is infeasible for us.
Anyone who has ever worked on such an orthodox test suite/framework can hopefully back this statement up.
So, as Zahlen says: I'll do what I can and let's see where it takes us. Only when there is material progress should we think about compensation.
In the case of my SuperNET API, it will certainly have more uses than testing alone, but testing (and the bounty associated with it) should be the first step, because that is what is needed to get SuperNET running!