Nxt Forum

Nxt Discussion => Alternative Clients => Nxt Software Releases => SuperNET Releases => Topic started by: jl777 on April 17, 2015, 12:36:08 am

Title: SuperNET agents
Post by: jl777 on April 17, 2015, 12:36:08 am
I got plugins to work, so you can daemonize a plugin and optionally turn it into a websockets server

you can also use the "passthru" API to send requests to the plugin, given the "daemonid" that is returned from the "syscall" API when you launch the daemon

a "syscall" request with "name":"python pangea.py", "launch":1, "websocket":6666 will daemonize pangea.py and install a websockets server on port 6666

the plugin can also send requests to SuperNET API by sending back to the host "pluginrequest":"SuperNET" as a field in a standard SuperNET API JSON

for now it will simply block until it is done; if a non-blocking method is needed, let me know

there is an example plugin.c that shows what is needed to get a working plugin

https://github.com/joewalnes/websocketd does the websocket magic

A large number of languages are supported, all you need is anything that nanomsg will work with, which is a lot: http://nanomsg.org/documentation.html

JavaScript (Node.js)
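websocketd's model is simple: it spawns one process per websocket connection, feeds each incoming message to the process as a line on stdin, and sends every line the process prints to stdout back over the socket. A minimal echo plugin in that style might look like this (a sketch only; the reply field names are illustrative, not the SuperNET protocol):

```python
#!/usr/bin/env python
# Hypothetical minimal echo plugin for websocketd: one process per
# connection, messages arrive as lines on stdin, replies go to stdout.
import sys, json

def handle(line):
    # Wrap whatever arrives in a JSON reply; the "echo"/"raw" field
    # names here are illustrative, not part of any documented protocol.
    try:
        request = json.loads(line)
    except ValueError:
        request = {"raw": line.strip()}
    return json.dumps({"echo": request})

if __name__ == "__main__":
    for line in sys.stdin:
        print(handle(line), flush=True)
```

Launched as `websocketd --port=6666 ./echo.py` (or via the "syscall" API above), each connected webpage gets its own copy of this loop.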

Title: Re: SuperNET plugins
Post by: jl777 on April 18, 2015, 02:29:39 am
this properly supports three invocation modes: onetime use, daemonized, and daemonized with websockets

static char *syscall[] = { (char *)syscall_func, "syscall", "V", "name", "daemonize", "websocket", "jsonargs", 0 };

curl -k --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "SuperNET", "params": ["{\"requestType\":\"syscall\",\"name\":\"/Users/jl777/libjl777/plugins/echo\",\"websocket\":5067,\"jsonargs\":{\"arg1\":\"val1\"}}"]  }' -H 'content-type: text/plain;'
curl -k --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "SuperNET", "params": ["{\"requestType\":\"syscall\",\"name\":\"/Users/jl777/libjl777/plugins/echo\",\"daemonize\":1,\"jsonargs\":{\"arg1\":\"val1\"}}"]  }' -H 'content-type: text/plain;'
curl -k --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "SuperNET", "params": ["{\"requestType\":\"syscall\",\"name\":\"/Users/jl777/libjl777/plugins/echo\",\"jsonargs\":{\"arg1\":\"val1\"}}"]  }' -H 'content-type: text/plain;'
curl -k --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "SuperNET", "params": ["{\"requestType\":\"syscall\",\"name\":\"/Users/jl777/libjl777/plugins/echo  \"}"]  }' -H 'content-type: text/plain;'

the above are some examples of how to invoke the various modes
"jsonargs" needs to be stringified
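Building the first curl payload above programmatically shows what "stringified" means in practice: the inner request travels as an escaped JSON string, and per the note, jsonargs is embedded as a string too (a sketch; the values are taken from the examples above):

```python
import json

# Inner "syscall" request; path and port match the curl examples above.
syscall = {
    "requestType": "syscall",
    "name": "/Users/jl777/libjl777/plugins/echo",
    "websocket": 5067,
    # per the note above, jsonargs is passed stringified
    "jsonargs": json.dumps({"arg1": "val1"}),
}

# The JSON-RPC wrapper carries the whole syscall request as the single
# string element of "params", exactly as the escaped curl payloads do.
rpc = {
    "jsonrpc": "1.0",
    "id": "curltest",
    "method": "SuperNET",
    "params": [json.dumps(syscall)],
}

payload = json.dumps(rpc)
```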

WEBSOCKETD is the path to the websocketd binary; you need to define this in SuperNET.conf

jsonargs is now supported as a parameter on all paths. For daemonized instances it provides initialization parameters, but don't put anything sensitive in it, as it will appear in the process details output

the plugin.c has a single function that processes the JSON args and its returned JSON string is sent back

in standalone mode, it goes directly to the caller

for daemonized mode, it goes back to the host process (SuperNET)
if you are using websockets, then all of the traffic from all the websockets instances are routed to the permanent plugin process

each websockets instance gets an instanceid and it can be directly addressed if needed

I also have a demo of how to invoke the SuperNET API with a "getpeers" example. The websockets page will have a form where you can input getpeers; this then sends the request to the host and the result goes back to the permanent plugin process

What this means is that the messy work of supporting multiple GUI pages interacting with a single process has been solved, and the entire SuperNET API is directly accessible in any language. The plugins are just any executable programs that follow the SuperNET plugin protocol, which is documented in plugin.c

Title: Re: SuperNET plugins
Post by: jl777 on April 18, 2015, 07:48:40 am
toenu [10:26 AM]
looks like it works

toenu [10:28 AM]
curl -k --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "SuperNET", "params": ["{\"requestType\":\"syscall\",\"name\":\"/Users/jl777/libjl777/plugins/echo\",\"daemonize\":1,\"jsonargs\":{\"arg1\":\"val1\"}}"]  }' -H 'content-type: text/plain;'  << this one i had to add launch:1 otherwise it wouldn't daemonize, not sure if this is intended

toenu [10:28 AM]
i also played a bit with javascript and ipc yesterday. so i'm now able to send api requests over ipc

toenu [10:29 AM]
in the plugin.c what is the significance of "myid"? i don't get that part

jl777 [10:35 AM]
I changed "launch" -> "daemonize" and without it set, it will do a onetime call, so not daemonized but a system() call

jl777 [10:35 AM]
i changed a few things in the protocol today, so you need to pull

jl777 [10:36 AM]
so there are three totally different modes

jl777 [10:37 AM]
a) onetime call, b) daemonized, c) daemonized with websockets

jl777 [10:37 AM]
a) is pretty self-explanatory: the program runs and whatever it prints to stdout is sent back to the caller, which should be proper JSON

jl777 [10:39 AM]
b) is totally different, it runs as a permanent process and using the passthru API anything can send requests to it. How it interprets the JSON is pretty freeform, just a few pre-defined fields. "myid" is one of those. it is just a 64-bit random number so each instance can be differentiated. In mode b) it is not very useful, but might as well minimize the differences in the plugins based on modes

jl777 [10:41 AM]
JSON can be sent and the returned JSON is (will be) queued and can be retrieved via the checkmessages API

jl777 [10:41 AM]
enable: if ( dp->websocket == 0 )

jl777 [10:41 AM]
if you want to be able to use the checkmessages. I am redoing some lower level code so this is currently disabled

jl777 [10:42 AM]
OK, now we get to mode c), daemonized and websockets. As you know each webpage that connects to the websocket creates a new instance, but what if there are no websocket instances?

jl777 [10:43 AM]
there needs to be a permanent process that exists regardless of whether there are any (or however many) websocket instances there are

jl777 [10:44 AM]
the permanent process has to do a bit more work than in mode b), but almost all of this is done automatically. The permanent process acts as a multiplexor of all websockets instances.

jl777 [10:45 AM]
so you can use each websockets as a user interface and have the permanent process provide caching or whatever other common functions that are needed.

jl777 [10:47 AM]
also a key behavior is that if a websockets instance sends a SuperNET API request, the response is sent back to the permanent process. The reason for this is that the websockets interface can disappear at any moment and rather than deal with all the edge cases I just let the plugin maker decide how to deal with it. Each request is tagged with "myid" so the permanent plugin can tell which websocket instance should get the response.
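This funneling can be sketched as a small dispatch table keyed by "myid" (the class, the registry, and the send callbacks are hypothetical illustrations, not the real plugin protocol):

```python
import json

class PermanentPlugin:
    """Toy multiplexer for the behavior described above: SuperNET responses
    come back to the permanent process tagged with "myid", and the plugin
    decides which websocket instance (if any) should receive each one."""

    def __init__(self):
        self.instances = {}  # myid -> callable that delivers to that instance

    def register(self, myid, send):
        self.instances[myid] = send

    def unregister(self, myid):
        self.instances.pop(myid, None)

    def on_response(self, jsonstr):
        msg = json.loads(jsonstr)
        send = self.instances.get(msg.get("myid"))
        if send is None:
            return False  # instance disappeared; plugin decides what to do
        send(msg)
        return True
```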

jl777 [10:47 AM]
I haven't fully tested this funneling and rerouting yet, so there could well be bugs. let me know if it isn't behaving right. i only have the debug mode HTML page and it can only submit one line
Title: Re: SuperNET plugins
Post by: jl777 on April 20, 2015, 07:07:48 am
bad news is that plugin.h, the host side handler and the baseline plugin code are getting much more complicated...

jl777 [9:50 AM]
however, I just got a new style plugin to compile that supports dynamically extending the SuperNET API

jl777 [9:51 AM]
and theoretically the plugin can be bundled and use in-process messaging, or run as a separate process, or even on a different node, but the last one is not guaranteed

jl777 [9:52 AM]
the important usecase for bundling is I can bundle things like sophia for blockchain creation and any other type of key/value store that is needed (there are many)

jl777 [9:53 AM]
so with the dynamic API extensibility and in-process bundled plugins that use identical code for an independent process (and an external node), the only problem is that the "basic" plugin needed for each language isn't so basic anymore...

jl777 [9:54 AM]
An alternate way to use a new language is to make it C-callable, and then the good news is that you can use the new slimmed-down demo plugin:

jl777 [9:54 AM]
#define PLUGINSTR echo
// *data will be at the end of the plugin structure and will be passed as all zeros to _init
char *PLUGNAME(_methods)[] = { "echo", "echo2" }; // list of supported methods

uint64_t PLUGNAME(_init)(struct plugin_info *plugin,STRUCTNAME *data)
{
    uint64_t disableflags = 0;
    // runtime specific state can be created and put into *data
    return(disableflags); // set bits corresponding to array position in _methods[]
}

int32_t PLUGNAME(_process_json)(struct plugin_info *plugin,char *retbuf,int32_t maxlen,char *jsonstr,cJSON *json,int32_t initflag)
{
    char *str;
    retbuf[0] = 0;
    if ( initflag > 0 )
    {
        // configure settings
    }
    str = stringifyM(jsonstr);
    sprintf(retbuf,"{\"args\":%s,\"milliseconds\":%f,\"onetime\":%d}\n",str,milliseconds(),initflag < 0);
    return(0); // returning a negative value triggers a shutdown
}

int32_t PLUGNAME(_shutdown)(struct plugin_info *plugin,int32_t retcode);

jl777 [9:54 AM]
using some compile time magic, just by changing a few #defines you can change whether it is bundled or not. The main function is _process_json

jl777 [9:55 AM]
this function processes json sent to the plugin. you have to distinguish between several contexts:

jl777 [9:55 AM]
1: this is called during init time and things are not running yet

jl777 [9:56 AM]
0: this is called during runtime and for both initflag modes, you need to check whether your instance is the permanent one or a websocket spawn

jl777 [9:56 AM]
-1: this is possible if it is a onetime invocation, ie not daemonized
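The three initflag contexts might be sketched like this (a Python stand-in for the C handler; the is_permanent flag and the return values are illustrative, not the real plugin code):

```python
def process_json(initflag, is_permanent, request):
    """Sketch of the three initflag contexts described above (1, 0, -1)."""
    if initflag > 0:
        # 1: called during init time, things are not running yet:
        # configure settings only
        return {"status": "initialized"}
    if initflag < 0:
        # -1: onetime invocation, i.e. not daemonized
        return {"onetime": 1, "echo": request}
    # 0: called during runtime; check whether this instance is the
    # permanent one or a websocket spawn
    role = "permanent" if is_permanent else "websocket"
    return {"role": role, "echo": request}
```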

jl777 [9:59 AM]
_methods is an array of strings for the methods your plugin will support. To the rest of SuperNET, "requestType":"echo", "method":"<any of _methods>" will be used to route the JSON request to the permanent plugin. Any tagging needed to correlate return JSON with the original request is up to the plugin, as a lot of use cases can be done totally stateless and this would be unnecessary complexity

jl777 [10:00 AM]
the _shutdown is just called before this instance is shut down, either due to your _process_json returning a negative value to trigger a shutdown or because the parent process (the SuperNET that launched the plugin) died

jl777 [10:01 AM]
finally (or initially) the _init function is passed a pointer to an echo_info data structure (all zeroes) before things are started, and you can disable any of the first 64 methods based on runtime factors.

jl777 [10:02 AM]
how does SuperNET know how much space to allocate for your data? just put whatever fields you need inside the STRUCTNAME, that macro expands out to struct echo_info and for each plugin it will end up as struct <name of plugin>_info

jl777 [10:03 AM]
You do need to #define PLUGINSTR to echo (without quotes!)

jl777 [10:03 AM]
Steps to make your own plugin:

jl777 [10:03 AM]
1. make a copy of echodemo.c and change the #define's with echo to the name of your plugin

jl777 [10:04 AM]
2. fill out the STRUCTNAME() fields

jl777 [10:04 AM]
3. add the different methods you will support in _methods array

jl777 [10:05 AM]
4. write _init and _shutdown functions; usually they can just do nothing, as things are all set up for the normal cases. you just get a chance to do whatever you need to at the important points

jl777 [10:05 AM]
5. the above should take just a few minutes; the rest of the time is about processing the JSON API request and copying the result into retbuf

jl777 [10:06 AM]
You will notice that there is no threading to deal with, no nanomsg, no networking, nothing other than what your plugin needs to do
Title: Re: SuperNET plugins
Post by: jl777 on April 21, 2015, 07:44:53 pm

Title: Re: SuperNET plugins
Post by: jl777 on April 23, 2015, 12:05:18 am
finally got the internal process plugin working. just with an echo plugin, but I wanted to verify the overhead is low enough to use for all the major functionality of SuperNET.

So I fiddled with the plugins for a full day, as I never want to have to revisit this; I just want to make plugins. theoretically they will be much faster to add, more robust, and best of all will allow others to add SuperNET API in any of a dozen languages.

I used the IPC (interprocess) transport and it took about half a millisecond per round-trip call; I also didn't yet have a mechanism to get the return data back to the caller. That is ~2000 requests per second, and since it doesn't use any tcp sockets for each new comm, it is already several times faster than bitcoin RPC. But at that speed, many plugins won't be responsive enough.

The in-process plugins benchmarked at 60 microseconds per round trip, which is over 15,000 requests per second, but that was with unoptimized (by compiler) code. The optimized code is exposing some timing bugs, probably because it goes too fast.

debugging that as I port a very nice database http://sphia.org/architecture.html into a plugin. It took literally 5 minutes to get a plugin made, but I still need to map all the sophia API to the plugin API. What this means is that the full sophia API will extend the SuperNET API, as it will be an internal plugin. So ramchains and MGW can use it as the data store; sophia is designed to use "log files" as its database, so it is perfect for blockchains. Still need to validate all the potential coolness of sophia, but BDB was just too klunky and had the habit of infinite loops on startup if anything was wrong with any of the files. Hopefully sophia will be much more tolerant of such things

Short term:
1. debug optimized code and benchmark internal plugin overhead
2. port sophia into plugin and make C callable wrapper
3. use sophia API for ramchains and MGW
4. make networking plugin for MGW comms (wont be hard at all)
5. use networking plugin for MGW comms.
6. release brand new shiny MGW plugin
7. make InstantDEX plugin
8. make plugins out of everything else
9. hope other devs can make the plugin adaptor for the dozen other languages

Title: Re: SuperNET plugins
Post by: jl777 on April 23, 2015, 05:01:01 am
no optimizations: elapsed 6300 millis for 100000 iterations ave [63.0 micros]

O1: elapsed 5445 millis for 100000 iterations ave [54.45 micros]

O2: elapsed 5604 millis for 100000 iterations ave [56.04 micros]

O3: elapsed 5584 millis for 100000 iterations ave [55.84 micros]

Os: elapsed 5630 millis for 100000 iterations ave [56.30 micros]

Ofast: elapsed 5772 millis for 100000 iterations ave [57.72 micros]

The performance does not correlate with the optimization levels; this is not uncommon. And now that I have verified the possible improvements, at least I can design based on that, but the compile time is VERY long for these optimizations so I just have to remember to do this before official release.

20,000 round trips per second is pretty good. This is only for a single outstanding command, eg. make a request and wait for it. The way things are structured, I can handle N in parallel with little extra overhead. So in a massively parallel active environment it is quite possible to get down to 1 microsecond of overhead per call, though with much higher latency.

tl;dr: it is fast enough

Now onto the sophia plugin


out of curiosity I tested the performance of IPC: elapsed 9846.000000 millis for 100000 iterations ave [98.5 micros]
so it is 50% slower, definitely worth using inproc, but 10,000 RPC call/returns per second is quite good across arbitrary processes
Title: Re: SuperNET plugins
Post by: jl777 on April 23, 2015, 06:17:40 am
Thanks to nexern! He recommended both nanomsg and sophia and they are pure C projects that I feel right at home in.

I am especially liking Sophia http://sphia.org/:

"Sophia is a modern embeddable key-value database.

It has unique architecture that was created as a result of research and reconsideration primary algorithmic constraints of Log-file based data structures, such as LSM-tree. (see architecture)

Sophia is designed for fast write (append-only) and read (range query-optimized, adaptive architecture) small to medium-sized key-values.

Sophia is feature-rich (see features).
BSD licensed and implemented as small C-written library with zero dependencies."

"Sophia database and its architecture was born as a result of research and reconsideration of primary alghorithmic constraints that relate to growing popular Log-file based data structures, such as LSM-tree, B-tree, etc.

Most Log-based databases tend to organize own file storage as a collection of sorted files which are periodically merged. Thus, without applying some key filtering scheme (like Bloom-filter) in order to find a single key, database has to traverse all files that can take up to O(files_count * log(file_key_count)) in the worst case, and it's getting even worse for range scans, because Bloom-filter is incapable to operate with key order.

Sophia was designed to improve this situation by providing faster read while still getting benefit from append-only design."

"Sophia defines a small set of basic operations which can be applied to any database object. Configuration, Control, Transactions and other objects are accessible using the same methods. Methods are called depending on used objects. Methods semantic may slightly change depending on used object."

The API to Sophia is very similar to a JSON API, so mapping it to a JSON API only has to deal with the issue of pointers. For now I will limit it to built-in plugin mode, as then pointers can actually be used. To allow usage by other processes (and nodes) I will need to add a layer that maintains the various pointers and objects for a caller, but this is a tangent so no time for it now. maybe somebody else would want to do this?

"All methods are thread-safe and atomic."

That simple statement has a LOT of implications, all of them good.

"It is possible to use compression for a specified databases using db.database_name.compression.
Supported compression values: lz4, zstd, none (default)."

"Sophia supports single-statement and multi-statement transactions."

"There are no limit on a number of concurrent transactions. Any number of databases can be involved in a multi-statement transaction."

"Snapshots represent Point-in-Time read-only database view.

It is possible to do sp_get(3) or sp_cursor(3) on snapshot object. To create a snapshot, new snapshot name should be set to snapshot control namespace."

"Sophia supports asynchronous Hot/Online Backups.

Each backup iteration creates exact copy of environment, then assigns backup sequential number.
 Procedure call is fast and does not block."

"It is possible to start incremental asynchronous checkpointing process, which will force branch creation and memory freeing for every node in-memory index. Once a memory index log is free, files also will be automatically garbage-collected."

"Database monitoring is possible by getting current dynamic statistics via sp_ctl(3) object.

Also it is possible to get current memory usage or trace every worker thread (scheduler namespace). Database indexes have total number of keys and transactional duplicates (MVCC). Total number of nodes is node_count. Branches distribution can be obtained via branch_count, branch_avg, branch_max, histogram_branch"

From what I can tell sophia has all the features of BDB, just without the bloat. maybe there are some advanced things in BDB like setting up a synchronized cluster, but I couldn't figure out how to do that quickly. sophia has a compact API and, best of all, nice C code, so I can always debug it if there are any bugs (I doubt there will be any meaningful bugs)

To build it, you probably won't believe this, but it creates a single source file that includes all the other files! Where have I seen that before?



I don't know about you, but I usually don't get enthusiastic over a key/value store system. With sophia, well, if it does what it says it does, it can be used for pretty much all the DB needs I can foresee for the medium term


Title: Re: SuperNET plugins
Post by: jl777 on April 24, 2015, 05:30:32 am
heads up on some new API. since each plugin extends the API, there are effectively two levels to these calls, ie sophia/method

jl777 [8:23 AM]
char *sophia_methods[] = { "get", "set", "object", "env", "ctl","open", "destroy", "error", "delete", "async", "drop", "cursor", "begin", "commit", "type", "create", "close", "add", "find" };

jl777 [8:23 AM]
I have a bunch of very nice documentation for a change!

jl777 [8:24 AM]
all but the last 4 are pass-throughs to the sophia API, but of course, only the last 4 are really anything external modules will use

jl777 [8:24 AM]
It took me all day, but I reduced the 16 low level sophia calls to 4, that cover most all use cases likely to be needed

jl777 [8:25 AM]
struct db777 *db777_create(char *name,char *compression);
int32_t db777_close(struct db777 *DB);
int32_t db777_add(struct db777 *DB,char *key,char *value);
int32_t db777_find(char *retbuf,int32_t max,struct db777 *DB,char *key);

jl777 [8:25 AM]
internally they will be these 4 functions: create a database, add stuff to it, maybe do some finds, then close it

jl777 [8:26 AM]
I will need to add delete at some point, but most of my use cases are without delete, and for now I need to port ramchains and MGW to use this

jl777 [8:27 AM]
for create there are 2 compression modes supported, "lz4" and "zstd"; the name is the name of the database, which will be put into the SOPHIA_DIR path

jl777 [8:28 AM]
I decided to require null-terminated strings since compression is supported. so to add, just pass in the key and value

jl777 [8:28 AM]
to find something, call find with a buffer large enough for the result

jl777 [8:29 AM]
the actual JSON will use dbind for the DB pointer and the return buffer will just be in the returned JSON
Title: Re: SuperNET plugins
Post by: jl777 on April 24, 2015, 08:03:49 am
curl --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "SuperNET", "params": ["{\"requestType\":\"plugin\",\"plugin\":\"sophia\",\"method\":\"create\",\"dbname\":\"abc\"}"]  }' -H 'content-type: text/plain;'
curl --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "SuperNET", "params": ["{\"requestType\":\"plugin\",\"plugin\":\"sophia\",\"method\":\"add\",\"dbname\":\"abc\",\"key\":\"foo\",\"value\":\"spam\"}"]  }' -H 'content-type: text/plain;'
curl --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "SuperNET", "params": ["{\"requestType\":\"plugin\",\"plugin\":\"sophia\",\"method\":\"find\",\"dbname\":\"abc\",\"key\":\"foo\"}"]  }' -H 'content-type: text/plain;'
curl --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "SuperNET", "params": ["{\"requestType\":\"plugin\",\"plugin\":\"sophia\",\"method\":\"close\",\"dbname\":\"abc\"}"]  }' -H 'content-type: text/plain;'

jl777 [10:57 AM]
The above 4 API calls spot tested fine. it is all under the SuperNET "plugin" API call; specify the name of the plugin and one of the four methods. all of them take dbname as a way to tell which database to operate on. the assumption is that you won't have too many databases (for now)

jl777 [10:58 AM]
I think this is about as lean as a key/value store API can get and the "add" method has "key" and "value" and the "find" method has "key"

jl777 [10:58 AM]
create and close only have the "dbname" field
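The inner request for each of the four methods can be built the same way (a sketch; the helper name is mine, and the curl examples above wrap this string once more as the single JSON-RPC params element):

```python
import json

def sophia_request(method, dbname, **fields):
    """Build the inner sophia plugin request shown in the curl examples.
    Extra fields carry the per-method arguments: key/value for "add",
    key for "find"; create and close need only dbname."""
    req = {"requestType": "plugin", "plugin": "sophia",
           "method": method, "dbname": dbname}
    req.update(fields)
    return json.dumps(req)
```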

jl777 [10:59 AM]
don't forget to load the plugin at the beginning with:

jl777 [10:59 AM]
curl --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "SuperNET", "params": ["{\"requestType\":\"syscall\",\"plugin\":\"sophia\"}"]  }' -H 'content-type: text/plain;'


P.S. I forgot to add that the create method has a "compression" field that is optional, can be "lz4" or "zstd"
Title: Re: SuperNET plugins
Post by: jl777 on May 02, 2015, 02:38:04 pm

how to do nanomsg language binding: http://250bpm.com/blog:21
Title: Re: SuperNET agents
Post by: jl777 on June 23, 2015, 05:15:00 am

you send in JSON, it returns JSON

this should be the key to understanding
now the cgi path has messy HTML handling that is needed to allow browsers/curl/GUI to construct the JSON and push it into the blackbox

so the API is pretty directly invoked via ./BitcoinDarkd SuperNET '{...}'

where the '{...}' is the JSON
so you don't have to understand HOW it does it, just what it does. like driving a car
so if something is broken, then what does that mean?

either the blackbox is broken (JSON -> return JSON is wrong) or the JSON being pushed into the blackbox is wrong

if that is the case, it could be the cgi glue layer
or the GUI
let me know if any questions. people often confuse the difficulty of getting it to work with the difficulty of using it

[GUI] <-> (cgi) <-> {API JSON blackbox}

curl bypasses the GUI and injects into the (cgi)
./BitcoinDarkd bypasses the (cgi) and injects into the {API JSON blackbox}

*** advanced stuff follows ***
now this {API JSON blackbox} is where the SuperNET agents are
the (cgi) actually spawns a new nanomsg connection with a special thread to accept the requests from the cgi, this is fully multithreaded
the SuperNET commandline goes through a parser and the BitcoinDarkd path also goes through a bit of processing
but they all end up in the same place called process_user_json
this processes the user's JSON

now things do indeed get a bit complicated....

I am pushing more and more things through the "busdata" path as that allows for modular authentication, encryption, and other privacy things
but it is still possible (not sure if I won't deprecate this) to directly invoke an agent by not having the "busdata" agent specified
let us ignore this as I expect to use the busdata for as much as possible as it makes for the same processing whether going out to the network or staying within the local node

the busdata path behaves a bit differently depending on whether you are running a relay node or not. if you are not, then it converts the user json into a binary format with whatever authentication is specified and issues a load-balanced call to the relays. one thing to note is that the client can specify "broadcast":"allrelays" or "allnodes"
if it is a relay node, it broadcasts to all relays and then processes the request locally

back to the client path... it ends up at a specific relay node that processes it locally, but if the "broadcast" is specified, then it is broadcast to allrelays or allnodes
notice that in the case of "broadcast":"allrelays", this is having the same state as when the originating node was a relay
the relays are receiving the busdata packet and if it is an "allrelays" one they process it locally, if it is a "allnodes" global broadcast, they currently ignore it

the reason for this is that for something like InstantDEX, where your node wants to broadcast its placebid globally, it just needs to get to all the other nodes and not the relays themselves (assuming the relay node is not used as a normal node), so it is like a "doughnut", with a hole in the middle. "allrelays" sends to the middle and "allnodes" to all nodes, but the middle is probably ignoring it. haven't figured out whether it is worth adding a flag to the global broadcast to tell the relays to process locally
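The client-originated routing described above can be sketched as a toy decision function (the return shape and the single-relay pick are my own simplification, not the real busdata code):

```python
def busdata_targets(broadcast, relays, nodes):
    """Who receives a client-originated busdata packet, and whether the
    relays also process it locally. Returns (relays_process_locally,
    recipients)."""
    relays, nodes = set(relays), set(nodes)
    if broadcast == "allrelays":
        # sent to the middle of the "doughnut": relays process it locally
        return True, relays
    if broadcast == "allnodes":
        # global broadcast: relays forward it but currently ignore it themselves
        return False, nodes - relays
    # no broadcast: load-balanced to a single relay (the pick here is arbitrary)
    return True, {min(relays)} if relays else set()
```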

now, this "process locally", you are asking what that means
it means to decode the binary data, authenticate it and then route it to the correct place
this can be one of several places, plus there are also some control things like registering a service provider

if the {user json} was a request for a specific service provider, then the relay will send the request to a random node that has previously registered, wait for the response, and then route the data back to the original node. In the event that the relay that received the request does not have any registered nodes for that service, it does an allrelays broadcast, in hopes that some other relay has such a service. haven't automated sending the response from the failover path back to the original caller yet

keep in mind this is all happening within a second or so
if the request is for a specific agent on that node, then it is much simpler and it sends a message to that specific agent, gets the response and sends it back
so the above is the simplified explanation of the {API JSON blackbox}

Title: Re: SuperNET agents
Post by: jl777 on June 23, 2015, 07:01:02 am
Now that the data flow is much more sane, I was able to add DDoS protection pretty quickly.

I use leverage factors of 9, 81 and 729 for local, relay and global requests

you might see a pause for sending placebid/placeask as these are global requests
the leverage factor of 729 means it takes 729x CPU power to create a valid packet than to validate it

so with just 10 relay nodes, over 7000 servers would be needed to attack successfully, and the attack would just slow things down
probably an easier attack is at the relay level; it "only" takes 81x the number of relay servers to saturate the relays, but the peer-to-peer comms already established won't be affected.

i expect we will start with about 30 relays, so 2000+ attacker CPUs at this level. Since any node can elect to become a relay too, under attack scenarios more and more nodes can become relays to make the attack more and more expensive, or we can boost the leverage at the cost of higher average latency.
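The arithmetic behind these numbers is just relays times leverage: 10 relays x 729 gives 7290 ("over 7000 servers"), and 30 relays x 81 gives 2430 ("2000+ attacker CPUs"). As a simplified model (the function name and linearity are my own reading of the post):

```python
# Leverage factors from the post: creating a valid packet costs this many
# times the CPU of validating it, at each request level.
LEVERAGE = {"local": 9, "relay": 81, "global": 729}

def attacker_cpus_needed(num_relays, level):
    # CPU-equivalents an attacker needs to saturate the network at a level
    return num_relays * LEVERAGE[level]
```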

Title: Re: SuperNET agents
Post by: jl777 on June 26, 2015, 10:55:03 pm
jl777 [12:15 AM]
we might want to identify some key agents that are useful for many other agents/GUI use cases

jl777 [12:15 AM]
like a cashier agent that can handle billing and accounting via micropayments

blackyblack [12:16 AM]
so globalizing task is registering agent with relay?

jl777 [12:16 AM]
the interface for all these are just JSON, so the design can just be a spec with various API definitions

jl777 [12:16 AM]
yes globalizing is to make sure the two API calls are working, one to run on the service providing node

jl777 [12:17 AM]
and the other from remote nodes

jl777 [12:17 AM]
./BitcoinDarkd SuperNET '{"plugin":"relay","method":"busdata","submethod":"serviceprovider","servicename":"MGW"}'

jl777 [12:17 AM]
each MGW server does the above

jl777 [12:17 AM]
then from anywhere the following returns a multisig address in 2 seconds:

jl777 [12:17 AM]
./BitcoinDarkd SuperNET '{"method":"busdata","plugin":"relay","servicename":"MGW","destplugin":"MGW","submethod":"msigaddr","coin":"BTCD","userNXT":"NXT-KAK4-SDL7-DHGT-9W37B","userpubkey":"8e7aeb3f92f5aa9d2c32c3c4fcda55deab8eda958237289b7e3d38959cfbf278","buyNXT":100,"timeout":10000}'

blackyblack [12:18 AM]
it's clear

jl777 [12:18 AM]
now we have multi-language, we can implement any of the key agents in any language

jl777 [12:19 AM]
it is a matter of organizing the API specs

blackyblack [12:19 AM]
I think you could remove mgw from bundled agents

jl777 [12:19 AM]
it is, unless you say you are a gateway node

jl777 [12:19 AM]
it is only 2000 lines of code and I am too lazy to completely split it out

jl777 [12:20 AM]
plus there will be performance issues doing zillions of RPCs via ipc vs a direct function call

jl777 [12:20 AM]
it is 100 microseconds overhead per ipc, 40 per inproc, on 2ghz CPU

jl777 [12:21 AM]
@jones: you have a zillion different Jay things, maybe many of them could become agents? i wonder if HTML agents can be globalized...

jl777 [12:22 AM]
i think with some sort of bridge, they could

jones [12:22 AM]
hmm, interesting

blackyblack [12:22 AM]
interesting idea

jones [12:22 AM]
that could work, just embed the html inside of an agent to be parsed by a browser

jl777 [12:23 AM]
we can use the existing C core to just get the JSON to the HTML agents

jl777 [12:23 AM]
I want to unleash the creation of agents

jl777 [12:24 AM]
since these agents are run locally, there is not any risk of some sort of bug being propagated to all nodes

jl777 [12:24 AM]
and this is the key difference between this approach and the ethereum type of approach

jl777 [12:24 AM]
agents can be more flexible and more powerful, since they can do anything

jl777 [12:25 AM]
but with the SuperNET they can easily combine into a decentralized network

jl777 [12:25 AM]
so there will be thousands of different decentralized networks, all running on the same SuperNET

jl777 [12:25 AM]
like all the different protocols running on top of TCP/IP

jl777 [12:26 AM]
as clunky as JSON is, it is quite simple and universal, and I think even Windows can deal with it

jl777 [12:26 AM]
so the sum of all possible agents combining in all possible ways will create some interesting emergent behaviors

jl777 [12:27 AM]
with the DDoS protection built into SuperNET (I think it is the only crypto network with this), it is much more difficult for rogue agents to attack it

jl777 [12:27 AM]
I will add sybil resistance next month

jl777 [12:28 AM]
once the core is debugged and stable, I will be shifting to crypto777 more and more, so I want to make sure there is a clear understanding of the plan for agents

jl777 [12:28 AM]
at the pure agent level, I think it should be quite self-explanatory to any coder

jl777 [12:29 AM]
anyway, please ask any questions about any of this; it's important we are on the same page

jl777 [12:30 AM]
we do need some guidelines for the types of agents, though there won't be any requirement to follow them; I just want to avoid having to reinvent things over and over

jones [12:33 AM]
alright, agents just keep getting cooler

jl777 [12:36 AM]
I am envisioning a set of royalty-paying agents, like InstantDEX

jl777 [12:36 AM]
then around each of these, satellite agents that implement some specialized use case of one or more of the royalty-paying agents

jl777 [12:37 AM]
then there would also be agents that provide specialized calculations for free or a fee

jl777 [12:38 AM]
what these calculations are, I am not sure; just a class of agents that do things that normal nodes can't or don't want to do, like hosting the 12 TB bitcoin blockchain

jl777 [12:38 AM]
not sure if any other classes of agents are needed....

jl777 [12:39 AM]
then there are the GUIs that connect to local/remote agents and make the user experience customized

jones [12:40 AM]
I could do a lot with agents that load up html and javascript in browser

jl777 [12:40 AM]
yes, instead of all the cool stuff living on jnxt.org, it becomes agents, and all other agents or GUIs can get direct access to them

jl777 [12:41 AM]
you see, I did all this so I can just query your next forger data without having to do it myself

jones [12:42 AM]
makes sense

jl777 [12:42 AM]
also all the crypto777 coolness will of course be in agent form

jl777 [12:43 AM]
so to spawn a new crypto network, it is a matter of selecting the appropriate agents; you just need a "mainloop" agent to tie it all together

jl777 [12:43 AM]
but these components seem to fit into the blackbox agent category

jl777 [12:43 AM]
another category of agents: connector agents

jl777 [12:44 AM]
these are agents that deal with establishing and maintaining various network topologies, so it is a lot like blackbox, but the fact that they will be routing data from other nodes makes them, I think, different from pure blackbox

jones [12:47 AM]
networking will never be the same again, haha

jl777 [12:50 AM]
networking code is such a pain, there is no sense in any more coders suffering with it. But the agent architecture allows for extreme flexibility in how you implement things, while the natural way is via well-defined JSON interfaces between major modules.
Title: Re: SuperNET agents
Post by: jl777 on July 08, 2015, 11:44:54 am
after debugging the private messaging control flow, I found out Sophia is not working well at all on Windows. It has also had some strange behavior, it is in the middle of being changed dramatically, and its API has changed

So rather than spend weeks battling Windows and trying to get the 25,000-line Sophia to work with it, I decided to write my own KV storage.

I know, I know, not another project. Well, I started it and finished the first version, all in the second half of today. It's only 300 lines, but it is threadsafe and fast, and it uses nothing system-specific, so it will be totally portable.

the benchmarks range from 150,000 to 800,000 iterations of the test loop per second, depending on the type of machine. Each iteration does a kv777_write and a kv777_read.

Still need to make sure it works well on Windows, but if it does, then porting the Sophia usage to kv777 won't take long, and 25,000 lines -> 300 for the KV storage.