Validate the Observer design #2

Closed · 5 tasks done
bltavares opened this issue Feb 1, 2020 · 10 comments

@bltavares (Owner) commented Feb 1, 2020

This design already seems able to run across multiple threads/tasks, given the colmeia-server implementation.

Is this a good design to implement new servers/clients?

Good:

Bad:

@bltavares (Owner, Author):

Some questions that would also impact this design:

@bltavares (Owner, Author):

The tricky part that led to the creation of the Observer is the circular relationship that Feed and Peer have in the Node implementation [1] [2].

Doing this in Rust is very tricky, so they instead have to live side by side rather than one containing the other.

This led to the creation of the Hypercore, which is itself only the storage layer, and the PeeredHypercore, which hosts the Peer connections and the Hypercore data.

I'm not sure what the alternative is, and I would like help validating whether this is the right approach.
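
To make that concrete, here is a minimal sketch of the side-by-side layout, using invented field names rather than the real colmeia types: the storage layer is shared behind an Arc, and per-connection peer state sits next to it instead of inside it.

```rust
use std::sync::{Arc, Mutex};

// Invented, simplified types for illustration only.
struct Hypercore {
    // storage layer only: the append-only log lives here
    blocks: Vec<Vec<u8>>,
}

struct Peer {
    // per-connection replication state
    remote_length: u64,
}

// Instead of Feed and Peer holding references to each other (hard to express
// with Rust ownership), the peers live *next to* the shared storage.
struct PeeredHypercore {
    feed: Arc<Mutex<Hypercore>>,
    peers: Vec<Peer>,
}

fn main() {
    let feed = Arc::new(Mutex::new(Hypercore { blocks: Vec::new() }));
    let peered = PeeredHypercore {
        feed: feed.clone(),
        peers: vec![Peer { remote_length: 0 }],
    };
    println!(
        "{} peers over a feed with {} blocks",
        peered.peers.len(),
        peered.feed.lock().unwrap().blocks.len()
    );
}
```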

@bltavares (Owner, Author) commented Jul 16, 2020

There are two open architecture designs as PRs for colmeia-clone:

Feels more Rust-y, like how .map and .filter are built as structs sharing the same interface, but it does not fit well with the hypercore-protocol design (it requires timeouts on channel loops and the extra tick method to drive the inner loops).

Does not work so well with internal "actors": a stream that has multiple streams inside it.

Would fit better if the design of hypercore-protocol were also a single-level stream, with channel messages and client events on a single stream.

Way more verbose for plumbing code, but it is composable: everything is a stream, and you can wrap it with another stream.

If we can use a single interface like legacy/colmeia-dat1, that means we can compose structs, just like .map().filter() returns a new Stream. So we could do: tcp stream -> protocol stream -> (hypercores -> hyperstores -> hyperdrives) -> your code -> executor spawn, using futures::Stream as the driver.
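
As a rough illustration of that composition style (toy types, not the actual colmeia-dat1 code): a wrapping layer is just another struct implementing futures::Stream, exactly like the adapter structs returned by .map().

```rust
use futures::stream::{Stream, StreamExt};
use std::pin::Pin;
use std::task::{Context, Poll};

// Hypothetical wrapper in the spirit of `.map()`: `Peered` is just another
// struct implementing Stream, so layers can be stacked like adapters.
struct Peered<S> {
    inner: S,
}

impl<S> Stream for Peered<S>
where
    S: Stream<Item = String> + Unpin,
{
    type Item = String;

    fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<String>> {
        // Delegate to the wrapped stream and decorate the item on the way up.
        Pin::new(&mut self.inner)
            .poll_next(cx)
            .map(|item| item.map(|msg| format!("peered: {}", msg)))
    }
}

fn main() {
    futures::executor::block_on(async {
        // tcp stream -> protocol stream -> ... reduced to a single toy layer here.
        let protocol = futures::stream::iter(vec!["feed message".to_string()]);
        let mut top = Peered { inner: protocol };

        // The caller drives everything by looping on `.next()`.
        while let Some(event) = top.next().await {
            println!("{}", event);
        }
    });
}
```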

The state of the peering state machine lives in a wrapping structure. So, to make it work, we create *Peered structs that keep the state required per connection and take an Arc of the hyperdrive. This was confusing at first, the same confusion I had when I learned that .map also produces a struct.

Operations are driven by the top-most code: if the code doesn't loop on .next(), nothing is processed (just like streams/futures/iterators). We would end up badly simulating the executor logic at the top level in order to connect to multiple peers (see the sketch after this list).

  • Can we provide a connection pool in the top-level code to make better use of the executor?
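
One possible shape for that, not taken from either PR: futures::stream::select_all can drive several peer streams from a single top-level loop, which is roughly what a minimal connection pool would look like.

```rust
use futures::stream::{self, StreamExt};

// Minimal sketch: nothing happens unless the top-level loop polls `.next()`.
// `select_all` multiplexes several peer "streams" into one driving loop.
fn main() {
    futures::executor::block_on(async {
        // Pretend each stream is one peered connection.
        let peer_a = stream::iter(vec!["a: block 0", "a: block 1"]);
        let peer_b = stream::iter(vec!["b: block 0"]);

        let mut all_peers = stream::select_all(vec![peer_a, peer_b]);

        // The driver: no `.next()`, no progress.
        while let Some(event) = all_peers.next().await {
            println!("handled {}", event);
        }
    });
}
```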

The on_finalize method adds an extra lifecycle to structs that is not checked by the compiler (e.g. not calling on_finalize before dropping the adapter). External libraries have an easy extension point, though, as they can implement the trait themselves and override or extend the code by wrapping the struct with their own (sketched below).

  • Is there an easy "decorator" macro that delegates everything down to a field, except for our extensions?
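
A toy version of that extension point (the trait and names are invented, not the real colmeia API): external code wraps the struct, delegates down, and overrides only what it needs, while nothing forces the final on_finalize call.

```rust
// Hypothetical trait in the spirit of the adapter described above: the
// compiler will not force callers to run `on_finalize` before dropping.
trait Observer {
    fn on_message(&mut self, msg: &str);
    fn on_finalize(&mut self) {}
}

struct Hypercore;

impl Observer for Hypercore {
    fn on_message(&mut self, msg: &str) {
        println!("hypercore got {}", msg);
    }
}

// An external crate can extend behaviour by wrapping and delegating.
struct Logged<T>(T);

impl<T: Observer> Observer for Logged<T> {
    fn on_message(&mut self, msg: &str) {
        println!("log: {}", msg); // extension
        self.0.on_message(msg);   // delegate down
    }
    fn on_finalize(&mut self) {
        self.0.on_finalize();
    }
}

fn main() {
    let mut adapter = Logged(Hypercore);
    adapter.on_message("block 0");
    adapter.on_finalize(); // easy to forget: not enforced by the compiler
}
```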

The network code lives "outside" the hypercore struct; you make it peered by wrapping it into the state machine:
PeeredHypercore(hypercore).next().await

After a lot of iterations, this landed on a design very similar to what hypercore-protocol uses internally. It uses channels to pass data from lower layers running on other tasks back to the hyperdrive. The design of hypercore-protocol is more elegant, with the Client dealing with control-level events and a Channel dealing with multiplexed messages per feed, but that requires a background task to drive the channelized events.
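
A stripped-down sketch of that channel-based shape, using futures::channel::mpsc with made-up message contents: a lower-layer task pushes events into a sender and the hyperdrive side drains the receiver.

```rust
use futures::channel::mpsc;
use futures::{SinkExt, StreamExt};

fn main() {
    futures::executor::block_on(async {
        let (mut tx, mut rx) = mpsc::channel::<String>(8);

        // Stand-in for a lower-layer peer task feeding data upward.
        let producer = async move {
            for i in 0..3 {
                tx.send(format!("feed message {}", i)).await.unwrap();
            }
            // Dropping `tx` here closes the channel for the consumer.
        };

        // Stand-in for the hyperdrive loop consuming channelized events.
        let consumer = async move {
            while let Some(event) = rx.next().await {
                println!("hyperdrive received: {}", event);
            }
        };

        futures::join!(producer, consumer);
    });
}
```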

The plumbing code is much more straightforward to write, and it is much less verbose. The code looks much more like the JS version, fitting into a single file.

It seems less composable: if we want to send information up, out of the spawn, we have to ask the upstream code to provide a channel sender as an argument. Channels don't compose as nicely as Streams.

It is more performant, as it does not need the same timeout workaround to use hypercore-protocol. This would not be an issue if the protocol were a single stream, though.

It ties us to a single executor ecosystem, and every level has to have its own spawns.
executor(your code) -backchannels-> spawn(tcp stream -> protocol stream -> executor(hyperdrive task -> hypercore tasks))

The state of the peering connection lives in the async function, without an extra (manual) struct. The state machine is encoded as a big match inside the function as we loop. It uses continue; and break; to control whether or not we stop peering, instead of ? (both styles are sketched below).

  • Could we use ? somehow if we refactor it?
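
For comparison, a toy state machine in both styles (invented Message variants, not the real protocol messages): the current continue/break loop next to one possible ?-based refactor where each handler returns a Result.

```rust
// Hypothetical message type; the real protocol has many more variants.
enum Message {
    Have(u64),
    Data(Vec<u8>),
    Close,
}

// Current style (sketch): one big match inside the loop, with
// `continue`/`break` deciding whether peering keeps going.
fn run_loop(messages: Vec<Message>) {
    for msg in messages {
        match msg {
            Message::Have(index) => {
                println!("peer has block {}", index);
                continue;
            }
            Message::Data(bytes) => {
                println!("got {} bytes", bytes.len());
                continue;
            }
            Message::Close => break,
        }
    }
}

// One possible refactor: each step returns a Result, so `?` can bubble a
// disconnect out of the loop instead of an explicit `break`.
struct Disconnected;

fn handle(msg: Message) -> Result<(), Disconnected> {
    match msg {
        Message::Have(index) => Ok(println!("peer has block {}", index)),
        Message::Data(bytes) => Ok(println!("got {} bytes", bytes.len())),
        Message::Close => Err(Disconnected),
    }
}

fn run_loop_with_question_mark(messages: Vec<Message>) -> Result<(), Disconnected> {
    for msg in messages {
        handle(msg)?; // `?` replaces the explicit break-on-close
    }
    Ok(())
}

fn main() {
    run_loop(vec![Message::Have(0), Message::Close]);
    let _ = run_loop_with_question_mark(vec![Message::Data(vec![1, 2, 3]), Message::Close]);
}
```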

Much easier to write the startup code and the finalization logic after a peer disconnects. Not so easy to provide extension points for code using the library, unless it receives a sender to emit data back.

Connection code runs in the background, making better use of the executor to operate multiple peers at scale.

The network code can live "inside" the hypercore, just like the JS version, which should be easier to port: task::spawn(async { hypercore.replicate(tcp_stream).await })

@bltavares (Owner, Author):

The code is ugly, but it shows how everything fits together. I'll try to implement colmeia-sync on both and see how they interact in general when building the upper layers.

@bltavares (Owner, Author):

@otaviopace would you want to share some opinions on this ugly code as a potential contributor? :)

@evaporei (Contributor):

I will look into it! I'll just have to read a lot of code hahaha; I will respond in a couple of days 😉

@bltavares (Owner, Author):

Yeah, don't worry :) We can also video chat if you want. I would really love to be able to discuss these trade-offs with someone.

@bltavares (Owner, Author):

Some more reports: the mpsc worker design on #21 made it much simpler to implement the sync process, which finds a peer and exchanges data.

There is no need to use crosschannel-queue and create a "local thread-pool" for the connections. We can simply rely on the executor. That looks promising.

Also, Yosh suggested looking at the design of https://docs.rs/async-sse/, as it also uses mpsc workers. That design seems to have appeared in other places, which is a good indicator that this is onto something interesting.
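
A minimal sketch of that worker shape, assuming async-std as the executor and made-up peer names: each connection is just a spawned task, and all of them report back to the sync process over one mpsc channel, with no hand-rolled thread pool.

```rust
use async_std::task;
use futures::channel::mpsc;
use futures::{SinkExt, StreamExt};

fn main() {
    task::block_on(async {
        let (tx, mut rx) = mpsc::channel::<String>(16);

        // One spawned task per peer connection; the executor does the pooling.
        for peer in ["peer-a", "peer-b", "peer-c"] {
            let mut tx = tx.clone();
            task::spawn(async move {
                // Stand-in for the real connect/replicate work.
                tx.send(format!("{} finished exchanging data", peer))
                    .await
                    .ok();
            });
        }
        drop(tx); // close the channel once only the workers hold senders

        // The sync process drains the reports as they arrive.
        while let Some(report) = rx.next().await {
            println!("{}", report);
        }
    });
}
```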

@evaporei (Contributor):

Hello @bltavares, I've taken a look at the code, the Pull Requests, and all the references in this issue. You've put a lot of effort into this! hahahah

Honestly, I have very little experience with async Rust; I haven't used Futures, async/await, tokio, or mio.

So I don't think I can say which version is better, or opine on which is more readable, more Rust-y, etc.

However, if you want, we can do a call to talk about it, but I would probably learn more than help you on the matter hahahaha.

I would really enjoy a conversation about this because, as I said, I haven't dug much into async Rust yet and I find it very interesting 🙂

@bltavares (Owner, Author):

@otaviopace Thank you so much for giving it a try :) I've progressed a bit more on the #21 design if you want to check that out too.

I appreciate you taking the time to try reading my bad code haha.
If you want to chat about async Rust, even if you don't have time to contribute code, let me know and I can share some of what I've learned so far.
