Validate the Observer design #2

This design seems to be able to run multi-threaded tasks already, given the colmeia-server implementation. Is this a good design to implement new servers/clients?

Good:
Bad:

Comments
Some questions that would also impact this design:
The tricky part that led to the creation of the Observer is the circular relationship between the two pieces. Expressing that in Rust is very tricky, so they instead have to live side by side rather than one containing the other. That is what led to the creation of the Observer. I'm not sure what the alternative is, and I would like help validating whether this is the right approach.
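To make the side-by-side idea concrete, here is a minimal sketch of an observer-style decoupling. The names (`Observer`, `Feed`, `Protocol`) are illustrative, not the actual colmeia types: the point is that the protocol notifies the other piece through a trait instead of the two structs owning each other.

```rust
// Hypothetical names: the protocol reports events through an Observer
// trait, so neither struct needs to own the other.
trait Observer {
    fn on_message(&mut self, message: &[u8]);
}

struct Feed {
    log: Vec<Vec<u8>>,
}

impl Observer for Feed {
    fn on_message(&mut self, message: &[u8]) {
        // React to protocol events without owning the protocol.
        self.log.push(message.to_vec());
    }
}

struct Protocol<O: Observer> {
    observer: O,
}

impl<O: Observer> Protocol<O> {
    fn handle(&mut self, incoming: &[u8]) {
        // The protocol drives the interaction and notifies the observer,
        // avoiding a circular ownership between the two structs.
        self.observer.on_message(incoming);
    }
}

fn main() {
    let mut protocol = Protocol { observer: Feed { log: Vec::new() } };
    protocol.handle(b"hello");
    assert_eq!(protocol.observer.log.len(), 1);
}
```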
There are two open architecture designs as PRs.

The Stream-based design:
- Feels more Rust-y.
- Does not work so well with internal "actors" (a stream that has multiple streams inside of it); it would fit better if we could use a single interface like Stream throughout.
- Way more verbose for plumbing code, but it is composable: everything is a stream, and you can wrap one stream with another (see the sketch after this list).
- The state of the peering state machine lives on a wrapping structure, so to make it work we create wrapping structs.
- Next operations are driven by the top-most code: if that code doesn't loop on the stream, nothing makes progress.
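As a rough illustration of that shape, here is a minimal, self-contained sketch (assuming the `futures` crate; `PeerState` and `PeerStream` are hypothetical names, not the actual PR code). The state machine lives on the wrapping struct, each poll advances it one step, and nothing happens unless the caller keeps looping.

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

use futures::stream::{Stream, StreamExt};

// Hypothetical three-step peering state machine.
enum PeerState {
    Handshake,
    Connected,
    Closed,
}

// The wrapping structure that owns the state machine.
struct PeerStream {
    state: PeerState,
}

impl Stream for PeerStream {
    type Item = &'static str;

    // Each poll advances the state machine by one step.
    fn poll_next(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        match self.state {
            PeerState::Handshake => {
                self.state = PeerState::Connected;
                Poll::Ready(Some("handshake complete"))
            }
            PeerState::Connected => {
                self.state = PeerState::Closed;
                Poll::Ready(Some("message received"))
            }
            PeerState::Closed => Poll::Ready(None),
        }
    }
}

// The top-most code drives everything: if this loop stops polling,
// the state machine makes no progress.
async fn run() {
    let mut peer = PeerStream { state: PeerState::Handshake };
    while let Some(event) = peer.next().await {
        println!("{}", event);
    }
}

fn main() {
    futures::executor::block_on(run());
}
```

The upside is composability: `PeerStream` can itself be wrapped by another `Stream` adapter; the downside is exactly the boilerplate visible in `poll_next`.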
The mpsc worker design (#21):
- The network code lives "outside" the main structure, in spawned tasks. After a lot of iterations this landed on a design very similar to the JS version.
- The plumbing code is much more straightforward to write, and it is much less verbose. The code looks much more like the JS version, fitting all in a single file.
- It seems to not be so composable: if we want to provide information up, out from the spawn, we have to ask upstream to provide us a channel sender as an argument. Channels don't compose as nicely as Streams.
- It is more performant, as it does not need the same timeout workaround.
- It ties us to a single executor ecosystem, and every level has to have its own spawns.
- The state of the peering connection lives in the async function, without an extra (manual) struct. The state machine is encoded as a big match inside the function, while we loop. It uses mpsc channels to report events.
- Much easier to write the startup code and the finalization logic after a peer disconnects.
- Not so easy to provide extension points for code using the library, unless it receives a sender to emit data back.
- Connection code runs in the background: better use of the executor to operate multiple peers in the background at scale.
- The network code can live "inside" the spawned task (a sketch follows this list).
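A minimal sketch of that shape, under the same caveats (hypothetical names, not the actual #21 code; async-std is assumed as the executor here, which is exactly the single-ecosystem tie mentioned above):

```rust
use futures::channel::mpsc;
use futures::{SinkExt, StreamExt};

enum PeerState {
    Handshake,
    Connected,
    Closed,
}

// The whole peering state lives inside the async fn: no manual wrapper
// struct, just a big match inside the loop, with events reported out
// through an mpsc sender provided by upstream.
async fn peer_worker(mut events: mpsc::Sender<&'static str>) {
    let mut state = PeerState::Handshake;
    loop {
        match state {
            PeerState::Handshake => {
                let _ = events.send("handshake complete").await;
                state = PeerState::Connected;
            }
            PeerState::Connected => {
                let _ = events.send("message received").await;
                state = PeerState::Closed;
            }
            PeerState::Closed => break,
        }
    }
}

async fn run() {
    // Upstream hands the worker a sender; the worker runs in the
    // background while the caller consumes the receiver.
    let (tx, mut rx) = mpsc::channel(8);
    async_std::task::spawn(peer_worker(tx));
    while let Some(event) = rx.next().await {
        println!("{}", event);
    }
}

fn main() {
    async_std::task::block_on(run());
}
```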
The code is ugly, but it shows how everything fits together. I'll try to implement it.
@otaviopace, would you like to share some opinions on this?
I will look into it! I'll just have to read a lot of code hahaha, I will respond in a couple of days 😉
Yeah, don't worry :) We can video chat if you want as well. I would really love to be able to discuss these trade-offs with someone.
Some more reports: the mpsc worker design on #21 was much simpler to implement. There is no need to use the same timeout workaround. Also, Yosh suggested looking at the https://docs.rs/async-sse/ design, as it also uses mpsc workers. That design seems to have appeared in other places too, so that is a good indicator this is onto something interesting.
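One way to keep the mpsc worker internals while still exposing a composable interface (the shape the async-sse design suggests) is to return the receiver itself, since the `futures` mpsc `Receiver` already implements `Stream`. A small sketch, with a hypothetical `peer_events` constructor and async-std again assumed:

```rust
use futures::channel::mpsc;
use futures::{SinkExt, Stream};

// Hypothetical constructor: spawn the worker internally and hand the
// caller a Stream, keeping the channel as an implementation detail.
fn peer_events() -> impl Stream<Item = String> {
    let (mut tx, rx) = mpsc::channel(8);
    async_std::task::spawn(async move {
        // The worker produces events in the background.
        let _ = tx.send(String::from("handshake complete")).await;
    });
    // Receiver<T> already implements Stream<Item = T>, so the channel
    // plumbing stays hidden behind a composable Stream interface.
    rx
}
```

The caller can then `.next().await` the result or wrap it with further stream combinators, recovering some of the composability the raw channel loses.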
Hello @bltavares, so I've taken a look at the code, pull requests, and all the references in this issue. You've put a lot of effort into this! hahahah Honestly, I have very little experience with async Rust; I haven't used Futures, async/await, tokio, or mio. So I don't think I can say which version is better, or opine on which version is more readable, more Rust-y, etc. However, if you want, we can do a call to talk about it, though I would probably learn more than help you on the matter hahahaha. I would really enjoy a conversation about this because, as I said, I haven't dug much into async Rust yet and I find it very interesting 🙂
@otaviopace Thank you so much for giving it a try :) I've progressed a bit more on the #21 design if you want to check that as well. I appreciate you taking the time to read my bad code haha