Run node on host or single container #26
@endersonmaia I have a few questions regarding the subtasks.
Is it better to add a unique prefix to each service? Or do we just need to make sure that the same variable has the same name across all services?
I don't know if I agree with this. Couldn't you just assign port 0 for the health check so the system assigns a random port to it?
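The "port 0" idea works because binding to port 0 asks the OS to assign any free port, which can then be read back and reported. A minimal sketch in Rust (function name is illustrative, not from the codebase):

```rust
use std::net::{SocketAddr, TcpListener};

/// Bind a listener on an OS-assigned free port and return its address.
fn bind_random_port() -> SocketAddr {
    // Port 0 asks the kernel to pick any free port.
    let listener = TcpListener::bind("127.0.0.1:0").expect("bind failed");
    listener.local_addr().expect("no local addr")
}

fn main() {
    let addr = bind_random_port();
    println!("health check would listen on port {}", addr.port());
}
```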
Do you have a guideline for the log format?
Is this necessary for the task? Wouldn't it be sufficient to release a Docker image with all the service binaries?
Good examples are
I need to know the port to configure this from the outside (a Kubernetes manifest, or docker-compose). But maybe this falls into the same problem of prefixes, we need a
This could be another issue, but it's great to be able to download binary releases directly from the GitHub Release page, if you need to run this yourself, without the container stuff.
The services are not meant to be used on their own since you need a lot of configuration, so I think it is a very particular use case. We probably should create a separate issue to discuss that.
Another example: dispatcher
state-server
@endersonmaia Is it a good idea to have all of the services in just one Docker image?
1. How should they work together?
2. What if one of the services crashes? Should we restart the whole container, or can we just restart that specific service? Maybe we need to use a process management tool like Supervisord.
3. What if one of our services forks into multiple processes? (For example, the Apache web server starts multiple worker processes.)
4. What about our services' dependencies like
There's nothing bad about it. :)
They should work the same.
Each service should be resilient enough not to depend on external supervision/orchestration. It's each service's responsibility to retry failing connections with some retry/backoff/timeout logic until it finally fails/exits, with good logs explaining the reason for the failure. The supervisor/orchestrator should also have its own retry/backoff/timeout configuration.
Our services already do that, there's no issue here.
These dependencies can be managed by the supervisor/scheduler used. I'm experimenting with s6-overlay for the single-container approach; someone could try systemd if they need to, and solve this there. We're not going to stop releasing container images for each service like we do now, we're only going to have other options, a single container being one of them.
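The retry/backoff/timeout logic described above can be sketched as exponential backoff around a fallible operation. This is an illustration, not the services' actual implementation; the helper name and parameters are invented:

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry `op` with exponential backoff, giving up after `max_attempts`.
/// Illustrative sketch of retry/backoff logic, not the real services' code.
fn retry_with_backoff<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    max_attempts: u32,
    base_delay: Duration,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            // Out of attempts: surface the error so the caller can log and exit.
            Err(e) if attempt + 1 >= max_attempts => return Err(e),
            Err(_) => {
                // Delay doubles each attempt: base, 2*base, 4*base, ...
                sleep(base_delay * 2u32.pow(attempt));
                attempt += 1;
            }
        }
    }
}

fn main() {
    let mut calls = 0;
    // Simulate a flaky connection that fails twice, then succeeds.
    let result = retry_with_backoff(
        || {
            calls += 1;
            if calls < 3 { Err("connection refused") } else { Ok(calls) }
        },
        5,
        Duration::from_millis(1),
    );
    assert_eq!(result, Ok(3));
}
```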
@gligneul it just occurred to me that all services will see the One suggestion would be to have RUST_LOG by default, but that would be overwritten by So, if I want to define the log level globally, I could use In case I want to define a specific service, I could use
@endersonmaia RUST_LOG is a variable from Rust, I'm not sure if we can change it. And even if we can, I'm not sure we should. You can already set the log level for specific services by specifying the given Rust module. For instance: RUST_LOG="dispatcher=trace,advance_runner=trace", and so on.
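For context, the RUST_LOG value is a comma-separated list of `module=level` directives, optionally with a bare level acting as the global default. The simplified parser below illustrates that syntax; it is not the real env_logger/tracing filter implementation:

```rust
use std::collections::HashMap;

/// Parse a RUST_LOG-style string ("level,module=level,...") into per-module
/// levels plus an optional global default. Simplified illustration only.
fn parse_directives(spec: &str) -> (HashMap<String, String>, Option<String>) {
    let mut per_module = HashMap::new();
    let mut default = None;
    for directive in spec.split(',') {
        match directive.split_once('=') {
            // "dispatcher=trace" sets the level for one module.
            Some((module, level)) => {
                per_module.insert(module.to_string(), level.to_string());
            }
            // A bare level with no module sets the global default.
            None => default = Some(directive.to_string()),
        }
    }
    (per_module, default)
}

fn main() {
    let (modules, default) =
        parse_directives("info,dispatcher=trace,advance_runner=trace");
    assert_eq!(modules.get("dispatcher").map(String::as_str), Some("trace"));
    assert_eq!(default.as_deref(), Some("info"));
}
```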
Yeah, that's why I suggest exposing Nice that I can define the service in
I agree that
That is not much different from specifying multiple variables. You can still set the default one in RUST_LOG.
I disagree. I prefer the explicit:
Than reading this:
Maybe it's a matter of taste, IDK.
It depends on who the user is. For the application developer it will be something very simple like
and we will decide what to do.
Yes, it looks better, but we would have to implement this logic by hand. RUST_LOG already works out of the box and provides the functionality that we need, even though it looks a bit ugly.
We merged the health check improvement to allow the configuration of multiple services. @endersonmaia, is there anything else that needs to be prioritized on our side?
@gligneul nothing that I can think of right now. I'll test these new health-check options at |
@gligneul We could have a graceful shutdown (preStop hook) config too. This would help us manage the service lifecycle better than we do now.
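In Kubernetes, a preStop hook is configured per container in the pod spec. A hedged sketch of what that could look like for one of the services (the container name, delay, and command are illustrative assumptions, not from this project):

```yaml
# Hypothetical pod spec fragment; names and values are illustrative.
spec:
  terminationGracePeriodSeconds: 30
  containers:
    - name: dispatcher
      lifecycle:
        preStop:
          exec:
            # Give the service a moment to drain work before SIGTERM arrives.
            command: ["/bin/sh", "-c", "sleep 5"]
```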
This issue has spawned a lot of interesting discussions and issues, but it has become quite confusing. I'll be closing it now, but I've created #80 to tackle its original proposal.
📚 Context
The off-chain services that compose the cartesi-node solution are currently released as one container image per service, and if you deploy each of these services as a separate container (Docker/Kubernetes), everything works just fine.
But when you need to run this directly on the host or inside a single container, you don't have a release available for that.
Why is this problem relevant?
Depending on the environment where you need to deploy a cartesi-node, you may have restrictions on how to run multiple services and containers and how to make them communicate.
Although containers are the standard, we still need to support those who don't use containers.
✔️ Solution
We could have a container image release with all the services together.
We could have binary releases without the container, so anyone can deploy this on a "plain old server": a VM on a VPS, or bare metal.
📈 Subtasks
linux/amd64