Updates handling of scale/replicas parameter in CLI and compose file #1159

Open · wants to merge 3 commits into main from container_scaling_update
Conversation


@me-coder me-coder commented Mar 5, 2025

Purpose

Addresses issue: #267

Contributor Checklist:

If this PR adds a new feature that improves compatibility with docker-compose, please add a link
to the exact part of compose spec that the PR touches.

This PR addresses the missing --scale option for podman-compose up (the code to accommodate it was already present, just broken):
https://docs.docker.com/reference/cli/docker/compose/up/#options

All changes require additional unit tests.

Testing

Test file:

name: podman-compose

services:
  app:
    image: docker.io/library/busybox:latest
    tty: true
    # scale: 3
    # deploy:
    #   mode: replicated
    #   replicas: 2

  app1:
    image: docker.io/library/busybox:latest
    tty: true
    depends_on:
      - app

  app2:
    image: docker.io/library/busybox:latest
    tty: true

  app3:
    image: docker.io/library/busybox:latest
    tty: true

With the compose file:

$ python ./podman_compose.py -f ./test.yml up -d
1283f00b651460df07514d3be6d01e0ea76529782ead0292c977077b72229937
d3b036808fff09a6e107c3ba769ad03a2c024d148581a24bdd5e3128b952d976
663cb01d16c4e7194145b63a306d6e225f7c05b1ff97103ae8e2df57144233b7
8dd9e7f5588ec2d1ecd16565cdf371be8fce87442f902e48bec7bf97276026b4

$ python ./podman_compose.py -f ./test.yml ps
CONTAINER ID  IMAGE                             COMMAND     CREATED        STATUS        PORTS       NAMES
d3b036808fff  docker.io/library/busybox:latest  sh          7 seconds ago  Up 7 seconds              podman-compose_app_1
663cb01d16c4  docker.io/library/busybox:latest  sh          6 seconds ago  Up 6 seconds              podman-compose_app_2
8dd9e7f5588e  docker.io/library/busybox:latest  sh          5 seconds ago  Up 5 seconds              podman-compose_app_3

$ python ./podman_compose.py -f ./test.yml down
podman-compose_app_1
podman-compose_app_3
podman-compose_app_2
podman-compose_app_3
podman-compose_app_2
podman-compose_app_1
1283f00b651460df07514d3be6d01e0ea76529782ead0292c977077b72229937
podman-compose_default

Scale existing services using the CLI:

$ python ./podman_compose.py -f ./test.yml up -d --scale=3
5a76ea112148b71be792cd4c9e77f7b505cbdc2432118eaa2bff8003753b1bb9
ca5b2c05a5198392d64560b63a57e8db0cf2ecdb7f1171f10e800824cc1974d3
8f835d6759a2614c59ee7d21abd0859e6cc96ffec06bdceae3dc55ab096bd488
1fbc4958fa4439d8e0349459f77f1d59026ed3414b815026992611ee8518e141

$ python ./podman_compose.py -f ./test.yml ps
CONTAINER ID  IMAGE                             COMMAND     CREATED        STATUS        PORTS       NAMES
ca5b2c05a519  docker.io/library/busybox:latest  sh          5 seconds ago  Up 5 seconds              podman-compose_app_1
8f835d6759a2  docker.io/library/busybox:latest  sh          4 seconds ago  Up 4 seconds              podman-compose_app_2
1fbc4958fa44  docker.io/library/busybox:latest  sh          3 seconds ago  Up 3 seconds              podman-compose_app_3

$ python ./podman_compose.py -f ./test.yml down
podman-compose_app_3
podman-compose_app_2
podman-compose_app_1
podman-compose_app_3
podman-compose_app_2
podman-compose_app_1
5a76ea112148b71be792cd4c9e77f7b505cbdc2432118eaa2bff8003753b1bb9
podman-compose_default

To-Do

  • Scale-up via the CLI works (although with errors on the command line that should really be warnings), but scale-down doesn't work.
    => That part of the code still needs to be addressed, possibly through a separate issue.
  • The down command (to destroy the scaled containers/replicas) worked only based on the value assigned in the compose yaml file. It needed to be changed to stop all running replicas instead.
    => Done

@me-coder me-coder force-pushed the container_scaling_update branch 2 times, most recently from d007b86 to 4a21e9c Compare March 5, 2025 13:05
@@ -2080,7 +2080,17 @@ def _parse_compose_file(self):
     container_names_by_service = {}
     self.services = services
     for service_name, service_desc in services.items():
-        replicas = try_int(service_desc.get("deploy", {}).get("replicas"), fallback=1)
+        replicas = try_int(args.scale, fallback=1) if "scale" in args else 1
Collaborator:

I think we should use None to signify that no value has been supplied, and then handle the None case at the very end. Right now if explicit 1 value is passed it may be overridden by next steps.

@me-coder (Author) commented Mar 7, 2025:

Thanks @p12tic for the review and suggestion.

I've updated the code to handle the cases:

  1. --scale from the CLI up command
  2. scale from compose yaml file
  3. {"deploy": {"replicas": <NUM>} from compose yaml file

I was tempted to use the fancy new match-case statement, but didn't, to maintain compatibility with pre-3.10 Python versions.

Do let me know if the new code is fine.

Also, I've modified the compose_down function to behave more closely to the docker compose down command:
8036f67
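The three-way precedence described in this comment could be sketched roughly as follows. This is a minimal sketch, not the PR's actual code: `try_int` mirrors the helper visible in the diff above, while `resolve_replicas` and its arguments are hypothetical names used for illustration.

```python
def try_int(value, fallback=None):
    """Best-effort int conversion, mirroring the helper used in podman_compose.py."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return fallback

def resolve_replicas(cli_scale, service_desc):
    # Highest precedence: --scale from the CLI `up` command.
    replicas = try_int(cli_scale, fallback=None)
    # Next: the `scale:` key in the compose yaml file.
    if replicas is None:
        replicas = try_int(service_desc.get("scale"), fallback=None)
    # Then: deploy.replicas from the compose yaml file.
    if replicas is None:
        replicas = try_int(service_desc.get("deploy", {}).get("replicas"), fallback=None)
    # Default of 1 is applied only at the very end, so an explicit
    # value of 1 at any earlier step is never overridden (per review).
    return 1 if replicas is None else replicas
```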

@me-coder me-coder force-pushed the container_scaling_update branch 7 times, most recently from a17250b to 0ed56ed Compare March 7, 2025 20:02
@me-coder me-coder requested a review from p12tic March 8, 2025 06:39
)
)
await asyncio.gather(*down_tasks)
await compose.podman.run([], "stop", [*podman_stop_args, cnt["name"]])
Collaborator:

This will wait for all services to stop one by one. This is problematic in the case of a large number of services; one deployment I have has 40 services, for example.

@me-coder (Author) commented Mar 12, 2025:

They would stop in parallel, as await yields from compose.podman.run([], "stop", [*podman_stop_args, cnt["name"]]). So rather than waiting for asyncio.gather to schedule the future tasks/coroutines, here they get scheduled as soon as they are passed, and control returns.

However, the concern is valid: it won't wait for all the stop tasks to finish before proceeding to await compose.podman.run([], "rm", [cnt["name"]]). I'll try to fix it. (Ideally this should be upgraded to TaskGroup, but that would mandate an upgrade to Python 3.11, which is a compatibility concern.)

@me-coder (Author) commented:

Done. Updated the code to use asyncio.gather for stop and rm command tasks.
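The fix described above, stopping all containers concurrently and only then removing them, could look roughly like this. This is a sketch under assumptions: `run_podman` stands in for `compose.podman.run`, and `stop_and_remove` is a hypothetical name, not the PR's actual function.

```python
import asyncio

async def stop_and_remove(containers, run_podman, podman_stop_args):
    """Stop all containers in parallel, then remove them in parallel.

    `run_podman` is assumed to have the same (args, command, xargs)
    signature as compose.podman.run in podman_compose.py.
    """
    # First wave: stop every container concurrently and wait for all to finish.
    await asyncio.gather(
        *(run_podman([], "stop", [*podman_stop_args, cnt["name"]]) for cnt in containers)
    )
    # Second wave: only after every stop has completed, remove them concurrently.
    await asyncio.gather(
        *(run_podman([], "rm", [cnt["name"]]) for cnt in containers)
    )
```

The two `gather` calls give the barrier the reviewer asked for without requiring `asyncio.TaskGroup` (and therefore Python 3.11).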

@p12tic (Collaborator) commented Mar 11, 2025:

No need to add (Issue: containers#267) in commit description.

@me-coder me-coder force-pushed the container_scaling_update branch 2 times, most recently from c82c1c3 to 43d93cd Compare March 12, 2025 06:48
@me-coder me-coder force-pushed the container_scaling_update branch from 43d93cd to b2f1b87 Compare March 12, 2025 06:49
@p12tic (Collaborator) commented Mar 19, 2025:

Overall code looks good.

We need integration tests. You've already done most of the work and provided compose file and expected output in the PR description. What's left is to automate it. Check out tests/integration for examples of how this can be done.

Updates to the compose_down command to match docker compose behavior should probably live in another PR, because that change is unrelated. This would make it easier to land the "Updates handling of scale/replicas through CLI & compose file" commit.

Speaking of the compose_down updates, they also need tests and release notes (probably several, because more than one thing is fixed). I would also say that the commit itself should be split into several, because right now it's hard to understand what the commit is trying to achieve. It clearly improves something, but it contains several improvements merged into one.
