Support threads on local backend #504
Conversation
Walkthrough

The pull request introduces significant changes across multiple files, primarily focused on simplifying resource validation related to thread management in the executor framework.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
executorlib/interactive/shared.py (1)

`552-552`: LGTM! Consider documenting the slots calculation.

The use of `slots_required` in `_wait_for_free_slots` correctly accounts for both cores and threads when checking resource availability.

Consider adding a docstring update to clarify that `cores_requested` now represents total slots (cores * threads):

```diff
 def _wait_for_free_slots(
     active_task_dict: dict,
     cores_requested: int,
     max_cores: Optional[int] = None,
     max_workers: Optional[int] = None,
 ) -> dict:
     """
     Wait for available computing resources to become available.

     Args:
-        cores_requested (int): Number of cores required for executing the next task
+        cores_requested (int): Total number of slots (cores * threads_per_core) required for executing the next task
     """
```
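To make the suggested slot semantics concrete, here is a minimal sketch; the helper name `slots_required` is hypothetical and not taken from the executorlib source:

```python
def slots_required(cores: int, threads_per_core: int = 1) -> int:
    # Total slots a task occupies: each core contributes threads_per_core slots.
    # The default of 1 preserves the previous cores-only behaviour.
    return cores * threads_per_core

print(slots_required(4))     # 4 slots with the default of one thread per core
print(slots_required(4, 2))  # 8 slots when two threads per core are requested
```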
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (4)
- executorlib/interactive/executor.py (0 hunks)
- executorlib/interactive/shared.py (1 hunks)
- executorlib/standalone/inputcheck.py (0 hunks)
- tests/test_shared_input_check.py (0 hunks)
💤 Files with no reviewable changes (3)
- executorlib/interactive/executor.py
- executorlib/standalone/inputcheck.py
- tests/test_shared_input_check.py
🔇 Additional comments (2)
executorlib/interactive/shared.py (2)
`556-556`: LGTM! Verify resource tracking behavior.

The active task dictionary now correctly tracks total resource usage by storing `slots_required` instead of just cores.

Let's verify that the active task dictionary is properly cleaned up:

✅ Verification successful

The search results show that `active_task_dict` is properly managed throughout its lifecycle:

- Initialized as an empty dict
- Updated through `_submit_function_to_separate_process`
- Cleaned up in `_wait_for_free_slots` by filtering out completed tasks
- Final assignment of `slots_required` for resource tracking

LGTM! Resource tracking cleanup is properly implemented.

The active task dictionary has consistent cleanup through dictionary comprehensions that filter out completed tasks (`k.done()`). This ensures proper resource tracking and cleanup when tasks complete, preventing resource leaks. The update with `slots_required` fits well within this pattern.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Search for all places where active_task_dict is modified to ensure consistent cleanup
rg "active_task_dict.*=|active_task_dict\[.*\].*=" -A 2 -B 2
```

Length of output: 2707
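The cleanup pattern described above can be sketched as follows; this is a simplified stand-in for `_wait_for_free_slots`, and the function name and signature here are assumptions, not the actual executorlib code:

```python
from concurrent.futures import Future

def free_completed_slots(active_task_dict: dict, cores_requested: int, max_cores: int):
    # Drop futures that have completed so their slots are released.
    active_task_dict = {k: v for k, v in active_task_dict.items() if not k.done()}
    slots_in_use = sum(active_task_dict.values())
    # Report whether the requested slots now fit within the limit.
    return active_task_dict, slots_in_use + cores_requested <= max_cores

done, pending = Future(), Future()
done.set_result(None)
tasks = {done: 2, pending: 4}
tasks, fits = free_completed_slots(tasks, cores_requested=3, max_cores=8)
print(len(tasks), fits)  # 1 True — the finished task's 2 slots were reclaimed
```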
`549-549`: LGTM! Verify thread count calculation across the codebase.

The calculation of total slots as `cores * threads_per_core` is correct. The default value of 1 for `threads_per_core` maintains backward compatibility.

Let's verify that this is the only place where thread calculations are performed:

✅ Verification successful

Thread calculation is consistently implemented across the codebase.

The thread count calculation in `shared.py` aligns with its usage in the Flux backend implementation. The `threads_per_core` parameter is correctly:

- Used as a multiplier for cores in resource calculation (`shared.py`)
- Passed as `cores_per_task` and `cores_per_slot` in Flux job specifications (`flux.py`)
- Documented in examples with appropriate defaults and recommendations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Search for other thread-related calculations that might need similar updates
rg -l "threads_per_core" | xargs rg "cores.*threads|threads.*cores"
```

Length of output: 771

Script:

```shell
#!/bin/bash
# Check the implementation in flux.py to understand how threads_per_core is used
rg -A 5 -B 5 "_threads_per_core" executorlib/interactive/flux.py
```

Length of output: 1294
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
executorlib/standalone/interactive/spawner.py (2)
Line range hint `246-247`: Fix the SLURM command generation bug.

There's a critical bug in the SLURM command generation where the `--cpus-per-task` option is malformed:

- The equals sign is missing
- There's no space between the option and value

Apply this fix:

```diff
- command_prepend_lst += ["--cpus-per-task" + str(threads_per_core)]
+ command_prepend_lst += ["--cpus-per-task=" + str(threads_per_core)]
```

This ensures the SLURM command is properly formatted as `--cpus-per-task=N`.
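As a quick check of the corrected flag format, a sketch of how the prepend list might be built; the helper name and the `threads_per_core > 1` guard are assumptions for illustration, not the spawner's actual logic:

```python
def slurm_command_prepend(threads_per_core: int = 1) -> list:
    command_prepend_lst = []
    if threads_per_core > 1:
        # Corrected form: flag and value joined with '=' so SLURM parses it as one option.
        command_prepend_lst += ["--cpus-per-task=" + str(threads_per_core)]
    return command_prepend_lst

print(slurm_command_prepend(4))  # ['--cpus-per-task=4']
print(slurm_command_prepend())   # [] — default adds no flag
```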
Line range hint `1-262`: Consider adding integration tests for thread support.

While the thread management implementation looks good overall, consider:

- Adding integration tests to verify thread allocation works correctly across different backends (subprocess, MPI, SLURM)
- Documenting any limitations or requirements for thread support in the class docstrings

Would you like me to help create integration test cases for thread management?
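A test along the lines suggested above might look like this; the `generate_command` function here is a hypothetical stub standing in for the spawner's real command generation, not executorlib code:

```python
import unittest

def generate_command(cores: int, threads_per_core: int = 1) -> list:
    # Stand-in for a SLURM spawner's command generation.
    cmd = ["srun", "-n", str(cores)]
    if threads_per_core > 1:
        cmd += ["--cpus-per-task=" + str(threads_per_core)]
    return cmd

class TestThreadSupport(unittest.TestCase):
    def test_default_omits_thread_flag(self):
        # With the default of one thread per core, no extra flag is added.
        self.assertNotIn("--cpus-per-task=1", generate_command(2))

    def test_thread_flag_formatting(self):
        # The flag must include the '=' separator to be valid for SLURM.
        self.assertIn("--cpus-per-task=2", generate_command(4, threads_per_core=2))

if __name__ == "__main__":
    unittest.main()
```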
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
- executorlib/interactive/shared.py (1 hunks)
- executorlib/standalone/interactive/spawner.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- executorlib/interactive/shared.py
🔇 Additional comments (2)
executorlib/standalone/interactive/spawner.py (2)
`175-175`: LGTM: Proper inheritance implementation.

The `threads_per_core` parameter is correctly passed to the parent class, maintaining the inheritance chain.
`60-60`: Consider documenting parameter validation and usage.

The `threads_per_core` parameter is added, but:

- The docstring could specify any validation rules (e.g., must be positive).
- The `_threads_per_core` instance variable isn't used in the `generate_command` method of this class.

Let's verify if there are any validation rules for this parameter:

Also applies to: 68-68, 77-77