Fix PortfolioMetricsUpdateTask refreshing too many project metrics concurrently
#919
Description
Fixes PortfolioMetricsUpdateTask refreshing too many project metrics concurrently.

To avoid a high impact on the application and database, only up to $CPU_CORE_COUNT project metrics are supposed to be refreshed concurrently while the PortfolioMetricsUpdateTask is running. To achieve this, projects are supposed to be partitioned into at most $CPU_CORE_COUNT partitions. Unfortunately, the method used for partitioning divided the list of projects into N partitions of at most $CPU_CORE_COUNT elements each. This caused too many projects/partitions to be processed concurrently, claiming too many database connections from the connection pool.

I was unable to find an off-the-shelf implementation of the desired partitioning logic in any of the libraries we use, so I built a custom one.
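The distinction matters: utilities like Guava's Lists.partition(list, size) split a list into sublists of at most `size` elements, so the number of sublists grows with the list, whereas the task needs a fixed upper bound on the number of partitions. The following is a minimal, illustrative sketch of that logic (class and method names are hypothetical, not the actual code from this PR):

```java
import java.util.ArrayList;
import java.util.List;

public class Partitioner {

    // Divide `items` into at most `maxPartitions` partitions of roughly
    // equal size. Illustrative sketch only, not the PR's implementation.
    static <T> List<List<T>> partition(final List<T> items, final int maxPartitions) {
        final List<List<T>> partitions = new ArrayList<>();
        if (items.isEmpty() || maxPartitions < 1) {
            return partitions;
        }
        // Never create more partitions than there are items.
        final int partitionCount = Math.min(maxPartitions, items.size());
        for (int i = 0; i < partitionCount; i++) {
            partitions.add(new ArrayList<>());
        }
        // Round-robin assignment keeps partition sizes within one element
        // of each other.
        for (int i = 0; i < items.size(); i++) {
            partitions.get(i % partitionCount).add(items.get(i));
        }
        return partitions;
    }

    public static void main(String[] args) {
        // 10 projects on a 4-core machine: 4 partitions (sizes 3, 3, 2, 2),
        // instead of 3 partitions of up to 4 elements each as produced by
        // size-based partitioning.
        final List<Integer> projects = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            projects.add(i);
        }
        final List<List<Integer>> partitions = Partitioner.partition(projects, 4);
        System.out.println(partitions.size()); // prints 4
    }
}
```

With this shape, the number of concurrently processed partitions can never exceed $CPU_CORE_COUNT, which caps the number of database connections the task can claim at once.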
Addressed Issue
N/A
Additional Details
N/A
Checklist
- This PR implements an enhancement, and I have provided tests to verify that it works as intended
- This PR introduces changes to the database model, and I have updated the migration changelog accordingly
- This PR introduces new or alters existing behavior, and I have updated the documentation accordingly