
feat: Implement tournament ranking algorithm for action prioritization #15

Open
piotrnowakowski wants to merge 1 commit into main

Conversation

@piotrnowakowski (Contributor) commented Mar 3, 2025

  • Replaced previous qualitative prioritization method with tournament ranking
  • Updated output generation to use tournament ranking for both adaptation and mitigation actions
  • Removed detailed explanations and replaced with generic ranking explanation to maintain same data structure
  • Maintained existing output file structure and naming conventions

Description by Korbit AI

What change is being made?

Implement a tournament ranking algorithm for action prioritization, replacing previous quantitative score-based rankings across various city-specific adaptation and mitigation JSON datasets.

Why are these changes being made?

This modification aims to standardize the ranking process of adaptation and mitigation actions using a consistent tournament ranking approach, improving the prioritization mechanism by providing a uniform rationale ("Ranked by tournament ranking algorithm") across cities such as Camaçari, Corumbá, Caxias do Sul, and Maranguape. The change is intended to enhance clarity and objectivity in the decision-making process for stakeholders reviewing climate action plans.

Is this description stale? Ask me to generate a new description by commenting /korbit-generate-pr-description
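
For orientation, a minimal sketch of the ranking loop the description above refers to, assuming a `single_elimination_bracket(remaining, city)` helper like the one reviewed below that returns the bracket winner plus the eliminated actions. The body is illustrative, not the PR's actual code:

```python
def tournament_ranking(actions, city, max_rank=20):
    """Assign ranks 1..max_rank by running repeated single-elimination brackets.

    Each bracket's winner takes the next rank; the losers re-enter the next
    bracket, so rank #2 is contested by everyone rank #1 eliminated, and so on.
    """
    ranking = []
    remaining = list(actions)
    rank = 1
    while remaining and rank <= max_rank:
        winner, losers = single_elimination_bracket(remaining, city)
        if winner is None:
            break  # the PR leaves open (see the TODO below) whether this should be an error
        ranking.append((rank, winner))
        remaining = losers
        rank += 1
    return ranking
```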


@korbit-ai (bot) left a comment


Review by Korbit AI

Korbit automatically attempts to detect when you fix issues in new commits.
| Category | Issue | Fix Detected |
| --- | --- | --- |
| Error Handling | Incomplete Tournament Error Handling | |
| Performance | Missing ML Comparison Result Caching | |
Files scanned: prioritizer/prioritizer.py


Comment on lines +445 to +452

```python
while remaining and current_rank <= 20:
    print(f"\n--- Running bracket for rank #{current_rank} with {len(remaining)} actions ---")
    winner, losers = single_elimination_bracket(remaining, city)

    if not winner:
        # TODO: is there a normal case where this can happen, or should this be an error?
        print("No winner found, breaking")
        break
```

Incomplete Tournament Error Handling (category: Error Handling)

What is the issue?

The tournament_ranking function has an unresolved TODO about error handling when no winner is found, leaving uncertainty about proper error handling.

Why this matters

Without proper error handling, the tournament might terminate prematurely or produce incomplete rankings without clear indication of failure.

Suggested change

Implement proper error handling for the no-winner case:

```python
while remaining and current_rank <= 20:
    winner, losers = single_elimination_bracket(remaining, city)

    if not winner:
        print(f"Error: Tournament failed at rank {current_rank}")
        if not full_ranking:
            raise RuntimeError("Tournament failed without producing any rankings")
        return full_ranking  # Return partial results if we have any
```




```python
# Use your ML model to compare
result = ML_compare(actionA, actionB, city)
```

Missing ML Comparison Result Caching (category: Performance)

What is the issue?

No caching of ML comparison results between the same action pairs.

Why this matters

Redundant ML model calls for previously compared action pairs waste computational resources and increase API costs.

Suggested change

Implement a comparison cache to store and reuse ML comparison results:

```python
comparison_cache = {}

def cached_ML_compare(actionA, actionB, city):
    # Key on the unordered action pair plus the city, since ML_compare
    # takes the city and its results are city-specific
    pair_key = (city, *sorted([actionA['ActionID'], actionB['ActionID']]))
    if pair_key not in comparison_cache:
        comparison_cache[pair_key] = ML_compare(actionA, actionB, city)
    return comparison_cache[pair_key]
```


@mircorudolph (Collaborator) left a comment


Thank you. Looks good

```
@@ -329,31 +328,211 @@ def filter_actions_by_biome(actions, city):
]


def ML_compare(actionA, actionB, city):
```
Collaborator


Why do you have this wrapper? It essentially just returns the result of my ml_compare implementation, so you could use that directly here.
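
For illustration, the suggested simplification, assuming the existing ml_compare has the same (actionA, actionB, city) signature as the wrapper:

```python
# Call the existing implementation directly instead of going through
# the pass-through ML_compare wrapper (signature assumed identical):
result = ml_compare(actionA, actionB, city)
```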

```python
if wildcard:
    winners.append(wildcard)
    print(f" Wildcard {wildcard.get('ActionName', 'Unknown')} automatically advances")
```

Collaborator


I think it's better to use the action_id instead of the name. The id is easier for identifying an action than the name.
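
A sketch of the suggested change, assuming each action dict carries an ActionID key:

```python
print(f" Wildcard {wildcard.get('ActionID', 'Unknown')} automatically advances")
```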

```python
# If exactly one winner, we found the bracket winner
if len(winners) == 1:
    print(f" Final winner of bracket: {winners[0].get('ActionName', 'Unknown')}")
    return winners[0], losers
```
Collaborator


Same here? action_id vs. ActionName.

print(" No winner found, breaking")
break # no more participants

print(f" Rank #{rank}: {winner.get('ActionName', 'Unknown')}")
Collaborator


We could use ActionName as well, I guess. I just feel action_id is unique and the correct identifier.

```python
wildcard = None
if len(actions) % 2 == 1:
    wildcard = actions.pop()
    print(f" Odd number of actions, wildcard: {wildcard.get('ActionName', 'Unknown')}")
```
Collaborator


same

print(f" Round complete. {len(winners)} winners advancing to next round")

# If exactly one winner, we found the bracket winner
if len(winners) == 1:
Collaborator


I am not sure I understand the tournament ranking completely. In what case would len(winners) == 1?

We pass in a list of x actions. Then we compare 1 against 2, 3 against 4, 5 against 6, and so on, and append all the winners to the winners list. So we have as many winners as we have matches, no?
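
For context on that question: if single_elimination_bracket runs the pairing loop inside an outer per-round loop (which the "Round complete ... advancing to next round" snippet above suggests), the field roughly halves each round, so the check fires once the final round leaves a single action. A minimal sketch, with ML_compare assumed here to return the winning action and the pairing illustrative:

```python
current = list(actions)
losers = []
while len(current) > 1:
    wildcard = current.pop() if len(current) % 2 == 1 else None
    winners = []
    for a, b in zip(current[0::2], current[1::2]):
        w = ML_compare(a, b, city)  # assumed to return the winning action
        winners.append(w)
        losers.append(b if w is a else a)
    if wildcard is not None:
        winners.append(wildcard)  # odd one out advances automatically
    current = winners  # next round: the field roughly halves
# only here is len(current) == 1, i.e. the final round produced a single winner
```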
