LZ_security - topic_rewards/SafeApplyFuncOnAllActiveEpochEndingTopics used the wrong parameters #97
Comments
Escalate. This issue should be high.

You've created a valid escalation!
The problem depends heavily on DefaultPageLimit being configured to a large/small value, which isn't as directly exploitable and straightforward as #56 and #60; both of those have more direct attacks, making them higher severity. I would classify this as medium severity.
Excuse me for a silly question, but I want to confirm, since I'm not very familiar with Go: are these limits admin-set parameters?

Yes. DefaultPageLimit, MaxPageLimit and topicPageLimit are set by the admin.
Fair enough, but as I understand it, the code can function perfectly depending on the values set by the admin. Hence, it should be invalid, based on the following rule:

Correct me if I'm wrong and the issue occurs with any admin value. For now, I believe it does not; hence, I'm planning to reject the escalation, because it asks to increase the severity, and to invalidate the issue.
The admin is certainly trusted, but the code uses the wrong parameter: the same trusted parameter is passed twice.
I cannot agree that it is invalid. First, this issue does not assume that the admin did anything wrong; rather, it is about discovering an error in the code. Secondly, I understand your point: despite the error in the code, the admin could potentially mitigate the issue by adjusting the parameters. However, since the number of topics is continuously increasing, the admin cannot fix the problem simply by modifying the DefaultPageLimit parameter.
Firstly, excuse me for the confusion; I didn't imply any mistake by the admin. Secondly:

Can you share such a scenario?
According to the comments in the code (https://github.com/sherlock-audit/2024-06-allora/blob/main/allora-chain/x/emissions/module/rewards/topic_rewards.go#L206-L213):

```go
// default page limit for the max because default is 100 and max is 1000
// 1000 is excessive for the topic query
```

Let's assume that the initial value of DefaultPageLimit is 100, and the number of topics has already reached 9990. The admin notices that the number of topics will soon exceed DefaultPageLimit * DefaultPageLimit = 100 * 100 = 10,000.
Oh, excuse me for not noticing the code comments. But the code comments indicate that it is the exact intention to use the Default limit instead of the Max limit. In this case, I believe it's not an issue or a mistake; it's the exact design of the protocol. Hence, this should remain invalid. Planning to reject the escalation, since it asked to increase the severity, and to invalidate the issue.
We're back to the starting point of the issue. The issue we pointed out is not about using the default limit instead of the max limit:

```go
err = SafeApplyFuncOnAllActiveEpochEndingTopics(..., moduleParams.DefaultPageLimit, moduleParams.DefaultPageLimit)
```

This is also why the sponsor confirmed the issue and will fix it. When I gave the scenario example, I mentioned the max limit and pointed out that even if DefaultPageLimit is set to the max limit, the issue could still arise.
But doesn't the report say that the exact problem is that the Default limit is used as the Max limit?

Then the report describes different scenarios where the Default limit equals or is bigger than the max limit. So, after reading the report, I believe the problem is exactly in the Default limit being used instead of the max limit. The report doesn't say that the problem is in both parameters; only the problem in one is mentioned. Additionally, if we use the max instead of the default, i.e. the correct parameter, the issue is even more likely to occur, since max is expected to be 10 times larger than the default. Hence, my decision remains the same. The code comments indicate that the default is used instead of Max intentionally, and the report talks about this problem, not about another parameter. Hence, planning to reject the escalation since it asked for higher severity, but I will invalidate this issue.
I think you misunderstood the issue a bit.

```go
func SafeApplyFuncOnAllActiveEpochEndingTopics(
	ctx sdk.Context,
	k keeper.Keeper,
	block BlockHeight,
	fn func(sdkCtx sdk.Context, topic *types.Topic) error,
	topicPageLimit uint64,
	maxTopicPages uint64,
)
```

There are two parameters related to the page.
To clarify, I believe using DefaultPageLimit for topicPageLimit is appropriate and works correctly, because we just use the default number of topics in one page. I see how you're saying that using DefaultPageLimit for maxTopicPages is not correct. But that's the intention, as evidenced by these comments. I understand your concern that it uses the default number of topics in one page as the max number of pages. But I believe that is exactly what the comments mean. Hence, I still believe it's intended. Planning to reject the escalation since it asked for higher severity, but I will invalidate this issue.
I cannot agree with that point.

1. DefaultPageLimit is 100; when the number of topics is 10,001 or greater, GetAndUpdateActiveTopicWeights() will miss active topics.

2. Let's take another look at what happens inside the SafeApplyFuncOnAllActiveEpochEndingTopics() function:

```go
// Apply a function on all active topics that also have an epoch ending at this block
// Active topics have more than a globally-set minimum weight, a function of revenue and stake
// "Safe" because bounded by max number of pages and apply running, online operations.
func SafeApplyFuncOnAllActiveEpochEndingTopics(
	ctx sdk.Context,
	k keeper.Keeper,
	block BlockHeight,
	fn func(sdkCtx sdk.Context, topic *types.Topic) error,
	topicPageLimit uint64,
	maxTopicPages uint64,
) error {
	topicPageKey := make([]byte, 0)
	i := uint64(0)
	for {
@>>		topicPageRequest := &types.SimpleCursorPaginationRequest{Limit: topicPageLimit, Key: topicPageKey}
@>>		topicsActive, topicPageResponse, err := k.GetIdsOfActiveTopics(ctx, topicPageRequest)
		if err != nil {
			Logger(ctx).Warn(fmt.Sprintf("Error getting ids of active topics: %s", err.Error()))
			continue
		}
		for _, topicId := range topicsActive {
			topic, err := k.GetTopic(ctx, topicId)
			if err != nil {
				Logger(ctx).Warn(fmt.Sprintf("Error getting topic: %s", err.Error()))
				continue
			}
			if k.CheckCadence(block, topic) {
				// All checks passed => Apply function on the topic
				err = fn(ctx, &topic)
				if err != nil {
					Logger(ctx).Warn(fmt.Sprintf("Error applying function on topic: %s", err.Error()))
					continue
				}
			}
		}
		// if pageResponse.NextKey is empty then we have reached the end of the list
		if topicsActive == nil || i > maxTopicPages {
			break
		}
		topicPageKey = topicPageResponse.NextKey
		i++
	}
	return nil
}
```

DefaultPageLimit is mainly applied in the GetIdsOfActiveTopics() function, which also has an O(n²) sorting operation, based on the size of the limit, within the function:

```go
func (k Keeper) GetIdsOfActiveTopics(ctx context.Context, pagination *types.SimpleCursorPaginationRequest) ([]TopicId, *types.SimpleCursorPaginationResponse, error) {
	limit, start, err := k.CalcAppropriatePaginationForUint64Cursor(ctx, pagination)
	if err != nil {
		return nil, nil, err
	}
	startKey := make([]byte, binary.MaxVarintLen64)
	binary.BigEndian.PutUint64(startKey, start)
	nextKey := make([]byte, binary.MaxVarintLen64)
	binary.BigEndian.PutUint64(nextKey, start+limit)
@>>	rng, err := k.activeTopics.IterateRaw(ctx, startKey, nextKey, collections.OrderAscending)
	if err != nil {
		return nil, nil, err
	}
	activeTopics, err := rng.Keys()
	if err != nil {
		return nil, nil, err
	}
	defer rng.Close()
	// If there are no topics, we return the nil for next key
	if activeTopics == nil {
		nextKey = make([]byte, 0)
	}
	return activeTopics, &types.SimpleCursorPaginationResponse{
		NextKey: nextKey,
	}, nil
}
```

This is also why the dev team doesn't directly sort all totalTopics at once, but instead breaks them into multiple pages for processing: because the page limit is relatively small. The second parameter, maxTopicPages, can be set relatively large (like 1000) because it does not increase the time complexity of the sorting algorithm. Additionally, when the number of topics is insufficient, topicsActive == nil will directly break the loop:

```go
if topicsActive == nil || i > maxTopicPages {
	break
}
```
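The pagination bound described above can be simulated with a small self-contained sketch. Here visitTopics is a hypothetical stand-in for the loop in SafeApplyFuncOnAllActiveEpochEndingTopics (no keeper or store, just the counting logic, with the same post-page break check), so it approximates rather than reproduces the real function:

```go
package main

import "fmt"

// visitTopics walks totalTopics in pages of pageLimit and stops once the
// page index exceeds maxPages (checked after each page, as in the real
// loop) or a page comes back empty. It returns how many topics it visited.
func visitTopics(totalTopics, pageLimit, maxPages uint64) uint64 {
	visited := uint64(0)
	start := uint64(0)
	i := uint64(0)
	for {
		// Fetch one page of topic ids.
		end := start + pageLimit
		if end > totalTopics {
			end = totalTopics
		}
		pageSize := end - start
		visited += pageSize
		// Mirrors: if topicsActive == nil || i > maxTopicPages { break }
		if pageSize == 0 || i > maxPages {
			break
		}
		start = end
		i++
	}
	return visited
}

func main() {
	// Buggy arguments: DefaultPageLimit (100) passed for both parameters.
	// Topics beyond roughly pageLimit*maxPages are silently skipped.
	fmt.Println(visitTopics(20000, 100, 100)) // 10200 of 20000 visited
	// A large, dedicated max-pages value reaches every topic.
	fmt.Println(visitTopics(20000, 100, 1000)) // 20000
}
```

With 20,000 active topics and both parameters set to 100, nearly half the topics never have the reward function applied to them in that pass.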
We did intentionally set it that way. I would flag it as a confirmed bug, a low or medium severity issue. It is true that our administrative parameters can control this, but it also conflates the meaning of DefaultPageLimit; it's bad practice, and we should be setting this more carefully with a separate parameter that is tunable for this particular processing loop alone.
In that case, I agree that it should be a valid bug, but I still believe it's medium severity. The admin can partially control the situation, but the issue would arise regularly, based on this comment. Planning to reject the escalation and leave the issue as it is.
Result:

Escalations have been resolved successfully! Escalation status:
The protocol team fixed this issue in the following PRs/commits: |
LZ_security
Medium
topic_rewards/SafeApplyFuncOnAllActiveEpochEndingTopics used the wrong parameters
Summary
SafeApplyFuncOnAllActiveEpochEndingTopics used the wrong parameters, which can lead to abci processing too few or too many topics at a time; handling too many topics can, in the worst case, result in a halt of the chain.
Vulnerability Detail
In topic_rewards.go, the GetAndUpdateActiveTopicWeights function calls the SafeApplyFuncOnAllActiveEpochEndingTopics function; the last two parameters of SafeApplyFuncOnAllActiveEpochEndingTopics are topicPageLimit and maxTopicPages. The number of topics that can be processed at a time is limited to topicPageLimit * maxTopicPages.
But the problem is that the call to the SafeApplyFuncOnAllActiveEpochEndingTopics function used the wrong parameters: the caller passes moduleParams.DefaultPageLimit as maxTopicPages. This limits the number of topics to be processed each time to topicPageLimit * topicPageLimit.
This can cause abci/EndBlocker to process too many topics at a time:

abci/EndBlocker -> rewards.GetAndUpdateActiveTopicWeights -> SafeApplyFuncOnAllActiveEpochEndingTopics -> with wrong parameters
Another problem is that if DefaultPageLimit > MaxPageLimit, the CalcAppropriatePaginationForUint64Cursor function caps the limit at MaxPageLimit. However, if maxTopicPages = DefaultPageLimit, there is no such restriction; since maxTopicPages is used in the outer for loop, the problem is made worse.

Impact

abci/EndBlocker handles too many topics; in the worst case, this results in a halt of the chain.
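The asymmetry between the two parameters can be illustrated with a hedged sketch. Here clampLimit is a hypothetical stand-in for the limit clamping attributed to CalcAppropriatePaginationForUint64Cursor (function shape and default/max values are assumptions, not the actual implementation); the point is that no equivalent clamp exists for maxTopicPages:

```go
package main

import "fmt"

// clampLimit mimics the described behavior: a requested page limit falls
// back to defaultLimit when zero and is capped at maxLimit.
func clampLimit(requested, defaultLimit, maxLimit uint64) uint64 {
	if requested == 0 {
		requested = defaultLimit
	}
	if requested > maxLimit {
		return maxLimit
	}
	return requested
}

func main() {
	maxPageLimit := uint64(1000)
	// A misconfigured DefaultPageLimit of 5000 is clamped per page...
	fmt.Println(clampLimit(5000, 100, maxPageLimit)) // 1000
	// ...but the same value passed as maxTopicPages runs unclamped in the
	// outer loop, so the per-pass work grows with it unchecked.
	fmt.Println(maxPageLimit * 5000) // 5000000 topic slots per pass
}
```

This is why reusing DefaultPageLimit for the page count is riskier than it first appears: one misconfigured value escapes the safety bound that was designed for it.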
Code Snippet
https://github.com/sherlock-audit/2024-06-allora/blob/main/allora-chain/x/emissions/module/rewards/topic_rewards.go#L206-L213
Tool used
Manual Review
Recommendation
Use the correct parameters: pass a dedicated max-pages value for maxTopicPages instead of reusing moduleParams.DefaultPageLimit.