UFixed6 totalAllocation;
for (uint256 i; i < message.markets.length; i++) {
    marketToGroup[owner][message.markets[i]] = message.group;
    _rebalanceConfigs[owner][message.group][message.markets[i]] = message.configs[i];
    groupToMarkets[owner][message.group].push(IMarket(message.markets[i]));
    totalAllocation = totalAllocation.add(message.configs[i].target);
}

if (message.markets.length != 0 && !totalAllocation.eq(UFixed6Lib.ONE))
    revert ControllerInvalidRebalanceTargetsError();
Impact
When a rebalancing configuration is set with duplicate markets, the storage updates collapse onto a single market whose target allocation is less than 100%, while the validation is bypassed because the duplicated targets are summed from the calldata. For example, using [marketA, marketA] with 50% targets each passes validation (the inputs sum to 100%) but stores only a 50% target for marketA.
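To make the accounting concrete, here is a minimal, self-contained sketch (a simplified stand-in, not the Perennial Controller: plain uint256 percentages replace UFixed6, and the group bookkeeping is reduced to a single mapping) that mirrors the pattern of summing targets from calldata while keying the writes by market:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract RebalanceSketch {
    uint256 constant ONE = 1e6; // stand-in for UFixed6Lib.ONE (100%)

    // market => target allocation (stand-in for _rebalanceConfigs)
    mapping(address => uint256) public targetOf;

    error InvalidRebalanceTargets();

    function configure(address[] calldata markets, uint256[] calldata targets) external {
        uint256 totalAllocation;
        for (uint256 i; i < markets.length; i++) {
            // A duplicate market overwrites the same storage slot...
            targetOf[markets[i]] = targets[i];
            // ...but every calldata entry still contributes to the running total.
            totalAllocation += targets[i];
        }
        // Passes for [marketA, marketA] with 500_000 (50%) each: 50% + 50% == ONE,
        // yet targetOf[marketA] ends up at 50%, not 100%.
        if (markets.length != 0 && totalAllocation != ONE) revert InvalidRebalanceTargets();
    }
}

Calling configure with markets = [marketA, marketA] and targets = [500_000, 500_000] succeeds, leaving the group with one market whose stored target is 50% rather than the validated 100%.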
This impacts the rebalancing mechanism in checkGroup, which is used by _rebalanceGroup.
Because the group is configured with stored targets that sum to less than 100% (e.g., 50% via duplicate market inputs), the system will perpetually try to rebalance toward an unreachable state through marketTransfer calls in _rebalanceGroup. This creates a feedback loop in which repeated rebalancing attempts lead to continuous fee extraction and MEV opportunities, since the invalid target allocation can never be achieved. The comment // read from storage to trap duplicate markets indicates this risk was known, but the implementation fails to prevent it.
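Why a 50% stored target can never be satisfied is easiest to see with a simplified model of the per-market check (a hypothetical helper, not checkGroup's or RebalanceLib's actual code; it assumes the imbalance is measured as the market's target share of group collateral versus the collateral the market actually holds, with the rebalance threshold omitted):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical simplification of the per-market rebalance check.
library RebalanceCheckSketch {
    uint256 internal constant ONE = 1e6; // 100%

    function check(
        uint256 groupCollateral,  // total collateral across the group's markets
        uint256 actualCollateral, // collateral currently held by this market
        uint256 target            // stored target, e.g. 50% after the duplicate config
    ) internal pure returns (uint256 desired, bool needsRebalance) {
        desired = groupCollateral * target / ONE;
        needsRebalance = desired != actualCollateral;
    }
}

// With a single market holding all of the group's collateral C and a stored
// target of 50%: desired = 0.5 * C, so a transfer of 0.5 * C is requested.
// Afterwards the market *is* the entire group, so groupCollateral shrinks to
// 0.5 * C and the next check again demands only 50% of the new total. The
// relative imbalance stays at 50% on every pass, so the group keeps reporting
// that it can rebalance and marketTransfer keeps being triggered.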
Fix
To fix this, the function should either:
Add a check for duplicate markets before processing them, or
Calculate the total allocation by reading from the storage mappings after they've been updated, which would naturally handle duplicates correctly
For example:
function _updateRebalanceGroup(
    RebalanceConfigChange calldata message,
    address owner
) private {
    if (message.group == 0 || message.group > MAX_GROUPS_PER_OWNER)
        revert ControllerInvalidRebalanceGroupError();

    if (message.markets.length > MAX_MARKETS_PER_GROUP)
        revert ControllerInvalidRebalanceMarketsError();

    // Delete existing group configuration
    for (uint256 i; i < groupToMarkets[owner][message.group].length; i++) {
        address market = address(groupToMarkets[owner][message.group][i]);
        delete _rebalanceConfigs[owner][message.group][market];
        delete marketToGroup[owner][market];
    }
    delete groupToMarkets[owner][message.group];

    // Check for duplicates and validate total allocation before state changes
    UFixed6 totalAllocation;
    for (uint256 i; i < message.markets.length; i++) {
        // Check for duplicates in the input array
        for (uint256 j = 0; j < i; j++) {
            if (message.markets[i] == message.markets[j])
                revert ControllerDuplicateMarketError(message.markets[i]);
        }
        // Accumulate total allocation
        totalAllocation = totalAllocation.add(message.configs[i].target);
    }

    // Validate total allocation equals 100% if group is not being deleted
    if (message.markets.length != 0 && !totalAllocation.eq(UFixed6Lib.ONE))
        revert ControllerInvalidRebalanceTargetsError();

    // Update state after all validation passes
    for (uint256 i; i < message.markets.length; i++) {
        uint256 currentGroup = marketToGroup[owner][message.markets[i]];
        if (currentGroup != 0)
            revert ControllerMarketAlreadyInGroupError(IMarket(message.markets[i]), currentGroup);

        marketToGroup[owner][message.markets[i]] = message.group;
        _rebalanceConfigs[owner][message.group][message.markets[i]] = message.configs[i];
        groupToMarkets[owner][message.group].push(IMarket(message.markets[i]));
        groupToMaxRebalanceFee[owner][message.group] = message.maxFee;

        emit RebalanceMarketConfigured(owner, message.group, message.markets[i], message.configs[i]);
    }

    emit RebalanceGroupConfigured(owner, message.group, message.markets.length);
}
Cheerful Taffy Dolphin
Medium
Duplicate Market Allocation Bypass Leads to Continuous Rebalancing and Fee Drain
Summary
A vulnerability exists in _updateRebalanceGroup, where duplicate markets can be used to bypass the target allocation validation:
https://github.com/sherlock-audit/2025-01-perennial-v2-4-update/blob/main/perennial-v2/packages/periphery/contracts/CollateralAccounts/Controller.sol#L277