
Cheerful Taffy Dolphin - Duplicate Market Allocation Bypass Leads to Continuous Rebalancing and Fee Drain #37

sherlock-admin3 opened this issue Jan 31, 2025
Cheerful Taffy Dolphin

Medium

Duplicate Market Allocation Bypass Leads to Continuous Rebalancing and Fee Drain

Summary

A vulnerability exists in _updateRebalanceGroup where duplicate markets can be used to bypass the target allocation validation:

https://github.com/sherlock-audit/2025-01-perennial-v2-4-update/blob/main/perennial-v2/packages/periphery/contracts/CollateralAccounts/Controller.sol#L277

UFixed6 totalAllocation;
for (uint256 i; i < message.markets.length; i++) {
    marketToGroup[owner][message.markets[i]] = message.group;
    _rebalanceConfigs[owner][message.group][message.markets[i]] = message.configs[i];
    groupToMarkets[owner][message.group].push(IMarket(message.markets[i]));
    
    totalAllocation = totalAllocation.add(message.configs[i].target);
}

if (message.markets.length != 0 && !totalAllocation.eq(UFixed6Lib.ONE))
    revert ControllerInvalidRebalanceTargetsError();

Impact

When a rebalancing configuration is set with duplicate markets, the storage updates leave a single market with a target allocation of less than 100%, while the validation is satisfied by summing the duplicate targets. For example, [marketA, marketA] with targets of 60% and 40% passes validation (the targets sum to 100%), but the second write overwrites the first, leaving marketA with a stored target of only 40% while marketA is pushed into groupToMarkets twice.
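
A minimal sketch of the bypass payload, assuming the RebalanceConfig struct from the repository exposes target and threshold fields (the marketA address, the helper function, and the 5% threshold are hypothetical values for illustration):

// Hypothetical helper (not from the repo): builds the message arrays that slip past the check.
function _duplicateMarketPayload(address marketA)
    internal
    pure
    returns (address[] memory markets, RebalanceConfig[] memory configs)
{
    markets = new address[](2);
    markets[0] = marketA;
    markets[1] = marketA; // duplicate of markets[0]

    configs = new RebalanceConfig[](2);
    // 0.6 + 0.4 = 1.0, so the eq(UFixed6Lib.ONE) validation passes, yet both
    // iterations write the same mapping slot: marketA ends up with a 40% target.
    configs[0] = RebalanceConfig({target: UFixed6.wrap(0.6e6), threshold: UFixed6.wrap(0.05e6)});
    configs[1] = RebalanceConfig({target: UFixed6.wrap(0.4e6), threshold: UFixed6.wrap(0.05e6)});
}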

This impacts the rebalancing mechanism in checkGroup, which _rebalanceGroup relies on:

function checkGroup(address owner, uint256 group) public view returns (
    Fixed6 groupCollateral,
    bool canRebalance,
    Fixed6[] memory imbalances
) {
    // Query collateral and calculate imbalances based on stored targets
    Fixed6[] memory actualCollateral;
    (actualCollateral, groupCollateral) = _queryMarketCollateral(owner, group);
    imbalances = new Fixed6[](actualCollateral.length);

    for (uint256 i; i < actualCollateral.length; i++) {
        IMarket market = groupToMarkets[owner][group][i];
        RebalanceConfig memory marketRebalanceConfig = _rebalanceConfigs[owner][group][address(market)];
        (bool canMarketRebalance, Fixed6 imbalance) = RebalanceLib.checkMarket(
            marketRebalanceConfig,  // Contains invalid target < 100%
            groupToMaxRebalanceFee[owner][group],
            groupCollateral,
            actualCollateral[i]
        );
        imbalances[i] = imbalance;
        canRebalance = canRebalance || canMarketRebalance;
    }
}

Because the stored configuration allocates less than 100% of the group's collateral (e.g., the 40% target stored through the duplicate market inputs above), the system perpetually tries to rebalance to an unreachable state through marketTransfer calls in _rebalanceGroup. This creates a feedback loop: each rebalancing attempt extracts fees and opens MEV opportunities, while the invalid target allocation can never be achieved. The comment // read from storage to trap duplicate markets in the implementation indicates this risk was known, but the code fails to prevent it.
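
Why this never converges: a simplified arithmetic sketch of the percentage-from-target check in RebalanceLib.checkMarket, using plain 1e6-scaled integers instead of the repo's Fixed6 types and assuming a single market holding all group collateral:

// Illustration only, not the repo's RebalanceLib: 1e6-scaled integer math.
function _deviationSketch(uint256 groupCollateral) internal pure returns (uint256 pctFromTarget) {
    uint256 storedTarget = 0.4e6;                                      // 40% stored via the bypass
    uint256 targetCollateral = groupCollateral * storedTarget / 1e6;   // e.g. 40 when the group holds 100
    uint256 marketCollateral = groupCollateral;                        // the lone market holds everything
    // |1 - marketCollateral/targetCollateral| = |1 - 2.5| = 1.5, i.e. a constant 150% deviation
    pctFromTarget = (marketCollateral * 1e6 / targetCollateral) - 1e6; // = 1.5e6 on every check
    // Since 150% always exceeds the threshold, canRebalance never clears and
    // _rebalanceGroup keeps issuing fee-incurring marketTransfer calls.
}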

Fix

To fix this, the function should either:

  1. Add a check for duplicate markets before processing them, or
  2. Calculate the total allocation by reading from the storage mappings after they've been updated, which would naturally handle duplicates correctly

For example, implementing option 1 with an explicit duplicate check:

function _updateRebalanceGroup(
    RebalanceConfigChange calldata message,
    address owner
) private {
    if (message.group == 0 || message.group > MAX_GROUPS_PER_OWNER)
        revert ControllerInvalidRebalanceGroupError();

    if (message.markets.length > MAX_MARKETS_PER_GROUP)
        revert ControllerInvalidRebalanceMarketsError();

    // Delete existing group configuration
    for (uint256 i; i < groupToMarkets[owner][message.group].length; i++) {
        address market = address(groupToMarkets[owner][message.group][i]);
        delete _rebalanceConfigs[owner][message.group][market];
        delete marketToGroup[owner][market];
    }
    delete groupToMarkets[owner][message.group];

    // Check for duplicates and validate total allocation before state changes
    UFixed6 totalAllocation;
    for (uint256 i; i < message.markets.length; i++) {
        // Check for duplicates in the input array
        // (ControllerDuplicateMarketError is a new error that must be declared on the controller)
        for (uint256 j = 0; j < i; j++) {
            if (message.markets[i] == message.markets[j])
                revert ControllerDuplicateMarketError(message.markets[i]);
        }
        
        // Accumulate total allocation
        totalAllocation = totalAllocation.add(message.configs[i].target);
    }

    // Validate total allocation equals 100% if group is not being deleted
    if (message.markets.length != 0 && !totalAllocation.eq(UFixed6Lib.ONE))
        revert ControllerInvalidRebalanceTargetsError();

    // Update state after all validation passes
    for (uint256 i; i < message.markets.length; i++) {
        uint256 currentGroup = marketToGroup[owner][message.markets[i]];
        if (currentGroup != 0)
            revert ControllerMarketAlreadyInGroupError(IMarket(message.markets[i]), currentGroup);

        marketToGroup[owner][message.markets[i]] = message.group;
        _rebalanceConfigs[owner][message.group][message.markets[i]] = message.configs[i];
        groupToMarkets[owner][message.group].push(IMarket(message.markets[i]));
        groupToMaxRebalanceFee[owner][message.group] = message.maxFee;

        emit RebalanceMarketConfigured(owner, message.group, message.markets[i], message.configs[i]);
    }

    emit RebalanceGroupConfigured(owner, message.group, message.markets.length);
}
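
As a design note, the nested duplicate scan is O(n²) in the number of markets, which remains cheap because message.markets.length is already bounded by the MAX_MARKETS_PER_GROUP check at the top of the function.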