
Lido

Smart Contract Security Assessment

November 22, 2022


SUMMARY


ABSTRACT

Dedaub was commissioned to perform a re-audit of the Lido protocol on the Avalanche blockchain. The audit report from the original audit performed by Dedaub is available here.

Lido is a liquid staking protocol, similar to the Lido protocol on other chains, which allows users to stake AVAX while enjoying the liquidity of their staked tokens. This audit report covers the contracts of the repository AvaLido/contracts listed below at commit hash 94ae5fae15428ddcb77e7d40365622bda2fc9276. Two senior auditors worked over the updated codebase for 3 days.

The audited contract list is the following:

src/
  • AvaLido.sol
  • MpcManager.sol
  • Oracle.sol
  • OracleManager.sol
  • Roles.sol
  • stAVAX.sol
  • Treasury.sol
  • Types.sol
  • ValidatorSelector.sol
  • interfaces/
    • IMpcManager.sol
    • IOracle.sol
    • ITreasury.sol
    • IValidatorSelector.sol

The codebase appears to be well documented and of high quality. It also contains an extensive test suite (which was not audited). Users of the protocol should also note that the protocol’s functionality crucially relies on its interaction with off-chain code (oracles, MPC group members, etc.) that was not covered in this or the original audit.


SETTING & CAVEATS

The audit’s main target is security threats, i.e., what the community would likely call "hacking", rather than the regular use of the protocol. Functional correctness (i.e., issues in "regular use") is a secondary consideration. Typically, it can only be covered if we are provided with unambiguous (i.e., full-detail) specifications of the expected, correct behavior. In terms of functional correctness, we often trusted the code’s calculations and interactions, in the absence of any other specification.

Functional correctness relative to low-level calculations (including units, scaling, quantities returned from external protocols) is generally most effectively done through thorough testing rather than human auditing. The scope of the audit includes smart contract code. Interactions with off-chain (front-end or back-end) code are not examined other than to consider entry points for the contracts, i.e., calls into a smart contract that may disrupt the contract’s functioning.


VULNERABILITIES & FUNCTIONAL ISSUES

This section details issues that affect the functionality of the contract. Dedaub generally categorizes issues according to the following severities, but may also take other considerations into account such as impact or difficulty in exploitation:

CRITICAL: Can be profitably exploited by any knowledgeable third-party attacker to drain a portion of the system’s or users’ funds, OR the contract does not function as intended and severe loss of funds may result.

HIGH: Third-party attackers or faulty functionality may block the system or cause the system or users to lose funds. Important system invariants can be violated.

MEDIUM: Examples:
  • User or system funds can be lost when third-party systems misbehave.
  • DoS, under specific conditions.
  • Part of the functionality becomes unusable due to a programming error.

LOW: Examples:
  • Breaking important system invariants but without apparent consequences.
  • Buggy functionality for trusted users where a workaround exists.
  • Security issues which may manifest when the system evolves.

Issue resolution includes “dismissed” or “acknowledged” (but no action taken) by the client, or “resolved”, per the auditors.


CRITICAL SEVERITY

[No critical severity issues]


HIGH SEVERITY

H1 HIGH (open)

MpcManager::reportGeneratedKey not updating the status of keygens may block the system

The main purpose of the function MpcManager::reportGeneratedKey is to verify that it has been called by every participant of the authorized MPC group for the key to be generated.

// MpcManager.sol::reportGeneratedKey:219
if (confirmationCount == groupSize) {
    _keyToGroupIds[generatedPublicKey] = groupId;
    lastGenPubKey = generatedPublicKey;
    lastGenAddress = _calculateAddress(generatedPublicKey);
    emit KeyGenerated(groupId, generatedPublicKey);
    // Dedaub: The keygen request should be marked as COMPLETED now
    // Dedaub: The following lines should be added
    // lastKeygenRequest = KeygenStatusHelpers.
    //     makeKeygenRequest(groupId, uint8(KeygenStatus.COMPLETED));
}

Even though this functionality is implemented correctly, the status of the keygen request is not updated to COMPLETED when it is determined that all group members have reported, i.e., when confirmationCount == groupSize. As a result, the next call to function requestKeygen will revert, as it requires the status of the lastKeygenRequest to not be REQUESTED, yet that status is never updated by reportGeneratedKey. The only way to bypass this and allow the protocol to continue functioning would be to call cancelKeygen after every successful key generation to change the status of the lastKeygenRequest to CANCELED. Of course, this would be messy and merely a workaround.



MEDIUM SEVERITY

M1 MEDIUM (open)

MpcManager::_isSortedAscendingOrder allows same keys

The function MpcManager::_isSortedAscendingOrder does not ensure that all keys are unique, as it only requires that pkey[i] >= pkey[i-1], i.e., that the key at position i of the public keys array be greater than or equal to the key at position i-1.

// MpcManager.sol::_isSortedAscendingOrder:339
function _isSortedAscendingOrder(bytes[] calldata publicKeys)
    private pure returns (bool)
{
    uint256 prev = 0;

    for (uint256 i = 0; i < publicKeys.length; i++) {
        uint256 curr = uint256(bytes32(publicKeys[i][:32]));
        if (curr < prev) {
            return false;
        }
        prev = curr;
    }
    return true;
}

An MPC group should strictly consist of unique public keys (members) as otherwise concepts like the MPC group threshold lose their value/purpose.
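
A minimal sketch of a stricter check, using the names from the snippet above: rejecting equal values enforces strictly ascending order, which in turn guarantees uniqueness of the compared prefixes (note that, like the original, only the first 32 bytes of each key are compared, and a key whose first 32 bytes are all zero would now be rejected as well):

function _isSortedAscendingOrder(bytes[] calldata publicKeys)
    private pure returns (bool)
{
    uint256 prev = 0;

    for (uint256 i = 0; i < publicKeys.length; i++) {
        uint256 curr = uint256(bytes32(publicKeys[i][:32]));
        // Sketch: 'curr <= prev' also rejects equal keys, so duplicate
        // group members are no longer accepted
        if (curr <= prev) {
            return false;
        }
        prev = curr;
    }
    return true;
}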

M2 MEDIUM (open)

Unbounded loop DoS might still be possible under specific conditions

In the original audit, we reported that AvaLido::fillUnstakeRequests contains an unbounded loop that could allow an adversary to perform a DoS by filling a large number of unstake requests (original audit issue H1). This vulnerability has been mostly addressed by performing a partial execution, up to an unstakeLoopBound, so that the remaining requests will be filled in the next execution and progress is always guaranteed.

However, AvaLido::fillUnstakeRequests is written in a way that works even if the queue contains requests that are already filled (it’s not clear whether this can really happen, but it’s a good practice to not make unnecessary assumptions). Filled requests are simply ignored by a check in the code:

for (uint256 i = unfilledHead; i < unstakeRequests.length; i++) {
    if (remaining == 0) {
        return (false, 0);
    }

    // Return early to prevent unstake flooding
    if (numberFilled == unstakeLoopBound) {
        return (false, remaining);
    }

    // Dedaub: the following 'if' ignores already filled requests
    if (unstakeRequests[i].amountFilled < unstakeRequests[i].amountRequested) {
        ...
        unfilledHead = i + 1;
        ...
    }

    remaining = inputAmount - amountFilled;
    numberFilled++;
}

However, in the context of the DoS vulnerability, already filled requests might nullify the unstakeLoopBound protection! This is because unfilledHead is only modified inside the if branch, i.e., only for unfilled requests. If there is a large number of filled requests in the queue, we might reach unstakeLoopBound without ever entering the if branch, hence without ever advancing the queue head. The loop will terminate, but no actual progress will be made: the next execution will have to iterate over the exact same requests, starting from the same unfilledHead, so any future execution will also fail.

To ensure that progress is always made, we recommend that unfilledHead is always modified (outside the if branch).
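
A minimal sketch of the suggested restructuring, using the names from the snippet above. The elided partial-fill logic is assumed to be unchanged, and a request that cannot yet be fully served is assumed to cause an early return (as the remaining == 0 check already does) before the head moves past it:

    // Sketch: advance the head unconditionally, so that a run of already
    // filled requests still makes progress across executions
    if (unstakeRequests[i].amountFilled < unstakeRequests[i].amountRequested) {
        ...
    }
    unfilledHead = i + 1;

    remaining = inputAmount - amountFilled;
    numberFilled++;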



LOW SEVERITY

L1 LOW (open)

Some Oracle view functions might return inaccurate data if called during an update of node ids

Function Oracle::getLatestValidators checks whether there is an ongoing update of node ids, so as not to return an array of invalidated pointers into the node ids list. Nevertheless, other functions, such as getAllValidatorsByEpochId and allValidatorNodeIds, do not implement such checks even though they might provide inaccurate data if called during an update. For example, if the id of the latest finalized epoch is provided to getAllValidatorsByEpochId, it should return the same result as getLatestValidators, but it does not. The severity of the issue is mitigated by the check in getLatestValidators, which, in combination with another check in Oracle::selectValidatorsForStake, prevents contract execution from reaching calls to the other view functions while an update is in progress.

L2 LOW (open)

Wasteful check in MpcManager::createGroup

The function MpcManager::createGroup checks that the provided publicKeys array parameter is sorted in ascending order by calling _isSortedAscendingOrder. However, the call happens inside a loop that iterates over the publicKeys array; thus, every iteration of that loop re-checks that the immutable publicKeys array is sorted, which is clearly wasteful.

// MpcManager.sol::createGroup:141
function createGroup(bytes[] calldata publicKeys, uint8 threshold)
    external onlyRole(ROLE_MPC_MANAGER)
{
    // ..

    bytes memory b;
    for (uint256 i = 0; i < groupSize; i++) {
        if (publicKeys[i].length != PUBKEY_LENGTH) revert InvalidPublicKey();
        if (!_isSortedAscendingOrder(publicKeys)) // Dedaub: wasteful op
            revert PublicKeysNotSorted();
        b = bytes.concat(b, publicKeys[i]);
    }

    // ..
}

The ordering check should be moved before the loop.
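
A minimal sketch of the suggested change, with the single ordering check hoisted out of the loop:

// MpcManager.sol::createGroup (sketch)
function createGroup(bytes[] calldata publicKeys, uint8 threshold)
    external onlyRole(ROLE_MPC_MANAGER)
{
    // ..

    // Sketch: the calldata array cannot change, so check its ordering once
    if (!_isSortedAscendingOrder(publicKeys)) revert PublicKeysNotSorted();

    bytes memory b;
    for (uint256 i = 0; i < groupSize; i++) {
        if (publicKeys[i].length != PUBKEY_LENGTH) revert InvalidPublicKey();
        b = bytes.concat(b, publicKeys[i]);
    }

    // ..
}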

L3 LOW (open)

Functions responsible for the partial initialization of AvaLido can be called multiple times

According to comments in the code, the functions setPrincipalTreasuryAddress and setRewardTreasuryAddress of the AvaLido contract exist to be used just for initialization purposes. Contrary to what the comments describe, the two functions can be called multiple times, as there is no check that would revert the execution when the principalTreasury and rewardTreasury storage variables have already been set.
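
A minimal sketch of an initialization guard for one of the two setters; the exact signature, the access-control modifier, the ITreasury type of the storage variable and the TreasuryAlreadySet error are assumptions for illustration:

// Sketch: revert if the principal treasury has already been initialized
function setPrincipalTreasuryAddress(address _address)
    external onlyRole(ROLE_TREASURY_MANAGER)
{
    if (address(principalTreasury) != address(0)) revert TreasuryAlreadySet();
    principalTreasury = ITreasury(_address);
}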

L4 LOW (open)

Recent changes have weakened the protocol’s reentrancy protection

Two recent changes have been made to AvaLido::claim.

First, to save gas, requests are only updated on partial claims:

// To save gas we only update the mapping on partial claims.
// We delete full claims at the end of the function.
if (!isFullyClaimed(request)) {
    unstakeRequests[requestIndex] = request;
}

Second, transfers are made with sender.call instead of the previously used sender.transfer (a good practice, to avoid hard-coded gas limits):

(bool success, ) = msg.sender.call{value: amountAVAX}("");
if (!success) revert TransferFailed();

if (isFullyClaimed(request)) {
    delete unstakeRequests[requestIndex];
}

Note also that the transfer is made before deleting the full claim.

So the effect of these two changes is that the adversary-controlled receive hook is now called with no gas restriction, and before updating the protocol’s state, which makes it susceptible to reentrancy attacks.

Of course, the function is marked as nonReentrant, so no reentrancy attack is really possible. Nevertheless, there is no need to introduce changes that weaken the protocol’s protection (it is enough to forget a reentrancy guard in any function in the future to introduce a vulnerability). So we recommend simply performing the transfer after all updates to the protocol’s state.
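
A minimal sketch of the suggested ordering (checks-effects-interactions), using the names from the snippets above; the rest of claim is assumed to be unchanged:

// Sketch: finish all state updates before the external call
if (isFullyClaimed(request)) {
    delete unstakeRequests[requestIndex];
} else {
    // To save gas, the mapping is still only written on partial claims
    unstakeRequests[requestIndex] = request;
}

// The adversary-controlled receive hook now runs only after the
// protocol's state has been fully updated
(bool success, ) = msg.sender.call{value: amountAVAX}("");
if (!success) revert TransferFailed();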



CENTRALIZATION ISSUES

It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol’s owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.)

The protocol defines several admin/manager roles that serve to give access to specific functions of certain contracts only to the appropriate entities. The following roles are defined and used:

  • DEFAULT_ADMIN_ROLE
  • ROLE_PAUSE_MANAGER
  • ROLE_RESUME_MANAGER
  • ROLE_FEE_MANAGER
  • ROLE_ORACLE_ADMIN
  • ROLE_VALIDATOR_MANAGER
  • ROLE_MPC_MANAGER
  • ROLE_TREASURY_MANAGER
  • ROLE_PROTOCOL_MANAGER

For example, the entity that is assigned the ROLE_MPC_MANAGER is able to call the functions MpcManager::createGroup and MpcManager::requestKeygen, which are essential for the correct functioning of the MPC component. Multiple roles allow for the distribution of power, so that if one entity gets hacked, all other functions of the protocol remain unaffected. Of course, this assumes that the protocol team thoughtfully distributes the different roles to separate entities, and it does not completely alleviate centralization issues.
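
For illustration only, a minimal sketch of such a distribution, assuming the contracts expose OpenZeppelin AccessControl's grantRole (as the onlyRole modifiers suggest); the contract-to-role mapping and all addresses below are placeholders:

// Sketch: deployment-time assignment of roles to distinct, separately
// secured entities (e.g., different multisigs)
avaLido.grantRole(ROLE_PAUSE_MANAGER, pauseManagerMultisig);
avaLido.grantRole(ROLE_FEE_MANAGER, feeManagerMultisig);
mpcManager.grantRole(ROLE_MPC_MANAGER, mpcAdminMultisig);
oracleManager.grantRole(ROLE_ORACLE_ADMIN, oracleAdminMultisig);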

The contract MpcManager.sol also appears to depend heavily on off-chain logic, which could make it suffer from centralization issues as well.


OTHER / ADVISORY ISSUES

This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them.

A1 ADVISORY (info)

Gas optimization

The function ValidatorSelector::getAvailableValidatorsWithCapacity could return early if the call to Oracle::getLatestValidators returns an empty array (e.g., during an update) in order to save on gas.
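
A minimal sketch of the suggested early return; the function signature, the Validator type and the oracle reference are assumptions for illustration:

function getAvailableValidatorsWithCapacity(uint256 amount)
    external view returns (Validator[] memory)
{
    Validator[] memory validators = oracle.getLatestValidators();
    // Sketch: nothing to select from (e.g., during an update), so return
    // early and skip the remaining filtering logic
    if (validators.length == 0) {
        return validators;
    }
    // ... existing capacity-filtering logic ...
}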

A2 ADVISORY (info)

No-op in MpcManager::joinRequest

The function MpcManager::joinRequest makes two consecutive calls to ConfirmationHelpers::confirm while ignoring the result of the first one at line 245 of MpcManager.sol.

A3 ADVISORY (info)

Inaccurate comment in the ValidatorHelpers library

In the function ValidatorHelpers::getNodeIndex the comment “Take 12 bits from the middle which represents our index” is inaccurate after the latest changes to the code. It should be updated to “Take the first 14 bits which represent our index”.

A4 ADVISORY (info)

Compiler bugs

The code is compiled with Solidity 0.8.10, which, at the time of writing, has some known bugs. We do not believe any of these affect the correctness of the contracts.



DISCLAIMER

The audited contracts have been analyzed using automated techniques and extensive human inspection in accordance with state-of-the-art practices as of the date of this report. The audit makes no statements or warranties on the security of the code. On its own, it cannot be considered a sufficient assessment of the correctness of the contract. While we have conducted an analysis to the best of our ability, it is our recommendation for high-value contracts to commission several independent audits, a public bug bounty program, as well as continuous security auditing and monitoring through Dedaub Security Suite.


ABOUT DEDAUB

Dedaub offers significant security expertise combined with cutting-edge program analysis technology to secure some of the most prominent protocols in DeFi. The founders, as well as many of Dedaub's auditors, have a strong academic research background together with a real-world hacker mentality to secure code. Protocol blockchain developers hire us for our foundational analysis tools and deep expertise in program analysis, reverse engineering, DeFi exploits, cryptography and financial mathematics.