Predicate (formerly Aethos) AVS
Smart Contract Security Assessment
September 23, 2024

SUMMARY
ABSTRACT
Dedaub was commissioned to perform a security audit of the Aethos Network protocol, an AVS (actively validated service) to launch on EigenLayer. The audit covered the protocol's smart contract code. This is the second design iteration of the protocol and, while the code is of high quality and tested to a great extent, we raise numerous protocol-level and security concerns. We believe that the majority of these issues should be carefully considered and addressed before the code can be deemed deployable.
BACKGROUND
Aethos is a system that aims to provide protocols with the ability to specify policies for incoming transactions. Operators are the off-chain entities responsible for validating that the executed transactions comply with a protocol’s specified policy. The network of Operators is a small decentralized network that is economically incentivized by EigenLayer.
SETTING & CAVEATS
The audit report is over the contracts of the currently-private repository https://github.com/AethosNetwork/aethos-avs
at commit 729b71fd364b44cbb3e7977aff139461596ff0a5.
Two auditors worked on the codebase for three days.
The audited code is on-chain smart contract code. A large burden of correctness lies on off-chain callers of this smart contract functionality, as well as on other on-chain “clients” of the functionality. We optimistically assume that these environmental elements “do the right thing”. However, occasionally it is clear that well-intentioned non-audited code can easily make mistakes, which we attempt to warn about. Still, such warnings can never be exhaustive.
The audit’s main target is security threats, i.e., what the community understanding would likely call "hacking", rather than the regular use of the protocol. Functional correctness (i.e. issues in "regular use") is a secondary consideration. Typically it can only be covered if we are provided with unambiguous (i.e. full-detail) specifications of what is the expected, correct behavior. In terms of functional correctness, we often trusted the code’s calculations and interactions, in the absence of any other specification. Functional correctness relative to low-level calculations (including units, scaling and quantities returned from external protocols) is generally most effectively done through thorough testing rather than human auditing.
PROTOCOL-LEVEL CONSIDERATIONS
There are issues that we see at the protocol design and integration level and not at the level of coding itself.
P1
The EigenLayer middleware concept of a “quorum” is possibly used inconsistently. Its relationship to the Aethos model should be clarified.
This is analogous to the P3 issue that was raised in our previous report for the Aethos AVS:
The Aethos Service Manager interacts directly with EigenLayer, skipping the EigenLayer Middleware. (This is not a problem by itself, just a design choice. EigenLayer Middleware would likely replace much of the Aethos Service Manager functionality, with different underlying technology—e.g., BLS signatures instead of ECDSA. Skipping the middleware when its full functionality is not needed is fine.)
However, the central concept of a “quorum” is used partially. The Aethos Service Manager effectively keeps its own “quorum” (the set of participating operators) and inherits its strategies from a quorum:
ServiceManager:350
function addStrategy(address _strategy, uint8 quorumNumber, uint256 index) external onlyOwner {
    IStakeRegistry.StrategyParams memory strategyParams =
        IStakeRegistry(stakeRegistry).strategyParamsByIndex(quorumNumber, index);
    if (address(strategyParams.strategy) != _strategy) {
        revert ServiceManager__InvalidStrategy();
    }
    strategy.push(_strategy);
    emit StrategyAdded(_strategy);
}
(The above code is not entirely well-motivated: if the _strategy is to match one looked up in internal data structures of the stakeRegistry, why have it be externally supplied at all?)
There is no consistency requirement documented in the above functionality. Are all the strategies that the Aethos Service Manager will use in a single EigenLayer quorum? Are they all the strategies of a quorum? Whatever the policy, it should be clearly documented and the relationship of Aethos consensus to EigenLayer quorums should be very clear.
Another question is whether the AVS will make use of a StakeRegistry contract at all, since the version provided by the EigenLayer middleware is incompatible with the current ServiceManager contract:
- The AVS’s ServiceManager and the StakeRegistry contract would be out of sync in terms of the supported strategies.
- The EigenLayer middleware defines quorums in the StakeRegistry contract, and operators then use the RegistryCoordinator contract (not the ServiceManager contract directly) in order to register for an AVS’s quorums. The RegistryCoordinator contract also uses BLS signatures instead of ECDSA signatures.
Another instance of such confusion is the function below, which accepts an array of quorum numbers and subsequently ignores it. (Only the length of the array is read, and it is unnecessary, since it is also supplied by other parameters.)
ServiceManager:375
function updateOperatorsForQuorum(address[][] calldata operatorsPerQuorum,
    bytes calldata quorumNumbers) external {
    if (operatorsPerQuorum.length != quorumNumbers.length) {
        revert ServiceManager__ArrayLengthMismatch();
    }
    address[] memory currQuorumOperators;
    address currOperatorAddress;
    OperatorInfo storage currOperator;
    for (uint256 i; i != quorumNumbers.length;) {
        ... // Dedaub: quorumNumbers is entirely unused after this point
It should be noted that client contracts should always make sure that the ServiceManager contract gets:
- The address of the client as the AethosTask::target field
- The msg.sender of the message call on the client contract as the AethosTask::msgSender field
- The msg.value of the message call on the client contract as the AethosTask::value field
- The ABI-encoded calldata of the message call on the client contract as the AethosTask::encodedSigAndArgs field
- The number of required signatures to be recovered as the AethosTask::quorumThresholdCount field
The above-mentioned correctness obligations are apparent in the example contracts. All client contracts should inherit the AethosClient contract or, alternatively, make sure to pass the task data to the ServiceManager contract in an analogous fashion:
AethosClient:46
function _authorizeTransaction(
    AethosMessage memory _aethosMessage,
    bytes memory _encodedSigAndArgs
) internal returns (bool) {
    AethosTask memory task = AethosTask({
        msgSender: msg.sender,
        target: address(this),
        value: msg.value,
        encodedSigAndArgs: _encodedSigAndArgs,
        policyID: policyID,
        quorumThresholdCount: uint32(_aethosMessage.signerAddresses.length),
        taskId: _aethosMessage.taskId,
        expireByBlockNumber: _aethosMessage.expireByBlockNumber
    });
    return serviceManager.validateSignatures(task, _aethosMessage.signerAddresses, _aethosMessage.signatures);
}
Lastly, depending on the resolution of C1 (see below), clients may be required to make sure that a task with a particular AethosTask::taskId gets executed only once.
Each obligation not met will have security consequences on the client contract.
In its current implementation, the system can provide security guarantees for a standalone action on a client, since each task that is signed by the operators correlates to a single client action. However, this means that the system may not inherently provide security guarantees regarding:
- Multiple actions involving a client contract
- Multiple actions involving multiple client contracts
This could yield a scenario where an attacker is able to have actions A1, ..., An be approved individually by the system (assuming that each of them adheres to a client’s policies) and then execute them all at once in a single transaction, potentially exploiting a vulnerability in the client’s code.
Additionally, since the initiator of the transaction only commits to actions to be performed on a client contract (and not to the entire transaction), a client’s policies should never rely on properties that are not inherently part of the client’s internal accounting. A motivating example is the following:
Assume that:
- a client has a policy stating “the ETH balance of the client should never be less than N”
- there exists an exploit allowing an attacker to drain the contract by performing action(s) A

An attacker may do the following to exploit the vulnerability:
1. The attacker requests approval for a transaction in which a hack contract is deployed and exploits the vulnerability by calling A. At the end, the exploit contract’s code returns all ETH back to the client contract, thus restoring the ETH balance policy.
2. Action A is approved by the system, since no policy is violated during simulation.
3. The attacker submits an on-chain transaction where a hack contract is deployed and exploits the vulnerability by calling A (which has been signed by the system). However, this time the hack contract’s code does not return the stolen ETH to the vulnerable client.
There’s no way to note the discrepancy between the simulated and live transaction regarding non-client actions (i.e., ETH transfers in this scenario), so nothing prevents the attack in step 3 from succeeding.
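A minimal Python model may help illustrate why balance-restoration policies are unsound under simulation. The names and numbers below are hypothetical; the point is only that a policy inspecting the final balance cannot distinguish the simulated action from the live one:

```python
# Illustrative model: a policy that only inspects the client's ETH balance
# cannot distinguish a simulated action that restores funds from a live
# action that keeps them.

def policy_holds(balance, minimum):
    # "the ETH balance of the client should never be less than N"
    return balance >= minimum

def run_action(balance, drained, returned):
    # An action that drains `drained` wei and then returns `returned` wei.
    return balance - drained + returned

N = 100
client_balance = 100

# Simulation: the hack contract returns everything, so the policy passes
simulated = run_action(client_balance, drained=100, returned=100)
assert policy_holds(simulated, N)          # approved and signed

# Live execution: the same signed action, but nothing is returned
live = run_action(client_balance, drained=100, returned=0)
assert not policy_holds(live, N)           # policy violated, signatures still valid
```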
VULNERABILITIES & FUNCTIONAL ISSUES
This section details issues affecting the functionality of the contract. Dedaub generally categorizes issues according to the following severities, but may also take other considerations into account such as impact or difficulty in exploitation:
- User or system funds can be lost when third-party systems misbehave
- DoS, under specific conditions
- Part of the functionality becomes unusable due to a programming error
- Breaking important system invariants but without apparent consequences
- Buggy functionality for trusted users where a workaround exists
- Security issues which may manifest when the system evolves
Issue resolution includes “dismissed” or “acknowledged” (no action taken) by the client, or “resolved”, per the auditors.
CRITICAL SEVERITY
C1
Neither the example client contracts nor the ServiceManager contract prevent signature replayability
The ServiceManager contract does not explicitly contain the notion of signature nonces, but it’s fair to assume that the AethosTask::taskId parameter can serve as a unique identifier for a task.
Since the ServiceManager contract does not perform any checks on the field, client protocols are implicitly expected to carry out important checks when invoking ServiceManager::validateSignatures. The client protocol has to make sure that the _task.taskId being validated has not been executed already, otherwise signatures are prone to being replayable until expiry.
ServiceManager:316
function validateSignatures(
    AethosTask calldata _task,
    address[] memory signerAddresses,
    bytes[] memory signatures
) external returns (bool isVerified) {
    require(_task.quorumThresholdCount != 0, "...");
    require(signerAddresses.length == signatures.length, "...");
    require(block.number <= _task.expireByBlockNumber, "...");
    bytes32 messageHash = hashTaskWithExpiry(_task);
    for (uint256 i = 0; i < _task.quorumThresholdCount;) {
        ...
        address recoveredSigner = ECDSA.recover(messageHash, signatures[i]);
        require(recoveredSigner == signerAddresses[i], "Invalid signature");
        address operator = signingKeyToOperator[recoveredSigner];
        require(operators[operator].status == OperatorStatus.REGISTERED, "Signer is not a registered operator");
        unchecked {
            ++i;
        }
    }
    ...
    return true;
}
ServiceManager:464
function hashTaskWithExpiry(
    AethosTask calldata _task
) public pure returns (bytes32) {
    return keccak256(
        abi.encode(
            _task.taskId,
            _task.msgSender,
            _task.target,
            _task.value,
            _task.encodedSigAndArgs,
            _task.policyID,
            _task.quorumThresholdCount,
            _task.expireByBlockNumber
        )
    );
}
As an example of the kind of replay attacks that can be performed, consider the MetaCoin::sendCoin function:
MetaCoin:23
function sendCoin(address receiver, uint256 amount, AethosMessage calldata aethosMessage) public {
    bytes memory encodedSigAndArgs = abi.encodeWithSignature("_sendCoin(address,uint256)", receiver, amount);
    require(_authorizeTransaction(aethosMessage, encodedSigAndArgs), "MetaCoin: unauthorized transaction");
    // business logic function that is protected
    _sendCoin(receiver, amount);
}
This function is protected by AethosClient::_authorizeTransaction, which itself calls ServiceManager::validateSignatures. However, since the validation of the signature is not recorded anywhere, the same signature remains valid after the call, allowing a malicious user to call sendCoin with identical parameters an arbitrary number of times, until _task.expireByBlockNumber is reached.
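A common remedy is a one-shot guard on the task identifier. The sketch below is a hypothetical fix (not code from the audited repository) showing, in Python for brevity, how recording each validated taskId prevents replay until expiry:

```python
# Sketch of a replay guard (assumed fix, not present in the audited code):
# record each validated taskId so the same signed task cannot be re-used.

consumed_task_ids = set()

def validate_task(task_id, expire_by_block, current_block):
    if current_block > expire_by_block:
        return False                      # expired task, reject
    if task_id in consumed_task_ids:
        return False                      # replay attempt, reject
    consumed_task_ids.add(task_id)        # one-shot: mark as executed
    return True

# First use succeeds; an identical second submission is rejected.
assert validate_task("task-1", expire_by_block=200, current_block=100)
assert not validate_task("task-1", expire_by_block=200, current_block=101)
```

In Solidity this would correspond to a `mapping(bytes32 => bool)` keyed by taskId (the unused `pendingTasks` mapping noted later in this report suggests something similar may have been intended).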
Note that this is a protocol-level issue, but we list it here due to its critical severity. It is also analogous to the P2 issue that was raised in our previous report for the Aethos AVS:
The current check for signature validity is based on a simple numeric count. Stakes (field totalStake) are only used to compare against a threshold, to tell whether an operator is registered. Signature validation does not use stakes at all, only counts signers.
ServiceManager:197
function validateSignatures(
    AethosTask calldata _task,
    address[] memory signerAddresses,
    bytes[] memory signatures
) external returns (bool isVerified) {
    require(_task.quorumThresholdCount != 0, "ServiceManager.AethosVerified: quorum threshold count cannot be zero");
    require(signerAddresses.length == signatures.length, "Mismatch between signers and signatures");
    require(block.number <= _task.expireByBlockNumber, "ServiceManager.AethosVerified: transaction expired");
    bytes32 messageHash = hashTaskWithExpiry(_task);
    for (uint256 i = 0; i < _task.quorumThresholdCount;) {
        ...
        address recoveredSigner = ECDSA.recover(messageHash, signatures[i]);
        require(recoveredSigner == signerAddresses[i], "Invalid signature");
        address operator = signingKeyToOperator[recoveredSigner];
        require(operators[operator].status == OperatorStatus.REGISTERED, "...");
        unchecked {
            ++i;
        }
    }
    ...
    return true;
}
This violates the principle of economic security that EigenLayer supports, which is based on stakes and not simply on the number of signers. The weaker economic security makes the system more vulnerable to Sybil attacks, among other threats.
Moreover, AethosTask::quorumThresholdCount can be arbitrary: the ServiceManager will accept any value as long as the corresponding signatures are valid for that value. This allows a single malicious operator with minimum stake to fully approve a malicious call by creating a signed task with quorumThresholdCount == 1.
As an example of a potential attack scenario, consider the following attack against the MetaCoin example:
- A malicious user registers himself as an operator, possessing only a minimum stake.
- He then creates an AethosTask for MetaCoin::sendCoin, choosing the parameters receiver and amount arbitrarily, as well as msgSender, target and value; he sets quorumThresholdCount == 1 and signs it.
- Finally, he calls MetaCoin::sendCoin with an _aethosMessage containing only his own signature. This call will pass the serviceManager.validateSignatures check and will thus be accepted and executed.
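The contrast between the count-based check and a stake-weighted quorum can be sketched numerically. The threshold fraction and stake values below are hypothetical; the point is that with stake weighting, a minimally staked operator cannot self-approve:

```python
# Sketch (hypothetical parameters) of a stake-weighted quorum check,
# contrasted with the count-based check in the audited code.

def count_based_ok(signers, threshold_count):
    # Audited behavior: any `threshold_count` valid signatures suffice.
    return len(signers) >= threshold_count

def stake_weighted_ok(signers, stakes, total_stake, threshold_bps=6600):
    # Stake-weighted alternative: signers' combined stake must reach a
    # fixed fraction (here 66%) of the total registered stake.
    signed = sum(stakes[s] for s in signers)
    return signed * 10_000 >= total_stake * threshold_bps

stakes = {"honest1": 400, "honest2": 400, "attacker": 1}
total = sum(stakes.values())

# Count-based: the attacker picks threshold_count == 1 and approves alone.
assert count_based_ok(["attacker"], threshold_count=1)
# Stake-weighted: the attacker's stake is negligible, approval fails.
assert not stake_weighted_ok(["attacker"], stakes, total)
assert stake_weighted_ok(["honest1", "honest2"], stakes, total)
```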
In the PriceAggregatorUniV3 contract, the price is calculated using the minimum tick between a TWAP average and the current spot tick. This behavior is fine when pricing assets to be traded. However, it can have devastating consequences when it is used to determine the value that flows out of a contract.
PriceAggregatorUniV3::_fetchAmountFromSinglePool:234
(spotTick, spotLiquidity) = OracleLibrary.getBlockStartingTickAndLiquidity(_pool);
...
(twapTick, twapLiquidity) = OracleLibrary.consult(_pool, SafeCast.toUint32(_twapPeriod));
int24 minTick;
if (_tokenIn < _tokenOut) {
    minTick = spotTick < twapTick ? spotTick : twapTick;
} else {
    minTick = spotTick > twapTick ? spotTick : twapTick;
}
return OracleLibrary.getQuoteAtTick(
    minTick, // can assume safe being result from consult()
    SafeCast.toUint128(_amountIn),
    _tokenIn,
    _tokenOut
);
...
It should be noted that when a tick-changing operation has taken place at the current block, the spot tick is the time-weighted average between the previous and the new tick:
OracleLibrary::getBlockStartingTickAndLiquidity:125
...
uint32 delta = observationTimestamp - prevObservationTimestamp;
tick = int24((tickCumulative - int56(uint56(prevTickCumulative))) / int56(uint56(delta)));
...
Even though taking the time-weighted average of the two restricts the degrees of freedom when manipulating the spot price, the spot price is definitely manipulable, provided that the attacker uses a sufficiently large amount of funds.
An attacker can thus:
- Tilt the pool that will be used to determine the value of the assets flowing out of a rate-limited contract. The aim is to force a lower spot price for the token that will flow out of the contract; the attacker may use a flash loan for this.
- Perform any rate-limited operations and have the oracle price the out-flowing funds at a much lower price than the asset’s real value.
- Restore the tilted pool.
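The economics of the attack reduce to simple arithmetic. The prices below are hypothetical; the sketch mirrors the min-tick choice as "take the minimum of spot and TWAP prices":

```python
# Illustrative arithmetic: taking the minimum of spot and TWAP prices is
# conservative when buying an asset, but when the same minimum is used to
# value *outflows*, an attacker who tilts the spot price downward makes the
# rate limiter undervalue what leaves the contract.

def priced_outflow(amount, spot_price, twap_price):
    return amount * min(spot_price, twap_price)  # mirrors the min-tick choice

twap = 2000          # fair price per token (hypothetical)
tilted_spot = 200    # spot price after a flash-loan tilt (hypothetical)

# 100 tokens really worth 200_000 are accounted as only 20_000,
# consuming just a tenth of the rate limit they should.
assert priced_outflow(100, tilted_spot, twap) == 20_000
assert priced_outflow(100, twap, twap) == 200_000
```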
HIGH SEVERITY
When calling ServiceManager::registerOperatorToAVS, an operator may specify a different address which will match recovered signatures to the newly registered operator.
ServiceManager:216
function registerOperatorToAVS(
    address _operatorSigningKey,
    ISignatureUtils.SignatureWithSaltAndExpiry memory _operatorSignature
) external {
    ...
    if (totalStake >= thresholdStake) {
        operators[msg.sender] = OperatorInfo(totalStake, OperatorStatus.REGISTERED);
        signingKeyToOperator[_operatorSigningKey] = msg.sender;
        IAVSDirectory(avsDirectory).registerOperatorToAVS(msg.sender, _operatorSignature);
        emit OperatorRegistered(msg.sender);
    }
}
While EigenLayer’s AVSDirectory contract will verify that the registration signature has been signed by msg.sender, there exists no commitment on the _operatorSigningKey parameter.
As a result, an operator can take over a specific signing address even if it already belongs to another operator. This has consequences such as:
- An operator can register for the minimum number of shares and claim valid signatures without doing any of the validation work themselves.
- An operator can register for the minimum number of shares and grief another operator’s signing address so that it maps to address(0). The only way for the griefed operator to be able to sign again is for the owner to de-register them from the AVS and have them re-register. Griefing a single operator in this manner is enough to make a ServiceManager::validateSignatures call fail, raising DoS concerns.
- The above point can be generalized into an attack where a malicious operator removes all other operators’ ability to sign any tasks by using the ServiceManager::rotateAethosSigningKey function:
ServiceManager::rotateAethosSigningKeyfunction:
ServiceManager:194
function rotateAethosSigningKey(address _oldSigningKey, address _newSigningKey) external {
    require(
        operators[msg.sender].status == OperatorStatus.REGISTERED,
        "ServiceManager.rotateAethosSigningKey: operator is not registered"
    );
    require(
        msg.sender == signingKeyToOperator[_oldSigningKey],
        "..."
    );
    delete signingKeyToOperator[_oldSigningKey];
    signingKeyToOperator[_newSigningKey] = msg.sender;
}
- Assume that the AVS has operators O1, O2, ..., On.
- A malicious operator registers to the Aethos AVS and takes over O1’s signature recovery address as described above.
- The malicious operator calls ServiceManager::rotateAethosSigningKey, specifying O1’s recovery address as _oldSigningKey and O2’s recovery address as _newSigningKey. This effectively makes all of O1’s signatures unrecoverable, since signingKeyToOperator[<O1's recovery address>] will map to the invalid operator at address(0).
- The above process can be repeated, targeting the rest of the AVS operators.
- As a result, a malicious operator may pause the entire AVS at will.
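The mapping dynamics above can be modeled in a few lines. This is an illustrative Python rendition (operator and key names are invented), not the audited Solidity, but it reproduces the overwrite-then-rotate sequence:

```python
# Minimal model of the signingKeyToOperator mapping: a later registration
# silently overwrites an earlier operator's entry (no ownership check), and
# rotation by the attacker then clobbers further victims' keys.

signing_key_to_operator = {}

def register(operator, signing_key):
    signing_key_to_operator[signing_key] = operator   # no ownership check

def rotate(operator, old_key, new_key):
    assert signing_key_to_operator.get(old_key) == operator
    del signing_key_to_operator[old_key]
    signing_key_to_operator[new_key] = operator

register("O1", "key1")
register("O2", "key2")
register("attacker", "key1")                # takes over O1's signing key
rotate("attacker", "key1", "key2")          # now overwrites O2's entry too

# O1's signatures recover key1, which no longer maps to any operator
# (the address(0) analogue), and key2 now maps to the attacker.
assert "key1" not in signing_key_to_operator
assert signing_key_to_operator["key2"] == "attacker"
```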
When a new rate-limited operation takes place that does not overlap with the previous rate-limited block batch, the accounting inside the RateLimiter contract tries to subtract the value of all non-relevant batches from the account’s current active amount:
RateLimiter:224
function _setTotalActiveTxValue(
    uint256 oldestRelevantBlock
) internal {
    TxHistory storage history = txHistory[msg.sender];
    uint256 total = history.total;
    uint256 ptr = history.ptr;
    while (ptr < history.txBatches.length && history.txBatches[ptr].endBlockNumber <= oldestRelevantBlock) {
        total -= history.txBatches[history.ptr].amount;
        ptr++;
    }
    ...
RateLimiter:241
function _getTotalActiveTxValue(address sender, uint256 oldestRelevantBlock) internal view returns (uint256) {
    ...
    uint256 total = history.total;
    uint256 ptr = history.ptr;
    while (ptr < history.txBatches.length && history.txBatches[ptr].endBlockNumber < oldestRelevantBlock) {
        total -= history.txBatches[history.ptr].amount;
        ...
In the above snippets, the problem lies in the fact that only the value of the batch at index history.ptr is subtracted, even though the local variable ptr is what actually indexes the array within the loops. This not only creates inconsistencies within a rate-limited contract’s accounting, but can potentially lead to underflows and cause transaction reverts.
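A Python rendition (illustrative; names and values are invented) makes the off-by-index behavior concrete by running the buggy and corrected loops side by side:

```python
# Python rendition of the loop bug: the buggy version always subtracts the
# batch at the fixed index `history_ptr`, while the loop variable `ptr` is
# what actually walks the array.

def prune_buggy(batches, total, history_ptr, oldest_relevant):
    ptr = history_ptr
    while ptr < len(batches) and batches[ptr]["end"] <= oldest_relevant:
        total -= batches[history_ptr]["amount"]   # bug: fixed index
        ptr += 1
    return total

def prune_fixed(batches, total, history_ptr, oldest_relevant):
    ptr = history_ptr
    while ptr < len(batches) and batches[ptr]["end"] <= oldest_relevant:
        total -= batches[ptr]["amount"]           # correct: loop index
        ptr += 1
    return total

batches = [{"end": 10, "amount": 5}, {"end": 20, "amount": 7}]
total = 12
assert prune_fixed(batches, total, 0, 25) == 0    # both batches expired
assert prune_buggy(batches, total, 0, 25) == 2    # subtracts 5 twice
```

In Solidity's checked arithmetic, the equivalent wrong subtraction can underflow and revert rather than merely miscount.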
The “transaction values” that are tracked by the RateLimiter contract are scaled by 1e{token_decimals} * 1e{usdc_decimals}:
RateLimiter::evaluateRateLimit:151
...
uint256 txAmount = amount * priceOracle.getPrice(token, amount);
...
However, RateLimitParams::maxAmount is scaled by 1e{usdc_decimals} * 1e6. It is apparent that, for every token that has a different number of decimals than USDC, the following comparison is inconsistent:
RateLimiter::_exceedsRateLimit:273
...
return !bypassRateLimit[sender] && totalActiveTxValue + txAmount > rateLimitParams.maxAmount;
...
When the rate-limited token has fewer decimals than USDC, this check allows somebody to bypass the rate-limiting restrictions. Conversely, when the rate-limited token has more decimals than USDC, the check might block token operations that do not actually violate the rate-limiting restrictions, preventing users from properly executing operations.
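A worked example (with illustrative constants) shows how far the two scales diverge; the helper names below are ours, not from the codebase:

```python
# Worked example of the scaling mismatch: the tracked transaction value
# carries the token's decimals, while maxAmount is scaled as if every token
# had 6 decimals, so any other decimal count skews the comparison.

USDC_DECIMALS = 6

def tx_amount(amount_tokens, token_decimals, price_usdc):
    # amount * price, as computed in evaluateRateLimit:
    # scale is 10**token_decimals * 10**USDC_DECIMALS
    return amount_tokens * 10**token_decimals * price_usdc * 10**USDC_DECIMALS

def max_amount(limit_usd):
    # maxAmount scale: 10**USDC_DECIMALS * 10**6
    return limit_usd * 10**USDC_DECIMALS * 10**6

# An 18-decimal token worth $1: moving a single token already "exceeds" a
# $1,000,000 limit, because the tx value carries 12 extra orders of magnitude.
assert tx_amount(1, 18, 1) > max_amount(1_000_000)

# A 0-decimal token worth $1: moving $1,000 still passes a $1 limit,
# because the tx value is 6 orders of magnitude too small.
assert tx_amount(1000, 0, 1) < max_amount(1)
```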
MEDIUM SEVERITY
When an operator’s signature is verified in ServiceManager::validateSignatures, the function only asserts that the recovered operator is labeled as “registered” at the time the function is executed:
ServiceManager:324
function validateSignatures(
    AethosTask calldata _task,
    address[] memory signerAddresses,
    bytes[] memory signatures
) external returns (bool isVerified) {
    ...
    for (uint256 i = 0; i < _task.quorumThresholdCount;) {
        ...
        require(operators[operator].status == OperatorStatus.REGISTERED, "Signer is not a registered operator");
        unchecked {
            ++i;
        }
    }
    ...
}
Depending on whether ServiceManager::updateOperatorsForQuorum has been called in a timely manner before the signature validation, the status of an operator may be stale.
Even if ServiceManager::updateOperatorsForQuorum gets properly called, since a single unregistered operator causes the entire transaction to revert, this potentially gives rise to race conditions between the submission of an Aethos message and updateOperatorsForQuorum calls by delegators and operators alike.
A delegator controlling enough shares to deem a signing operator unregistered or even a malicious operator themselves may frontrun a call to ServiceManager::validateSignatures with a call to DelegationManager::undelegate followed by a ServiceManager::updateOperatorsForQuorum, effectively DOS-ing the execution of an Aethos message.
The above-mentioned issues are traditionally addressed in the EigenLayer middleware by verifying the stakes held by the signers at a reference block prior to the current one.
Since RateLimiter::evaluateRateLimit is defined as an external function, the MetaCoinWithRateLimit::sendCoin example client invokes it through an external self-call (this.evaluateRateLimit(...)), meaning that the msg.sender to which the rate-limiting restrictions apply is the MetaCoinWithRateLimit contract itself.
RateLimiter:145
function evaluateRateLimit(string calldata tokenID, uint256 amount) external...

MetaCoinWithRateLimit:23
function sendCoin(address receiver, uint256 amount) public {
    if (!this.evaluateRateLimit(SYMBOL, amount)) {
        ...
Judging by the existence of the RateLimiter::setRateLimitBypass and RateLimiter::removeRateLimitBypass functions, this does not seem to be the intended behavior:
RateLimiter:96
function setRateLimitBypass(
    address user
) external onlyOwner {
    bypassRateLimit[user] = true;
}
...
function removeRateLimitBypass(
    address user
) external onlyOwner {
    bypassRateLimit[user] = false;
}
It seems that the intention is to be able to apply rate limits on a per-user basis, and not on the entire contract.
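A toy model (illustrative; class and variable names are ours) of the external self-call pattern shows the consequence: all users share a single rate-limit bucket keyed by the client contract's address:

```python
# Toy model of the `this.evaluateRateLimit(...)` pattern: an external
# self-call makes the contract itself the msg.sender, so every user shares
# one rate-limit bucket and per-user bypass flags never match actual users.

class RateLimiter:
    def __init__(self):
        self.usage = {}                    # msg.sender -> consumed value

    def evaluate(self, msg_sender, amount, limit):
        used = self.usage.get(msg_sender, 0)
        if used + amount > limit:
            return False
        self.usage[msg_sender] = used + amount
        return True

class Client:
    ADDRESS = "client-contract"

    def __init__(self, limiter):
        self.limiter = limiter

    def send_coin(self, user, amount, limit):
        # External self-call: msg.sender becomes the client contract itself,
        # not `user`, so `user` is never consulted by the limiter.
        return self.limiter.evaluate(self.ADDRESS, amount, limit)

limiter = RateLimiter()
client = Client(limiter)
assert client.send_coin("alice", 60, limit=100)
# Bob is throttled by Alice's usage, because both share the contract's bucket.
assert not client.send_coin("bob", 60, limit=100)
```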
M3
When the oldest relevant block overlaps with a rate-limiting window, rate limiting accrues even though the rate-limiting window has ended
This happens because the oldest relevant block might overlap with a rate-limiting window, even when the current block.number does not fall into that window.
RateLimiter:224
function _setTotalActiveTxValue(
    uint256 oldestRelevantBlock
) internal {
    TxHistory storage history = txHistory[msg.sender];
    uint256 total = history.total;
    uint256 ptr = history.ptr;
    while (ptr < history.txBatches.length && history.txBatches[ptr].endBlockNumber <= oldestRelevantBlock) {
        total -= history.txBatches[history.ptr].amount;
        ptr++;
    }
    ...
RateLimiter:241
function _getTotalActiveTxValue(address sender, uint256 oldestRelevantBlock) internal view returns (uint256) {
    ...
    uint256 total = history.total;
    uint256 ptr = history.ptr;
    while (ptr < history.txBatches.length && history.txBatches[ptr].endBlockNumber < oldestRelevantBlock) {
        total -= history.txBatches[history.ptr].amount;
        ...
We only subtract the value of old batches if the oldest relevant block does not overlap with their durations. However, the current block may clearly no longer fall within the rate-limiting batches that are still considered active:
RateLimiter:201
function _shouldUpdateExistingBatch(TxHistory storage history, uint256 blockNumber) private view returns (bool) {
    return
        history.txBatches.length > 0 && blockNumber < history.txBatches[history.txBatches.length - 1].endBlockNumber;
}
Imagine a scenario where a user performs a rate-limited operation of size K at block number S, defining a rate-limiting batch spanning [S, S+batchSize]. Assume that at block S+batchSize+1 (after the rate-limiting batch has ended) the user performs another rate-limited operation of size K'. If batchSize >= duration, then:
- it holds that S <= S+batchSize+1-duration <= S+batchSize < S+batchSize+1
- the recorded total is K+K'

So the user in this scenario is punished by accruing total value, even though the second operation took place outside of the initial rate-limiting batch.
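The inequalities above can be checked with concrete (illustrative) numbers:

```python
# Numeric check of the M3 scenario: with batch_size >= duration, the oldest
# relevant block at S+batch_size+1 still falls inside the first batch
# [S, S+batch_size], so its value K is not pruned and the second operation
# accrues on top of it.

S, batch_size, duration = 100, 50, 40
K, K_prime = 10, 5

current_block = S + batch_size + 1            # first batch has ended
oldest_relevant = current_block - duration

# The first batch is pruned only if it ended at or before oldest_relevant.
first_batch_end = S + batch_size
pruned = first_batch_end <= oldest_relevant
assert not pruned                             # batch still counted as active

total = K_prime if pruned else K + K_prime
assert total == K + K_prime                   # user is over-charged
```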
LOW SEVERITY
[No low severity issues]
CENTRALIZATION ISSUES
It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol’s owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.)
The owner of the ServiceManager contract can influence any vote outcome, by deregistering operators, changing strategies or changing the required share threshold.
N2
The owner of the PriceAggregatorUniV3 contract is burdened with the maintenance of all pool overrides
Prices returned by the PriceAggregatorUniV3 contract rely on the _fetchAmountCrossingPools and _getPoolForRoute functions:
PriceAggregatorUniV3:304
function _getPoolForRoute(
    PoolAddress.PoolKey memory _poolKey
) internal view returns (address pool) {
    pool = _getOverriddenPool(_poolKey);
    if (pool == address(0)) {
        revert PriceAggregatorUniV3_NoPoolFound(_poolKey.token0, _poolKey.token1);
    }
    // TODO: can improve this in future by dynamically getting pool address from factory
}
PriceAggregatorUniV3:258
function _fetchAmountCrossingPools(
    address _tokenIn,
    uint256 _amountIn,
    address _tokenOut,
    uint256 _twapPeriod
) internal view returns (uint256 amountOut) {
    // If the tokenIn:tokenOut route was overridden to use a single pool, derive price directly from that pool
    address overriddenPool = _getOverriddenPool(
        PoolAddress.getPoolKey(
            _tokenIn,
            _tokenOut,
            uint24(0) // pool fee is unused
        )
    );
    if (overriddenPool != address(0)) {
        return _fetchAmountFromSinglePool(_tokenIn, _amountIn, _tokenOut, overriddenPool, _twapPeriod);
    }
    revert PriceAggregatorUniV3_NoPoolFound(_tokenIn, _tokenOut);
    // TODO: can introduce a "crossing" pool with ETH for tokenIn:tokenOut as needed
}
As can be seen, in order for these functions to return a price, the code requires that the corresponding tokenIn/tokenOut route be overridden, even though the Uniswap V3 factory contract could be used to determine the pool corresponding to a pair in a decentralized way, without imposing any trust assumptions on the owner of the oracle.
N3
The owner of the PriceAggregatorUniV3 contract fully controls the pools that are used to query token prices
With the ability to override all pool routes (see the issue above), together with the ability to add/remove any token and configure its USDC pool, the owner fully controls the pools that are used for token pricing.
RateLimiter:96
function setRateLimitBypass(
    address user
) external onlyOwner {
    bypassRateLimit[user] = true;
}

RateLimiter:106
function removeRateLimitBypass(
    address user
) external onlyOwner {
    bypassRateLimit[user] = false;
}
OTHER / ADVISORY ISSUES
This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them.
Issue P2 mentioned that the Aethos system could be re-implemented to leverage the EigenLayer Middleware functionality.
Currently, the following state variables within the ServiceManager contract are not being used:

ServiceManager:35
uint256 public taskCounter;

ServiceManager:43
mapping(address => bool) public operatorCanRegister;

ServiceManager:45
mapping(bytes32 => bool) public pendingTasks;
Additionally, the RateLimiter::isRateLimiterEnabled state variable is not used in any meaningful way, since the functions that enforce the rate limiting restrictions do not check it.
RateLimiter:127
function enableRateLimiter() external onlyOwner {
    isRateLimiterEnabled = true;
}
...
function disableRateLimiter() external onlyOwner {
    isRateLimiterEnabled = false;
}
Currently, each client is expected to call the ServiceManager::setPolicy and ServiceManager::removePolicy functions in order to enable or disable policies:
ServiceManager:284
function setPolicy(
    string memory _policyID
) external {
    clientToPolicy[msg.sender][_policyID] = true;
    emit SetPolicy(msg.sender, _policyID);
}

ServiceManager:295
function removePolicy(
    string memory _policyID
) external {
    clientToPolicy[msg.sender][_policyID] = false;
    emit RemovedPolicy(msg.sender, _policyID);
}
The ServiceManager contract records all enabled policies within the clientToPolicy mapping. However, it’s not possible to get all of the policies for a specific client, since the mapping cannot be iterated.
While it is entirely possible to record enabled policies through other means (e.g., monitoring past on-chain transactions and/or maintaining a DB), the protocol might wish to consider using a data structure like OZ’s enumerable map.
While the AVS’s metadata URI only affects off-chain functionality, in order for the initialization of the ServiceManager contract to be complete, the protocol may consider adding _metadataURI as an additional argument to its initialization function.
The ServiceManager::initialize function could then perform the AVSDirectory::updateAVSMetadataURI call:
AVSDirectory::setMetadataURI:177
...
IAVSDirectory(avsDirectory).updateAVSMetadataURI(_metadataURI);
...
Consider defining a constructor in the RateLimiter contract since it's possible for a sub-contract to omit the owner initialization, effectively setting owner as address(0).
RateLimiter:22
contract RateLimiter is IRateLimiter, Ownable {
    ...
This could consequently lock the important owner-specific functions.
It is reasonable to assume that calls for which RateLimiter::evaluateRateLimit returns false will end up reverting, just like in the MetaCoinWithRateLimit example contract. In such cases, the event emission inside the RateLimiter::evaluateRateLimit function does not serve any practical purpose.

The following lines inside RateLimiter::evaluateRateLimit:
RateLimiter::evaluateRateLimit:151
...
uint256 txAmount = amount * priceOracle.getPrice(tokenID, amount);
uint256 oldestRelevantBlock = _getOldestRelevantBlock(block.number);
...
The txAmount and oldestRelevantBlock variables above are originally computed inside the preceding RateLimiter::_checkIfLimitExceeds call:
RateLimiter::_checkIfLimitExceeds:175
...
uint256 txAmount = amount * priceOracle.getPrice(tokenID, amount);
uint256 oldestRelevantBlock = _getOldestRelevantBlock(block.number);
...
The code could be rewritten to avoid the unnecessary re-computation.
The code is compiled with Solidity 0.8.12. This version has some known bugs which we do not believe affect the functionality of the protocol.
However, we would like to note a few issues that may affect client contracts developed using this specific version (even if they are not part of the AVS).
Given that client contracts are encouraged to use abi.encodeWithSignature to encode their arbitrary arguments, client contracts that:
- have a statically-sized array in one of the protected functions, and
- have at least one dynamic argument

are affected by this issue, which leads to the corruption of the first 32 bytes of the first dynamic argument.
Also, client contracts might be affected by this issue where malformed calldata encoding nested arrays may not revert as expected.
Lastly, if client contracts choose to use abi.encodeCall , they could be affected by this issue where a selector passed as a literal value is encoded incorrectly.
DISCLAIMER
The audited contracts have been analyzed using automated techniques and extensive human inspection in accordance with state-of-the-art practices as of the date of this report. The audit makes no statements or warranties on the security of the code. On its own, it cannot be considered a sufficient assessment of the correctness of the contract. While we have conducted an analysis to the best of our ability, it is our recommendation for high-value contracts to commission several independent audits, a public bug bounty program, as well as continuous security auditing and monitoring through Dedaub Watchdog.
ABOUT DEDAUB
Dedaub offers significant security expertise combined with cutting-edge program analysis technology to secure some of the most prominent protocols in DeFi. The founders, as well as many of Dedaub's auditors, have a strong academic research background together with a real-world hacker mentality to secure code. Protocol blockchain developers hire us for our foundational analysis tools and deep expertise in program analysis, reverse engineering, DeFi exploits, cryptography and financial mathematics.