Hann Finance ~ CDP Core
Smart Contract Security Assessment
December 30, 2025

SUMMARY
ABSTRACT
Dedaub was commissioned to perform a security audit of Hann Finance’s CDP core. The audited part is a fork of Liquity V2 to be launched on the Kaia network. The aim is to provide a Kaia-native stable asset minted against KAIA (the network’s native asset), hnKAIA (a multi-asset yield-bearing receipt token representing exposure to various LSTs and their yield) and earnUSDT (a yield-bearing token based on USDT deposits).
BACKGROUND
Despite this engagement’s scope, Hann Finance is not merely a Liquity V2 fork. Instead, the protocol functions as an umbrella of DeFi primitives, including:
- Liquid staking
- Yield aggregation
- Market making (in the form of a stable-swap implementation)
SETTING & CAVEATS
This audit report mainly covers the contracts of the at-the-time private cdp-core repository of Hann Finance: https://github.com/Hann-Finance/cdp-core , on branch audit-freeze at commit 8b8a460b146a09de60667636945832a667db9f0a.
For the zappers module, the protocol team implemented a series of simplifications while the auditors reviewed the core of the Liquity v2 fork. Therefore, the zappers module was examined on branch audit/zappers-updates at commit 6cdb89204d90c8ac0938e289a11e1d0945abe0a8 (corresponding to the following PR: https://github.com/Hann-Finance/cdp-core/pull/5)
Audit Start Date: November 25, 2025
Report Submission Date: December 30, 2025
Since the engagement mainly covers the Liquity fork of the protocol, the auditors consulted the original Liquity codebase (https://github.com/liquity/bold) at commit 9812e0818c25b5473d15a60f4ad2773b73a561d7.
The following paths were directly compared against the Liquity codebase:
- cdp-core/packages/core/src ↔ bold/contracts/src
- cdp-core/packages/zappers/src/Zappers/ ↔ bold/contracts/src/Zappers
Whenever files shared the same name, or when it was possible to infer identical intended semantics across different files, the file diffs were examined directly. Completely new files were audited in isolation.
2 auditors worked on the following contracts:
[Liquity fork]
[Zappers]
As previously noted, despite the engagement’s limited scope, Hann Finance encompasses numerous additional DeFi primitives. Although the implementation of the supported collateral types, as well as the protocol’s stable-swap implementation, lie outside the scope of this audit, the auditors have consulted code paths relevant to the Liquity V2 fork and have examined the use of such primitives from an economic perspective. Wherever possible, the auditors have informed the development team of major security concerns identified in the inspected code paths.
The audit’s main target is security threats, i.e., what the community understanding would likely call "hacking", rather than the regular use of the protocol. Functional correctness (i.e., issues in "regular use") is a secondary consideration. Typically it can only be covered if we are provided with unambiguous (i.e., full-detail) specifications of the expected, correct behavior. In terms of functional correctness, we often trusted the code’s calculations and interactions in the absence of any other specification. Functional correctness of low-level calculations (including units, scaling, and quantities returned from external protocols) is generally covered most effectively through thorough testing rather than human auditing.
PROTOCOL-LEVEL CONSIDERATIONS
Info
The wearnUSDT pricing scheme, after the P2 resolutions are applied (private PR), uses the owed collateral in the Cooldown vault, which Superearn natively tracks via public getters. The same liquidity requirements apply; the price could be manipulated downwards if it is possible to explicitly create debt in the Superearn Cooldown vault by permissionlessly initiating any Superearn strategy interaction (outside Hann Finance):
WEarnUSDTPriceFeed::_pullPrimary:384...
function getLossRatios() public view returns (uint256 shortfallBps, uint256 claimLossBps) {
...
uint256 tv = cooldownVault.totalLockedAssets() + cooldownVault.idleBalance();
...
uint256 shortfall = cooldownVault.totalShortfall();
uint256 claimLoss = cooldownVault.totalClaimLoss();
...
function getDynamicHaircutBps() public view returns (uint256 haircutBps) {
...
//@Dedaub: Imagine that a user has atomically called a Defi
// protocol that uses Superearn and explicitly creates some
// cooldown vault debt. If liquidity is low, it's possible
// that the created debt is enough to cause a price manipulation
(uint256 shortfallBps, uint256 claimLossBps) = getLossRatios();
...
uint256 haircutBps = getDynamicHaircutBps();
if (haircutBps > 0 && price > 0) {
uint256 discount = (price * haircutBps) / 10_000;
if (discount > price) {
discount = price;
}
price = price - discount;
}
...
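To make the concern concrete, the following sketch (a hypothetical Python model, not the deployed Solidity; all figures and names are illustrative) shows how attacker-induced cooldown-vault debt translates into an immediate price haircut when vault liquidity is thin:

```python
# Hypothetical sketch of how cooldown-vault debt feeds into the dynamic
# haircut, assuming the loss ratios are basis points over total vault
# assets, as the snippet above suggests.

BPS = 10_000

def loss_ratios_bps(total_locked, idle, shortfall, claim_loss):
    tv = total_locked + idle
    if tv == 0:
        return 0, 0
    return shortfall * BPS // tv, claim_loss * BPS // tv

def discounted_price(price, haircut_bps):
    discount = price * haircut_bps // BPS
    return price - min(discount, price)

# Healthy vault: no shortfall, no haircut.
assert loss_ratios_bps(1_000_000, 100_000, 0, 0) == (0, 0)

# An attacker-induced 2% shortfall on a thin vault translates into a
# 200 bps haircut, i.e. a 2% downward price move in a single block.
s_bps, _ = loss_ratios_bps(90_000, 10_000, 2_000, 0)
assert s_bps == 200
assert discounted_price(10**18, s_bps) == 98 * 10**16
```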
Both wearnUSDT and HNKAIA are implemented as asset vaults, so their prices are derived from the on-chain redemption rate of each asset:
WEarnUSDTPriceFeed::_tryReadNavUsdPerToken:176...
uint256 oneShare = 10 ** uint256(shareDecimals); // 1 earnUSDT
uint256 assets;
//@Dedaub: will query coolDownVault's exchange rate
try router.previewRedeem(earnUSDT, oneShare) returns (uint256 a) {
...
HNKAIAPriceFeed::_tryReadNavUsdPerToken:176...
uint256 balance = IERC20Metadata(asset).balanceOf(address(hnkaia));
...
uint256 qty18 = _normalizeToDecimals(balance, _getDecimals(asset), 18);
if (hasProvider) {
uint256 rate = _getRedeemRateView(asset);
//@Dedaub: Technically calculating the redemption rate denominated in KAIA
result.navKaia = qty18 * rate / PRECISION;
...
It’s possible for an attacker to flash-loan an amount that is O(shares) (e.g., 100 * shares) and inflate the price of the collateral by transferring it directly to the vault contract. In such scenarios, the cost of manipulation is O(shares), so the attacker must supply some additional collateral X such that X * manipulated_price * (1/1.1) > cost_of_manipulation, profiting directly from minting USDHN and then redeeming or selling it.
Ultimately, this means that vault-based-assets have to accumulate enough liquidity so that manipulating the price via donations becomes impractical.
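The manipulation economics above can be illustrated with a hypothetical example (the underlying is assumed to trade at roughly $1; all figures are made up):

```python
# Illustrative donation-attack economics; hypothetical numbers only.

SCALE = 10**18

def share_price(assets, shares):
    # NAV-style redemption rate: underlying per share, 18 decimals
    return assets * SCALE // shares

assets = 1_000 * SCALE      # thin vault: 1,000 underlying (~$1 each)
shares = 1_000 * SCALE      # backing 1,000 shares
donation = 100_000 * SCALE  # flash-loaned donation, O(100x the supply)

p0 = share_price(assets, shares)
p1 = share_price(assets + donation, shares)
assert p1 == 101 * p0  # redemption rate inflated ~101x

# At a 110% MCR the attacker can mint up to X * p1 / 1.1 of debt value
# against extra collateral X; the attack pays off once that exceeds the
# donation left behind in the vault.
X = 1_200 * SCALE
mintable = X * p1 // SCALE * 10 // 11
assert mintable > donation

# Against a deep vault the same donation barely moves the price and the
# inequality flips: this is the liquidity requirement noted above.
deep_p1 = share_price(10_000_000 * SCALE + donation, 10_000_000 * SCALE)
deep_mintable = X * deep_p1 // SCALE * 10 // 11
assert deep_mintable < donation
```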
Largely resolved
- The peg of USDT with USD will be a protocol-wide assumption. An oracle will be used if needed.
- Regarding the Superearn integration, although perfect code coverage could not be achieved - as no code was public at the time of the review - the primary aim was to communicate potential pitfalls to the protocol team. To review the Superearn integration, we relied on:
- The Superearn interface used in the code
- Supplementary material provided by the Hann Finance team
- Publicly available Superearn documentation
There are some implicit and explicit assumptions on the price behavior of earnUSDT and HNKAIA that may have security consequences if they are not accounted for properly:
[wearnUSDT]
WEarnUSDTPriceFeed returns the price of wearnUSDT denominated in USDT. This carries the implicit assumption that USDT will stay very close to $1. If the market price of USDT deviates from USD, profitable arbitrage opportunities might arise within the protocol.
WEarnUSDTPriceFeed::_tryReadNavUsdPerToken:176...
uint256 oneShare = 10 ** uint256(shareDecimals); // 1 earnUSDT
uint256 assets;
//@Dedaub: will query coolDownVault's earnUSDT -> USDT exchange rate
try router.previewRedeem(earnUSDT, oneShare) returns (uint256 a) {
...
- The price technically depends on the available liquidity in SuperEarn’s CooldownVault contract (this is outside the scope of this engagement, but we reviewed the integration for the sake of completeness, based on information provided by the protocol team). Since this vault appears intended to facilitate all queued SuperEarn withdrawals, the dynamics of withdrawals have to be considered: if liquidity ever gets low, this could lead to a sharp fall in the asset's price within the protocol. While SuperEarn's portfolio might be in perfect health, such instances of sparse liquidity might give rise to liquidation opportunities (or even a complete branch shutdown if the price falls below the SCR).
- The code expects a monotonic price increase for wearnUSDT (or relative stability in the price):
WEarnUSDTPriceFeed::_pullPrimary:152...
if (lastGoodPrice != 0) {
uint256 tol = monotonicTolerance + (lastGoodPrice * monotonicToleranceBps) / 10_000;
if (nav + tol < lastGoodPrice) {
emit OracleFailure("non-monotonic");
_registerOracleFailure();
return (lastGoodPrice, true);
}
...
From an economics perspective, the protocol might end up operating on stale prices that do not reflect the market conditions of earnUSDT, leading to arbitrage opportunities in adverse market conditions.
- On an analogous note, this implicit smoothing of price falls gives parties a lot of leeway to avoid liquidations (e.g., a user can notice a sharp fall in price, know that the last good price will be used, and exit without risking their liquidity). Although this can benefit UX, it deteriorates the health of a branch in terms of its adaptability to market price changes.
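A minimal model of the guard above (hypothetical parameter names, mirroring the snippet's logic) shows how a genuine crash gets smoothed into a stale price:

```python
# Sketch of the non-monotonicity guard, assuming nav, lastGoodPrice and
# the tolerance parameters share the same 18-decimal fixed point.

BPS = 10_000

def pull_primary(nav, last_good, tol_abs, tol_bps):
    """Returns (price, used_fallback)."""
    if last_good != 0:
        tol = tol_abs + last_good * tol_bps // BPS
        if nav + tol < last_good:
            # price fell beyond tolerance: keep serving the stale price
            return last_good, True
    return nav, False

last = 10**18
# A 0.5% drop within a 1% (100 bps) tolerance is accepted...
assert pull_primary(995 * 10**15, last, 0, 100) == (995 * 10**15, False)
# ...but a 5% market crash is smoothed away: the protocol keeps pricing
# collateral at the stale (higher) value, letting borrowers exit early.
assert pull_primary(95 * 10**16, last, 0, 100) == (last, True)
```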
[HNKAIA]
- When the redemption rate of HNKAIA decreases, the price feed effectively pauses future deposits into the HNKAIA multi-asset vault for the asset for which the decrease was observed:
HNKAIAPriceFeed::_pullRedeemRateInternal:1324...
RateCache storage cache = redeemRateCache[asset];
uint256 last = cache.value;
if (last != 0 && newRate < last) {
_pauseAssetDeposits(asset, REASON_RATE_DECREASE);
emit RedeemRateFallbackUsed(asset, last);
...
}
...
HNKAIA::_ensureFreshRedeemRate:923
function _ensureFreshRedeemRate(address asset) internal view {
//@Dedaub: will end up reverting when paused
try priceFeed.getRedeemRate(asset) returns (uint256) {
return;
} catch (bytes memory reason) {
if (reason.length == 0) revert StalePriceDetected();
assembly {
revert(add(reason, 0x20), mload(reason))
}
}
}
This might create withdrawal pressure on HNKAIA. From a tokenomics perspective, although this might create scarcity in the HNKAIA supply and drive the market price up, it might also affect the ability of users to top up their troves (since the price of their collateral has just fallen). It might make sense not to react to price drops so aggressively, and instead let the core protocol dynamics encourage minting/burning.
- The HNKAIA price feed holds the implicit assumption of a somewhat constant cardinality of observations in the Uniswap v3 oracle before querying the TWAP price.
HNKAIAPriceFeed::_requiredObservationCount:633
function _requiredObservationCount(uint32 period) internal pure returns (uint16) {
uint32 required = period / CARDINALITY_PER_MINUTE;
...
This is not entirely correct, since in practice the number of observations depends on how many swaps have actually been performed. When a TWAP interval outside the oracle's recorded observations is queried, the call reverts. Therefore, external UniswapV3 integrators don’t need to approximate the cardinality of observations.
UniswapV3Pool::0xc7bBeC68d12a0d1830360F8Ec58fA599bA1b0e9b
// @dev Reverts if an observation at or before the desired observation timestamp does not exist.
function observeSingle(
...
On the same note, the observationCardinalityNext of a pool's slot0 field always yields the length of allocated observations; this does not mean that they have all been initialized yet. Consequently, examining the length of oracle observations does not suffice to determine whether a TWAP query will succeed.
Having multiple TWAP windows and choosing the smallest price has the following consequences:
- It makes it easier to defeat the strength of the TWAP if the shorter duration (easier to manipulate) yields the smaller price. Effectively, with this scheme it's easier to manipulate the price downwards with one large trade within the last minute.
- The TWAP’s lag might create profitable arbitrage redemptions, since the protocol might end up valuing the collateral cheaper than the market value. In such a scenario:
  - The user buys/mints some USDHN
  - The user redeems USDHN
  - The protocol undervalues the collateral, so the user ends up receiving more collateral for the redeemed USDHN than they would have gotten at the market price.
HNKAIAPriceFeed::_computeMinTwapQuote:584
for (uint256 i = 0; i < twapWindows.length; ++i) {
...
uint32 period = twapWindows[i];
...
(uint256 windowQuote, bool valid) = _consultTwapQuote(pool, baseToken, quoteToken, period, baseAmount);
...
if (windowQuote < minQuoteRaw) {
minQuoteRaw = windowQuote;
}
...
It’s recommended to have a single TWAP duration that’s both hard to manipulate and does not create a significant deviation from the market price in periods of volatility.
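A toy illustration of the first point, using an arithmetic mean as a stand-in for Uniswap's geometric-mean TWAP and entirely hypothetical per-minute observations:

```python
# Why min() over several TWAP windows inherits the manipulation
# resistance of the *weakest* window. Numbers are purely illustrative.

def twap(observations):
    # arithmetic-mean stand-in for a geometric-mean Uniswap TWAP
    return sum(observations) // len(observations)

# True price 100; attacker pushes the last 1-minute observation to 40.
long_window = [100] * 30          # 30-minute window: barely moves
short_window = [100] * 4 + [40]   # 5-minute window: heavily skewed
quotes = [twap(long_window), twap(short_window)]
assert quotes == [100, 88]
# min() hands the attacker the manipulated quote:
assert min(quotes) == 88
```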
- On an analogous note, unconditionally choosing the smaller of the NAV and TWAP prices, even when redeeming, can create the same profitable redemption arbitrage described in the previous point.
HNKAIAPriceFeed::_selectPrimaryPrice:501...
// use min(NAV, TWAP) for default
if (!v.navAvailable) revert NavUnavailable();
uint256 price = v.navPrice;
if (v.marketAvailable && v.marketPrice > 0) {
price = Math.min(price, v.marketPrice);
}
...
The smaller price always benefits the user when redeeming; that's why the original Liquity V2 codebase had special code paths for fetching the redemption price.
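The arbitrage in question can be quantified with a hypothetical example (Liquity-style redemption math, fees ignored, all figures made up):

```python
# Redemption arbitrage when the feed serves min(NAV, TWAP) and the TWAP
# lags a market move. 18-decimal fixed point throughout.

SCALE = 10**18

def redemption_coll_out(debt_redeemed, price):
    # Liquity-style redemption: debt / price units of collateral
    return debt_redeemed * SCALE // price

market_price = 100 * SCALE   # collateral trades at $100
lagging_twap = 95 * SCALE    # stale TWAP undervalues it by 5%
feed_price = min(market_price, lagging_twap)

coll = redemption_coll_out(1_000 * SCALE, feed_price)
proceeds = coll * market_price // SCALE  # sell received coll at market
# $1,000 of USDHN redeems into ~$1,052 of collateral: a ~5.2% arb.
assert proceeds > 1_050 * SCALE
```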
[KAIA]
- The KAIA price feed enforces a branch shutdown when a significant price change occurs:
KAIAPriceFeed::fetchPrice:214...
if (!_isWithinDeviation(price)) {
emit OracleFailure("Price deviation too high");
bool triggerShutdown = _registerOracleFailure();
if (triggerShutdown) {
return (_shutDownAndSwitchToLastGoodPrice(), true);
}
return (lastGoodPrice, true);
}
...
This is a significant assumption with potentially serious consequences; depending on the chosen deviation threshold, it can end up prematurely shutting down the branch during times of volatility (when large upwards/downwards price movements might happen).
- The scaling of the Pyth oracle response might end up breaking the protocol’s accounting if the exponent component of the response ever becomes positive:
KAIAPriceFeed::_scalePythPrice:511
function _scalePythPrice(int64 priceValue, int32 expo) internal pure returns (uint256) {
if (priceValue <= 0) return 0;
int256 priceInt = int256(priceValue);
int256 exponent = int256(expo) + 18;
...
In such a scenario the scaled price will end up having more than 18 decimals. Although this is fairly atypical of Pyth oracles, it might make sense to guard against such a scenario since the Pyth protocol technically allows it.
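A sketch of the positive-exponent edge case, assuming a scaling routine of the shape shown above (the negative branch is the usual Pyth case; the routine below is a hypothetical model, not the contract's code):

```python
# Pyth publishes prices as (price, expo), meaning price * 10**expo.
# Scaling to 18 decimals multiplies by 10**(expo + 18).

def scale_pyth_price(price_value, expo):
    if price_value <= 0:
        return 0
    exponent = expo + 18
    if exponent >= 0:
        return price_value * 10**exponent
    return price_value // 10**(-exponent)

# Typical feed: expo = -8, result fits 18-decimal accounting ($1.50).
assert scale_pyth_price(150_000_000, -8) == 15 * 10**17

# Pathological but protocol-legal feed: expo = +2 yields a value with
# 20 effective decimals, silently breaking 18-decimal accounting.
assert scale_pyth_price(15, 2) == 15 * 10**20
```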
Largely resolved
- The HNKAIA price feed can still silently fail without shutting down the branch, but this can only happen because:
- Of HNKAIA bootstrapping failures that should never happen in realistic deployments
- Of redemption rate provider failures, which are tied to failures in KAIA’s staking primitive
- Of TWAP failures, which are tied to low pool liquidity and/or a low number of observations in the quoted pool
- The KAIA price feed shuts down a branch only if all price sources fail (while in LiquityV2 a single failure of a pricing sub-component is enough to shut down a collateral branch).
In LiquityV2, price feed failures automatically trigger a branch shutdown:
RETHPriceFeed::_fetchPricePrimary:46...
if (ethUsdOracleDown) {
return (_shutDownAndSwitchToLastGoodPrice(address(ethUsdOracle.aggregator)), true);
}
if (exchangeRateIsDown) {
return (_shutDownAndSwitchToLastGoodPrice(rateProviderAddress), true);
}
// If the ETH-USD feed is live but the RETH-ETH oracle is down, shutdown and substitute RETH-ETH with the canonical rate
if (rEthEthOracleDown) {
return (_shutDownAndSwitchToETHUSDxCanonical(address(rEthEthOracle.aggregator), ethUsdPrice), true);
}
...
LiquityV2::WETHPriceFeed::_fetchPricePrimary:41...
function _fetchPricePrimary() internal returns (uint256, bool) {
assert(priceSource == PriceSource.primary);
(uint256 ethUsdPrice, bool ethUsdOracleDown) = _getOracleAnswer(ethUsdOracle);
// If the ETH-USD Chainlink response was invalid in this transaction, return the last good ETH-USD price calculated
if (ethUsdOracleDown) return (_shutDownAndSwitchToLastGoodPrice(address(ethUsdOracle.aggregator)), true);
...
LiquityV2::WSTETHPriceFeed::_fetchPricePrimary:46...
if (exchangeRateIsDown) {
return (_shutDownAndSwitchToLastGoodPrice(rateProviderAddress), true);
}
if (ethUsdOracleDown) {
return (_shutDownAndSwitchToLastGoodPrice(address(ethUsdOracle.aggregator)), true);
}
// If the STETH-USD feed is down, shut down and try to substitute it with the ETH-USD price
if (stEthUsdOracleDown) {
return (_shutDownAndSwitchToETHUSDxCanonical(address(stEthUsdOracle.aggregator), ethUsdPrice), true);
}
...
However, unlike the KAIA price feed, the HNKAIA and EARNUSDT price feeds do not shut down their branches when oracle failures are detected:
WEarnUSDTPriceFeed::_pullPrimary:146...
(uint256 nav, bool ok) = _tryReadNavUsdPerToken();
if (!ok || nav == 0) {
_registerOracleFailure();
return (lastGoodPrice, true);
}
...
WEarnUSDTPriceFeed::_registerOracleFailure:188
function _registerOracleFailure() internal {
if (lastFailureTime == 0 || block.timestamp > lastFailureTime + failureWindow) {
failureCount = 1;
lastFailureTime = block.timestamp;
} else {
unchecked {
++failureCount;
}
}
if (failureCount >= failureThreshold && !oracleHardDown) {
oracleHardDown = true;
emit OracleHardDown(failureCount, lastFailureTime, lastFailureTime + failureWindow);
}
}
HNKAIAPriceFeed::fetchPrice:286...
(uint256 kaiaUsdPrice, bool kaiaFallback, bool kaiaOk) = _refreshKaiaUsdPrice();
if (!kaiaOk) {
if (lastGoodPrice == 0) revert KaiaPriceUnavailable();
return (lastGoodPrice, true);
}
if (kaiaFallback) {
emit KaiaUsdFallbackUsed(kaiaUsdPrice);
}
(PortfolioValuation memory valuation, bool ok) = _pullPortfolioPrices();
if (!ok) {
return (lastGoodPrice, true);
}
...
Gracefully switching to the last valid price can be beneficial in terms of UX, but one has to account for the potential economic consequences: misinformed pricing schemes can leave the door open to devastating arbitrage attacks against the protocol, making it hard to argue about the economic soundness of the system.
LiquityV2 does not support collateral types that do not use 18-decimal accounting. This can be seen if one examines the ICR calculations at the heart of the protocol’s accounting:
HannMath::_computeCR:49...
function _computeCR(uint256 _coll, uint256 _debt, uint256 _price) internal pure returns (uint256) {
if (_debt > 0) {
uint256 newCollRatio = _coll * _price / _debt;
...
With the typical design choice of price being represented using 18 decimals, the above calculation is correct only when the collateral is also represented using 18 decimals.
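A quick numerical demonstration (hypothetical values) of the decimals mismatch:

```python
# Why _computeCR silently breaks for non-18-decimal collateral when
# price is an 18-decimal USD-per-token value.

SCALE = 10**18

def compute_cr(coll, debt, price):
    # mirrors the Solidity snippet; max uint256 when debt == 0
    return coll * price // debt if debt > 0 else 2**256 - 1

price = 1 * SCALE    # collateral worth exactly $1
debt = 100 * SCALE   # $100 of debt

# 18-decimal collateral: 150 tokens -> 150% CR (1.5e18), as intended.
assert compute_cr(150 * 10**18, debt, price) == 15 * 10**17

# 6-decimal collateral (a USDT-like token): the same 150 tokens yield a
# CR of 1.5e6, i.e. effectively zero, so the trove looks insolvent.
assert compute_cr(150 * 10**6, debt, price) == 1_500_000
```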
Instead of having an immutable set of initially supported collateral types like the original LiquityV2 codebase does, the protocol’s CollateralRegistry aims to enable the dynamic addition of new collateral types. However, there’s no enforcement with respect to the number of decimals for a new collateral:
CollateralRegistry::queueCollateral:146...
uint8 tokenDecimals = asset.decimals();
uint8 resolvedDecimals = decimals == 0 ? tokenDecimals : decimals;
if (resolvedDecimals != tokenDecimals) revert InvalidDecimals();
...
Since this can have serious security-related consequences, it might make sense to add checks that guard against the addition of unsupported assets.
Partly resolved
The team’s resolution is limited by the following two factors:
- Deprecating a collateral should allow for a graceful branch shutdown, by encouraging users to close down their troves – and thus give up their leveraged positions.
- Both the TroveManager and BorrowerOperations contracts are very close to the bytecode size limit, so it’s not easy to add extra checks without splitting the logic of the original LiquityV2 codebase across different contracts.
With those points in mind, the team has opted to enforce that:
- No debt-creation operations can take place inside TroveManager when a collateral is deprecated
- Troves can be opened only in active collateral types
Borrowers still have the power to:
- Adjust the trove’s interest rate to the minimum one (so that their trove's debt increases as slowly as possible)
- Top up their trove with collateral
These dynamics make it possible for troves to remain open indefinitely, long after a collateral type is deprecated. Under the current restrictions, it’s believed that this strikes a reasonable balance between maintaining good UX and preventing the creation of new debt once a collateral type is deemed deprecated.
From an economic perspective, if the protocol makes sure to only deprecate a collateral in the event of an emergency, it’s very likely that the very same reason that led to the deprecation will discourage users from keeping their troves open indefinitely.
The protocol’s CollateralRegistry contract introduces functionality to dynamically add/deprecate collateral types:
CollateralRegistry::queueCollateral:151...
function queueCollateral(IERC20Metadata asset, ITroveManager troveManager, address priceFeed, uint8 decimals)
public
onlyGov
returns (uint256 index)
{
...
index = _addCollateralRecord(
asset,
troveManager,
priceFeed,
resolvedDecimals,
ICollateralRegistry.CollateralStatus.Pending,
queuedAt,
activateAt,
msg.sender
);
...
CollateralRegistry::_addCollateralRecord:438...
_collaterals.push(
CollateralConfig({
asset: asset,
troveManager: troveManager,
priceFeed: priceFeed,
decimals: decimals,
status: status,
queuedAt: queuedAt,
activateAt: activateAt,
addedAt: uint48(block.timestamp),
addedBy: addedBy
})
);
...
While a collateral type can be in either of the following statuses:
- Pending
- Active
- Deprecated
None of the above-mentioned statuses is otherwise enforced within the protocol’s code. Only redemption routing enforces that the collateral branches from which debt is burnt are active:
CollateralRegistry::redeemCollateral:280...
for (uint256 i; i < collatCount; ++i) {
if (_collaterals[i].status == ICollateralRegistry.CollateralStatus.Active) {
++activeCount;
}
}
if (activeCount == 0) revert NoActiveCollateral();
uint256[] memory activeIndices = new uint256[](activeCount);
uint256 cursor;
for (uint256 i; i < collatCount; ++i) {
if (_collaterals[i].status == ICollateralRegistry.CollateralStatus.Active) {
activeIndices[cursor++] = i;
}
}
...
Users can open/close/adjust/liquidate troves before a collateral becomes active and even after it gets deprecated. From an economic perspective, selectively turning off redemptions for a collateral branch does not eliminate all incentives to mint debt on that branch, and it also puts greater redemption pressure on the rest of the branches when opportunities to restore the peg arise.
This even creates an interesting incentive: users can avoid redemptions altogether by using a deprecated collateral, since normal (non-urgent) redemptions will never be routed through their troves, and their leverage is somewhat guaranteed.
Currently the earnUSDT branch is going to enforce a 105% minimum collateralization ratio:
Constants:38...
uint256 constant MCR_EARNUSDT = 105 * _1pct;
...
While convenient for users, it leaves a theoretical possibility for liquidatable troves to reach ICR < 100% if a price drop is significant enough and there are troves sitting just above the MCR.
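Back-of-the-envelope arithmetic (a hypothetical trove) for this buffer:

```python
# How little room a 105% MCR leaves before liquidations become
# loss-making. 18-decimal fixed point; figures are illustrative.

SCALE = 10**18

def icr(coll, debt, price):
    return coll * price // debt

coll, debt = 105 * SCALE, 100 * SCALE
# Trove sits exactly at the 105% MCR with the collateral at $1.00.
assert icr(coll, debt, SCALE) == 105 * 10**16
# A 5% price drop (to $0.95) pushes ICR below 100%: liquidating the
# trove can no longer make the protocol whole.
assert icr(coll, debt, 95 * 10**16) < SCALE
```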
Reward distribution inside the PILDistributor contract takes place by iterating over all receivers and calculating the rewards to be distributed to each of them:
PILDistributor::_processDistribution:247
function _processDistribution(uint256 maxReceivers)
...
while (ctx.index < ctx.receiverCount && processed < ctx.limit && ctx.remainingWeight > 0) {
...
_transferAndNotify(ctx.index, receiver, amount);
...
Since this potentially takes place within each debt-accruing operation, one has to be mindful of the number of possible receivers; this distribution design cannot reasonably scale to many receivers without incurring high gas costs and raising DoS concerns.
The original Liquity design avoided this by using index-based reward distribution, which addresses the above-mentioned concerns.
The developer team has already confirmed that the number of reward receivers will be bounded (with the code being changed to enforce this bound); we leave this consideration for the sake of completeness.
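For reference, the index-based pattern can be sketched as follows (all names are illustrative, not the protocol's):

```python
# Minimal sketch of index-based ("accumulator") reward distribution,
# the pattern the original Liquity design uses to avoid per-receiver
# loops: distribute() is O(1) and each receiver settles lazily.

SCALE = 10**18

class Distributor:
    def __init__(self):
        self.index = 0        # cumulative rewards per unit of weight
        self.weight = {}      # receiver -> weight
        self.total_weight = 0
        self.snapshot = {}    # receiver -> index at last claim

    def add_receiver(self, who, w):
        self.weight[who] = w
        self.snapshot[who] = self.index
        self.total_weight += w

    def distribute(self, amount):
        # O(1): no loop over receivers, just bump the global index
        self.index += amount * SCALE // self.total_weight

    def claim(self, who):
        owed = self.weight[who] * (self.index - self.snapshot[who]) // SCALE
        self.snapshot[who] = self.index
        return owed

d = Distributor()
d.add_receiver("a", 3)
d.add_receiver("b", 1)
d.distribute(100)
assert d.claim("a") == 75 and d.claim("b") == 25
```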
Setting a minimum reward distribution delay of 0 by calling PILDistributor::setMinDistributionInterval:
PILDistributor::setMinDistributionInterval:184
function setMinDistributionInterval(uint256 _interval) external onlyOwner {
uint256 previous = minDistributionInterval;
minDistributionInterval = _interval;
emit MinDistributionIntervalUpdated(previous, _interval);
}
PILDistributor::_defaultCanDistribute:344
function _defaultCanDistribute() internal view returns (bool) {
return block.timestamp >= lastDistributionTime + minDistributionInterval
&& rewardToken.balanceOf(address(this)) > 0 && totalWeightBps == 10000 && receivers.length > 0;
}
can give rise to a race condition between PILDistributor::setReceivers and PILDistributor::distribute.
If the owner of the PILDistributor contract decides to change receivers, any current receiver might nonetheless front-run a PILDistributor::distribute() call and initiate a distribution on any dust balance in the contract.
Naturally, the impact of this is expected to be minor, so this mostly serves as an informational note.
Anyone may initiate an InterestRouter::routeInterest operation:
InterestRouter::routeInterest
function routeInterest(uint256 _amount, bytes calldata _context) external override nonReentrant {
...
uint256 sentToPil = _sendToPilDistributor(_amount);
...
}
InterestRouter::_sendToPilDistributor:1257...
try distributor.distribute() {}
catch (bytes memory reason) {
emit PilDistributionFailed(reason);
}
...
In the scenario that the interest router will be configured as a “distributor” in the PILDistributor contract, a distribution can be initiated even if the minimum distribution delay has not passed:
PILDistributor::_requireAuthorized:226...
function _requireAuthorized() internal view {
require(
block.timestamp >= lastDistributionTime + minDistributionInterval || msg.sender == owner()
|| hasRole(DISTRIBUTOR_ROLE, msg.sender),
...
A problem may arise if a malicious party calls InterestRouter::routeInterest after donating 1 wei of tokens directly to the PILDistributor contract. In the event this is properly timed to trigger the initiation of a new distribution in the PILDistributor contract, an iteration through all receivers will take place, finalizing the distribution (even if 0 tokens end up being transferred).
This would create a really cheap and effective vector to DOS reward distribution altogether.
Naturally, the same consideration holds for any misconfigured party holding the REWARD_DISTRIBUTOR role, but we explicitly focus on the scenario that would allow anyone to trigger the DOS permissionlessly.
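A simplified model of the timing (illustrative names and values, not the contract's actual storage layout):

```python
# Dust-triggered distribution DOS: any distribution, however small,
# resets the interval timer and finalizes a round, locking honest
# distributions out until the interval elapses again.

class PilModel:
    def __init__(self, interval):
        self.interval = interval
        self.last_distribution = 0

    def default_can_distribute(self, now, balance):
        return now >= self.last_distribution + self.interval and balance > 0

    def distribute(self, now):
        # a privileged path (e.g. routeInterest via DISTRIBUTOR_ROLE)
        # bypasses default_can_distribute entirely
        self.last_distribution = now

pil = PilModel(interval=86_400)
# Attacker donates 1 wei and triggers a distribution just before the
# honest keeper's planned call at t = 86_400.
pil.distribute(now=86_399)
assert not pil.default_can_distribute(now=86_400, balance=10**21)
# Meaningful rewards stay undistributed for another full interval.
assert pil.default_can_distribute(now=86_399 + 86_400, balance=10**21)
```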
The binary search in EarnUSDTFlashSwapper::_usdtForTargetWEarn (and elsewhere within the zappers module) seems to be a potentially expensive and unnecessary method of quoting token amounts:
EarnUSDTFlashSwapper::_usdtForTargetWEarn:213...
uint256 wAtHi = _quoteWEarnFromUsdt(hi);
if (wAtHi == 0) {
revert InsufficientOutput(wEarnTarget, 0);
}
// Hi-doubling phase: ensure we find a hi such that F(hi) >= target, up to 64 iterations
uint8 attempts;
while (wAtHi < wEarnTarget && attempts < 64) {
if (hi > type(uint256).max / 2) {
revert InsufficientOutput(wEarnTarget, wAtHi);
}
hi = hi * 2;
wAtHi = _quoteWEarnFromUsdt(hi);
attempts++;
}
...
// Binary search on [lo, hi] for minimal usdtIn such that F(usdtIn) >= wEarnTarget
uint256 lo = 0;
while (lo < hi) {
uint256 mid = (lo + hi) / 2;
uint256 wAtMid = _quoteWEarnFromUsdt(mid);
if (wAtMid >= wEarnTarget) {
hi = mid;
} else {
lo = mid + 1;
}
}
...
Apart from the design itself, some aspects of the referenced code are a bit counterintuitive; even though the snippet finds a lower amount (at mid) that satisfies the required target value, the final loop does not terminate there.
We would highly recommend using analytical calculations when quoting any form of swap. It's easier to argue about the correctness of the quotes, and it's vastly more efficient. After all, doing a binary search that treats swap quotes as an incrementally queried black box defeats the entire point of having analytical calculations within smart contracts when a swap is performed.
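To illustrate, on a toy constant-product pool (fee ignored, hypothetical reserves; real pools would need the fee and routing baked into the formula) the searched-for input has a closed form, so both the hi-doubling phase and the binary search collapse into a single computation:

```python
# Binary search over an exact-in quote versus its analytical inverse.

def quote_out(amount_in, r_in, r_out):
    # exact-in constant-product quote: floor(x * r_out / (r_in + x))
    return amount_in * r_out // (r_in + amount_in)

def min_in_for_out(target_out, r_in, r_out):
    # minimal x with quote_out(x) >= target_out, via ceiling division
    den = r_out - target_out
    return (r_in * target_out + den - 1) // den

r_in, r_out, target = 10**24, 10**24, 5 * 10**20

# dozens of quote_out() calls...
lo, hi = 0, 2 * target
while lo < hi:
    mid = (lo + hi) // 2
    if quote_out(mid, r_in, r_out) >= target:
        hi = mid
    else:
        lo = mid + 1

# ...versus one formula, returning the same minimal input
analytic = min_in_for_out(target, r_in, r_out)
assert quote_out(analytic, r_in, r_out) >= target
assert lo == analytic
```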
From an economics perspective, when opening a leveraged trove it's not beneficial to attempt to repay as much debt as possible:
FlashSwapper::_handleIncrease:309...
IERC20(payToken).safeTransfer(address(pool), owe);
// Apply any remaining USDHN to repay debt (consume base MIN_DEBT first, avoid flushing)
uint256 usdhnAfter = IERC20(address(usdhnToken)).balanceOf(address(this));
if (usdhnAfter != 0) {
LatestTroveData memory t = troveManager.getLatestTroveData(op.troveId);
uint256 repayable;
if (t.entireDebt > MIN_DEBT) {
uint256 maxRepay = t.entireDebt - MIN_DEBT;
repayable = usdhnAfter > maxRepay ? maxRepay : usdhnAfter;
}
if (repayable != 0) {
borrowerOperations.repayUSDHN(op.troveId, repayable);
usdhnAfter -= repayable;
}
...
Repaying USDHN and burning the tokens reduces the user's leveraged exposure and potentially restricts the user's options.
Depending on the intended user flows, this might make it easier to develop around zappers, but it is again a semantic deviation from what the original LiquityV2 codebase does.
For LeveragedGasCompZapper::leverDownTrove operations, the flow repays as much debt as the value of the desired amount of collateral to be de-leveraged:
LeveragedGasCompZapper::leverDownTrove:270...
(address flashToken, uint8 flashTokenDec) = _flashTokenFor(LeverageTypes.Kind.LeverDown);
uint256 flashAmount =
_flashAmountForDecrease(flashToken, flashTokenDec, uint256(-params.collDelta));
...
FlashSwapper::_handleDecrease:347...
// Use the full flash USDT as exact-in to repay as much debt as possible.
uint256 usdhnOut = _runStableSwap(
sc,
StableOp({usdtToUsdhn: true, exactOut: false, amountIn: borrowed, amountOut: 0, maxIn: borrowed})
);
if (usdhnOut != 0) borrowerOperations.repayUSDHN(op.troveId, usdhnOut);
...
This means it’s possible to remove the requested collateral. However, because debt is overcollateralized, removing collateral worth X only requires repaying less than X in USD value.
Ultimately, the current implementation de-leverages a bit more than desired.
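Illustrative arithmetic (hypothetical figures) for why repaying the full flash amount over-deleverages:

```python
# To withdraw collateral worth X while keeping the trove's CR constant,
# only X / CR of debt value needs to be repaid, not the full X.

SCALE = 10**18

def debt_to_repay_keeping_cr(coll_value_removed, cr):
    # cr expressed with 18 decimals, e.g. 1.5e18 for 150%
    return coll_value_removed * SCALE // cr

# Removing $1,100 of collateral from a 150% CR trove only requires
# repaying ~$733 of debt; the flow above instead swaps and repays the
# full flash amount, de-leveraging further than requested.
repay = debt_to_repay_keeping_cr(1_100 * SCALE, 15 * 10**17)
assert repay == 1_100 * SCALE * 10 // 15
assert repay < 1_100 * SCALE
```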
The original LiquityV2 codebase assumes that an external source has calculated the BOLD required just for the desired collateral removal:
LiquityV2::LeverageLSTZapper::receiveFlashLoanOnLeverDownTrove:186...
// Swap Coll from flash loan to Bold, so we can repay and downsize trove
// We swap the flash loan minus the flash loan fee
// The frontend should calculate in advance the `_params.minBoldAmount` to achieve the desired leverage ratio
// (with some slippage tolerance)
uint256 receivedBoldAmount = exchange.swapToBold(_effectiveFlashLoanAmount, _params.minBoldAmount);
// Adjust trove
borrowerOperations.adjustTrove(
_params.troveId,
_params.flashLoanAmount,
false, // _isCollIncrease
receivedBoldAmount,
false, // _isDebtIncrease
0
);
...
If one looks at the receiveFlashLoanOnLeverDownTrove function of LeverageLSTZapper, it does not remove more BOLD than what's required.
Strictly following the math of Uniswap’s accounting, swap-based quoting is asymmetric:
LeveragedGasCompZapper::_flashAmountForDecrease:590...
IQuoterV2.QuoteExactInputSingleParams memory qp = IQuoterV2.QuoteExactInputSingleParams({
tokenIn: collateralToken,
tokenOut: flashToken,
fee: fee,
amountIn: collToRelease,
sqrtPriceLimitX96: 0
});
(uint256 usdtOut,,,) = leverageQuoter.quoteExactInputSingle(qp);
if (usdtOut == 0) revert QuoteFailed();
return usdtOut;
...
The snippet queries the amount of USDT that will be received as output if collToRelease is provided as input, which is supposed to serve as a quote of how much the collateral is worth in USD.
However, it is not guaranteed that providing the USDT returned by this snippet will yield the exact same collateral used by the quote, because of the execution slippage caused by Uniswap’s accounting.
It would perhaps make more sense to use "exact out" quoting here, since the true query needed by LeveragedGasCompZapper::leverDownTrove is “the amount of USDT needed in order to receive collToRelease”:
LeveragedGasCompZapper::leverDownTrove:270...
uint256 flashAmount =
_flashAmountForDecrease(flashToken, flashTokenDec, uint256(-params.collDelta));
...
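To illustrate the asymmetry, here is a minimal constant-product sketch (invented reserves, a Uniswap V2-style 0.3% fee; the audited pools are V3-like, but the asymmetry is the same in kind): the exact-in proceeds of selling collToRelease are strictly less than the USDT needed to buy collToRelease back.

```python
# Hypothetical constant-product pool, invented numbers. Shows why an
# exact-in quote (coll -> USDT) is not the inverse of the exact-out
# question "how much USDT buys back exactly collToRelease".

FEE_NUM, FEE_DEN = 997, 1000  # 0.3% fee, V2-style integer math

def quote_exact_in(amount_in, reserve_in, reserve_out):
    """Output received for a given input (floor division)."""
    amount_in_with_fee = amount_in * FEE_NUM
    return (amount_in_with_fee * reserve_out) // (reserve_in * FEE_DEN + amount_in_with_fee)

def quote_exact_out(amount_out, reserve_in, reserve_out):
    """Input required to receive exactly amount_out (rounded up)."""
    return (reserve_in * amount_out * FEE_DEN) // ((reserve_out - amount_out) * FEE_NUM) + 1

coll_reserve, usdt_reserve = 1_000_000 * 10**18, 1_000_000 * 10**6
coll_to_release = 10_000 * 10**18

# Exact-in quote: USDT obtained by selling collToRelease.
usdt_out = quote_exact_in(coll_to_release, coll_reserve, usdt_reserve)

# Exact-out quote, opposite direction: USDT needed to obtain collToRelease.
usdt_needed = quote_exact_out(coll_to_release, usdt_reserve, coll_reserve)

# Spending the exact-in proceeds buys back strictly less than collToRelease.
assert usdt_needed > usdt_out
```

Both fees and price impact apply in each direction, so the round trip always loses value; the exact-in number therefore systematically understates the USDT needed for the release.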
VULNERABILITIES & FUNCTIONAL ISSUES
This section details issues affecting the functionality of the contract. Dedaub generally categorizes issues according to the following severities, but may also take other considerations into account such as impact or difficulty in exploitation:
- User or system funds can be lost when third-party systems misbehave
- DoS, under specific conditions
- Part of the functionality becomes unusable due to a programming error
- Breaking important system invariants but without apparent consequences
- Buggy functionality for trusted users where a workaround exists
- Security issues which may manifest when the system evolves
Issue resolution includes “dismissed” or “acknowledged” (but no action taken) by the client, or “resolved”, per the auditors.
CRITICAL SEVERITY
Identifying a high severity issue in an area of the codebase that technically lies outside the scope highlights the vast attack surface of the protocol. Despite the engagement’s scope, the auditors made a best effort to provide as much feedback as possible for as large a part of the codebase as possible (even going above the predicted timeline).
The following assumption in the HNKAIA contract is not true for all code paths:
HNKAIA::_requireDepositAuth:951/// @dev Basic guard for deposit flows. Caller/payer/receiver may all differ; authorization is enforced via
/// ERC20 allowance or Permit2. `_vaults[asset]` is retained for introspection but not used for gating.
function _requireDepositAuth(address sender, address from, address receiver, address asset) internal view {
sender; // reserved for future use; prevents unused var warning
from;
if (!isSupported[asset]) revert InvalidParameters();
if (receiver == address(0)) revert InvalidParameters();
}
When initiating HNKAIA::depositFrom, users have to approve the HNKAIA contract and not the msg.sender of the deposit operation, as transferFrom is called by the HNKAIA contract itself:
HNKAIA::depositFrom:178function depositFrom(address from, address assetIn, uint256 amount, address receiver, uint256 minSharesOut)
external
nonReentrant
whenNotPaused
returns (uint256 shares)
{
...
_requireDepositAuth(msg.sender, from, receiver, assetIn);
...
IERC20(assetIn).safeTransferFrom(from, address(this), amount);
...
(shares, actualPer) = _finalizeDeposit(receiver, qtyBefore, minSharesOut);
...
Ultimately, because of the way HNKAIA::_requireDepositAuth is implemented, an attacker could exploit a previous approval of user A to the HNKAIA contract and force a deposit from A's account that benefits the attacker (exacerbated by the fact that the attacker can even set themselves as the receiver).
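A toy Python model of the flaw (all names invented): since the ERC20 allowance is granted to the vault contract itself and _requireDepositAuth ignores msg.sender, any caller can consume a victim's standing approval.

```python
# Minimal sketch: the vault is the ERC20 spender, so transferFrom succeeds
# for any caller; nothing ties msg.sender ("caller") to the payer ("from").

class ToyVault:
    def __init__(self):
        self.allowance_to_vault = {}  # owner -> amount approved to the vault
        self.shares = {}

    def approve(self, owner, amount):
        self.allowance_to_vault[owner] = amount

    def deposit_from(self, caller, from_addr, amount, receiver):
        # Mirrors _requireDepositAuth: `caller` is ignored entirely.
        assert amount <= self.allowance_to_vault.get(from_addr, 0), "no allowance"
        self.allowance_to_vault[from_addr] -= amount
        # safeTransferFrom(from, vault, amount) succeeds: the vault is the spender.
        self.shares[receiver] = self.shares.get(receiver, 0) + amount  # 1:1 for brevity

vault = ToyVault()
vault.approve("victim", 100)                      # victim's earlier approval
vault.deposit_from("attacker", "victim", 100, "attacker")
assert vault.shares.get("attacker") == 100        # attacker receives the shares
assert vault.shares.get("victim") is None
```

The fix is to gate the flow on an authorization from `from` to `msg.sender` (e.g. an allowance or operator approval), not merely on the token approval to the vault.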
HIGH SEVERITY
Resolved
Although the relevant code path changed to not always rely on totalSupply, for the fix to take effect, balance aggregation has to be enabled via USDHNToken::setGlobalAggregator, which is only callable by governance.
USDHN implements LayerZero’s OFT standard, so the token’s liquidity can be distributed across supported chains. Tokens are burnt and minted during bridging operations, which means that totalSupply is not a reliable metric for the total BOLD supply when calculating the redemption base rate:
CollateralRegistry::redeemCollateral:337...
totals.usdhnSupplyAtStart = usdhnToken.totalSupply();
uint256 redemptionRate =
_calcRedemptionRate(_getUpdatedBaseRateFromRedemption(_usdhnAmount, totals.usdhnSupplyAtStart));
require(redemptionRate <= _maxFeePercentage, "CR: Fee exceeded provided maximum");
...
In the typical scenario, the impact is that as more USDHN tokens are bridged, users will experience heavier redemption fees:
CollateralRegistry::_getUpdatedBaseRateFromRedemption:519function _getUpdatedBaseRateFromRedemption(uint256 _redeemAmount, uint256 _totalUSDHNSupply)
internal
view
returns (uint256)
{
uint256 decayedBaseRate = _calcDecayedBaseRate();
uint256 redeemedUSDHNFraction = _redeemAmount * DECIMAL_PRECISION / _totalUSDHNSupply; //@Dedaub: after tokens are bridged, the same redemption represents a larger fraction
...
Coordinated bridging operations can even drive the base rate above 1.0, potentially causing fees to hit 100% of redemptions:
CollateralRegistry::_calcRedemptionRate:539function _calcRedemptionRate(uint256 _baseRate) internal pure returns (uint256) {
return HannMath._min(
REDEMPTION_FEE_FLOOR + _baseRate,
DECIMAL_PRECISION // cap at a maximum of 100%
);
}
In practice, even if the 100% fee rate is not hit, it's very likely that a lot of slippage guards will be hit – effectively DOSing redemptions:
CollateralRegistry::redeemCollateral:340require(redemptionRate <= _maxFeePercentage, "CR: Fee exceeded provided maximum");
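A worked sketch of the effect, assuming Liquity's standard base-rate formula (newBaseRate = decayedBaseRate + redeemedFraction / BETA, with BETA = 2) and illustrative supply figures:

```python
# Invented figures; shows how the same redemption becomes a larger fraction
# of the home-chain supply once tokens are bridged out, raising the fee.

ONE = 10**18              # DECIMAL_PRECISION
BETA = 2                  # Liquity's base-rate sensitivity divisor
FEE_FLOOR = ONE // 200    # 0.5% REDEMPTION_FEE_FLOOR

def redemption_rate(redeem_amount, total_supply, decayed_base_rate=0):
    fraction = redeem_amount * ONE // total_supply
    base_rate = decayed_base_rate + fraction // BETA
    return min(FEE_FLOOR + base_rate, ONE)  # capped at 100%

redeem = 1_000_000 * ONE

rate_full    = redemption_rate(redeem, 100_000_000 * ONE)  # nothing bridged
rate_bridged = redemption_rate(redeem, 5_000_000 * ONE)    # 95% bridged away

assert rate_bridged > rate_full  # same redemption, heavier fee
# A large redemption against a shrunken supply saturates the cap entirely:
assert redemption_rate(10_000_000 * ONE, 5_000_000 * ONE) == ONE
```

Any supply aggregation fix (e.g. the global aggregator mentioned above) effectively restores the denominator to the cross-chain total.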
When a distribution is triggered via PILDistributor::distribute, aside from receiving reward tokens, distribution receivers get a callback by having notifyRewardAmount called on their staking address:
PILDistributor::_processDistribution:266...
if (amount != 0) {
_transferAndNotify(ctx.index, receiver, amount);
unchecked {
ctx.remainingBalance -= amount;
}
sent += amount;
}
...
PILDistributor::_transferAndNotify:312...
function _transferAndNotify(uint256 index, Receiver storage receiver, uint256 amount) internal {
...
rewardToken.safeTransfer(receiver.staking, amount);
try IPILReceiver(receiver.staking).notifyRewardAmount(address(rewardToken), amount) {
...
This allows a potentially malicious receiver to exploit the following inconsistency in the ActivePool contract:
ActivePool::_mintAggInterest:337...
if (remainder > 0) {
usdhnToken.mint(address(interestRouter), remainder);
//@Dedaub: execution will be hijacked here
interestRouter.routeInterest(remainder, bytes(""));
}
}
lastAggUpdateTime = block.timestamp;
...
Since execution is hijacked before lastAggUpdateTime gets updated, and the current receiver in PILDistributor is not yet updated, the malicious receiver can potentially trigger multiple interest aggregations by calling mintAggInterest() directly.
While the reentrant distribute call will revert because the reentrancy guard is hit, the following try-catch in InterestRouter prevents the revert from bubbling up to the whole transaction:
InterestRouter::_sendToPilDistributor:100...
try distributor.distribute() {}
catch (bytes memory reason) {
emit PilDistributionFailed(reason);
}
...
Effectively, the reentrant call successfully re-mints USDHN and sends it to the PILDistributor contract to be distributed later. The attack is exacerbated if the malicious party uses the above-mentioned mechanism to call distribute multiple times.
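A toy model of the re-entrancy (invented rate and timestamps): because lastAggUpdateTime is written only after the external call, a single callback re-mints the same period's pending interest.

```python
# Minimal sketch of the checks-effects-interactions violation. The callback
# stands in for interestRouter.routeInterest reaching attacker-controlled code.

class ToyActivePool:
    def __init__(self, rate_per_sec, on_route):
        self.rate = rate_per_sec
        self.last = 0          # lastAggUpdateTime
        self.minted = 0
        self.on_route = on_route

    def mint_agg_interest(self, now):
        pending = (now - self.last) * self.rate
        if pending > 0:
            self.minted += pending
            self.on_route(self, now)   # external call BEFORE the state update
        self.last = now                # too late: already re-entered above

reentered = []
def malicious_route(pool, now):
    if not reentered:                  # re-enter exactly once
        reentered.append(True)
        pool.mint_agg_interest(now)

pool = ToyActivePool(rate_per_sec=10, on_route=malicious_route)
pool.mint_agg_interest(now=100)
# One period's interest (100 * 10 = 1000) is minted twice.
assert pool.minted == 2000
```

Updating lastAggUpdateTime before the external routeInterest call (or guarding mintAggInterest with the same reentrancy lock) removes the double-mint.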
Whenever a trove is closed via BorrowerOperations::closeTrove, BorrowerOperations::_prepareBatchContextAndApplyWeights is responsible for preparing the system’s debt accounting information corresponding to the state after the trove (whose debt is repaid) is closed down:
BorrowerOperations::closeTrove:466...
LatestBatchData memory batch;
address batchManager;
int256 batchDebtDelta = -int256(trove.entireDebt - trove.redistUSDHNDebtGain);
(batch, batchManager) =
_prepareBatchContextAndApplyWeights(troveManagerCached, _troveId, troveChange, trove, 0, batchDebtDelta);
...
An accounting deviation is created when troves belonging to a batch are closed:
BorrowerOperations::_prepareBatchContextAndApplyWeights:1257...
batch = _troveManager.getLatestBatchData(batchManager);
//@Dedaub: redistributed debt should not be added when a trove closes - its debt
// gets eliminated from the system
uint256 base = batch.entireDebtWithoutRedistribution + _trove.redistUSDHNDebtGain;
uint256 batchDebtBase;
if (_batchDebtDelta >= 0) {
batchDebtBase = base + uint256(_batchDebtDelta);
} else {
batchDebtBase = base - uint256(-_batchDebtDelta);
}
_initialiseBatchChange(_change, batch, 0);
...
_setWeights(_change, batchDebtBase, batch.annualInterestRate, batch.annualManagementFee);
...
Even though a closing trove’s debt is eliminated from the system, the code always adds any redistributed debt to the batch to which the trove belongs. Beyond a general accounting error, this can have security consequences when the Stability Pool liquidity is low (or can be manipulated to be nearly empty) and redistributions can be guaranteed.
In such scenarios, the extra debt attributed to batches could lead to troves inside batches becoming prematurely liquidatable.
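A worked example with invented figures, contrasting the current base computation with the one the auditors would expect:

```python
# Invented figures. On close, the trove's redistribution gain is added back
# into the batch base even though the trove's entire debt leaves the system.

batch_debt_without_redist = 100_000   # batch.entireDebtWithoutRedistribution
trove_entire_debt = 50_000            # includes redistribution gains
trove_redist_gain = 10_000            # trove.redistUSDHNDebtGain

batch_debt_delta = -(trove_entire_debt - trove_redist_gain)   # -40_000

# Current code: base includes the closing trove's redistribution gain.
base_current = batch_debt_without_redist + trove_redist_gain
batch_debt_base_current = base_current + batch_debt_delta      # 70_000

# Expected: the closed trove's debt (redistributed or not) simply leaves.
batch_debt_base_expected = batch_debt_without_redist + batch_debt_delta  # 60_000

# The batch's new weighted-debt base is inflated by exactly the redist gain.
assert batch_debt_base_current - batch_debt_base_expected == trove_redist_gain
```

The overweight then flows into _setWeights, so the batch accrues interest and management fees on debt that no longer exists.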
In the StableSwapZapper contract, anyone may initiate a liquidity removal operation by calling StableSwapZapper::unstakeAndRemoveLiquidity on behalf of a staker:
StableSwapZapper::unstakeAndRemoveLiquidity:215function unstakeAndRemoveLiquidity(UnstakeAndRemoveParams calldata params)
external
nonReentrant
returns (uint256[] memory amounts)
{
PoolAssets memory assets = _validatePool(params.pool);
BalanceSnapshot memory snapshot = _snapshot(assets);
_requireValidUnstakeParams(params);
address staker = params.receiver; // receiver is the staker in this flow
_requireOperatorApproval(staker);
// Pull LP directly to the zapper to avoid extra approvals (W-10 fix)
rewardStaker.unstakeFor(staker, params.liquidity, address(this), staker);
amounts = _removeLiquidityAndReturn(staker, params, assets, snapshot);
}
It’s expected that the following check will succeed when the specified receiver is the staker themselves:
StableSwapZapper::_requireOperatorApproval:424function _requireOperatorApproval(address staker) private view {
if (!rewardStaker.operatorApprovals(staker, address(this))) {
revert OperatorNotApproved();
}
}
While this limits the severity of the issue (since the receiver of the forced withdrawal is the staker themselves), forced withdrawals can create other economic incentives to be exploited, as the liquidity of a pool can be manipulated accordingly.
The same consideration holds for the other liquidity removal operations in the StableSwapZapper contract.
It's possible for non-pool contracts to call EarnUSDTFlashSwapper::onStableSwapExactOut by providing the appropriate sender and op.stablePool parameters:
EarnUSDTFlashSwapper::onStableSwapExactOut:478function onStableSwapExactOut(
address sender,
address token0,
address token1,
uint256 amount0Owed,
uint256 amount1Owed,
bytes calldata data
) external override {
LeverageTypes.LeverageOp memory op = abi.decode(data, (LeverageTypes.LeverageOp));
address pool = op.stablePool == address(0) ? earnStablePool : op.stablePool;
if (msg.sender != pool) revert InvalidFlashPool(msg.sender);
if (sender != address(this)) revert InvalidFlashPool(sender);
_requireManagedForFlash(op.troveId);
...
The direct consequence is that if any trove is managed by the EarnUSDTFlashSwapper contract (which shouldn't be the general case), one can directly steal USDHN out of it.
Although this only has apparent consequences for an atypical user route, we want to emphasize its resolution since it can serve as an entrypoint gadget for future exploits.
LeveragedGasCompZapper::_scaleDecimals can end up rounding the result value down to 0 when a token value is scaled down (i.e., when fromDec is larger than toDec).
LeveragedGasCompZapper::_scaleDecimals:function _scaleDecimals(uint256 amount, uint8 fromDec, uint8 toDec) internal pure returns (uint256) {
if (fromDec == toDec) return amount;
uint256 fromFactor = 10 ** fromDec;
uint256 toFactor = 10 ** toDec;
return Math.mulDiv(amount, toFactor, fromFactor, Math.Rounding.Up);
}
In such a scenario the result can round to 0, since 10**fromDec might end up being larger than amount * 10**toDec.
This is especially problematic when dealing with wearn/earn accounting in zappers:
LeveragedGasCompZapper::_flashAmountForDecrease:622...
// Unwrapping |collDelta| wEarn yields roughly this much earn
uint256 earnBudget = collToRelease / scale;
if (earnBudget == 0) {
// If |collDelta| < scaleFactor, it cannot produce even 1 earn; fall back to scaling
return _scaleDecimals(collToRelease, 18, flashTokenDec);
}
...
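A small sketch of the sub-unit rounding involved when scaling 18-decimal wEarn amounts down to 6 decimals (the 10**12 scale factor is an assumption based on the snippet's 18 -> flashTokenDec conversion):

```python
# Invented amounts. Any 18-decimal amount below 10**12 is less than one
# 6-decimal unit: the plain division floors to 0 (the earnBudget fallback
# case), while a round-up variant yields a single dust unit.

SCALE = 10**12  # 18 -> 6 decimals

def scale_down_floor(amount_18):
    return amount_18 // SCALE            # collToRelease / scale

def scale_decimals_round_up(amount, from_dec, to_dec):
    # mulDiv(amount, 10**toDec, 10**fromDec, Rounding.Up)
    num = amount * 10**to_dec
    den = 10**from_dec
    return (num + den - 1) // den

dust = 999_999_999_999                   # < 10**12, i.e. < 1 earn unit
assert scale_down_floor(dust) == 0       # earnBudget == 0: fallback triggers
assert scale_decimals_round_up(dust, 18, 6) == 1  # round-up returns 1 dust unit

one_earn = 10**12
assert scale_down_floor(one_earn) == 1
```

Either way, sub-unit amounts cannot be represented faithfully in the lower-decimal token, so the surrounding accounting must treat them explicitly as dust.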
Partly resolved
The protocol opted into enforcing checks on maximum debt that can be created via the maxDebt parameter of leveraged operations.
However, a theoretical sandwich attack is still possible if one combines the second part of P11 with the fact that de-leverage operations do not create additional debt:
When T worth of collateral is deleveraged:
- T USDHN is burned
- T + u is actually withdrawable (see P11)
What the code considers to be "T worth of collateral" relies on the spot price of the Collateral-USDT pool. Therefore, in a sandwich scenario:
- The user wishes to deleverage T worth (market value) of collateral
- The pool is tilted by an attacker so that the same amount of collateral is momentarily worth T-u
- T is withdrawable, which is exactly the amount requested by the user, so withdrawColl succeeds
Because the de-leverage code path performs a collateral -> USDT swap, the attacker may extract u in collateral value by tilting the pool and making the collateral worth less when denominated in USDT.
Leverage operations typically involve multiple swaps:
- USDT <> Collateral (through a Uniswapv3-like pool)
- BOLD <> USDT (through the protocol’s stable-swap primitive)
Given insufficiently deep liquidity, any of those paths is potentially vulnerable to sandwich attacks if attackers front-run operations with large tilting swaps.
While such sandwich attacks cannot happen atomically using flash-loans, this is a novel attack surface that does not exist in the original LiquityV2 codebase:
- Liquity pre-calculates the amount of BOLD tokens to be withdrawn from the UI; if the swap is tilted, there won't be enough tokens to repay the flash swap and the transaction will revert (see receiveFlashLoanOnLeverUpTrove on LeverageLSTZapper):
LiquityV2::LeverageLSTZapper::receiveFlashLoanOnLeverUpTrove:151...
// Swap Bold to Coll
// No need to use a min: if the obtained amount is not enough, the flash loan return below won't be enough
// And the flash loan provider will revert after this function exits
// The frontend should calculate in advance the `_params.boldAmount` needed for this to work
exchange.swapFromBold(_params.boldAmount, _params.flashLoanAmount);
// Send coll back to return flash loan
collToken.safeTransfer(address(flashLoanProvider), _params.flashLoanAmount);
...
- The protocol, on the other hand, directly pulls additional tokens from the trove to cover any swap shortfalls, which creates the attack surface for sandwich attacks:
FlashSwapper::_handleIncrease:282...
// Exact-out quote for the USDHN required to repay USDT.
uint256 usdhnRequired = _safeQuoteExactOut(sc.pool, sc.usdt, owe);
if (usdhnRequired > usdhnBalance) {
uint256 shortfall = usdhnRequired - usdhnBalance;
borrowerOperations.withdrawUSDHN(op.troveId, shortfall, type(uint256).max);
usdhnBalance += shortfall;
}
...
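A constant-product sketch (invented reserves; the real pools differ, but the direction of the effect is the same): after a front-running tilt, the exact-out quote for the same USDT debt grows, and the difference is pulled from the trove as fresh debt instead of causing a revert.

```python
# Invented reserves and amounts. Demonstrates that tilting the pool inflates
# the exact-out quote used to size the shortfall pulled via withdrawUSDHN.

FEE_NUM, FEE_DEN = 997, 1000

def swap_exact_in(amount_in, r_in, r_out):
    fee_in = amount_in * FEE_NUM
    out = fee_in * r_out // (r_in * FEE_DEN + fee_in)
    return out, r_in + amount_in, r_out - out

def quote_exact_out(amount_out, r_in, r_out):
    return r_in * amount_out * FEE_DEN // ((r_out - amount_out) * FEE_NUM) + 1

usdhn_r, usdt_r = 1_000_000 * 10**18, 1_000_000 * 10**6
owe = 50_000 * 10**6  # USDT owed back to the flash pool

quote_before = quote_exact_out(owe, usdhn_r, usdt_r)

# Attacker front-runs: dumps USDHN into the pool, draining the USDT side.
_, usdhn_r, usdt_r = swap_exact_in(400_000 * 10**18, usdhn_r, usdt_r)

quote_after = quote_exact_out(owe, usdhn_r, usdt_r)

# More USDHN is now required for the same exact-out; the shortfall becomes
# fresh debt on the victim's trove rather than a reverted transaction.
assert quote_after > quote_before
```

This is why the Liquity design of reverting on insufficient swap output is the safer default: the victim's worst case is a failed transaction, not extra debt.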
MEDIUM SEVERITY
Typically, it’s possible to specify a non-zero remove-manager for a trove but have the receiver field be address(0). The appointed remove-manager can initiate operations that worsen the trove’s ICR and all removed collateral and minted debt are received by the current owner of the trove’s NFT:
AddRemoveManagers::_requireSenderIsOwnerOrRemoveManagerAndGetReceiver:321...
if (receiver == address(0) || msg.sender != manager) {
return _owner;
}
...
While the logic enforcing the dynamic resolution of the current owner is present in AddRemoveManagers::_requireSenderIsOwnerOrRemoveManagerAndGetReceiver , the normalization layer introduced by the protocol enforces different semantics compared to Liquity if the trove NFT ever gets transferred:
AddRemoveManagers::_setRemoveManagerAndReceiver:106...
function _setRemoveManagerAndReceiver(
uint256 _troveId,
address _manager,
address _receiver,
address _owner,
address _addManager
) internal {
...
address normalizedReceiver = _normalizeReceiver(_manager, _receiver, _owner, _addManager);
removeManagerReceiverOf[_troveId] = RemoveManagerReceiver({manager: _manager, receiver: normalizedReceiver});
...
AddRemoveManagers::_normalizeReceiver:145...
if (receiver == address(0)) {
receiver = _owner;
}
...
When a non-zero remove-manager is specified in AddRemoveManagers::_setRemoveManagerAndReceiver, the code path above pins the receiver of all removal operations to the owner at that time.
However, this implies that even if the NFT owner changes after a transfer, the receiver of all removal operations will continue to be the old owner. The issue is best presented in the following scenario:
- User X opens a trove with remove-manager R and receiver 0. The protocol's code silently sets X as the receiver of all ICR-worsening operations
- User X transfers the trove NFT to a different address (user Y)
- The current owner performs any operation that requires the receiver field; while the original LiquityV2 behavior matches the expected behavior for the same scenario (since the receiver would have been 0, and the current owner would be returned), the current code returns the old owner (X)
Ultimately, funds are potentially sent to an unexpected, or even the wrong, address in the above scenario.
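A toy contrast of the two semantics (names invented):

```python
# Liquity resolves a zero receiver lazily at use time; the fork normalizes
# it to the owner once, at configuration time.

def liquity_resolve(stored_receiver, current_owner):
    # Receiver 0 resolves to whoever owns the trove NFT *now*.
    return current_owner if stored_receiver == "0x0" else stored_receiver

def fork_set(receiver, owner_at_set_time):
    # _normalizeReceiver: 0 is replaced by the owner at configuration time.
    return owner_at_set_time if receiver == "0x0" else receiver

stored = fork_set("0x0", owner_at_set_time="X")   # X opens the trove
# ... X transfers the trove NFT to Y ...
current_owner = "Y"

assert liquity_resolve("0x0", current_owner) == "Y"   # original semantics
assert liquity_resolve(stored, current_owner) == "X"  # fork: funds go to X
```

Keeping the sentinel value in storage and resolving it at use time preserves the original semantics across NFT transfers.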
Troves can be opened on behalf of other users, which is something not possible in the original LiquityV2 codebase:
BaseZapper::_openBorrowerTrove:201function _openBorrowerTrove(IZapper.OpenTroveParams memory params, uint256 collAmount)
internal
returns (uint256 troveId)
{
IZapper.OpenTroveParams memory p = params;
if (p.owner == address(0)) {
p.owner = msg.sender;
}
p.collAmount = collAmount;
troveId = borrowerOperations.openTrove(
p.owner,
_getTroveIndex(p.owner, p.ownerIndex),
p.collAmount,
p.usdhnAmount,
p.upperHint,
p.lowerHint,
p.annualInterestRate,
p.maxUpfrontFee,
p.addManager,
p.removeManager,
p.receiver
);
}
Because _getTroveIndex can receive an arbitrary p.owner (while the Liquity codebase always uses msg.sender), it's technically possible to front-run a trove creation operation by donating the minimum debt just to DOS an upcoming trove creation.
The severity of this is limited, since the user being front-run never loses ownership of the trove. However, it makes for bad UX if they have to create a new transaction in order to properly top up their trove.
Additionally, because the front-runner will open the trove with just the minimum debt, there's a risk of it becoming liquidatable before the user has the chance to use it.
A general note is that if for any reason someone frontruns any permit call by submitting the signing information directly (and therefore consuming the permit nonce), codepaths dependent on a successful permit call will end up reverting:
BaseZapper::_pullTokenWithPermit:136function _pullTokenWithPermit(address owner, IERC20 token, uint256 amount, Permit2Transfer memory permit) internal {
IPermit2 permit2Ref = _permit2;
if (address(permit2Ref) == address(0)) revert Permit2NotConfigured();
if (owner == address(0)) revert Permit2InvalidOwner();
if (permit.token != address(token)) revert Permit2TokenMismatch(permit.token, address(token));
if (permit.amount < amount) revert Permit2AmountTooLow(amount, permit.amount);
if (amount > type(uint160).max) revert Permit2AmountOverflow(amount);
...
permit2Ref.permit(owner, single, permit.signature);
permit2Ref.transferFrom(owner, address(this), uint160(amount), permit.token);
}
This is not a major concern, but it could lead to a momentary DOS of user operations that rely on BaseZapper::_pullTokenWithPermit if a front-runner is able to pick up the permit arguments from the mempool or from a failed transaction and then call permit directly on the Permit2 contract (or on any approval-based token).
LOW SEVERITY
Even when the LayerZero fee recipient is disabled, the bridging fee is always deducted from the amount that users end up bridging:
USDHNToken::_collectLzFee:192function _debit(address _from, uint256 _amountLD, uint256 _minAmountLD, uint32 _dstEid)
...
{
uint256 fee;
/*
Dedaub:
fee = (amountSentLD * bps) / BPS_BASE;
amountReceivedLD = amountSentLD - fee;
*/
(amountSentLD, amountReceivedLD, fee) = _computeDebitAmounts(_amountLD, _minAmountLD, _dstEid);
_collectLzFee(_from, fee);
_burn(_from, amountReceivedLD);
}
...
function _collectLzFee(address _payer, uint256 _fee) internal {
if (_fee == 0 || lzFeeRecipient == address(0)) {
return;
}
_transfer(_payer, lzFeeRecipient, _fee);
}
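A sketch of the resulting accounting, with an illustrative 30 bps fee:

```python
# Invented figures. The fee is carved out of the bridged amount in
# _computeDebitAmounts regardless of whether a fee recipient is configured;
# _collectLzFee merely decides whether anyone collects it.

BPS_BASE = 10_000

def debit(amount, fee_bps, fee_recipient):
    fee = amount * fee_bps // BPS_BASE
    amount_received = amount - fee
    # _collectLzFee early-returns when the recipient is unset.
    collected = fee if (fee and fee_recipient) else 0
    return amount_received, collected

recv_enabled, collected_enabled = debit(1_000_000, 30, "treasury")
recv_disabled, collected_disabled = debit(1_000_000, 30, None)

assert recv_enabled == recv_disabled == 997_000  # user bridges less either way
assert collected_enabled == 3_000
assert collected_disabled == 0  # fee deducted from the bridge but uncollected
```

If the intent is for a disabled recipient to mean "no fee", the deduction in _computeDebitAmounts should also be skipped when lzFeeRecipient is unset.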
It seems that a distribution policy should restrict the initiation of a distribution, but currently _requireAuthorized does not take the policy (if set) into account:
PILDistributor::canDistribute:171function canDistribute() external view returns (bool) {
IPILDistributionPolicy currentPolicy = policy;
if (address(currentPolicy) != address(0)) {
try currentPolicy.shouldDistribute(this) returns (bool allow) {
...
PILDistributor::_requireAuthorized:224function _requireAuthorized() internal view {
require(
block.timestamp >= lastDistributionTime + minDistributionInterval || msg.sender == owner()
|| hasRole(DISTRIBUTOR_ROLE, msg.sender),
"PILDistributor: too soon"
);
}
As a result, one may initiate a distribution even if the policy's shouldDistribute returns false.
During the deployment of the CollateralRegistry contract:
CollateralRegistry::constructor:114constructor(IUSDHNToken _usdhnToken, IERC20Metadata[] memory _tokens, ITroveManager[] memory _troveManagers) {
...
for (uint256 i; i < _tokens.length; ++i) {
_seedCollateral(_tokens[i], _troveManagers[i]);
}
...
CollateralRegistry::_seedCollateral:171function _seedCollateral(IERC20Metadata asset, ITroveManager troveManager) internal {
if (address(asset) == address(0) || address(troveManager) == address(0)) {
revert InvalidAddress();
}
_addCollateralRecord(
asset,
troveManager,
address(0), //@Dedaub: price feed is set as address(0)
asset.decimals(),
ICollateralRegistry.CollateralStatus.Active,
uint48(block.timestamp),
uint48(block.timestamp),
msg.sender
);
}
CollateralRegistry::_seedCollateral sets address(0) to be the price feed address for the collaterals that are initially configured during construction.
The problem is exacerbated by the fact that there's no way to set the proper price feed after deployment.
This on its own is not a high severity misconfiguration, but it could affect all systems that read price feed information from the CollateralRegistry contract.
CENTRALIZATION CONSIDERATIONS
It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol’s owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.)
While it seems that a governance body is going to exist, some parts of the protocol are intended to be owned by a trusted address:
- AddressRegistry
- InterestRouter
- PILDistributor
- All price feeds
- The HNUSD token
while an administrator is appointed for all zappers. Both parties are able to control important parameters and system functionality.
OTHER / ADVISORY ISSUES
This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them.
Minting USDHN interest to the interest router only if there’s a remainder after the StabilityPool split does not yield any gas saving in realistic deployments:
ActivePool::_mintAggInterest:331...
if (remainder > 0) {
usdhnToken.mint(address(interestRouter), remainder);
interestRouter.routeInterest(remainder, bytes(""));
}
...
Since the StabilityPool yield won't realistically be 100% of the minted USDHN interest, the branch is effectively always taken; the conditional is an extra check rather than a saving, if one wishes to be strict about gas.
Currently the PILDistributor contract allows for the addition of inactive distribution receivers:
PILDistributor::setReceivers:119...
for (uint256 i; i < receiverCount; i++) {
Receiver calldata newReceiver = _receivers[i];
if (newReceiver.active) {
require(newReceiver.staking != address(0), "PILDistributor: zero staking");
for (uint256 j = 0; j < i; j++) {
require(receivers[j].staking != newReceiver.staking, "PILDistributor: duplicate staking");
}
sum += newReceiver.weightBps;
}
...
However, their weight does not seem to contribute to anything, and it's not possible to easily activate them; one has to call PILDistributor::setReceivers again. There seems to be no apparent benefit in tracking inactive receivers.
Since PILDistributor::rewardToken is only set during construction, the variable could be declared as immutable
The length of the __gap field in the PILDistributor contract together with its state variables make it so that more than 50 slots are occupied:
PILDistributor:352...
/// @dev Reserve storage gap for future upgrades
uint256[47] private __gap;
...
This is not an issue, but it might make sense to follow typical practices and have 50 slots per upgradeable contract.
The following call:
PILDistributor::_startDistribution:212...
uint256 weightSum = _sumActiveWeights(0);
if (weightSum == 0) return false;
...
is not needed; the weight sum of active receivers is enforced to be either 10000 or 0 inside PILDistributor::setReceivers:
PILDistributor::setReceivers:128...
require(sum == 10000 || receiverCount == 0, "PILDistributor: weights");
totalWeightBps = receiverCount == 0 ? 0 : 10000;
...
When checking for duplicate receiver staking addresses in setReceivers:
PILDistributor::setReceivers:122...
for (uint256 i; i < receiverCount; i++) {
Receiver calldata newReceiver = _receivers[i];
if (newReceiver.active) {
require(newReceiver.staking != address(0), "PILDistributor: zero staking");
for (uint256 j = 0; j < i; j++) {
//@Dedaub: receivers is a storage variable
require(receivers[j].staking != newReceiver.staking, "PILDistributor: duplicate staking");
}
...
The O(n) SLOADs per receiver can be avoided here by traversing the _receivers calldata parameter instead of the receivers storage variable.
While it seems that the PILDistributor contract correctly intends to account for the dust balance during distributions, which is bound to exist because of rounding errors accumulating in the following expression:
PILDistributor::_processDistribution:259...
MathExt.mulDiv(ctx.remainingBalance, weight, ctx.remainingWeight);
...
the code does not really handle the dust balance in any meaningful way currently:
PILDistributor::_processDistribution:286...
dust = state.remainingBalance;
if (dust > 0) {
uint256 distributedDust = _handleDust(dust);
sent += distributedDust;
dust = rewardToken.balanceOf(address(this));
}
...
PILDistributor::_handleDust:300...
function _handleDust(uint256) internal pure returns (uint256 distributedDust) {
// By retaining dust for the next round, it will be proportionally split with the next distribution amount
// instead of always accruing to the first active receiver.
return 0;
}
...
With that, the following call is unneeded:
PILDistributor::_processDistribution:289...
dust = rewardToken.balanceOf(address(this));
...
The remainingBalance variable of the distribution state already tracks the contract's balance.
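A simplified sketch of the carry-over behavior (a per-receiver floor split with invented figures; the contract's actual loop uses a remaining balance and remaining weight, but the point here is only that retained dust is proportionally folded into the next round, as the comment in _handleDust describes):

```python
# Invented weights/amounts. Floor division leaves dust; _handleDust returns 0,
# so the dust stays in the contract and joins the next distribution.

def split(balance, weights_bps, total_bps=10_000):
    sent = [balance * w // total_bps for w in weights_bps]  # floors per receiver
    return sent, balance - sum(sent)                        # leftover dust

carry = 0
round1, dust1 = split(100 + carry, [3_333, 3_333, 3_334])
carry = dust1                       # dust retained for the next round
round2, dust2 = split(1_000 + carry, [3_333, 3_333, 3_334])

assert sum(round1) + dust1 == 100
assert sum(round2) + dust2 == 1_000 + dust1  # nothing is lost across rounds
```

Since the retained dust is simply part of the contract's balance, re-reading balanceOf after _handleDust adds no information, which is the redundancy the note points out.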
In the case of zombie troves, BorrowerOperations::applyPendingDebt attempts to pull the trove’s latest data after the aggregated interest gets minted. The aim is to determine whether the trove is no longer a zombie and should therefore be re-inserted into the branch’s sorted troves:
BorrowerOperations::applyPendingDebt:LatestTroveData memory trove = troveManagerCached.getLatestTroveData(_troveId);
...
activePool.mintAggInterestAndAccountForTroveChange(change, batchManager);
if (status == ITroveManager.Status.zombie) {
LatestTroveData memory updated = troveManagerCached.getLatestTroveData(_troveId);
if (updated.entireDebt >= MIN_DEBT) {
troveManagerCached.setTroveStatusToActive(_troveId);
_reInsertIntoSortedTroves(
_troveId, updated.annualInterestRate, _upperHint, _lowerHint, batchManager, batch.annualInterestRate
);
...
It should be noted that the TroveManager::getLatestTroveData call above is not required; the initially fetched trove information already reflects the recorded debt, any redistribution gains and any pending interest even if the debt has been aggregated in the Active Pool.
Across various code paths in the BorrowerOperations contract, the code ends up re-computing debt values:
- BorrowerOperations::lowerBatchManagementFee ends up setting oldWeightedRecordedDebt and oldWeightedRecordedBatchManagementFee twice:
BorrowerOperations::lowerBatchManagementFee:650...
_initialiseBatchChange(batchChange, batch, 0);
_setBatchWeights(
batchChange, batch, batch.entireDebtWithoutRedistribution, batch.annualInterestRate, _newAnnualManagementFee
);
...
BorrowerOperations::_initialiseBatchChange:1234function _initialiseBatchChange(
TroveChange memory _change,
LatestBatchData memory _batch,
uint256 _extraWeightedDebt
) internal pure {
...
_change.oldWeightedRecordedDebt = _batch.weightedRecordedDebt + _extraWeightedDebt;
_change.oldWeightedRecordedBatchManagementFee = _batch.weightedRecordedBatchManagementFee;
...
BorrowerOperations::_setBatchWeights:1279function _setBatchWeights(
TroveChange memory _change,
LatestBatchData memory _batch,
uint256 _debtBase,
uint256 _annualInterestRate,
uint256 _annualManagementFee
) internal pure {
_change.oldWeightedRecordedDebt = _batch.weightedRecordedDebt;
_change.oldWeightedRecordedBatchManagementFee = _batch.weightedRecordedBatchManagementFee;
...
- The same duplicate set operations are encountered inside
BorrowerOperations::_prepareBatchContextAndApplyWeights:
BorrowerOperations::_prepareBatchContextAndApplyWeights:1266...
_initialiseBatchChange(_change, batch, 0);
_change.oldWeightedRecordedDebt = batch.weightedRecordedDebt;
_change.oldWeightedRecordedBatchManagementFee = batch.weightedRecordedBatchManagementFee;
...
In the following snippet in BorrowerOperations::setInterestBatchManager :
BorrowerOperations::setInterestBatchManager:744...
_setWeights(change, batchDebtBeforeFee, batch.annualInterestRate, batch.annualManagementFee);
trove.entireDebt = _applyUpfrontFee(trove.entireColl, trove.entireDebt, change, _maxUpfrontFee, true, 0);
_setWeights(
change,
batch.entireDebtWithoutRedistribution + trove.entireDebt,
batch.annualInterestRate,
batch.annualManagementFee
);
...
The first BorrowerOperations::_setWeights will prematurely set newWeightedRecordedBatchManagementFee :
BorrowerOperations::_setWeights:1292...
function _setWeights(
TroveChange memory _change,
uint256 _debtBase,
uint256 _annualInterestRate,
uint256 _annualManagementFee
) internal pure {
_change.newWeightedRecordedDebt = _debtBase * _annualInterestRate;
_change.newWeightedRecordedBatchManagementFee = _debtBase * _annualManagementFee;
}
...
The value is initially set to the entire debt without any upfront fees. There is no consequence, since the subsequent calculations do not use newWeightedRecordedBatchManagementFee. This informational note is made mostly for completeness, so that the development team is aware of it in the event that the code is upgraded in the future.
It's intended for the CollateralRegistry::_indexByTroveManager mapping to serve only as a mechanism to guard against duplicate trove manager insertions in the Collateral Registry:
CollateralRegistry::_addCollateralRecord:431...
function _addCollateralRecord(
IERC20Metadata asset,
ITroveManager troveManager,
address priceFeed,
uint8 decimals,
ICollateralRegistry.CollateralStatus status,
uint48 queuedAt,
uint48 activateAt,
address addedBy
) internal returns (uint256 index) {
...
if (_indexByTroveManager[address(troveManager)] != 0) revert TroveManagerExists();
...
If one wanted to be stricter about duplicates, the same could be said of the priceFeed field of a collateral:
CollateralRegistry::_addCollateralRecord:442...
_collaterals.push(
CollateralConfig({
asset: asset,
troveManager: troveManager,
priceFeed: priceFeed,
decimals: decimals,
status: status,
queuedAt: queuedAt,
activateAt: activateAt,
addedAt: uint48(block.timestamp),
addedBy: addedBy
})
);
...
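Such a stricter guard could mirror the trove manager check; a minimal sketch, in which the mapping, error, and function names are hypothetical:

```solidity
// Hypothetical stricter duplicate guard for price feeds, mirroring
// _indexByTroveManager. Index 0 is reserved to mean "not present".
error PriceFeedExists();

mapping(address => uint256) private _indexByPriceFeed;

function _requireUniquePriceFeed(address priceFeed) internal view {
    if (_indexByPriceFeed[priceFeed] != 0) revert PriceFeedExists();
}
```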
Field HNKAIAPriceFeed::RedeemRateParamsInternal::maxAge is not used in any meaningful way:
HNKAIAPriceFeed:68
struct RedeemRateParamsInternal {
uint32 maxAge;
bool depositPaused;
bool initialized;
}
The following check in the HNKAIA price feed is wrong:
HNKAIAPriceFeed::_getKaiaUsdPrice:721...
if (lastKaiaUsdUpdateTime != 0 && block.timestamp - lastKaiaUsdUpdateTime > stalePeriod) {
if (lastGoodKaiaUsd == 0) {
return (0, false);
}
return (lastGoodKaiaUsd, true);
}
...
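Assuming the second return value is meant to signal whether the returned price is usable, a corrected staleness branch might look as follows (a sketch only; the intended semantics should be confirmed by the team):

```solidity
// Sketch: when the cached KAIA/USD price is stale, report it as NOT ok
// instead of signalling success, so callers can fall back or revert.
if (lastKaiaUsdUpdateTime != 0 && block.timestamp - lastKaiaUsdUpdateTime > stalePeriod) {
    return (lastGoodKaiaUsd, false);
}
```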
In practice, all protocol-critical paths call HNKAIAPriceFeed::_refreshKaiaUsdPrice, which ends up pulling a quote from the KAIA price feed:
HNKAIAPriceFeed::_refreshKaiaUsdPrice:735...
try kaiaUsdPriceFeed.fetchPrice() returns (uint256 feedPrice, bool fallbackUsed) {
if (feedPrice == 0) {
if (previous == 0) {
return (0, false, false);
}
emit OracleFailure(OF_KAIA_ZERO);
return (previous, true, true);
}
if (fallbackUsed) {
if (previous == 0) {
lastGoodKaiaUsd = feedPrice;
lastKaiaUsdUpdateTime = block.timestamp;
return (feedPrice, false, true);
}
emit OracleFailure(OF_KAIA_STALE);
return (previous, true, true);
}
lastGoodKaiaUsd = feedPrice;
lastKaiaUsdUpdateTime = block.timestamp;
...
HNKAIAPriceFeed::fetchPrice:286
function fetchPrice() external override returns (uint256 price, bool oracleFailureDetected) {
if (paused) {
return (lastGoodPrice, true);
}
(uint256 kaiaUsdPrice, bool kaiaFallback, bool kaiaOk) = _refreshKaiaUsdPrice();
...
HNKAIAPriceFeed::fetchRedemptionPrice:318
function fetchRedemptionPrice() external override returns (uint256 price, bool oracleFailureDetected) {
if (paused) {
return (lastGoodPrice, true);
}
(uint256 kaiaUsdPrice, bool kaiaFallback, bool kaiaOk) = _refreshKaiaUsdPrice();
...
With that being said, this form of pulling information into storage just for it to be atomically read within the same transaction frame is inefficient.
LeverageWKAIAZapper::_payoutCollateral always ends up calling _unwrapTo, so the _unwrapOnClose flag does not seem to have any practical effect:
LeverageWKAIAZapper::_payoutCollateral:258
function _payoutCollateral(address to, uint256 amount) internal override(GasCompZapper) {
if (amount == 0) return;
if (_unwrapOnClose) {
_unwrapTo(to, amount);
} else {
_payoutWrappedCollateral(to, amount);
}
}
LeverageWKAIAZapper::_payoutWrappedCollateral:35
function _payoutWrappedCollateral(address to, uint256 amount) internal {
_unwrapTo(to, amount);
}
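Since both branches ultimately call _unwrapTo, the override could be collapsed; a sketch:

```solidity
// Sketch: both the _unwrapOnClose branch and _payoutWrappedCollateral end
// up unwrapping, so the flag (and the branch) can be dropped entirely.
function _payoutCollateral(address to, uint256 amount) internal override(GasCompZapper) {
    if (amount == 0) return;
    _unwrapTo(to, amount);
}
```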
The following functions are not used in any meaningful way:
- WKAIAZapper::_unwrapRecipientPayable
- LeveragedGasCompZapper::_estimateUpfrontFee
It seems there's no need to override GasCompZapper::_afterAdjust with WKAIAZapper::_afterAdjust since the two implementations are identical:
GasCompZapper::_afterAdjust:682
function _afterAdjust(address payout, uint256 collateralOut, uint256 usdhnOut) internal virtual {
if (collateralOut != 0) {
_payoutCollateral(payout, collateralOut);
}
if (usdhnOut != 0) {
_usdhnErc20().safeTransfer(payout, usdhnOut);
}
_flushUsdhn(payout);
_sweepAllLeftovers(payout);
}
WKAIAZapper::_afterAdjust:170
function _afterAdjust(address payout, uint256 collateralOut, uint256 usdhnOut) internal override {
if (collateralOut != 0) _payoutCollateral(payout, collateralOut);
if (usdhnOut != 0) _usdhnErc20().safeTransfer(payout, usdhnOut);
// Align with base zapper: clear leftover USDHN and all tracked tokens.
_flushUsdhn(payout);
_sweepAllLeftovers(payout);
}
It seems there's no need to override GasCompZapper::withdrawColl with EarnUSDTZapper::withdrawColl since the two implementations are identical:
EarnUSDTZapper::withdrawColl:81
function withdrawColl(uint256 troveId, uint256 amount) public override nonReentrant whenNotPaused {
if (amount == 0) revert AmountZero();
// Self-managed only: manager/receiver must already be this zapper.
TroveActors memory actors = _requireManagedReceiver(troveId);
borrowerOperations.withdrawColl(troveId, amount);
address payout = _resolvePayout(actors.receiver, actors.owner);
_payoutCollateral(payout, amount);
_sweepAllLeftovers(payout);
}
GasCompZapper::withdrawColl:227
function withdrawColl(uint256 troveId, uint256 amount) external virtual nonReentrant whenNotPaused {
if (amount == 0) revert AmountZero();
TroveActors memory actors = _requireManagedReceiver(troveId);
borrowerOperations.withdrawColl(troveId, amount);
address payout = _resolvePayout(actors.receiver, actors.owner);
_payoutCollateral(payout, amount);
_sweepAllLeftovers(payout);
}
This check in WKAIAZapper::withdrawCollToRawKAIA is actually already enforced by _requireManagedReceiver:
WKAIAZapper::withdrawCollToRawKAIA:83...
TroveActors memory actors = _requireManagedReceiver(troveId);
if (actors.receiver != address(this) && msg.sender != actors.owner) {
revert RawKaiaRequiresSelfManaged(troveId, actors.receiver);
}
...
GasCompZapper::_requireManagedReceiver:756...
if (msg.sender != owner) revert UnauthorizedTroveOwner(owner, msg.sender);
ManagerStateLib.ManagedState mstate =
ManagerStateLib.classify(st.add, st.remove, st.receiver, address(this), _flashSwapperOrZero());
if (mstate != ManagerStateLib.ManagedState.SelfManaged) {
revert ReceiverNotAuthorized(troveId, st.receiver, st.remove, address(this), _flashSwapperOrZero());
}
...
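Since _requireManagedReceiver reverts unless the caller is the owner and the trove is self-managed, the extra branch can simply be dropped; a sketch:

```solidity
// Sketch: _requireManagedReceiver already guarantees msg.sender == owner
// and a self-managed receiver, so no further receiver/owner check is needed.
TroveActors memory actors = _requireManagedReceiver(troveId);
// ... proceed directly with the raw-KAIA payout ...
```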
The following check in LeverageEARNUSDTZapper::_normalizeLeveragedOpen is always true (and therefore unnecessary), since Solidity ≥0.8 checked arithmetic reverts on multiplication overflow, so the division always round-trips to declaredEarn:
LeverageEARNUSDTZapper::_normalizeLeveragedOpen:118...
uint256 expectedWrapped = declaredEarn * scale;
if (expectedWrapped / scale != declaredEarn) revert CollateralInputMismatch(expectedWrapped, actualColl);
...
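The reason the guard is dead code can be seen directly from the checked-arithmetic semantics; a sketch:

```solidity
// In Solidity ^0.8, this multiplication reverts on overflow, so by the
// time the next line executes, expectedWrapped / scale == declaredEarn
// necessarily holds and the explicit mismatch check can never trigger.
uint256 expectedWrapped = declaredEarn * scale;
```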
collDelta and maxDebt are always 0 when closing a leveraged trove, so the following checks are not used in FlashSwapper::_handleClose:
FlashSwapper::_handleClose:463...
// Collateral Δ invariant
if (op.collDelta != 0) {
...
if (op.maxDebt != 0) {
...
LeverageEARNUSDTZapper::_normalizeCollateralDelta is a no-op function.
There's no point in the following check in LeverageEARNUSDTZapper::_closeTroveByCollateralInternal:
LeverageEARNUSDTZapper::_closeTroveByCollateralInternal:347...
// First, use any leftover USDHN to reduce debt.
IERC20 usdhnErc20 = IERC20(address(usdhnToken));
uint256 usdhnDust = usdhnErc20.balanceOf(address(this));
if (usdhnDust != 0) {
// If the trove is already closed this would revert; attempt best-effort only.
try borrowerOperations.repayUSDHN(params.troveId, usdhnDust) {} catch {}
}
...
At that point, tokens have already been swept by the flash swapper, and the trove must be closed if the code path has not reverted.
The following snippet in the LeveragedGasCompZapper enforces a potentially dangerous assumption when a token does not comply with the ERC-20 standard:
LeveragedGasCompZapper::_tokenDecimals:500
function _tokenDecimals(address token) internal view returns (uint8) {
try IERC20Metadata(token).decimals() returns (uint8 d) {
return d;
} catch {
return 18;
}
}
This shouldn't matter in practice, but it's recommended to be explicit about which assets should ever be supported.
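A stricter variant would refuse tokens without a standard decimals() getter instead of silently assuming 18 decimals; a sketch, in which the error name is hypothetical:

```solidity
// Hypothetical error for non-compliant tokens.
error UnsupportedToken(address token);

function _tokenDecimals(address token) internal view returns (uint8) {
    // Revert instead of silently assuming 18 decimals.
    try IERC20Metadata(token).decimals() returns (uint8 d) {
        return d;
    } catch {
        revert UnsupportedToken(token);
    }
}
```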
The following functions are unused:
- FlashSwapper::_swapUsdtToCollateral
- FlashSwapper::_swapCollateralToUsdt
By extension of the above-mentioned items, FlashSwapper::_executeSwap is also unused.
This means that the only code path where a normal swap can be initiated is never exercised. Ultimately, the branch of FlashSwapper::pancakeV3SwapCallback that handles normal swap callbacks does not seem to be practically used either:
FlashSwapper::_executeSwap:688
function _executeSwap(IPancakeV3Pool pool, bool zeroForOne, uint256 amountIn) internal returns (uint256 amountOut) {
int256 amountSpecified = -int256(amountIn); // exact input
uint160 priceLimit = zeroForOne ? MIN_SQRT_PRICE_LIMIT : MAX_SQRT_PRICE_LIMIT;
address tokenIn = zeroForOne ? pool.token0() : pool.token1();
_swapTokenIn = tokenIn;
_activeSwapPool = address(pool);
(int256 amount0Delta, int256 amount1Delta) =
pool.swap(address(this), zeroForOne, amountSpecified, priceLimit, "");
if (zeroForOne) {
amountOut = uint256(-amount1Delta);
} else {
amountOut = uint256(-amount0Delta);
}
}
FlashSwapper::pancakeV3SwapCallback:604...
} else {
if (msg.sender != _activeSwapPool) revert InvalidFlashPool(msg.sender);
address tokenIn = _swapTokenIn;
if (tokenIn == address(0)) revert UninitializedSwap();
if (tokenIn != token0 && tokenIn != token1) revert InvalidSwapToken(tokenIn);
uint256 amountToPay = amount0Delta > 0 ? uint256(amount0Delta) : uint256(amount1Delta);
IERC20(tokenIn).safeTransfer(address(pool), amountToPay);
_swapTokenIn = address(0);
_activeSwapPool = address(0);
}
...
FlashSwapper::_handleClose attempts to repay the entire debt at once and then close the trove, but the same result could be achieved with a single closeTrove call:
FlashSwapper::_handleClose:441...
borrowerOperations.repayUSDHN(op.troveId, debtTarget);
borrowerOperations.closeTrove(op.troveId);
...
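Under Liquity-v2 semantics, closeTrove already burns the trove's entire recorded debt from the caller, so the two calls could collapse into one; a sketch, worth confirming against the fork's closeTrove implementation:

```solidity
// Sketch: closeTrove settles the full remaining debt as part of closing,
// making the explicit repayUSDHN step redundant.
borrowerOperations.closeTrove(op.troveId);
```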
On a theoretical note, the following snippet in FlashSwapper::_enforceFeeCap:
FlashSwapper::_enforceFeeCap:...
if (amountToPay <= borrowedInPay) return; // fee <= 0 in pay units
uint256 feePay = amountToPay - borrowedInPay;
uint256 maxFeePay = Math.mulDiv(borrowedInPay, capBps, 10_000);
if (feePay > maxFeePay) revert FlashFeeExceeded(feePay, maxFeePay);
...
does not fully capture fees; the amount to pay is mainly driven by slippage in the execution price, which depends on the pool's liquidity. This is not a practical issue, but something to keep in mind.
Technically, it's not required to flush any remaining USDHN balance prior to any adjustments:
GasCompZapper::_beforeAdjust:669...
function _beforeAdjust(
uint256 troveId,
uint256 collChange,
bool isCollIncrease,
uint256 usdhnChange,
bool isDebtIncrease,
AdjustPermits memory permits
) internal virtual returns (address receiver, uint256 appliedCollChange, uint256 appliedUsdhnChange) {
_flushUsdhn(receiver);
...
Attempting to find insertion hints for the system-maintained sorted troves on-chain via GasCompZapper::_maybePopulateOpenHints can be expensive in terms of gas:
GasCompZapper::_maybePopulateOpenHints:...
try helpers.getApproxHint(index, rate, AUTO_HINT_TRIALS, _hintSeed(params)) returns (
uint256 hint, uint256, uint256
) {
approx = hint;
} catch {
return;
}
if (approx == 0) return;
try troves.findInsertPosition(rate, approx, approx) returns (uint256 upper, uint256 lower) {
params.upperHint = upper;
params.lowerHint = lower;
} catch {}
...
It makes sense to keep such helper logic off-chain, separate from any on-chain functionality.
Tolerating an address registry instance that breaks the expected interface in the GasCompZapper constructor does not yield any benefit:
GasCompZapper::constructor:79...
try IExtendedAddressesRegistry(address(registry)).sortedTroves() returns (ISortedTroves regTroves) {
troves = regTroves;
} catch {}
try IExtendedAddressesRegistry(address(registry)).hintHelpers() returns (IHintHelpers regHelpers) {
helpers = regHelpers;
} catch {}
try IExtendedAddressesRegistry(address(registry)).collateralRegistry() returns (
ICollateralRegistry collRegistry
) {
if (address(collRegistry) != address(0)) {
(bool exists, uint256 index) = collRegistry.findCollateralByAsset(address(collateral));
if (exists) {
collateralIndex = index;
}
}
...
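A stricter constructor would call the registry getters directly and let a malformed registry revert the deployment, rather than swallowing failures; a sketch:

```solidity
// Sketch: direct calls. If the registry does not implement the extended
// interface, deployment reverts instead of silently leaving the sorted
// troves / hint helpers references unset.
IExtendedAddressesRegistry reg = IExtendedAddressesRegistry(address(registry));
troves = reg.sortedTroves();
helpers = reg.hintHelpers();
```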
The code is compiled with Solidity 0.8.24. Version 0.8.24, in particular, has some known bugs, which we do not believe affect the correctness of the contracts.
DISCLAIMER
The audited contracts have been analyzed using automated techniques and extensive human inspection in accordance with state-of-the-art practices as of the date of this report. The audit makes no statements or warranties on the security of the code. On its own, it cannot be considered a sufficient assessment of the correctness of the contract. While we have conducted an analysis to the best of our ability, it is our recommendation for high-value contracts to commission several independent audits, a public bug bounty program, as well as continuous security auditing and monitoring through Dedaub Security Suite.
ABOUT DEDAUB
Dedaub offers significant security expertise combined with cutting-edge program analysis technology to secure some of the most prominent protocols in DeFi. The founders, as well as many of Dedaub's auditors, have a strong academic research background together with a real-world hacker mentality to secure code. Protocol blockchain developers hire us for our foundational analysis tools and deep expertise in program analysis, reverse engineering, DeFi exploits, cryptography and financial mathematics.