
OTSea P2P Markets

Smart Contract Security Assessment

May 29, 2024

OTSea

SUMMARY


ABSTRACT

Dedaub was commissioned to perform a security audit of the OTSea protocol for the Solana network. The overall architecture is well designed and follows good standards and techniques. The current implementation closely follows that of the OTSea smart contracts for the EVM chains, which we have also audited (OTSea smart contract audit report | Dedaub Audit Reports). Some vulnerabilities were found, including one Critical, one High and a few of Medium severity, all of which were addressed by the team.


BACKGROUND

OTSea Protocol is a peer-to-peer exchange where users can buy and sell SPL tokens for SOL without going through a liquidity pool, while enjoying extra benefits such as order discounts and private exchange opportunities.

The audit covered the new version of the platform’s main contract, OTSea Solana, which acts as the trusted intermediary between sellers and buyers.

The codebase was accompanied by a test suite which performs some basic checks of the implemented functionality. However, we highly recommend adding more thorough tests that cross-examine several parts of the protocol and ensure that all execution flows have the expected outcome. With such tests, most of the highest-severity issues could have been detected and resolved early.


SETTING & CAVEATS

This audit report mainly covers the contracts of the at-the-time private repositories, given as archived zips (OTSea_Solana.zip, md5: 942b23511d24f00f6bdd70dfce783874, and OTSea_Solana_2.zip, md5: 88d49915bb0a71285385cce2ac1791e7). The second zip file was given midway through the audit and contained some trivial fixes.

As part of the audit, we also reviewed the fixes for the issues included in the report. The fixes were delivered as an archived zip (Audit_fixed_version.zip md5: e02dc03d471e5506ae6cf029db95eb7b) and we found that they have been implemented correctly.

Two auditors worked on the codebase for 10 days on the following files:

programs/otsea/src/
  • constants.rs
  • contexts.rs
  • errors.rs
  • events.rs
  • instructions/
    • admin_instructions.rs
    • cancel_order.rs
    • create_buy_order.rs
    • create_sell_order.rs
    • initialize.rs
    • lock_up.rs
    • owner_instructions.rs
    • swap_sol_for_tokens.rs
    • swap_tokens_for_sol.rs
  • instructions.rs
  • lib.rs
  • state.rs

The audit’s main target is security threats, i.e., what the community would likely call "hacking", rather than the regular use of the protocol. Functional correctness (i.e., issues in "regular use") is a secondary consideration. Typically it can only be covered if we are provided with unambiguous (i.e., full-detail) specifications of the expected, correct behavior. In terms of functional correctness, we often trusted the code’s calculations and interactions, in the absence of any other specification. Functional correctness relative to low-level calculations (including units, scaling and quantities returned from external protocols) is generally most effectively checked through thorough testing rather than human auditing.


VULNERABILITIES & FUNCTIONAL ISSUES

This section details issues affecting the functionality of the contract. Dedaub generally categorizes issues according to the following severities, but may also take other considerations into account such as impact or difficulty in exploitation:

Category
Description
CRITICAL
Can be profitably exploited by any knowledgeable third-party attacker to drain a portion of the system’s or users’ funds OR the contract does not function as intended and severe loss of funds may result.
HIGH
Third-party attackers or faulty functionality may block the system or cause the system or users to lose funds. Important system invariants can be violated.
MEDIUM
Examples:
  • User or system funds can be lost when third-party systems misbehave.
  • DoS, under specific conditions.
  • Part of the functionality becomes unusable due to a programming error.
LOW
Examples:
  • Breaking important system invariants but without apparent consequences.
  • Buggy functionality for trusted users where a workaround exists.
  • Security issues which may manifest when the system evolves.

Issue resolution is marked as "dismissed" or "acknowledged" (no action taken) by the client, or as "resolved", per the auditors.


CRITICAL SEVERITY

C1
Misconfigured constraints allow anyone to drain all Buy orders
CRITICAL
resolved

In the swap_tokens_for_sol.rs file, the SwapTokensForSOL validation struct fails to properly validate the ownership of the creator_token_vault account. The constraints applied appear to have been copied from the validation of the trader_token_vault account, since they are identical.

More specifically, any malicious trader can pass their trader_token_vault account as the creator_token_vault account. The constraints on the latter cannot detect the malicious input, as long as a valid trader_token_vault account is given. This allows the trader to perform the swap and receive the SOL tokens on sale while also keeping their own tokens, which were expected to be transferred from the trader_token_vault to the creator_token_vault.

swap_tokens_for_sol.rs -> SwapTokensForSOL::creator_token_vault:213
#[derive(Accounts)]
#[instruction(trade: Trade)]
pub struct SwapTokensForSOL<'info> {
    ...
    #[account(
        mut,
        constraint = order.creator == creator.key(),
    )]
    pub creator: AccountInfo<'info>,
    ...
    // Dedaub: These constraints should use the creator_token_vault
    // instead of the trader_token_vault and the creator.key() value
    #[account(
        mut,
        constraint = trader_token_vault.mint == token_mint.key(),
        constraint = trader_token_vault.owner == signer.key(),
    )]
    pub creator_token_vault: Box<Account<'info, TokenAccount>>,

    #[account(
        mut,
        constraint = trader_token_vault.mint == token_mint.key(),
        constraint = trader_token_vault.owner == signer.key(),
    )]
    pub trader_token_vault: Box<Account<'info, TokenAccount>>,
    ...
}
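
For clarity, a minimal sketch of what the corrected constraints could look like, following the Dedaub comment above (the fix actually delivered by the team may differ):

#[account(
    mut,
    constraint = creator_token_vault.mint == token_mint.key(),
    constraint = creator_token_vault.owner == creator.key(),
)]
pub creator_token_vault: Box<Account<'info, TokenAccount>>,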


HIGH SEVERITY

H1
Traders of Buy orders receive fewer SOL tokens than they should, and some also become permanently lost in the order vaults
HIGH
resolved

The swap_tokens_for_sol instruction is responsible for performing the swaps of all the Buy orders which aim to trade SOL tokens for any other SPL Token. On each trade, a fee is applied based on the preconfigured global OTSea parameters which include fish (initialized as 1%), whale (initialized as 0.3%) and partner (initialized as 30% of fish/whale) fees.

On every trade, the calculated amount of SOL tokens to be received by the trader is subject to deductions based on those fees. Thus, the calculate_revenue function determines how many of them should be attributed to the revenue vault (and the partners subsequently).

swap_tokens_for_sol::swap_tokens_for_sol
pub fn swap_tokens_for_sol(
    ctx: Context<SwapTokensForSOL>,
    trade: Trade,
    fee_type: FeeType,
) -> Result<()> {
    ...
    let amount_output = execute_trade(
        &mut ctx.accounts.order,
        ctx.accounts.signer.key(),
        &trade,
        ctx.accounts.token_mint.key(),
    )?;

    let order = &ctx.accounts.order;
    let (total_revenue, distribute_revenue, partner_revenue) =
        calculate_revenue(
            &ctx.accounts.otsea,
            &ctx.accounts.partner,
            amount_output,
            order.fee_type,
        )?;

    // transfer SOL(trade amount) from vault to trader
    let transfer_sol_amount =
        amount_output.checked_sub(total_revenue).unwrap();
    ...
    token::transfer(
        CpiContext::new_with_signer(
            ctx.accounts.token_program.to_account_info(),
            token::Transfer {
                from: ctx.accounts.order_token_vault.to_account_info(),
                to: ctx.accounts.trader_sol_token_vault.to_account_info(),
                ...
            },
            ...
        ),
        transfer_sol_amount,
    )?;

    // transfer SOL(partner fee) from trader to partner
    if partner_revenue > 0 {
        ...
        token::transfer(
            CpiContext::new(
                ctx.accounts.token_program.to_account_info(),
                token::Transfer {
                    from: ctx.accounts
                        .trader_sol_token_vault.to_account_info(),
                    to: ctx.accounts
                        .partner_sol_token_account
                        .as_ref().unwrap().to_account_info(),
                    ...
                },
            ),
            partner_revenue,
        )?;
        ...
    }

    // transfer wrap SOL(Revenue) from trader to revenue distributor
    token::transfer(
        CpiContext::new(
            ctx.accounts.token_program.to_account_info(),
            token::Transfer {
                from: ctx.accounts
                    .trader_sol_token_vault.to_account_info(),
                to: ctx.accounts.revenue_vault.to_account_info(),
                ...
            },
        ),
        distribute_revenue,
    )?;
    ...
}

However, the amount_output represents the total amount of SOL tokens that should be removed from the order’s SOL vault, but this does not seem to be followed properly. More specifically, the issue becomes apparent following the steps below:

  • A trader requests to swap 100 Tokens for 100 SOL tokens (assuming a 1:1 ratio for simplicity)

  • Let’s assume that the order’s vault holds exactly 100 SOL tokens which means that the order will be fulfilled

  • The execute_trade call from inside the swap_tokens_for_sol instruction returns that 100 SOL tokens should be given to the trader (i.e., amount_output = 100), but also increases the order.input_transacted field by 100, which marks the order as fulfilled

  • The calculate_revenue call afterwards calculates how many of these SOL tokens should be withheld based on the preconfigured fees (fish_fee = 1%). As a result, the total_revenue will be 1 SOL token.

  • Then, the SOL tokens minus the calculated fees are transferred to the trader, which means that only 99 SOL tokens are sent to the trader

  • However, when it comes to paying the fees to the revenue_vault and the partner, the tokens are taken from the trader’s account and not from the order’s account, as they should be, since the 1 SOL token was already left behind there upon the first transfer to the trader.

  • As a result, the trader loses another 1 SOL token from their earnings, and the 1 SOL token left in the order_token_vault becomes permanently lost, since the vault account is supposed to be closed once the order is fulfilled.

  • The total losses are as follows:

    • 1% of the total SOL tokens deposited in all the Buy order vaults becomes permanently lost in those vaults

    • The traders collectively lose 1% * 99% of the total SOL tokens deposited in all the Buy order vaults, as this amount is sent to the fee receivers out of their earnings

It seems that this issue resulted from a misconfigured copy of the code from the other swap instruction (swap_sol_for_tokens) which correctly transfers the SOL tokens from the trader to the fee receivers.

To fix this issue, either the entire amount_output should be transferred to the trader and the fees then taken from their account, or the from arguments of the SOL token transfer CPIs should be changed to use the order_token_vault instead of the trader's account.
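
As a rough sketch of the second option, the fee transfers would source the order's vault, signed by the same PDA authority as the first transfer above (illustrative only; the shipped fix may differ):

// transfer wrap SOL(Revenue) from the order vault to revenue distributor
token::transfer(
    CpiContext::new_with_signer(
        ctx.accounts.token_program.to_account_info(),
        token::Transfer {
            from: ctx.accounts.order_token_vault.to_account_info(),
            to: ctx.accounts.revenue_vault.to_account_info(),
            ...
        },
        ...
    ),
    distribute_revenue,
)?;

The partner-fee transfer would be adjusted in the same way.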



MEDIUM SEVERITY

M1
The whitelist PDA of an order can remain unclosed even after the order gets fulfilled and fully claimed
MEDIUM
resolved

All Sell orders with a lock-up period set (or whose partner has a lock-up override enabled) place the swapped tokens into a lock-up for the users that requested a trade. After the lock-up period expires, the traders can claim their tokens using the lock_up::claim_lock_up instruction. When the last locked-up amount for an order is claimed and the order has been Cancelled by the owner or has been Fulfilled, the PDAs used by that order have to be closed.

However, even though the order_token_vault and the order accounts are successfully closed, if the order owner had created a whitelist PDA, it can remain unclosed if the last trader does not provide it in the context of the claim instruction, since the account is defined as Option<>.

As a consequence, whitelist PDAs can be left open, preventing the SOL paid for the account creation from being reimbursed to the order creator.

lock_up::claim_lock_up:71
pub fn claim_lock_up(
    ctx: Context<ClaimLockUp>,
    ...
) -> Result<()> {
    ...
    let remain_lockup_amount = order.total_locked_up.checked_sub(
        order.unlocked_transacted).unwrap();
    if (order.state == State::Cancelled ||
        order.state == State::Fulfilled) && remain_lockup_amount == 0 {
        // close vault
        token::close_account(...)?;

        // close order account
        ctx.accounts.order.close(ctx.accounts.creator.to_account_info())?;
        // Dedaub: If the whitelist is not provided but exists and the
        // last locked-up amount is being claimed for a Cancelled
        // or Fulfilled order, then the account will remain active
        // close order whitelist account
        if ctx.accounts.whitelist.is_some() {
            ctx.accounts.whitelist.close(
                ctx.accounts.signer.to_account_info())?;
        }
    }
    ...
}
lock_up.rs -> ClaimLockUp::whitelist:153
#[derive(Accounts)]
#[instruction(order_id: u64)]
pub struct ClaimLockUp<'info> {
    #[account(
        mut, seeds = [...], bump,
        constraint = order.whitelist == whitelist.key(),
    )]
    pub whitelist: Option<Account<'info, Whitelist>>,
    ...
}
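
One possible mitigation, sketched below under the assumption that order.whitelist holds the default public key when no whitelist was ever created (the actual fix may differ), is to require the whitelist account to be passed whenever the order has one configured:

require!(
    order.whitelist == Pubkey::default()
        || ctx.accounts.whitelist.is_some(),
    OTSeaErrorCode::NotAvailable
);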

M2
The order owner loses the rent-exempt amount paid for the whitelist PDA of orders with lockups
MEDIUM
resolved

The order creators can create whitelist PDAs for their orders to only allow specific accounts to trade with them. However, for orders with lock-ups enabled (or whose partner has a lock-up override enabled), the owners may not be able to get back the SOL paid for the rent exemption of the whitelist PDAs.

This can happen if the order has been Cancelled or Fulfilled but there are still active lockups to be claimed by the traders. The last trader is responsible for providing the whitelist PDA so that, once all funds have been claimed, all the order-related accounts can be closed. However, when closing the whitelist account, the SOL tokens are transferred to the signer (who is the trader) and not to the order creator as they should be.

lock_up::claim_lock_up:72
pub fn claim_lock_up(...) -> Result<()> {
    ...
    let remain_lockup_amount = order.total_locked_up.checked_sub(
        order.unlocked_transacted).unwrap();
    if (order.state == State::Cancelled ||
        order.state == State::Fulfilled) && remain_lockup_amount == 0 {
        ...
        // Dedaub: When the whitelist PDA is closed, the funds are
        // transferred to the trader instead of the order creator
        if ctx.accounts.whitelist.is_some() {
            ctx.accounts.whitelist.close(
                ctx.accounts.signer.to_account_info())?;
        }
    }
    ...
}
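
A minimal sketch of the suggested change, reusing the creator account already present in the context (the shipped fix may differ):

if ctx.accounts.whitelist.is_some() {
    ctx.accounts.whitelist.close(
        ctx.accounts.creator.to_account_info())?;
}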

M3
If an order is fulfilled and without lockup, it should be closed
MEDIUM
resolved

Inside swap_tokens_for_sol and swap_sol_for_tokens, when the order is set to State::Fulfilled in swap_sol_for_tokens::execute_trade, the order, order_token_vault and whitelist accounts are not closed, which leads to the loss of the rent-exemption amounts paid by the order creator when initializing these PDAs. Moreover, there does not seem to be a way to close the accounts afterwards, since the only other functionality that could accomplish this is the cancel_order instruction, which, however, requires the order to be Open.
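
As a rough illustration, mirroring the clean-up already performed in claim_lock_up (see M1/M2 above) and assuming that a zero total_locked_up denotes the absence of active lock-ups, the swap instructions could close the accounts once the order becomes Fulfilled; the shipped fix may differ:

if ctx.accounts.order.state == State::Fulfilled
    && ctx.accounts.order.total_locked_up == 0 {
    // close the order's token vault
    token::close_account(...)?;
    // refund the order account's rent to its creator
    ctx.accounts.order.close(ctx.accounts.creator.to_account_info())?;
}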



LOW SEVERITY

L1
update_whitelist can only be used once for each order
LOW
resolved

In the owner_instructions file, the update_whitelist function is supposed to allow the owner of an order to set and update a whitelist per order. However, the whitelist account, which is provided via the UpdateWhitelistContext validation struct, has the init constraint applied.

As a result, a call to update_whitelist is only possible the first time the owner initializes a whitelist for an order. Any subsequent update will fail, since the init constraint rejects an account that already exists. We suppose that the intended constraint is init_if_needed, which initializes the account if it does not exist but also allows updates on later instruction calls.

owner_instructions.rs -> UpdateWhitelistContext::whitelist:244
#[derive(Accounts)]
#[instruction(order_id: u64)]
pub struct UpdateWhitelistContext<'info> {
    ...
    // Dedaub: This should probably be init_if_needed since this only
    // allows a single initialization and no further updates
    #[account(init, ...)]
    pub whitelist: Account<'info, Whitelist>,
    ...
}
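
A sketch of the suggested constraint; note that Anchor requires the init-if-needed cargo feature for this to compile (the shipped fix may differ):

#[account(init_if_needed, ...)]
pub whitelist: Account<'info, Whitelist>,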

L2
update_lock_up_override incorrectly prevents partners from using the function
LOW
resolved

In the owner_instructions file, the update_lock_up_override function is meant to allow the admin and the partners to modify their lock-up parameter. However, the checks applied to ensure that the correct accounts were given have conflicting conditions, which effectively only allow the admin to attempt to update an uninitialized partner account.

More precisely, the first require statement should check that the partner's public key has been initialized, i.e., that it is not the default key.

owner_instructions::update_lock_up_override:140
pub fn update_lock_up_override(
    ctx: Context<LockupOverrideContext>, enforce: bool
) -> Result<()> {
    let partner = &mut ctx.accounts.partner;

    // Dedaub: To ensure the initialization of the partner account the
    // check should be != instead of ==
    require!(
        partner.public_key == system_program::ID,
        OTSeaErrorCode::NotAvailable
    );
    require!(
        partner.public_key == ctx.accounts.signer.key()
            || ctx.accounts.otsea.admin == ctx.accounts.signer.key(),
        OTSeaErrorCode::Unauthorized
    );
    ...
}
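
A sketch of the corrected check, per the comment above (the shipped fix may differ):

require!(
    partner.public_key != system_program::ID,
    OTSeaErrorCode::NotAvailable
);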


CENTRALIZATION ISSUES

It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol’s owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.)

N1
Potential for abuse and limitations in blacklist functionality
CENTRALIZATION
acknowledged

The blacklist functionality in the OTSea smart contract can be utilized for censorship. This mechanism allows repeated blocking of specific accounts from creating orders. However, it lacks scalability, as the blacklist can only accommodate up to 100 accounts. In the Solana ecosystem, it is relatively easy to create 100 accounts, thereby exhausting the blacklist capacity.



OTHER / ADVISORY ISSUES

This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them.

A1
Bumps for PDAs can be cached allowing for a minor optimisation in terms of CUs
ADVISORY
resolved

The protocol makes use of many PDAs which are derived from hardcoded or dynamic seeds such as order IDs. Most of the validation structs used in the Contexts of the instructions recalculate the derived PDA addresses to verify that the provided accounts are the expected ones.

However, most of the bumps calculated upon the initialization of the PDA accounts could be cached in the corresponding accounts, so that they can be easily retrieved during instruction validation, avoiding the overhead of recalculating each bump for each PDA used, which can prove quite expensive.

For example, the otsea PDA is created once by the admin at the initialization of the protocol and is referenced afterwards by multiple instructions. Thus, its bump can be cached inside the state of the account and be reused by any account validation constraint. The same applies to the revenue_vault account too.

*::Contexts
#[account(
    seeds = [ OTSEA_SEED.as_ref(), ],
    bump,
)]
pub otsea: Account<'info, OTSea>

#[account(
    mut, seeds = [ REVENUE_VAULT_SEED, ],
    bump,
)]
pub revenue_vault: Account<'info, TokenAccount>

The bumps can be added to the otsea PDA and reused in any invocation.

#[account(
    seeds = [ OTSEA_SEED.as_ref(), ],
    bump = otsea.bump,
)]
pub otsea: Account<'info, OTSea>

#[account(
    mut, seeds = [ REVENUE_VAULT_SEED, ],
    bump = otsea.revenue_vault_bump,
)]
pub revenue_vault: Account<'info, TokenAccount>

A2
Unnecessary cloning in claim_lock_up and update_whitelist
ADVISORY
resolved
lock_up::claim_lock_up::L14
// Dedaub: No need to clone here. Just lockups.list.iter().position()
// should be enough
let lockup_index = lockups.list.clone().into_iter().position(
    |x| x.lockup_id == lockup_id
);

owner_instructions::update_whitelist::L42
let whitelist = &mut ctx.accounts.whitelist;
// Dedaub: This clone is unnecessary
let mut new_whitelist_pks = whitelist.whitelist.clone();
new_whitelist_pks.extend(whitelist_pks);
// Dedaub: This could be new_whitelist_pks =
// new_whitelist_pks.iter().unique().cloned().collect();
let unique_whitelist_pks: Vec<Pubkey> =
    new_whitelist_pks.clone().into_iter().unique().collect();
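
For illustration, clone-free equivalents along the lines suggested in the comments above (a sketch only, assuming itertools' unique() remains in scope):

let lockup_index = lockups.list.iter().position(|x| x.lockup_id == lockup_id);

let unique_whitelist_pks: Vec<Pubkey> = whitelist
    .whitelist
    .iter()
    .chain(whitelist_pks.iter())
    .unique()
    .cloned()
    .collect();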

A3
Incorrect length in state.rs for instruction structs with enums
ADVISORY
resolved

According to the Space Reference of The Anchor Book (v0.29.0), which follows Borsh serialization, any enum inside a struct in state.rs is incorrectly calculated to be 2 bytes large, even though none of the enums carry any data other than the discriminant and therefore only require 1 byte.
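
For illustration, a self-contained sketch (assuming borsh 0.10, as bundled with Anchor 0.29, and using the data-less State enum as an example):

use borsh::BorshSerialize;

#[derive(BorshSerialize)]
pub enum State {
    Open,
    Cancelled,
    Fulfilled,
}

fn main() {
    // A data-less enum serializes to its 1-byte discriminant only,
    // so its space contribution should be 1 byte, not 2.
    assert_eq!(State::Fulfilled.try_to_vec().unwrap().len(), 1);
}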

A4
Unnecessary accounts are expected in some of the validation structs
ADVISORY
resolved

In some validation structs, there are some accounts and programs defined to be passed upon instruction invocation, but they seem to be unneeded for the proper execution of the instruction. More specifically:

admin_instructions.rs
  • BlacklistContext::system_program
  • WithdrawRevenueContext::system_program

owner_instructions.rs
  • OwnerContext::otsea

A5
Duplicate constraints
ADVISORY
resolved

There are a few places in which some of the constraints have been applied more than once with no apparent benefit or reason for that. For example:

  • Duplicate constraints applied on the SwapTokensForSOL validation struct
swap_tokens_for_sol.rs -> SwapTokensForSOL::order:174
#[derive(Accounts)]
#[instruction(trade: Trade)]
pub struct SwapTokensForSOL<'info> {
    ...

    // Dedaub: The has_one constraint is the same as the constraint
    // applied on the creator account below
    #[account(
        mut, seeds = [...], bump,
        has_one = token_mint,
        has_one = creator,
    )]
    pub order: Box<Account<'info, Order>>,

    /// CHECK: should be order's creator
    #[account(
        mut,
        constraint = order.creator == creator.key(),
    )]
    pub creator: AccountInfo<'info>,
    ...
}
  • Duplicate constraints applied in the swap_sol_for_tokens::execute_trade function. Of course, it may be good to retain this check for future uses of the function under different instructions which may lack this validation inside the context structs, but we mention it here for completeness based on the current state of the codebase.
swap_sol_for_tokens::execute_trade:208
pub fn execute_trade(
    order: &mut Account<Order>,
    trader: Pubkey,
    trade: &Trade,
    token_mint_key: Pubkey,
) -> Result<u64> {
    require!(
        order.state == State::Open &&
            // Dedaub: This check has already been applied by the
            // context validation struct
            order.token_mint == token_mint_key,
        OTSeaErrorCode::NotAvailable
    );
    ...
}
swap_sol_for_tokens.rs -> SwapSOLForTokens::order:316
pub struct SwapSOLForTokens<'info> {
    ...
    pub token_mint: Box<Account<'info, token::Mint>>,

    #[account(
        mut, seeds = [...], bump,
        has_one = token_mint,
    )]
    pub order: Box<Account<'info, Order>>,
    ...
}
  • Trivial check of order_id in owner_instructions::update_whitelist
owner_instructions::update_whitelist:56
pub fn update_whitelist(
    ctx: Context<UpdateWhitelistContext>,
    order_id: u64,
    whitelist_pks: Vec<Pubkey>,
) -> Result<()> {
    ...
    // Dedaub: order_id < otsea.next_order_id has been verified by
    // the UpdateWhitelistContext validations which used
    // order_id to verify the order account
    require!(
        order_id < ctx.accounts.otsea.next_order_id,
        OTSeaErrorCode::NotAvailable
    );
    ...
}
owner_instructions.rs -> UpdateWhitelistContext::order:236
#[derive(Accounts)]
#[instruction(order_id: u64)]
pub struct UpdateWhitelistContext<'info> {
    ...
    #[account(
        mut,
        seeds = [
            order_id.to_string().as_ref(),
            ORDER_SEED.as_ref()
        ], bump,
    )]
    pub order: Box<Account<'info, Order>>,
    ...
}

A6
Possible optimization in terms of CUs
ADVISORY
resolved

The protocol utilizes the OTSea::blacklist and the Lockups::list vectors to keep track of the blacklisted accounts and the lockups of a user, respectively. However, when it comes to removing an entry from either of the lists, the simple remove function is used, which shifts all the remaining entries to the left.

Since the order of the entries does not affect the correct functioning of the protocol, we would suggest using the swap_remove function instead, which swaps the to-be-deleted entry with the last one and then pops the last entry, effectively performing the deletion less expensively.
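
For illustration, a minimal sketch assuming the blacklist is a Vec<Pubkey>; target is a hypothetical key to be removed:

if let Some(index) = otsea.blacklist.iter().position(|pk| *pk == target) {
    // O(1) removal: the entry is overwritten with the last element,
    // which is then popped; the ordering of the list is not preserved.
    otsea.blacklist.swap_remove(index);
}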

A7
Small inconsistency between the validation structs of the two swap instructions
ADVISORY
resolved

The SwapSOLForTokens struct of the swap_sol_for_tokens instruction file defines the otsea and the token_mint accounts as Box<> pointers. In contrast, the SwapTokensForSOL struct of the swap_tokens_for_sol file defines them as plain Accounts and not as Box<> pointers, which introduces a minor inconsistency between the two instructions.

A8
Excessively small swap amounts may result in no fee distribution to the fee receivers
ADVISORY
info

In both swap instructions, the calculate_revenue calls are expected to return the percentage of the token output amount that should be deducted as fees. However, if a vanishingly small trade.amount_to_swap were given, the fee calculations could round down to 0, effectively bypassing the fees for all orders. Of course, trading such small amounts appears unprofitable, as the transaction fees alone should be much more expensive than the platform fees saved, but we include this scenario here for completeness.
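
For illustration (assuming, hypothetically, that the 1% fish fee is stored as 100 basis points over a 10,000 denominator):

let amount_to_swap: u64 = 99;                      // an extremely small trade
let fish_fee_bps: u64 = 100;                       // 1%
let fee = amount_to_swap * fish_fee_bps / 10_000;  // integer division
assert_eq!(fee, 0);                                // no fee is withheld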

A similar scenario exists with the execute_trade function, in which an excessively small amount to be swapped could lead to the trader receiving no tokens while this small amount of the deposited tokens is still transferred to the order owner. However, this could again only happen through a trader's mistaken request, as it only leads to losses for them.

A9
Duplicate context structs
ADVISORY
resolved

In the admin_instructions file, the structs AdminContext and BlacklistContext appear to be identical, also considering the notes of A5. Thus, only one of them could be used for all the instructions that require them.



DISCLAIMER

The audited contracts have been analyzed using automated techniques and extensive human inspection in accordance with state-of-the-art practices as of the date of this report. The audit makes no statements or warranties on the security of the code. On its own, it cannot be considered a sufficient assessment of the correctness of the contract. While we have conducted an analysis to the best of our ability, it is our recommendation for high-value contracts to commission several independent audits, a public bug bounty program, as well as continuous security auditing and monitoring through Dedaub Security Suite.


ABOUT DEDAUB

Dedaub offers significant security expertise combined with cutting-edge program analysis technology to secure some of the most prominent protocols in DeFi. The founders, as well as many of Dedaub's auditors, have a strong academic research background together with a real-world hacker mentality to secure code. Protocol blockchain developers hire us for our foundational analysis tools and deep expertise in program analysis, reverse engineering, DeFi exploits, cryptography and financial mathematics.