
Enosys

Smart Contract Security Assessment

October 20, 2025

Flare_Finance

SUMMARY

ID: DESCRIPTION (STATUS)

PROTOCOL-LEVEL CONSIDERATIONS
  P1: LiquityV2 constants and effectively immutable variables are now updateable (info)

HIGH SEVERITY
  H1: _updateStakeAndTotalStakes subtracts oldStake at the wrong scale, corrupting totals (resolved)
  H2: Branch-cap check double-counts trove debt, causing unintended reverts and repay friction (resolved)
  H3: WETH approvals set once in GasPool can’t be refreshed if BO/TM rotate (resolved)
  H4: SortedTroves caches auth pointers that can drift from the registry (resolved)

MEDIUM SEVERITY
  M1: Overflow in scaled redistribution math (resolved)
  M2: lastGoodPrice fallback returns 0, causing false positives and wasted gas (resolved)
  M3: liquidatePartial reverts on zero-liquidation slices, stalling keeper progress (resolved)

LOW SEVERITY
  L1: totalStakes == 0 liveness trap can block redistributions (resolved)
  L2: On-chain “scan-then-liquidate” pattern redoes expensive scans and risks OOG (resolved)
  L3: Completion state stores (0, nextTroveIndex) instead of a clean reset (resolved)
  L4: Missing bounds check on SP_YIELD_SPLIT in ActivePool.initialize (resolved)
  L5: PriceFeedFtsoV2Connector mis-scales prices when feed decimals exceed 18 (resolved)
  L6: Arithmetic overflow from exponentiation in stake-scaling (acknowledged)
  L7: Refund/DoS risk from using live ETH_GAS_COMPENSATION() at close time (acknowledged)

CENTRALIZATION ISSUES
  N1: Admin controls protocol parameters and upgradeable contracts (info)

OTHER / ADVISORY ISSUES
  A1: Misspelled intializer() can leave FixedAssetReader proxy uninitialized and claimable (resolved)
  A2: Introduced misspellings (resolved)
  A3: Misleading comment (resolved)
  A4: Loop optimisation (resolved)
  A5: Unused flag (resolved)
  A6: Unable to compile (acknowledged)
  A7: Shipped placeholder background for WNat (acknowledged)
  A8: Inconsistent class and token names in new Flare context (acknowledged)

ABSTRACT

Dedaub was commissioned to perform a security audit of Enosys, a LiquityV2 fork. We recommend performing thorough quality assurance, at least through testing, of the changes this fork makes to the LiquityV2 codebase before deployment. Given LiquityV2’s tightly coupled design, many components are interdependent in complex ways, and upstream intentionally leaves some known edge cases unpatched to avoid worse trade-offs. We therefore recommend re-evaluating the necessity of each deviation from upstream and minimizing the changes where possible. Even though the fork applies changes to a live upstream codebase, our time-boxed review surfaced several non-trivial high- and medium-severity issues. We strongly recommend a follow-up review to reassess the protocol’s security.


BACKGROUND

Enosys is a LiquityV2 fork to be deployed on the Flare blockchain. This fork changes several aspects of LiquityV2: main protocol contracts are now upgradeable and many previously constant protocol parameters are now modifiable at runtime by the protocol admin. Another significant change is the introduction of stake scaling, to avoid a previously identified issue with LiquityV2 related to the possibility of trove stakes becoming zero-valued due to repeated redemptions and floor division. The fork also introduces a LiquidationBot that scans collateral branches and executes batch or partial liquidations.


SETTING & CAVEATS

This audit report mainly covers the contracts of the at-the-time private https://github.com/flrfinance/liquity-v2_bold-fork/ repository of the Enosys Protocol at commit e2c09a9bdea266c85948ca8c6c91adbc84871cad in the enosys-fork branch. This was mainly a diff audit, focusing on the changes since the codebase forked from LiquityV2, i.e. the commit c5edb8988bd71fa858d6f57808fb1d208f04ee8e. We further reviewed fixes to issues identified in this report, at commit fef10de25702d962478c6b668b764ee8b54e4a26 in branch fix/dedaub-audit.

Audit Start Date: October 8, 2025

Report Submission Date: October 20, 2025

Two auditors worked on the codebase.

The audit’s main target is security threats, i.e., what the community would likely call "hacking", rather than the regular use of the protocol. Functional correctness (i.e., issues in "regular use") is a secondary consideration. Typically it can only be covered if we are provided with unambiguous (i.e., full-detail) specifications of the expected, correct behavior. In terms of functional correctness, we often trusted the code’s calculations and interactions in the absence of any other specification. Checking functional correctness of low-level calculations (including units, scaling, and quantities returned from external protocols) is generally done most effectively through thorough testing rather than human auditing.


PROTOCOL-LEVEL CONSIDERATIONS

P1

LiquityV2 constants and effectively immutable variables are now updateable

PROTOCOL-LEVEL-CONSIDERATION
info

In LiquityV2 many protocol parameters (Constants.sol) and main protocol contracts’ variables (variables in AddressesRegistry.sol) are effectively immutable. Enosys changes this, allowing many of these to be changed by the protocol admin after deployment. While these changes introduce centralisation to allow the protocol to adapt on a relatively young network, they have also introduced problematic behaviour not previously possible in LiquityV2; see H3, H4, and L7. We cannot guarantee the rest of the codebase does not have more issues of this nature, given the time-limited nature of this audit and the complex interactions of the protocol. We recommend a further audit of these changes, as well as thorough tests exercising each newly mutable variable, to build confidence that no further unexpected behaviour exists.



VULNERABILITIES & FUNCTIONAL ISSUES

This section details issues affecting the functionality of the contract. Dedaub generally categorizes issues according to the following severities, but may also take other considerations into account such as impact or difficulty in exploitation:

CATEGORY
DESCRIPTION
CRITICAL
Can be profitably exploited by any knowledgeable third-party attacker to drain a portion of the system’s or users’ funds OR the contract does not function as intended and severe loss of funds may result.
HIGH
Third-party attackers or faulty functionality may block the system or cause the system or users to lose funds. Important system invariants can be violated.
MEDIUM
Examples:
  • User or system funds can be lost when third-party systems misbehave
  • DoS, under specific conditions
  • Part of the functionality becomes unusable due to a programming error
LOW
Examples:
  • Breaking important system invariants but without apparent consequences
  • Buggy functionality for trusted users where a workaround exists
  • Security issues which may manifest when the system evolves

Issue resolution is marked “dismissed” or “acknowledged” (no action taken) by the client, or “resolved” as assessed by the auditors.


CRITICAL SEVERITY

[No critical severity issues]


HIGH SEVERITY

H1

_updateStakeAndTotalStakes subtracts oldStake at the wrong scale, corrupting totals

HIGH
resolved

The codebase adds stake-scaling (system scale via TroveManager.totalStakesCurrentScale and per-trove scale via Troves[_troveId].stakeScale).

In TroveManagerRedemptionLiquidationFacet._updateStakeAndTotalStakes, the code computes newTotalStakes = totalStakes − oldStake + newStake. However, oldStake is stored at the trove’s historical stake scale, while totalStakes is maintained at the current system scale. Subtracting oldStake without first normalizing it to the current system scale is incorrect. This leads to under/over-subtraction from totalStakes, skewing per-unit redistribution math (L_coll, L_boldDebt) and downstream rewards. Over time this mis-accounting can cascade into liveness and precision problems (e.g., premature scale bumps, overflow risk in scaled math, or even drifting toward the totalStakes == 0 trap that blocks redistributions, see L1). Notably, the close path does scale-normalize stakes before updating totals, so the inconsistency is specific to the update path.

Before subtracting, normalize oldStake to the current system scale, i.e., apply the relative scale factor between totalStakesCurrentScale and Troves[_troveId].stakeScale, then update totalStakes using the normalized value alongside the newly computed stake at the current scale. Keep the trove’s stakeScale synchronized to the system scale after the update.
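The recommended normalization can be sketched as follows. This is an illustrative Python model, not the contract code: the function names are hypothetical, arithmetic is over Python integers rather than uint256, and SCALE_FACTOR = 1e5 follows the fork’s stake-scaling base described in L6.

```python
SCALE_FACTOR = 10**5  # the fork's stake-scaling base (see L6)

def normalize_stake(old_stake, trove_scale, system_scale):
    """Bring a stake recorded at the trove's historical scale up to the
    current system scale before mixing it with totalStakes."""
    assert system_scale >= trove_scale  # the system scale only grows
    return old_stake * SCALE_FACTOR ** (system_scale - trove_scale)

def update_total_stakes(total_stakes, old_stake, trove_scale,
                        system_scale, new_stake):
    # Buggy pattern: total_stakes - old_stake + new_stake (mixed scales).
    # Correct: subtract the scale-normalized old stake instead.
    old_at_current = normalize_stake(old_stake, trove_scale, system_scale)
    return total_stakes - old_at_current + new_stake
```

For example, a stake of 3 recorded at trove scale 0 corresponds to 300,000 at system scale 1; subtracting the raw 3 instead would leave nearly the whole stale stake inside totalStakes.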

H2

Branch-cap check double-counts trove debt, causing unintended reverts and repay friction

HIGH
resolved

In BorrowerOperations, the branch-cap check _requireDebtInMinMaxRange(uint256 _debt) compares getEntireBranchDebt() + _debt against MAX_BRANCH_CAP during adjustments.

Because getEntireBranchDebt() already includes the trove’s current debt, adding the trove’s new entire debt double-counts that position. The correct projection should be branchDebt − oldTroveDebt + newTroveDebt, not branchDebt + newTroveDebt. As a result, legitimate adjustments can spuriously revert when the branch is near its cap, and even deleveraging (repaying) can be blocked if the over-estimated projected total appears to exceed the cap. This creates user-visible failures and practical DoS conditions for healthy operations close to the cap.
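The two projections can be contrasted with a small model. All figures here are illustrative assumptions (MAX_BRANCH_CAP is not the deployed value); the point is only the double-counting.

```python
MAX_BRANCH_CAP = 10_000_000  # illustrative cap, not a deployed parameter

def cap_ok_buggy(branch_debt, new_trove_debt):
    # branch_debt already includes the trove's current debt,
    # so this double-counts the position
    return branch_debt + new_trove_debt <= MAX_BRANCH_CAP

def cap_ok_correct(branch_debt, old_trove_debt, new_trove_debt):
    # project the branch total after the adjustment
    return branch_debt - old_trove_debt + new_trove_debt <= MAX_BRANCH_CAP

# A repay near the cap: the trove's debt drops from 500k to 400k, yet the
# buggy check rejects it while the correct projection accepts it.
branch_debt, old_debt, new_debt = 9_900_000, 500_000, 400_000
assert not cap_ok_buggy(branch_debt, new_debt)
assert cap_ok_correct(branch_debt, old_debt, new_debt)
```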

H3

WETH approvals set once in GasPool can’t be refreshed if BO/TM rotate

HIGH
resolved

GasPool.initialize grants unlimited WETH allowances to the BorrowerOperations and TroveManager addresses read from the registry at init, then never touches them again. In this fork, those core addresses can later change via governance, but GasPool caches the old pointers and has no admin path to revoke the old approvals or approve the new contracts. After a rotation, refund/compensation flows that rely on transferFrom may revert because the new BorrowerOperations/TroveManager lack allowance, while the old contracts retain a dangling unlimited allowance (residual risk if reused or compromised).

We suggest adding an admin routine to re-wire allowances: revoke the old spenders, fetch the current BorrowerOperations/TroveManager from the registry (or accept validated new addresses), grant fresh unlimited approvals, and then update the stored pointers. Additionally, review all other contracts in the system that grant token allowances (e.g., zappers, pools, exchange adapters): wherever approvals point at registry-managed contracts, ensure there is a callable process to refresh approvals for new spenders and revoke the old ones.

H4

SortedTroves caches auth pointers that can drift from the registry

HIGH
resolved

In the upgradeable build, SortedTroves.initialize stores troveManager and borrowerOperationsAddress once, and all call guards rely on those cached values (_requireCallerIsBOorTM, _requireCallerIsBorrowerOperations). If the registry later rotates either address, SortedTroves will reject legitimate callers and core flows, such as insert, remove, reInsert, insertIntoBatch, and removeFromBatch, can revert, effectively freezing list maintenance and blocking downstream borrower operations that depend on list updates.

As with the previous issue, H3, either commit to not rotating these identity-critical addresses post-launch, or add a "resync" function that reloads troveManager and borrowerOperationsAddress from the registry. Additionally, review all other contracts in the system that cache core addresses and ensure they can be resynced.



MEDIUM SEVERITY

M1

Overflow in scaled redistribution math

MEDIUM
resolved

During redistributions (see the TroveManager*.sol files) the codebase multiplies numerators by a large scale factor and only divides afterward (e.g., computing collRewardPerUnitStaked and boldDebtRewardPerUnitStaked, and the feedback error terms). As the system scale grows, numerator * scaleFactor and perUnit * totalStakes can overflow even before exponentiation limits are reached, causing reverts and blocking liquidations/redistributions at stressful times when the protocol most needs to "self-heal".

Use bounded arithmetic for scaled fractions. Prefer 512-bit mul-div primitives to compute (numerator * scaleFactor) / denominator safely, or restructure expressions to divide first, then multiply only the remainder path, maintaining identical results without exceeding uint256.
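The divide-first restructuring can be sketched in Python (where integers are unbounded, so we can assert exactness against the naive formula). In Solidity one would instead use a 512-bit mulDiv primitive; the sketch below shows the algebraic identity such a rewrite relies on.

```python
UINT256_MAX = 2**256 - 1

def scaled_div(numerator, scale_factor, denominator):
    """Exact floor((numerator * scale_factor) / denominator), computed as
    q*sf + (r*sf)//d with numerator = q*d + r. The remainder-path product
    r*sf is bounded by denominator*scale_factor, avoiding the huge
    numerator*scale_factor intermediate."""
    q, r = divmod(numerator, denominator)
    return q * scale_factor + (r * scale_factor) // denominator

# The naive intermediate exceeds uint256 even though the result fits.
n, sf, d = 2**200 + 7, 10**30, 2**60
assert n * sf > UINT256_MAX          # naive product would revert on-chain
assert scaled_div(n, sf, d) == (n * sf) // d
assert scaled_div(n, sf, d) <= UINT256_MAX
```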

M2

lastGoodPrice fallback returns 0, causing false positives and wasted gas

MEDIUM
resolved

LiquidationBot._getLastGoodPrice() first tries lastGoodPrice() and then falls back to fetchPrice(). When the fallback is used, the function returns 0 whenever isValid == false, even though Liquity-style feeds typically still return a last good price in the price field when isValid is false. A zero price drives ICR == 0, which (with the default minLiquidationCR = 0) satisfies ICR >= minLiquidationCR and also ICR < MCR, causing the bot to “find” that everything is liquidatable. The bot will then submit large trove lists to batchLiquidateTroves, but TroveManager re-fetches price and skips non-liquidatable troves, leading to wasted on-chain compute, gas blow-ups (if scanning is done via liquidateAllTroves/liquidatePartial), frequent NoTrovesToLiquidate() reverts, and stuck continuations.

To prevent mass false positives and align the bot’s decision-making with Liquity-style oracle semantics, the price returned by fetchPrice() should be treated as the last good price regardless of its validity flag, with 0 returned only when the call itself fails (or a dedicated “price unavailable” error reverted).
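The suggested semantics can be sketched as follows. The feed interface here is a stand-in assumption: it models fetchPrice() as returning a (price, isValid) pair, per the Liquity-style behavior described above.

```python
def get_last_good_price(feed):
    """Use the price field as the last good price even when the validity
    flag is false; return 0 only when the call itself fails."""
    try:
        price, is_valid = feed.fetch_price()
    except Exception:
        return 0  # genuinely no price available
    return price  # last good price, regardless of is_valid

class StaleFeed:  # oracle is live but flags the price invalid/stale
    def fetch_price(self):
        return (42, False)

class BrokenFeed:  # the call itself fails
    def fetch_price(self):
        raise RuntimeError("oracle down")
```

With this behavior, a stale-but-known price no longer collapses ICR to 0, so the default minLiquidationCR = 0 filter stops matching every trove.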

M3

liquidatePartial reverts on zero-liquidation slices, stalling keeper progress

MEDIUM
resolved

LiquidationBot.liquidatePartial() computes the next (branch, trove) continuation but reverts with NoTrovesToLiquidate() when a slice finds 0 actual liquidations, before persisting lastPartialProgress or emitting PartialLiquidationCompleted. In healthy markets this causes keepers to get stuck retrying the same slice, unless they perform full off-chain pre-scans and submit prefiltered sets. The iterator API loses its purpose as a resumable, forward-progress interface.

We suggest not reverting on zero liquidations: persist the newly computed continuation pointers, emit PartialLiquidationCompleted with zeros, and return successfully so off-chain loops can advance through healthy sections. This also aligns behavior with the documentation that the continuation helpers “don’t revert if there is nothing to liquidate”, and makes keeper automation robust across long healthy periods.



LOW SEVERITY

L1

totalStakes == 0 liveness trap can block redistributions

LOW
resolved

TroveManagerBase._updateStakeAndTotalStakes allows totalStakes to become zero (e.g., after urgent redemptions or coordinated exits). Subsequent redistribution logic divides by totalStakes, which then reverts and prevents liquidations via redistribution (precisely when the Stability Pool may be empty and redistribution is required). This creates a protocol-level liveness failure mode.

The possibility of this scenario was known to the Liquity V2 developers, and is carried over here, but was judged very unlikely. Enosys fixes another source of this issue by scaling stakes (to avoid them becoming 0 due to floor division), but leaves this possibility intact.

We suggest defining system behavior when totalStakes reaches zero, to fully close this issue. Options include (a) forcing a baseline rescale to a minimum positive totalStakes and incrementing the system scale so the math stays defined, (b) explicitly disallowing redistributions when totalStakes == 0 and restricting liquidations to Stability Pool offsets (as documented policy), or (c) guarding redistributions with a clear revert and an operational playbook to restore stake before proceeding.
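Option (c) amounts to replacing the bare division revert with an explicit, documented failure. A minimal sketch (illustrative Python; the 1e18 precision and per-unit formulas mirror the report’s description of the redistribution math, not the exact contract code):

```python
DECIMAL_PRECISION = 10**18

def reward_per_unit_staked(coll, debt, total_stakes):
    if total_stakes == 0:
        # explicit, documented failure instead of a bare division revert
        raise RuntimeError("totalStakes == 0: redistribution unavailable; "
                           "liquidate via StabilityPool offset")
    coll_per_unit = coll * DECIMAL_PRECISION // total_stakes
    debt_per_unit = debt * DECIMAL_PRECISION // total_stakes
    return coll_per_unit, debt_per_unit
```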

L2

On-chain “scan-then-liquidate” pattern redoes expensive scans and risks OOG

LOW
resolved

Resolved

Documentation has been added with recommendations on how to avoid expensive scans.

LiquidationBot functions liquidateAllTroves() and liquidatePartial() call the view scanners from within a state-changing call, so the entire scan executes on-chain (O(total troves * branches)). On large systems, especially when most troves are healthy, this is a gas sink and can easily run out of gas.

To avoid this, we suggest reserving the monolithic paths for small datasets or admin/debug use only, and steer operators toward the intended flow: run the scan off-chain using the view helpers, then call liquidateTroves() with the curated trove IDs, or use chunked execution (liquidatePartial{,Continue}) with tight limits. Update the docs to make this explicit and add guardrails (e.g., recommend caps for examineLimit/liquidateLimit) so integrators don’t accidentally DOS themselves.

L3

Completion state stores (0, nextTroveIndex) instead of a clean reset

LOW
resolved

In LiquidationBot, when a partial pass completes, the contract persists lastPartialProgress with nextBranchIndex = 0 but does not reset nextTroveIndex (it keeps the reported continuation trove index). This is inconsistent with the view scanner’s completion semantics (nextBranch = totalBranches, nextTrove = 0) and with the documentation that “after a full pass the convenience continues auto-resets to (0,0).” As a result, the next cycle may begin at branch 0, trove = non-zero, potentially skipping earlier troves in branch 0 and confusing keeper logic.

We suggest that on completion, the stored pointers are normalised to a canonical fresh cycle start: set both nextBranchIndex = 0 and nextTroveIndex = 0. Alternatively, persist the exact completion sentinel (nextBranchIndex = totalBranches, nextTroveIndex = 0) and let liquidatePartialContinue detect completion and reset to (0,0) internally. In either case, the behavior should be kept consistent with the view scanner’s return values and the documentation.

L4

Missing bounds check on SP_YIELD_SPLIT in ActivePool.initialize

LOW
resolved

ActivePool.initialize(IAddressesRegistry _addressesRegistry, uint256 _spYieldSplit, address _owner) assigns SP_YIELD_SPLIT = _spYieldSplit without validation, while the setter setSpYieldSplit(...) enforces 0 < value <= 1e18. If _spYieldSplit > 1e18 at deployment or migration, subsequent interest minting will compute spYield = SP_YIELD_SPLIT * mintedAmount / 1e18 and then remainderToLPs = mintedAmount - spYield, which underflows and reverts. Any operation that mints interest would then fail, effectively DoS-ing core state-changing flows that touch aggregate debt/interest. The initializer and setter are inconsistent, allowing an invalid state at boot that later logic cannot handle.

It is recommended to enforce the same bounds in initialize as in the setter function.
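The inconsistency can be seen with a small model. All names and values are illustrative; a negative remainder in Python stands in for the uint256 underflow revert described above.

```python
DECIMAL_PRECISION = 10**18

def split_minted_interest(sp_yield_split, minted):
    sp_yield = sp_yield_split * minted // DECIMAL_PRECISION
    remainder_to_lps = minted - sp_yield  # underflows (reverts) in uint256
                                          # whenever sp_yield_split > 1e18
    return sp_yield, remainder_to_lps

def validated_split(value):
    # the setter's bound, which initialize should mirror: 0 < value <= 1e18
    assert 0 < value <= DECIMAL_PRECISION, "SP yield split out of range"
    return value

# A 50% split behaves; a 200% split makes the remainder negative, i.e.
# every interest-minting operation would revert on-chain.
assert split_minted_interest(5 * 10**17, 100) == (50, 50)
assert split_minted_interest(2 * 10**18, 100)[1] < 0
```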

L5

PriceFeedFtsoV2Connector mis-scales prices when feed decimals exceed 18

LOW
resolved

Resolved

The protocol patched _fetchPrice() to scale oracle values both up and down depending on the feed’s reported decimals. This addresses our original finding: prices from feeds with more than 18 decimals will now be down-scaled, and those with fewer than 18 will be up-scaled, removing the asymmetric assumption that caused mis-scaling. We still recommend a small guard on the exponent to prevent extreme powers of ten.

_fetchPrice() computes precisionDifference = 10 ** uint8(DECIMAL_PRECISION - _decimals) and multiplies the raw feed value by this factor, assuming the oracle reports 18 or fewer decimals. If the feed reports more than 18 decimals, the subtraction becomes negative; after casting to uint8 it turns into a very large positive exponent, yielding a nonsensical scale factor (and potentially blowing up gas or overflowing intermediate math). Even when not underflowing, the implementation lacks a downscaling branch for _decimals > 18, so values would still be wrong by powers of ten. The result is an incorrect on-chain price that can distort collateralization checks, redemption fees, and any logic depending on the price feed.

We recommend scaling bidirectionally based on the feed’s reported decimals: up-scale when the feed uses fewer than 18 decimals and down-scale when it uses more than 18. Add basic sanity checks (non-zero feed id, code presence for the FtsoV2 address, and, if available, liveness/age bounds).
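The bidirectional scaling with an exponent guard can be sketched as follows (illustrative Python; the actual connector works in uint256 with the feed’s reported decimals, and the guard bound of 18 is an assumption):

```python
TARGET_DECIMALS = 18

def scale_to_18(raw, feed_decimals):
    """Scale a raw feed value to 18 decimals in either direction."""
    diff = TARGET_DECIMALS - feed_decimals
    # sanity guard against implausible feed metadata / extreme powers of ten
    assert abs(diff) <= 18, "implausible feed decimals"
    if diff >= 0:
        return raw * 10**diff        # feed uses <= 18 decimals: up-scale
    return raw // 10**(-diff)        # feed uses  > 18 decimals: down-scale

assert scale_to_18(123, 16) == 12_300             # 16-decimal feed
assert scale_to_18(5 * 10**20, 20) == 5 * 10**18  # 20-decimal feed
```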

L6

Arithmetic overflow from exponentiation in stake-scaling

LOW
acknowledged

Acknowledged

The protocol expects that the scale index will remain <=2–3 in practice. However, if the system runs long enough and enough rescale events occur, the limit will be reached. To avoid a future system-wide brick, an upper bound on the scale and a defined “at-cap” behavior need to be implemented.

The codebase’s stake-scaling design (see the TroveManager*.sol files) raises a large base (SCALE_FACTOR = 1e5) to a growing exponent (system or trove scale) at multiple sites (e.g., computing per-trove scale factors, normalizing stakes on close, and scaling per-unit rewards). Once the system scales ~16 times over its lifetime, SCALE_FACTOR ** n exceeds uint256 and reverts. This can brick core paths (open/adjust/close trove, reward accrual, redistribution) at an arbitrary future time, turning an anti-dust safeguard into a system-wide DoS risk.

Unbounded runtime exponentiation can be avoided with a maximum scale exponent cap (e.g., MAX_STAKE_SCALE <= 15), by making scaling relative (use additive exponents that algebraically cancel), or migrating to per-scale accumulators so only bounded multipliers/dividers are applied at read/write.
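The suggested cap can be checked arithmetically. SCALE_FACTOR = 1e5 is from the report; MAX_STAKE_SCALE = 15 is the suggested bound, not an existing contract constant.

```python
SCALE_FACTOR = 10**5
UINT256_MAX = 2**256 - 1
MAX_STAKE_SCALE = 15  # suggested bound, not an existing constant

# (1e5)**15 == 1e75 fits in uint256 (~1.16e77); one more rescale,
# (1e5)**16 == 1e80, overflows -- hence the "~16 times" above.
assert SCALE_FACTOR ** MAX_STAKE_SCALE <= UINT256_MAX
assert SCALE_FACTOR ** (MAX_STAKE_SCALE + 1) > UINT256_MAX

def scale_multiplier(n):
    """Capped exponentiation: defined at-cap failure instead of a
    silent overflow revert at an arbitrary future time."""
    assert n <= MAX_STAKE_SCALE, "scale cap reached"
    return SCALE_FACTOR ** n
```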

L7

Refund/DoS risk from using live ETH_GAS_COMPENSATION() at close time

LOW
acknowledged

Acknowledged

The protocol clarified that the Zapper suite is not part of the intended deployment.

WETHZapper.{closeTroveToRawETH, receiveFlashLoanOnCloseTroveFromCollateral} and GasCompZapper.{closeTroveToRawETH, receiveFlashLoanOnCloseTroveFromCollateral} read TroveManager.ETH_GAS_COMPENSATION() at trove close time. These zapper close paths compute the user refund from the current gas-compensation value rather than refunding exactly the compensation paid by the trove owner when it was opened. If governance updates that parameter after a trove is opened, the zapper may attempt to forward more ETH than it actually received from LiquityBase.closeTrove and revert (a user-visible DoS), or forward less and under-refund the user. Updating this variable is not possible in the original LiquityV2.

A simple pre/post WETH balance delta fixes this in paths without a WETH flash loan. However, in the WETH flash-close callback the naive delta is polluted by the loan legs (borrow WETH -> spend WETH to buy the debt token -> receive WETH from close -> repay WETH loan principal). This normalization is only needed in the WETH flash-close path. LST flash-close uses a collateral-token loan, so the WETH delta equals the gas-compensation refund directly.



CENTRALIZATION ISSUES

It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol’s owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.)

N1

Admin controls protocol parameters and upgradeable contracts

CENTRALIZATION
info

As mentioned in P1, and in contrast with LiquityV2, protocol parameters and main protocol contracts are now updateable/mutable by the protocol admin. Protocol contracts are ownable; ownership will be kept in a Safe with a 4-of-6 signer requirement.

Upgrading of contracts after deployment should be done carefully and with appropriate testing, given the complex interaction of the protocol.



OTHER / ADVISORY ISSUES

This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them.

A1

Misspelled intializer() can leave FixedAssetReader proxy uninitialized and claimable

ADVISORY
resolved

FixedAssetReader exposes an initializer named intialize (misspelled). If a deployer, upgrade, or migration script tries to call initialize(...) (the correct spelling) on the proxy, that call will revert. If deployers omit the initializer (or their tooling expects initialize and they proceed after the revert), the proxy remains uninitialized. In that state, anyone can later call intialize(...) and take ownership.

Standardize the initializer name to initialize in the contract and update tooling accordingly. Document the correct entrypoint off-chain.

A2

Introduced misspellings

ADVISORY
resolved

Some misspellings were introduced in function names and comments:

  • INonfungiblePositionManager::{109,110}: acheive -> achieve
  • BorrowerOperations._requireValidDelegateAdustment: should be renamed to _requireValidDelegateAdjustment.
  • BorrowerOperations._requireTroveDoesNotExists: should be renamed to _requireTroveDoesNotExist.
  • HintHelpers::32: ouput -> output
  • StabilityPool::417: analyisis -> analysis

A3

Misleading comment

ADVISORY
resolved

ActivePool.setSpYieldSplit:333 reverts with “ActivePool: SP yield split must be between 0 and 100%” when the condition _spYieldSplit > 0 && _spYieldSplit <= DECIMAL_PRECISION does not hold. Since a value of 0 is rejected, the message is misleading; clarify the error message to state that the value cannot be 0.

A4

Loop optimisation

ADVISORY
resolved

LiquidationBot::616-627 contains two loops, one checking that the branch indices are valid and the other that there are no duplicates. These two loops can be combined into one, reducing gas costs for expected inputs.
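The combined single pass can be sketched as follows (illustrative Python; a Solidity version could use the same bitmask trick as a constant-size "seen" set, since branch indices are small):

```python
def validate_branch_indices(branch_indices, num_branches):
    """Range check and duplicate check in one loop, using a bitmask
    as a constant-size 'seen' set."""
    seen = 0
    for i in branch_indices:
        if not 0 <= i < num_branches:
            raise ValueError("invalid branch index")
        mask = 1 << i
        if seen & mask:
            raise ValueError("duplicate branch index")
        seen |= mask
```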

A5

Unused flag

ADVISORY
resolved

LiquidationBot._claimGasCompensation defines a boolean flag isGasCompensationToClaim but leaves it unused.

A6

Unable to compile

ADVISORY
acknowledged

Compilation was successful on a macOS machine with Foundry v1.4.1, however we were unable to compile the code with the same version of Foundry on two Linux machines.

The error messages indicate that some contracts have the same contract in their dependency chain, but imported from different libraries.

For example, TroveNFT inherits IERC721 via two paths:

  • V2-gov/lib/openzeppelin-contracts/: TroveNFT -> ERC721Upgradeable -> IERC721
  • openzeppelin-contracts-upgradeable/lib/openzeppelin-contracts/: TroveNFT -> ITroveNFT -> IERC721

A7

Shipped Placeholder Background for WNat

ADVISORY
acknowledged

The backgrounds library’s _background("WNat", ...) path currently returns a placeholder that reuses the FXRP artwork (the function _wnat() returns _fxrp() with a TODO). This TODO should be dealt with before deployment.

A8

Inconsistent class and token names in new Flare context

ADVISORY
acknowledged

The codebase renames the debt token from BOLD to CDP, but many identifiers, comments, and user-facing strings still reference “BOLD” (e.g., IBoldToken, boldToken, "Not enough BOLD obtained..."), while the registry call addressesRegistry.boldToken() is expected to return the CDP token address. Functionally this works (logic is address-based), but the mismatch can cause confusion. Similarly for code references to WETH.



DISCLAIMER

The audited contracts have been analyzed using automated techniques and extensive human inspection in accordance with state-of-the-art practices as of the date of this report. The audit makes no statements or warranties on the security of the code. On its own, it cannot be considered a sufficient assessment of the correctness of the contract. While we have conducted an analysis to the best of our ability, it is our recommendation for high-value contracts to commission several independent audits, a public bug bounty program, as well as continuous security auditing and monitoring through Dedaub Security Suite.


ABOUT DEDAUB

Dedaub offers significant security expertise combined with cutting-edge program analysis technology to secure some of the most prominent protocols in DeFi. The founders, as well as many of Dedaub's auditors, have a strong academic research background together with a real-world hacker mentality to secure code. Protocol blockchain developers hire us for our foundational analysis tools and deep expertise in program analysis, reverse engineering, DeFi exploits, cryptography and financial mathematics.