
Gizatech

AVS Security Audit

June 30, 2025


SUMMARY


ABSTRACT

Dedaub was commissioned to perform a security audit of Giza Protocol’s AVS (Actively Validated Service). The audit found a number of critical-, high-, and medium-severity vulnerabilities, as well as a few high-level protocol considerations for any user wanting to interact with the system, and one centralization issue.


BACKGROUND

Gizatech aims to provide users with a simplified user experience for investing and trading cryptocurrencies. The overall design leverages large language models (LLMs) to make trading decisions on behalf of users, based on their intent. To support this, Giza has developed a platform that includes both the LLM and an infrastructure layer which provides tools for the LLM to generate transactions.

This audit focused on the infrastructure layer, including the AVS to support tool execution (built on top of the Othentic Framework), as well as the on-chain tool registry, and the smart account plugins to allow the protocol to execute transactions on behalf of the users (built on top of ZeroDev).


SETTING & CAVEATS

This audit report covers the files listed below, from the (at the time private) giza-protocol repository at commit 341314f9ac64145eb73dd5d225da4089c282991a:

In total, two auditors spent 20 days each reviewing the codebase.

The audit’s main target is security threats, i.e., what the community would likely call "hacking", rather than the regular use of the protocol. Functional correctness (i.e., issues in "regular use") is a secondary consideration. Typically it can only be covered if we are provided with unambiguous (i.e., full-detail) specifications of the expected, correct behavior. In terms of functional correctness, we often trusted the code’s calculations and interactions in the absence of any other specification. Verifying the functional correctness of low-level calculations (including units, scaling, and quantities returned from external protocols) is generally done most effectively through thorough testing rather than human auditing.


PROTOCOL-LEVEL CONSIDERATIONS

P1

Optimistic User Op Submission

PROTOCOL-LEVEL-CONSIDERATION
acknowledged

Acknowledged

The team plans to implement ERC-7579 validators (a.k.a. session keys) to limit the set of actions that can be performed by tools/operators. Whilst this still allows operators to perform actions without the user's request, and to extract value from the user through MEV strategies, the team plans to counteract this through stake slashing.

In the current version of the protocol, validators can issue user-ops without going through any validation from peers or on-chain, as the GizaValidator does not check that user-ops have been properly attested to. This means that a malicious validator can execute any user-op on behalf of any user's smart account.

This issue is further exacerbated by C3, raised later in this report, as it significantly increases the scope of a malicious performer node's privileges.

It should also be noted that currently, the validation service essentially rubber stamps the result. Ideally, there would be some validation and record keeping to implement slashing. If such functionality is not implemented, adding the necessary checks to the GizaValidator contract would not improve security, as attesters would blindly attest to malicious or invalid user-ops.

P2

Malicious/Compromised Tools

PROTOCOL-LEVEL-CONSIDERATION
acknowledged

Tools are a core portion of the protocol and hold significant privileges. Primarily, tools can generate arbitrary user-ops. This means that malicious tools could cause significant financial damages to any users that invoke them (with this problem further exacerbated through C1 and C3).

Tools also allow for arbitrary code execution on the execution node. Whilst Docker does provide some isolation, the default configuration is not sufficiently locked down. We would suggest that operators run the containers with the minimal set of privileges, that limits be placed on the total compute allowed (CPU, memory, I/O), and that tools be vetted before being added to the Giza tool registry.


Comments:

The on-chain registry will eventually require all tools to request permissions upfront, with session keys enforcing these permissions. Once this is implemented, users will be able to opt in to tools they trust. The team also plans to audit tools for vulnerabilities or malicious intent and implement tighter permissions for the docker containers (which we have not seen at the time of writing).



VULNERABILITIES & FUNCTIONAL ISSUES

This section details issues affecting the functionality of the contract. Dedaub generally categorizes issues according to the following severities, but may also take other considerations into account such as impact or difficulty in exploitation:

CATEGORY
DESCRIPTION
CRITICAL
Can be profitably exploited by any knowledgeable third-party attacker to drain a portion of the system’s or users’ funds OR the contract does not function as intended and severe loss of funds may result.
HIGH
Third-party attackers or faulty functionality may block the system or cause the system or users to lose funds. Important system invariants can be violated.
MEDIUM
Examples:
  • User or system funds can be lost when third-party systems misbehave
  • DoS, under specific conditions
  • Part of the functionality becomes unusable due to a programming error
LOW
Examples:
  • Breaking important system invariants but without apparent consequences
  • Buggy functionality for trusted users where a workaround exists
  • Security issues which may manifest when the system evolves

Issue resolution includes “dismissed” or “acknowledged” (i.e., no action taken) by the client, or “resolved”, as verified by the auditors.


CRITICAL SEVERITY

C1

Request forgery

CRITICAL
open

In the current implementation, tasks are not authenticated. This means a malicious third-party (external to the network) could invoke tasks on behalf of any user, potentially leading to significant financial losses for targeted users.

Furthermore, when tasks are submitted to the Attestation Center, the performer, attesters, and aggregator are paid out for their work. Therefore, a malicious performer can broadcast bogus tasks and farm protocol fees.

It should be noted that the TaskRequest struct, which is used to parse the JSON body of the original request, does have an authentication field, which is also copied into the ExecutionRequest struct passed as a parameter to the execution service by the entrypoint node. However, neither node validates this field, resulting in insecure task execution.

C2

Privilege escalation through Docker container name

CRITICAL
open

Users can pass parameters into the docker run command through the container name. By defining two tools, tool xyz and tool --privileged xyz, an attacker can run tool xyz in privileged mode.

docker_tool_manager.rs
let image = name.trim_matches('"');
let version = version.trim_matches('"');

let network_arg = format!("--network={}", self.docker_network);
let image_version_arg = format!("{}:{}", image, version); // @audit - privilege escalation

let mut command_args = vec![
    "run".to_string(),
    "--rm".to_string(),
    network_arg,
    image_version_arg,
];

command_args.extend(args);

info!("Running docker command with args: {:?}", command_args);
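One mitigation is to validate the image name and version against a conservative allow-list before splicing them into the command line, rejecting anything that could be parsed as a flag. The sketch below is illustrative only (the helper names are ours, not the codebase's), and roughly follows Docker's lowercase image-reference grammar:

```rust
// Hypothetical sketch: reject image names/versions that could be parsed as
// `docker run` options (leading '-', whitespace, uppercase, etc.).
fn is_safe_image_component(s: &str) -> bool {
    !s.is_empty()
        && !s.starts_with('-')
        && s.chars().all(|c| {
            c.is_ascii_lowercase()
                || c.is_ascii_digit()
                || matches!(c, '.' | '_' | '-' | '/')
        })
}

fn build_image_arg(image: &str, version: &str) -> Option<String> {
    // Tags may also contain uppercase letters, but never a leading dash.
    let version_ok = !version.is_empty()
        && !version.starts_with('-')
        && version
            .chars()
            .all(|c| c.is_ascii_alphanumeric() || matches!(c, '.' | '_' | '-'));
    if is_safe_image_component(image) && version_ok {
        Some(format!("{}:{}", image, version))
    } else {
        None
    }
}

fn main() {
    assert!(build_image_arg("giza/tool-xyz", "1.0.0").is_some());
    // The C2 payload contains whitespace and a leading dash, so it is rejected.
    assert!(build_image_arg("--privileged xyz", "latest").is_none());
}
```

Validation of this kind should happen once, at registration time in the tool registry, as well as defensively at the point of use.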

C3

User-ops can execute arbitrary actions

CRITICAL
acknowledged

Acknowledged

As mentioned in P1, the team plans to implement ERC-7579 validators (a.k.a. session keys) for tools. This will ensure that the actions performed by tools are limited to the set of actions the tool has declared it will make use of. End users should be careful about which tools they grant permissions to.

By default, the Kernel Smart Account executes user-ops through the execute function which allows execution of any arbitrary smart-contract function. The smart accounts generated by create-giza-account.ts include a single plugin, the GizaValidator plugin, which does not impose any restrictions on what functions can be called by user-ops.

Tools should specify constrained actions that they should be allowed to perform and smart accounts should implement restrictions on user-ops ensuring no action outside these constraints can be executed.

Such restrictions should either be added into the GizaValidator contract, or a new plugin should be added to impose these constraints. The Kernel smart account supports numerous plugin types that could be used to implement these constraints, such as policies, actions or ERC-7579 validators.

C4

Tools can submit user-ops on behalf of users that did not invoke the tool

CRITICAL
open

Currently, no validation exists to ensure that the user-ops generated by a tool are executed on the smart account of the user that invoked the tool.

This could result in malicious users invoking either legitimate or malicious tools to generate user-ops affecting other users. It could also result in malfunctioning tools causing financial damages to third-party users.



HIGH SEVERITY

H1

Insecure leader election

HIGH
open

The current algorithm for leader election relies on the entrypoint node being trusted to generate a truly random number. A malicious entrypoint node, or a cabal of malicious nodes, could choose to delegate all execution tasks to themselves to reap the higher rewards issued to performers.

H2

Insecure Peer Authorisation

HIGH
open

The AVS implements an auth mechanism to validate that the peers connected to it are registered operators. The auth mechanism works by sending a message to the peer, which it needs to sign with the private key corresponding to its on-chain address.

There are two issues with the presented solution.

  • Authorisations can be relayed
  • Operators can run multiple nodes

The first issue arises from the format of the challenge mechanism: the message to be signed is othentic-challenge-<timestamp>. Because the peer ID is not included in this challenge, an attacker can relay a signed challenge from another peer to authorise themselves. The second issue arises from not tracking the uniqueness of operators, i.e., ensuring that each operator has only one registered peer.
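One way to make relayed signatures useless is to bind the challenge to the specific connection it was issued on, by including both peer IDs in the signed message. A minimal sketch (the function name and challenge layout are illustrative, not from the codebase):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical sketch: bind the auth challenge to both the challenger and
// the peer being challenged, so a signature produced for one connection
// cannot be relayed to authorise a different peer.
fn build_challenge(local_peer_id: &str, remote_peer_id: &str) -> String {
    let ts = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_millis();
    format!(
        "othentic-challenge-{}-{}-{}",
        local_peer_id, remote_peer_id, ts
    )
}

fn main() {
    let c = build_challenge("12D3KooWLocal", "12D3KooWRemote");
    assert!(c.starts_with("othentic-challenge-"));
    assert!(c.contains("12D3KooWRemote"));
}
```

On verification, the node would reconstruct the expected challenge for that connection and reject a signature over any other peer's challenge. Operator uniqueness is a separate fix: a map from operator address to its single registered peer ID.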

H3

blocked_peers has unbounded size

HIGH
open

In giza_p2p::libp2p_impl::auth_manager the blocked_peers set is unbounded and ever-expanding. The code does not implement any functionality to remove a peer from blocked_peers or any maximum to the size of the set. Hence, an attacker could, over an arbitrary period of time, populate the set with arbitrarily generated peer IDs to exhaust the process’ available memory.

Furthermore, the permanence of blocking provides a vector for an attacker to escalate temporary DoS vectors on a small number of nodes to long term DoS over a larger number of nodes. It could also lead to benevolent nodes being blocked during normal operation due to networking or timing issues.

We recommend entries in blocked_peers are evicted after fixed intervals and that a maximum is enforced ensuring the process memory cannot be exhausted. We would also recommend that blocking is also performed on the IP level, since this is harder to rotate than Peer IDs. This observation also applies to H4 below.

H4

Insufficient DoS protections

HIGH
open

The codebase implements no mitigations against DoS attacks against the entrypoint and execution_service nodes, which are exposed through libp2p. Hence, malicious peers could exhaust the node’s resources and potentially bring down the nodes. This resource details some DoS mitigations that should be applied to libp2p applications.

H5

TaskMetadata messages are unauthenticated

HIGH
open

Entrypoint nodes broadcast leader election results through the CustomMessage topic, sending a GizaMessage::TaskMetadata message. This message implements no authentication mechanism. Hence, an attacker could establish a malicious node and transmit false leader election results. This would mislead validation nodes, resulting in illegitimate Proofs of Task being accepted and validated, and legitimate proofs being rejected and potentially slashed.



MEDIUM SEVERITY

M1

Use of non-finalized block number

MEDIUM
open

The chainreader relies on eth_blockNumber to get the latest block. We would recommend allowing a distance from the head of the chain to allow the chain to re-org without needing to implement re-org handling code in the protocol.
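The fix amounts to a fixed offset from the head; the depth below is an illustrative constant, not a recommendation for a specific chain:

```rust
// Hypothetical sketch: read events only up to a confirmation depth behind
// the chain head, so shallow re-orgs cannot invalidate processed blocks.
const CONFIRMATION_DEPTH: u64 = 32; // illustrative; tune per chain

fn safe_block_number(latest: u64) -> u64 {
    latest.saturating_sub(CONFIRMATION_DEPTH)
}

fn main() {
    assert_eq!(safe_block_number(1_000_000), 999_968);
    assert_eq!(safe_block_number(10), 0); // early chain: clamp at genesis
}
```

Where the RPC provider supports it, querying the `finalized` block tag directly is an even simpler alternative.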

M2

Auth challenges can be removed by a third-party peer

MEDIUM
open

In giza_p2p::AuthManager::handle_auth_response the remove_challenge boolean determines whether the challenge is to be removed from pending_challenges. This boolean is set to true by default and is set to false in the case the peer sending the auth response is not the peer the challenge is associated with.

However, this case is only considered if drop_peer is not already true. The following validation block is not part of the same if statement as the other conditions and occurs before the peer ID check:

if !drop_peer && (operator_id != auth_response_decoded.operator_id) {
    fail_reason = "Operator ID mismatch".to_string();
    drop_peer = true;
}

Therefore, a malicious, unauthenticated peer could send crafted auth responses with mismatched operator IDs to remove arbitrary challenges from pending_challenges. This will lead to benevolent peers getting blocked when they eventually send the legitimate auth response.
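The intended control flow can be sketched as below, with hypothetical types standing in for the codebase's own (the real function carries considerably more state): the response must be bound to the challenge's peer before any check whose failure removes the challenge.

```rust
// Hypothetical sketch: verify the responding peer owns the pending challenge
// *before* any validation that removes the challenge or blocks a peer, so
// third parties cannot evict other peers' challenges.
struct Challenge {
    peer_id: String,
    operator_id: String,
}

fn handle_auth_response(
    challenge: &Challenge,
    responder_peer_id: &str,
    responder_operator_id: &str,
) -> Result<(), String> {
    // 1. Bind the response to the challenge's peer first. On mismatch the
    //    response is ignored: the challenge stays in pending_challenges and
    //    the legitimate peer is not penalised.
    if challenge.peer_id != responder_peer_id {
        return Err("response from unrelated peer; challenge kept".into());
    }
    // 2. Only now apply checks whose failure removes the challenge.
    if challenge.operator_id != responder_operator_id {
        return Err("Operator ID mismatch".into());
    }
    Ok(())
}

fn main() {
    let ch = Challenge { peer_id: "peerA".into(), operator_id: "op1".into() };
    // A third-party peer with a mismatched operator ID no longer evicts it.
    assert!(handle_auth_response(&ch, "peerB", "op999").is_err());
    assert!(handle_auth_response(&ch, "peerA", "op1").is_ok());
}
```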



LOW SEVERITY

L1

Incorrect usage of _packValidationData in GizaValidator

LOW
open

To set an unlimited expiration date, the expiration should be set to 0, not type(uint48).max.

L2

Incorrect SemVer Parsing

LOW
open

According to the SemVer specification, numbers cannot start with zeros except for the case of a single 0. The following code allows for leading zeros:

function isValidSemver(string memory semver) internal pure returns (bool) {
    ...
    for (uint256 i = 0; i < b.length; i++) {
        bytes1 char = b[i];

        if (char == ".") {
            ...
            // @audit - no leading zeros, except for zero case
        } else if (char >= "0" && char <= "9") {
            hasDigit = true;
        }
        ...
}

We would recommend representing the SemVer as an integer tuple, avoiding the need to parse the string.
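The recommended integer-tuple representation can be sketched as follows. This is an off-chain illustration in Rust of the rule the contract should enforce, and it handles only the core MAJOR.MINOR.PATCH triple (no pre-release or build metadata):

```rust
// Hypothetical sketch: parse "MAJOR.MINOR.PATCH" into an integer tuple,
// rejecting leading zeros as required by the SemVer specification.
fn parse_semver(s: &str) -> Option<(u64, u64, u64)> {
    let mut parts = s.split('.');
    let (a, b, c) = (parts.next()?, parts.next()?, parts.next()?);
    if parts.next().is_some() {
        return None; // more than three components
    }
    let num = |p: &str| -> Option<u64> {
        if p.is_empty() || (p.len() > 1 && p.starts_with('0')) {
            return None; // empty, or a leading zero such as "01"
        }
        p.parse().ok() // non-digit characters fail here
    };
    Some((num(a)?, num(b)?, num(c)?))
}

fn main() {
    assert_eq!(parse_semver("1.2.3"), Some((1, 2, 3)));
    assert_eq!(parse_semver("0.1.0"), Some((0, 1, 0)));
    assert_eq!(parse_semver("01.2.3"), None); // leading zero rejected
}
```

Storing the tuple on-chain also makes version comparison a cheap lexicographic check instead of string manipulation.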

L3

Incorrect object access in Swarm Handler

LOW
open

The handler for ResponseExecution calls othentic_auth_protocol instead of execution_protocol. As far as we can see, this has no impact, since the final method, send_response, just sends the response into the channel without accessing self; it should, however, be fixed regardless.

SwarmCommand::ResponseExecution(channel, response) => {
    self.swarm
        .behaviour_mut()
        .othentic_auth_protocol // @audit - should be `execution_protocol`
        .send_response(channel, response)
        .map_err(|e| {
            P2PError::ExecutionResponseError(format!(
                "Failed to send response: {:?}",
                e
            ))
        })?;
    Ok(())
}

L4

Use of String type for Addresses

LOW
open

Generally, addresses should always be represented internally as bytes, and compared as bytes, not strings (e.g., giza_crypto::signature::ecdsa.rs).

L5

Race Condition on Challenge Generation

LOW
open

The auth_manager relies on a challenge mechanism where peers need to sign messages to authenticate themselves. These challenges are recorded in auth_manager alongside metadata about the challenge, which is used to validate the auth challenge response.

The current format of the challenge string, othentic-challenge-<timestamp>, is potentially vulnerable to a race condition if two challenge strings are generated in the same millisecond. This would result in the first peer being blocked by the node despite not performing any malicious behaviour.
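Uniqueness can be guaranteed regardless of timestamp resolution by appending a monotonic nonce to each challenge. A minimal sketch (names are illustrative):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical sketch: a process-wide counter makes every generated
// challenge distinct, even when two are created in the same millisecond.
static CHALLENGE_NONCE: AtomicU64 = AtomicU64::new(0);

fn next_challenge(timestamp_ms: u128) -> String {
    let nonce = CHALLENGE_NONCE.fetch_add(1, Ordering::Relaxed);
    format!("othentic-challenge-{}-{}", timestamp_ms, nonce)
}

fn main() {
    // Two challenges in the same millisecond no longer collide.
    assert_ne!(next_challenge(1_000), next_challenge(1_000));
}
```

A random nonce would work equally well and additionally makes challenges unpredictable.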

L6

Auth Manager hardcodes Chain for ChainReader

LOW
info

The auth manager lacks the ability to parameterise the chain reader and defaults to Holesky. If this is left as is, then the auth manager is trivially manipulable.

pub async fn new(
    command_sender: mpsc::Sender<SwarmCommand>,
    keystore_path: String,
    keystore_password: String,
) -> Result<AuthManager, P2PError> {
    ...

    // @audit - need parameterization here
    let chain_reader = ChainReader::new(Chain::Holesky);
    let chain_reader_wrapper = ChainReaderWrapper { chain_reader };

    ...
}


CENTRALIZATION ISSUES

It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol’s owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.)

N1

Manual Slashing

CENTRALIZATION
open

When joining any AVS, operators should be careful of the slashing conditions, as they risk losing funds. During the initial deployment, Giza intends to operate the slashing mechanism manually, rather than through a permissionless, provable on-chain mechanism.



OTHER / ADVISORY ISSUES

This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them.

A1

UTF-8 Collisions

ADVISORY
info

Unlike Rust, Solidity does not ensure that string types are valid UTF-8 strings. Whenever dealing with a string in Solidity, it is internally treated as opaque bytes. When processing this information off-chain, you need to be aware of this: if you do not handle the edge case of an invalid UTF-8 string (because you are not expecting it), the process could crash.

Fortunately, the alloy library handles this silently under the hood, by replacing invalid sections with the Unicode replacement character (U+FFFD, �). This does have the result that two different byte strings can now map to the same string. We currently cannot see an issue that is a direct result of this, but you should be aware of the semantics here.

A2

Use of DefaultHasher for MessageID

ADVISORY
info

The current version of Rust uses SipHash as the backend for DefaultHasher, with the hash key hardcoded to (0, 0). This makes the resulting hash values predictable, which is beneficial for consistent message IDs but is somewhat unconventional for SipHash, which is typically used for its resistance to hash-flooding attacks. We would advise using a more secure hash function such as sha3.
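The predictability can be demonstrated with the standard library alone: two independently created DefaultHasher instances always agree on the same input, confirming the absence of a random seed (the function name below is ours, illustrating how message IDs are derived):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Demonstration: DefaultHasher is created with a fixed key, so the same
// input hashes to the same value across independent hasher instances --
// predictable to an attacker crafting colliding message IDs.
fn message_id(payload: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    payload.hash(&mut h);
    h.finish()
}

fn main() {
    // Two freshly created hashers agree: there is no per-instance seed.
    assert_eq!(message_id(b"gossip payload"), message_id(b"gossip payload"));
}
```

A cryptographic hash such as SHA3 (e.g., via the sha3 crate) keeps message IDs consistent across nodes while making collisions computationally infeasible.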

A3

Redundant Storage Maps in GizaToolRegistry

ADVISORY
info

The toolExists and versionExists storage maps don't need to exist. Given that you disallow empty tool names, versions and metadataCids, you can use zero checks on the toolVersions and toolRegistry to determine if things exist.

function getToolMetadata(string memory tool_name, string memory tool_version)
    external
    view
    returns (string memory metadataCid, bool isDeprecated)
{
    if (!toolExists[tool_name]) revert ToolNotFound(tool_name);
    if (!versionExists[tool_name][tool_version]) revert VersionNotFound(tool_name, tool_version);

    // @audit the above checks are redundant - just check if the metadataCid is empty
    VersionInfo memory info = toolRegistry[tool_name][tool_version];
    return (info.metadataCid, info.isDeprecated);
}

A4

Leader election is not weighted by stake

ADVISORY
info

Currently, all performer nodes have an equal chance of being elected leader. This means an operator wishing to provide more compute to the network in exchange for more protocol rewards would need to set up two or more performer nodes, each with the minimum allowable stake. This introduces overheads and does not use the operator’s resources efficiently.

Leader election could instead be weighted by stake. This would allow operators with sufficient capital to stake larger amounts, perform more tasks and earn higher rewards without introducing additional overhead from running multiple copies of the node software.
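Assuming an unbiased random draw is available (itself the subject of H1), stake weighting reduces to a prefix-sum walk over the operator set. A sketch with illustrative names:

```rust
// Hypothetical sketch: elect a leader with probability proportional to
// stake, given an unbiased draw in [0, total_stake).
fn weighted_leader<'a>(stakes: &[(&'a str, u128)], draw: u128) -> Option<&'a str> {
    let total: u128 = stakes.iter().map(|(_, s)| *s).sum();
    if total == 0 || draw >= total {
        return None; // no stake, or draw out of range
    }
    let mut acc = 0u128;
    for &(op, stake) in stakes {
        acc += stake;
        if draw < acc {
            return Some(op); // draw fell inside this operator's interval
        }
    }
    None
}

fn main() {
    let stakes = [("alice", 300u128), ("bob", 100u128)];
    // Draws 0..299 elect alice, 300..399 elect bob: 3x stake -> 3x weight.
    assert_eq!(weighted_leader(&stakes, 150), Some("alice"));
    assert_eq!(weighted_leader(&stakes, 350), Some("bob"));
}
```

An operator with triple the stake is then elected three times as often, removing the incentive to split stake across multiple minimum-stake nodes.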

A5

Optimisation of get_active_operators

ADVISORY
info

The chainreader::reader::ChainReader::get_active_operators function currently polls all OperatorRegisteredToNetwork events and performs an RPC call for each retrieved event to determine whether the operator is still active. Since the beginning of the audit, Othentic has released an update, with the latest AttestationCenter contract offering a getActiveOperatorsDetails function that could significantly simplify and optimise this implementation.

A6

handle_auth_response always returns true

ADVISORY
info

In giza_p2p::libp2p_impl::auth_manager::handle_auth_response, the return type can be simplified to Result<(), P2PError>. The function can return Ok values at two points: the first is hard-coded to always return true, and the second returns !drop_peer. However, if drop_peer is true, the function always returns an error from the preceding if branch. Hence, an Ok value can only be returned when drop_peer is false, so the returned bool is always true.

A7

Duplicate tracking of gossipsub peers

ADVISORY
info

The struct giza_p2p::libp2p_impl::libp2p_network::TopicInfo contains the peers field which tracks peers that are subscribed to a particular topic. This tracking is duplicated as libp2p_gossipsub tracks subscribed peers internally.

The uses of this field can be replaced with the mesh_peers(topic_hash) function exposed by the gossipsub behaviour.

A8

Typos

ADVISORY
info

The repository contains the following naming errors:

  • validation_service::node::validation_service.rs::121: node_pipe -> node_state
  • validation_service::node::node_state::peding_tasks -> pending_tasks
  • giza_p2p::libp2p_impl::libp2p_network.rs::263: handle_indentify_event -> handle_identify_event
  • giza_tool_registry::registry::giza_tool_registry_worker.rs::129: commnads_rx -> commands_rx

A9

Condition is always true in GizaToolRegistryWorker::initialize

ADVISORY
info

In giza_tool_registry::registry::giza_tool_registry_worker::initialize, the condition !self.config.event_polling_interval.is_zero() in the second arm of the select is always true. If self.config.event_polling_interval is 0 the call to interval in the first line of the function would panic.

A10

Docker image files can be cleaned immediately after install

ADVISORY
info

Currently, a downloaded Docker image file is retained until the tool is removed, at which point the GizaToolRegistryWorker deletes the image from Docker and deletes the file from the docker_images_dir.

When Docker imports a tool using docker load, the image is copied internally and the file it was imported from is no longer required. Hence, GizaToolRegistryWorker::clean_image_files can be called immediately after tool loading completes; the file does not need to be retained, even while the tool is in use.



DISCLAIMER

The audited contracts have been analyzed using automated techniques and extensive human inspection in accordance with state-of-the-art practices as of the date of this report. The audit makes no statements or warranties on the security of the code. On its own, it cannot be considered a sufficient assessment of the correctness of the contract. While we have conducted an analysis to the best of our ability, it is our recommendation for high-value contracts to commission several independent audits, a public bug bounty program, as well as continuous security auditing and monitoring through Dedaub Security Suite.


ABOUT DEDAUB

Dedaub offers significant security expertise combined with cutting-edge program analysis technology to secure some of the most prominent protocols in DeFi. The founders, as well as many of Dedaub's auditors, have a strong academic research background together with a real-world hacker mentality to secure code. Protocol blockchain developers hire us for our foundational analysis tools and deep expertise in program analysis, reverse engineering, DeFi exploits, cryptography and financial mathematics.