A Discussion on the Immunefi Firewall: Design, Use Cases, and Real-World DeFi Security
DeFi Needs Firewalls: From Program Analysis to Real-World Defense
A conversation between Mitchell Amador (CEO, Immunefi) and Yannis Smaragdakis (Co-founder, Dedaub)
Introduction
In this conversation, Dedaub co-founder Yannis Smaragdakis and Mitchell Amador (CEO of Immunefi) discuss the evolution of smart contract security, drawing on decades of program analysis research and hands-on experience securing real-world DeFi protocols.
The discussion spans Yannis’s academic background, the origins of Dedaub’s decompilation and analysis technology, the limits of audits and formal verification, and why on-chain firewalls may be the missing layer in DeFi’s security stack. The conversation closes with reflections on AI, developer productivity, and staying effective at the frontier of complex systems.
From Academic Research to Web3 Security
Mitchell Amador:
Welcome everyone. Today I’m joined by Yannis Smaragdakis, co-founder of Dedaub. Yannis, I’ve known you for years now, and you’ve had a long and distinguished career before crypto. To start, who is Yannis Smaragdakis?
Yannis Smaragdakis:
I’ve been a researcher and professor for about 25 years, primarily working in program analysis. Around eight years ago, I started engaging seriously with smart contracts and Web3. Blockchain was an exciting domain, but what really drew me in was the challenge of analyzing smart contracts at scale.
Over the past five years, this has become my main focus, both academically and through Dedaub, which I co-founded with Neville Grech. We set out to apply decades of program analysis research to the realities of blockchain security.
A Career in Program Analysis
Mitchell:
You’re being modest. You had a major academic career before this. Can you give some context?
Yannis:
I spent over a decade teaching in the U.S. at the University of Oregon and the University of Massachusetts, where I also completed my PhD. For the past 15 years, I’ve been at the University of Athens.
I’ve been fortunate in terms of research recognition, including receiving ERC grants, which are among the most competitive research grants in Europe. That said, when moving into blockchain security, very little of that recognition carries over. You effectively start from scratch and have to prove that your ideas work in the real world.
Why Decompilation Became the Foundation
Mitchell:
Your approach to security has always felt different. You came in as researchers first. That led to tooling like the Dedaub Decompiler, which many consider best-in-class. How did that journey start?
Yannis:
Our core approach has always been automation. We want algorithms that understand the essence of a smart contract and reason about all possible executions.
Decompilation was a necessary first step. Most deployed contracts don’t have verified source code, especially attack contracts. You need to lift bytecode into a human-understandable representation before you can analyze it.
We started working on decompilation around 2018. The Dedaub Decompiler has been running continuously for over seven years now and has evolved significantly. It’s not just a decompiler; it’s effectively a static analyzer that uses context-sensitive program analysis techniques to model all possible executions.
Today, we have over 10,000 registered users, with hundreds using it daily.
Successes—and the Limits of Tooling
Mitchell:
You’ve also used this tooling to find real vulnerabilities.
Yannis:
Yes. We’ve discovered around 13–14 vulnerabilities that resulted in bug bounties, most of them found automatically before any human code review. Those bounties totaled over $3 million.
That said, I don’t think anyone has yet built a massively scalable business purely on blockchain analysis tooling. The developer market is still small—tens of thousands, not millions—and that limits what tooling alone can support economically.
This isn’t unique to us. Tooling across crypto has struggled to achieve reliable, scalable adoption.
Would You Do It Again?
Mitchell:
If you could go back seven years, would you do anything differently?
Yannis:
Honestly, probably not. Coming from research, I’m drawn to technically deep problems. Was it the most business-optimal choice? Maybe not. But building something innovative often requires following what genuinely interests you.
I don’t have a clear alternative path that I can say would have been better.
“Trust Me Bro” Security and Its Limits
Mitchell:
Let’s talk about how teams think about security today. What do you think when a team says, “Trust us, our code is secure”?
Yannis:
I respect the intent. High-quality code and rigorous audits are essential. But that mindset is dangerous if it’s the only line of defense.
We’ve seen repeatedly that a single small bug—a rounding error, a missed edge case—can collapse an entire financial system and lead to losses in the tens or hundreds of millions.
Even top-tier teams following state-of-the-art practices, with multiple audits, have been hacked or narrowly avoided catastrophe.
Defense in Depth: The Firewall Analogy
Mitchell:
So what’s the right mental model?
Yannis:
In traditional software, we never rely solely on code correctness. You run servers behind firewalls. You filter traffic. You monitor patterns of interaction.
Web3 lacks most of these safeguards by design. That gives us permissionless composability—but also leaves protocols exposed.
The key question is whether we can introduce additional safeguards that don’t replace code quality but complement it. That’s where the firewall concept comes in.
What Is the Immunefi Firewall?
Editor’s note: The following section discusses the Immunefi Firewall, an on-chain defense system built by Dedaub in partnership with Immunefi.
Yannis:
The Immunefi Firewall is an on-chain firewall consisting of smart contracts and off-chain analysis. A protected protocol asks the firewall, on every interaction, whether a call should be allowed.
It’s highly configurable. Protocols can choose to run it permissively most of the time and switch to stricter modes when needed. Policies can be based on caller identity, protocol-specific invariants, transaction simulation, or address reputation.
A protocol might normally allow everything, but if an invariant violation is detected during testing or monitoring, it can temporarily restrict access while investigating.
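Editor’s note: As a rough illustration of the mode-switching idea described above, the sketch below shows a hypothetical policy contract with a permissive default and a restricted mode that a guardian can enable while an incident is investigated. The contract name, the Mode values, and the isAllowed signature are assumptions for illustration only, not the actual Immunefi Firewall API.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Illustrative policy contract only; the actual Immunefi Firewall exposes
/// its own interfaces, modes, and off-chain components.
contract SketchFirewall {
    enum Mode { Permissive, Restricted }

    Mode public mode = Mode.Permissive;            // allow everything by default
    address public guardian;                       // party allowed to tighten policy
    mapping(address => bool) public allowedCaller; // example caller-identity policy

    constructor() {
        guardian = msg.sender;
    }

    /// Monitoring (e.g. an off-chain invariant check) can flip to Restricted
    /// while an incident is investigated, then back to Permissive afterwards.
    function setMode(Mode newMode) external {
        require(msg.sender == guardian, "not guardian");
        mode = newMode;
    }

    function setAllowedCaller(address caller, bool allowed) external {
        require(msg.sender == guardian, "not guardian");
        allowedCaller[caller] = allowed;
    }

    /// Protected protocols consult this on every interaction.
    function isAllowed(address caller, bytes4 /*selector*/) external view returns (bool) {
        if (mode == Mode.Permissive) return true;
        // In restricted mode, let EOAs through and require contracts to be allow-listed.
        return caller.code.length == 0 || allowedCaller[caller];
    }
}
```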
Caller Vetting: The Key Innovation
Yannis:
The most novel feature is caller vetting. Instead of reviewing every transaction, we vet the code of contracts that interact with the protected protocol.
If a smart contract wants to integrate, its code is analyzed automatically—and in rare cases manually—to determine whether it’s a legitimate integration or a potential attack vector.
This preserves DeFi composability while adding a powerful security layer. The approach leans heavily on automated program analysis at scale, which is probably why it hasn’t been done effectively before.
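Editor’s note: The sketch below illustrates the caller-vetting idea in simplified form: EOAs pass through unfiltered, while contract callers must have been marked as vetted by an off-chain analysis pipeline. The names and the registry design are hypothetical, not Dedaub’s implementation.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Illustrative sketch of caller vetting; names and flow are assumptions,
/// not the actual Immunefi Firewall implementation.
contract VettingRegistry {
    address public analyzer;                 // key used by the off-chain analysis service
    mapping(address => bool) public vetted;  // contracts whose code passed analysis

    constructor(address _analyzer) {
        analyzer = _analyzer;
    }

    /// The off-chain pipeline analyzes an integrating contract's bytecode
    /// (automatically, occasionally manually) and records the verdict here.
    function setVetted(address integrator, bool ok) external {
        require(msg.sender == analyzer, "not analyzer");
        vetted[integrator] = ok;
    }

    /// EOAs carry no code and are not filtered; contract callers must have
    /// been vetted before interacting with the protected protocol.
    function callerAllowed(address caller) external view returns (bool) {
        if (caller.code.length == 0) return true; // externally owned account
        return vetted[caller];                    // contract: require prior code analysis
    }
}
```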
Adoption Challenges and Trade-offs
Mitchell:
What are the biggest hurdles to adoption?
Yannis:
There are real trade-offs. Firewalls can make mistakes. They might temporarily block legitimate actors or fail to stop a sophisticated attack.
But engineering is always about minimizing risk, not eliminating it. Once a protocol accepts that trade-off, integration is straightforward—especially for new deployments. You add a modifier to entry points and manage policies through the UI.
The main hurdle is the design decision, not the implementation.
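Editor’s note: A minimal integration sketch, assuming a hypothetical IFirewall interface: the protocol adds one modifier to its entry points and delegates the allow/deny decision to the firewall contract. The real Immunefi Firewall integration and policy-management UI will differ.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Hypothetical interface for illustration only.
interface IFirewall {
    function isAllowed(address caller, bytes4 selector) external view returns (bool);
}

contract ProtectedVault {
    IFirewall public immutable firewall;

    constructor(IFirewall _firewall) {
        firewall = _firewall;
    }

    /// The code change on the protocol side is essentially this modifier,
    /// applied to each external entry point.
    modifier firewallProtected() {
        require(
            firewall.isAllowed(msg.sender, msg.sig),
            "Firewall: blocked by current policy"
        );
        _;
    }

    function deposit(uint256 amount) external firewallProtected {
        // ... normal protocol logic ...
    }

    function withdraw(uint256 amount) external firewallProtected {
        // ... normal protocol logic ...
    }
}
```

In this shape, tightening or loosening policy never requires redeploying the protocol itself; only the firewall’s configuration changes.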
Censorship Resistance and Practical Reality
Mitchell:
Some people worry this conflicts with censorship resistance.
Yannis:
We should be precise. Most interactions, those coming from externally owned accounts (EOAs) and normal users, are not filtered at all. Scrutiny applies mainly to automated contract-to-contract interactions.
And vetting is largely automatic. If legitimate code is blocked, there’s an appeal path. Protocols can always choose to be more permissive.
The question is whether it’s truly essential for every protocol to accept completely unvetted bots at all times. In many cases, a small delay is an acceptable price for safety.
AI, Productivity, and What Stays Hard
Mitchell:
Let’s shift gears. How do you see AI changing software development?
Yannis:
AI already accelerates many tasks. Prototyping, porting systems, and exploring unfamiliar technologies can be dramatically faster.
But AI still struggles with large, complex systems. Interestingly, giving models more context often makes results worse, not better. They fail to abstract correctly at scale.
What remains hard—and valuable—are deep system understanding, algorithmic reasoning, mathematical insight, and making design decisions under complexity.
Staying Relevant at the Frontier
Mitchell:
Any advice for people trying to stay at the edge?
Yannis:
Adopt a playful mindset. Not everything should feel like forced learning. Exploration, experimentation, and curiosity lead to the best insights—and they’re far less stressful.
Technology keeps changing. You can’t chase everything. But if you stay curious and playful, you’ll keep learning what matters.