The World of ‘Trust-Minimized’ Middleware: Consensus Verification & Bridging



Over the past few months, some crypto chads have openly debated the meaning behind “trust-minimization”, a term slapped onto almost all of the new and flashy middleware/infra that ships with optimistic or ZKP-based security. The discussions around the security assumptions behind trust-minimized bridges have spurred further discourse toward reaching a consensus on what that term really means.

Comment by Mustafa, the trust-minimization engineer at Celestia;))

Consensus on the term “trust-minimization”

Trust is an assumption that someone’s behaviour will hold true to expectation. Trust-minimization, in the context of blockchains, refers to a general set of approaches for reducing the set of trusted components so that the outcome is more predictable. There are different ways to achieve trust-minimization.

Vitalik’s trust model

Dissecting the approaches for trust-minimization

There are ongoing discussions and innovations that leverage both few-of-N and 1-of-N trust to further bolster on-chain security. Here, I will discuss some notable examples for each trust model.

  • Random sampling of validators’ signatures for the Casper FFG vote to approximate verification of the entire Ethereum consensus. The goal here is to verify the full consensus of Ethereum without actually verifying the signatures of the entire validator set. The general random sampling problem falls under few-of-N trust, as it requires only a small subset of honest entities to correctly estimate the behaviour of a network that relies on an honest supermajority. The Ethereum light client also takes this approach via the sync committee, and I will discuss its underlying security assumptions in more detail later on; this type of few-of-N trust is NOT trust-minimized.
  • The ZK approach allows any prover to generate a validity proof for a statement, such as a consensus state. At its core, this approach relies on the correctness of a mathematical proof, without economic incentives as an intermediary. Unlike some optimistic models, the incentive is paid out to provers to guarantee liveness, not security. This maximally minimizes the attack surface for system collusion, leaving liveness as the only major attack vector (and liveness is trivial, since the system needs just one incentivized prover), truly enabling 1-of-N trust for security. One caveat on the trust assumption is the honesty assumption during the trusted setup, which is required to generate the common reference string (CRS), the common knowledge shared between provers and verifiers that makes the proof non-interactive. This matters because a broken CRS lets a prover learn the “Tau” (the secret hidden in the randomness) and forge valid-looking proofs at will. This is less of a problem with STARKs and some SNARKs (Spartan, Halo2, etc.), as they rely on publicly verifiable randomness to generate the shared knowledge (public parameters). In either case, I believe ZKP is the end goal of 1-of-N trust, and more and more trust-minimized middleware leveraging it is entering this space.
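To make the “broken CRS” caveat concrete, here is a deliberately simplified toy (NOT a real SNARK: commitments are bare field evaluations at the secret point, with none of the group-based hiding a real scheme uses). It shows only the core failure mode: anyone who learns tau can construct a second, different polynomial with the same commitment, destroying binding.

```python
# Toy model of "toxic waste": a commitment here is just P(tau) mod p.
# If tau leaks, a forger builds Q(x) = P(x) + (x - tau) * R(x), which
# differs from P as a polynomial yet commits to the identical value.

P_MOD = 2**61 - 1  # a Mersenne prime standing in for the field modulus

def commit(coeffs, tau):
    """Evaluate the polynomial at tau via Horner's rule -- the 'commitment'."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * tau + c) % P_MOD
    return acc

def forge(coeffs, tau, r_coeffs):
    """With tau known, add (x - tau) * R(x): the commitment is unchanged."""
    prod = [0] * (len(r_coeffs) + 1)
    for i, r in enumerate(r_coeffs):
        prod[i + 1] = (prod[i + 1] + r) % P_MOD   # x * R(x) term
        prod[i] = (prod[i] - tau * r) % P_MOD     # -tau * R(x) term
    out = [0] * max(len(coeffs), len(prod))
    for i, c in enumerate(coeffs):
        out[i] = c
    for i, c in enumerate(prod):
        out[i] = (out[i] + c) % P_MOD
    return out

tau = 123456789            # the secret a broken ceremony would leak
P = [5, 7, 11]             # honest polynomial 5 + 7x + 11x^2
Q = forge(P, tau, [3, 2])  # a different polynomial...
assert Q != P
assert commit(Q, tau) == commit(P, tau)  # ...with the same commitment
```

Publicly verifiable randomness (as in STARKs and transparent SNARKs) removes exactly this single point of failure: there is no tau for anyone to know.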

Verifying the Ethereum state and data

Most of the ZK middleware to date uses ZKP for succinct verification of Ethereum state/data across different contexts, without actually using the zero-knowledge property. Bridging/oracles (Succinct, Electron Labs, Brevis by Celer, Polymer Labs, …) and co-processing/storage proofs (Axiom, Relic, Hyper Oracle, Herodotus, Lagrange, …) are some of the notable use cases. While each application uses ZKP in different ways, all of them are trying to do a somewhat similar thing: verifying the Ethereum state and data (some include other chains too, of course). Let’s cut into the topic starting with bridging, one of the most notoriously debated pieces of middleware.

  1. Superior security: Abstracting away the signature verification should come with little to no additional security overhead (remaining assumptions may stem from the trusted setup for SNARKs). ZKP achieves that by placing the trust in math, not humans (OP is this).
Overview of Telepathy
Overview of Electron Lab’s IBC on Ethereum
Proof Generation Time vs Signature per Proof
  1. Perform STARK-based proof recursion to aggregate proofs into a single proof.
  2. Use hardware acceleration.
  3. Reduce the on-chain footprint via a SNARK wrapper.
Scheme for client end proving
Overview of zkMint
Light clients for modular chains

zkBridge security = Light client security != underlying chain security

Security of light clients

As mentioned, light clients on many chains have full validator-set access, but Ethereum light clients and a few others do not, due to the “heavy” compute overhead of verifying the signatures of more than half a million validators. Succinct Labs has previously analyzed the likelihood of a colluding sync committee. They suggested that the sync committee has a nearly 0% (1.72*10^-34) chance of collusion, under the conditions that ⅓ of the validators are honest and that 90% of the sync committee must agree for a valid block result. Although the sync committee “seems” to offer few-of-N trust by that analysis, I believe it does not grant trust-minimized security, and the analysis is inconclusive, for the following reasons. To start, check out the following statistics on Ethereum full nodes.
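That headline figure can be roughly reproduced with a back-of-the-envelope binomial model. The sketch below assumes each of the 512 committee seats is an independent, uniform draw from a validator set in which only ⅓ are honest; the real selection mechanics differ, so treat the output as an order-of-magnitude estimate rather than Succinct’s exact number.

```python
from math import comb

# Probability that >= 90% of a 512-member sync committee is dishonest,
# modeling each seat as an independent draw from a validator set that
# is 2/3 dishonest (i.e. only 1/3 honest, the article's condition).
# A simplified model -- it ignores the real selection mechanics, but it
# lands in the same vanishing ballpark as the quoted 1.72e-34 figure.

N = 512                    # sync committee size
P_DISHONEST = 2 / 3        # only 1/3 of validators assumed honest
THRESHOLD = int(0.9 * N)   # 90% of the committee must sign a bad header

p_collusion = sum(
    comb(N, k) * P_DISHONEST**k * (1 - P_DISHONEST)**(N - k)
    for k in range(THRESHOLD, N + 1)
)
print(f"P(>= 90% dishonest committee) ~ {p_collusion:.3e}")
```

As the article argues next, this number only bounds *randomly occurring* collusion; it says nothing about deliberate coordination after the committee is known.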

Full nodes on Ethereum according to
  1. Coordination risk without time constraints: Probability analysis is an effective measure of the likelihood of system collusion when collusion via private coordination is improbable, for example due to time constraints. However, since the sync committee rotation happens only every ~27 hours, it is well within the realm of possibility for sync validators to coordinate and collude after rotation, which renders the probability analysis ineffective in practice.
  2. Weak replicability of the consensus: A few-of-N trust system only needs to trust a few parties to guarantee that the system is behaving as expected. Such a guarantee comes from the replicability of the system’s state, where the probability of false replication decreases exponentially with the number of samples S. For example, data availability sampling (DAS) allows replication of the available data via erasure-correcting Reed–Solomon codes, with a probability of false replication that decays exponentially as S increases (at a rate governed by the expansion ratio R = N/K, where N is the bit-length of the Reed–Solomon-encoded data and K is the bit-length of the original data). With a high enough S, which depends on the configuration of the erasure coding, the original data can be fully reconstructed pseudo-deterministically. Such isn’t the case for the sync committee; it is at best a good approximation, unless each validator also enforces an “erasure coding”-like behaviour, holding attestations of N′ other validators for redundancy, which would enable replication of the consensus via random sampling.
  3. Incentive alignment: While there is a sync committee reward for the participating validators, there are no penalties for misbehaviour. Furthermore, the sync committee is picked randomly, so there is no consensus on, and no added incentive for, safeguarding the selection of the committee.
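The exponential decay invoked in point 2 is easy to sketch. Assuming an adversary must withhold at least half of the erasure-coded shares to prevent reconstruction, each uniformly random sample catches a missing share with probability at least ½, so the chance of a light node being fooled falls geometrically with the number of samples S:

```python
# Data availability sampling bound: if >= half of the extended shares
# must be withheld to make data unrecoverable, each uniform random
# sample hits a missing share with probability >= 1/2, so the chance
# of being fooled after S samples is at most (1/2)^S.

def p_fooled(num_samples: int, withheld_fraction: float = 0.5) -> float:
    """Upper bound on P(every sample lands on an available share)."""
    return (1.0 - withheld_fraction) ** num_samples

for s in (10, 20, 30):
    print(f"S={s:2d}  P(fooled) <= {p_fooled(s):.2e}")
```

Nothing analogous holds for the sync committee: sampling more committee signatures does not drive the failure probability of the *consensus approximation* toward zero in the same way.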

Verifying the full Ethereum Consensus

To fully verify the consensus, we need to verify the signatures of all the attesting validators of a block, roughly 600K signatures as of May 2023. Naively generating proofs for and verifying all of those signatures would be prohibitively expensive, and the verification process itself also needs to be cryptographically verifiable to achieve minimal trust overhead (ideally 1-of-N security). Currently, there are four approaches that could be explored to achieve trust-minimized full consensus verification, all of which are variations of ZK light clients:

  1. Recursive proof composition approach
  2. Random sampling approach (extension of the sync committee)
  3. Commitment-based approach
  4. Folding-based approach
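Recursive aggregation amortizes the roughly 600K signature checks through a tree of proofs. Here is a rough sizing sketch; `SIGS_PER_LEAF` and `ARITY` are entirely hypothetical parameters for illustration, not any team’s published configuration:

```python
from math import ceil, log

# Rough sizing of a recursive aggregation tree over ~600K BLS signature
# verifications. Hypothetical parameters: each leaf proof verifies
# SIGS_PER_LEAF signatures; each inner proof verifies ARITY child proofs.

TOTAL_SIGS = 600_000
SIGS_PER_LEAF = 1024   # assumption: signatures batched into one leaf proof
ARITY = 8              # assumption: child proofs aggregated per recursion

leaves = ceil(TOTAL_SIGS / SIGS_PER_LEAF)
depth = ceil(log(leaves, ARITY))  # recursion levels above the leaves
print(f"{leaves} leaf proofs, {depth} recursion levels -> 1 final proof")
```

The leaf proofs are embarrassingly parallel across prover machines; only the tree depth contributes to end-to-end latency, which is why hardware acceleration and a cheap final SNARK wrapper matter so much in practice.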
Recursive proof scheme for Plonky2 STARK
  • Slower proving comes with a smaller proof
Celer’s benchmark for proof generation performance across current proving frameworks
A diagram of the prover-verifier flow
Scheme of folding two instances
  1. Parallelize the folding steps.
Folding-based proof scheme
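As a toy illustration of the folding idea sketched above, assume a purely linear constraint system: two satisfying witnesses fold into one via a random challenge, so the work per fold stays constant. (Real folding schemes such as Nova fold quadratic R1CS constraints and must absorb a cross term via a commitment, which this sketch omits entirely.)

```python
import random

# Toy folding: two satisfying witnesses for the same LINEAR system
# A·z = 0 over a tiny prime field fold into one witness via a random
# challenge r, as z = z1 + r*z2. Linearity makes the fold exact here.

P_MOD = 101  # tiny prime field for readability

A = [[1, P_MOD - 1, 0],   # constraint: z[0] - z[1] = 0
     [0, 1, P_MOD - 1]]   # constraint: z[1] - z[2] = 0

def satisfies(z):
    return all(sum(a * x for a, x in zip(row, z)) % P_MOD == 0 for row in A)

def fold(z1, z2, r):
    return [(x + r * y) % P_MOD for x, y in zip(z1, z2)]

z1, z2 = [4, 4, 4], [9, 9, 9]    # two independent satisfying witnesses
r = random.randrange(1, P_MOD)   # verifier's random challenge
assert satisfies(z1) and satisfies(z2)
assert satisfies(fold(z1, z2, r))  # one folded witness, one check
```

Folding N instances this way pairwise gives a balanced tree of cheap folds, which is exactly the structure the parallelization point above exploits.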

Cost of trust-minimized ZK light client

I have discussed four non-mutually exclusive options for building a ZK light client that performs full consensus verification. Most of them provide more trust-minimized security than existing Ethereum light clients. However, it is important to note their drawbacks.

  1. High compute cost: Proof generation adds compute cost, and depending on the proof scheme this could be prohibitively expensive; running many provers in parallel is a big expense. As mentioned in the previous point, adding redundancy to the prover set incurs further compute cost.

Consensus verification != State Validity

Besides consensus verification, there is another important aspect of the blockchain worth mentioning in the context of bridging: state validity, the idea that transactions and their blocks are valid. Consensus light clients, including ZK light clients, infer state validity by checking that the majority of validators agreed to the proposed block, but they have no direct view of the block’s validity. This nuance matters in some edge cases. Imagine a scenario where Ethereum is 51%-attacked and invalid blocks are proposed. dApps relying only on ZK light clients will not catch any sign of the attack, because the newly proposed blocks still receive majority attestation. However, dApps that also monitor state validity, potentially via a full node, will be able to detect and reject the invalid blocks and prevent malicious transactions.

Achieving this with a ZK light client is impractical, given the amount of block data that must be downloaded to re-execute transactions; doing so essentially makes it a full node. One possible workaround for accessing state validity in a trust-minimized manner is to rely on a verifiable full node whose execution trace can be recorded and proven via ZKP. A state validity proof could then be combined with a consensus proof to generate a single proof verifying both. This is similar to the approach of developing a ZK archival node for trustless data retrieval, except targeted specifically at state validity. Even a single verifiable full node can potentially help avoid damage from invalid blocks, and it could run in parallel to ZK light clients as an added security measure. The feasibility of a verifiable full node requires further deliberation, but using a zkVM like RiscZero and taking “images” of each execution step of the full node could be one possible way to achieve it. With RiscZero, the state validity proof could be generated with 1-of-N security and 1-of-N liveness, assuming a distributed set of provers performing the compute.
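The argument above condenses into a small sketch. The two verifier functions below are hypothetical stubs standing in for real ZK proof verification (e.g. a zkVM execution-trace proof for the full-node re-execution), and `Block` is a made-up container; the point is only why attestation checks alone accept an invalid block that the combined check rejects.

```python
from dataclasses import dataclass

# Sketch: consensus proof + state-validity proof composed into one
# acceptance rule. Both verifiers are hypothetical stubs, not real APIs.

@dataclass
class Block:
    attestation_weight: float       # fraction of stake that signed
    state_transitions_valid: bool   # would honest re-execution succeed?

def verify_consensus_proof(block: Block) -> bool:
    """Stub: a ZK light client only checks supermajority attestation."""
    return block.attestation_weight >= 2 / 3

def verify_state_validity_proof(block: Block) -> bool:
    """Stub: a verifiable full node proves correct re-execution."""
    return block.state_transitions_valid

def accept_block(block: Block) -> bool:
    # Both proofs must pass: attestation alone does not imply validity.
    return verify_consensus_proof(block) and verify_state_validity_proof(block)

# A 51%-attacked chain can finalize an invalid block: the consensus
# check passes on its own, but the combined check rejects the block.
attacked = Block(attestation_weight=0.70, state_transitions_valid=False)
assert verify_consensus_proof(attacked)
assert not accept_block(attacked)
```

Note that in this composition a single honest state-validity prover suffices to flag the attack, which is the 1-of-N property the paragraph above is after.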

Mother of all proof LOL

Concluding thoughts on consensus verification and trust-minimized bridging

The recursive composition, random sampling, and commitment-based approaches should be feasible within the short to medium term; many teams, such as Jump, Succinct, Polytope, NiL, and HyperOracle, are already working on them. The folding-based approach could be the end goal, providing superior performance, security, and flexibility. Having said that, there are a variety of contexts requiring consensus verification for different purposes, with varied levels of security guarantees, such as bridging small versus large amounts of assets. Different approaches offer different tradeoffs between performance and security assumptions. Folding-based approaches seem to offer the least tradeoff in theory, but we shall see how they actually perform in practice.
