Author: Robin Linus, BTC Study
The original BitVM design was limited to two parties. Subsequent work introduced multiple parties, under a 1-of-n honesty assumption, by combining parallel and redundant instances. The main limitation of these contracts is that all verifiers must be fixed at setup time, and the setup cost grows with the number of verifiers. This means that bribing a limited, predefined set of participants is enough to break a contract.
BitVM2 is a bold variant: anyone can be a verifier. It still requires a one-time setup under the 1-of-n honest participant assumption, but at runtime anyone can challenge an invalid assertion without having to be a member of the setup group (i.e., not necessarily one of the n participants). This overcomes the limitations of previous schemes and improves their trust assumptions. It also simplifies the overall design, reducing the maximum number of challenge rounds to two.
The bridge contract additionally requires a predefined set of m operators, at least one of whom must be honest. However, even if all operators are dishonest, they cannot steal your money; at worst they can burn it.
Introduction
For a given program f, we want to verify an assertion: that on input x it outputs y, i.e., f(x) = y. For example, f could be a SNARK verifier, such as the verifier of the Groth16 proof system. Then x is a proof, and y is the output state whose validity the SNARK attests.
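To make the assertion concrete, here is a minimal sketch in Python. The function f below is a toy stand-in (not a real Groth16 verifier, which is far too large to express directly in Bitcoin Script): the prover claims some output y, and the protocol must let anyone check the claim f(x) = y.

```python
# Hypothetical stand-in for the program f. In BitVM2 the real f would be
# a SNARK verifier (e.g. Groth16); this toy function only illustrates
# the shape of the claim.
def f(x: int) -> int:
    return (x * x + 7) % 1000

x = 42        # the input (in the SNARK case: a proof)
y = f(x)      # the claimed output state

# The assertion the protocol must make publicly checkable:
assert f(x) == y
```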
In the case of a SNARK verifier, the program is too large to express as a single Bitcoin script. A Groth16 verifier implementation may require scripts up to 20 MB in size, while the maximum allowable script size is bounded by Bitcoin's block size limit of 4 MB. Even compressed down to that size, it would still be too large to be practical.
Naive solution
Lamport signatures provide a way to split a program f(x) = y into multiple steps, e.g., n = 42 intermediate steps:

f1(x) = z1
f2(z1) = z2
...
f43(z42) = y
In this way, the computation of f can be split across 43 transactions, executed in sequence over multiple blocks. Each transaction takes the output state of the previous transaction as its input state. Whenever the prover equivocates on any state z_i, anyone can use the conflicting Lamport signatures as a fraud proof.
This is indeed a trustless way to challenge the prover. However, a major limitation of this solution is its heavy on-chain footprint, because the prover still has to execute the entire computation on-chain. In addition, the Lamport signatures introduce overhead for passing state between transactions.
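The fraud proof mentioned above rests on a basic property of Lamport signatures: signing the same position with two different values reveals both hash preimages, which anyone can present as evidence of equivocation. A minimal one-bit sketch (real state commitments sign many bits; all names here are illustrative):

```python
import hashlib
import os

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# One-bit Lamport keypair: two secret preimages, two public hashes.
sk = [os.urandom(32), os.urandom(32)]
pk = [H(sk[0]), H(sk[1])]

def sign(bit: int) -> bytes:
    # Signing reveals the preimage for the chosen bit value.
    return sk[bit]

def verify(bit: int, sig: bytes) -> bool:
    return H(sig) == pk[bit]

# Honest use: commit to one bit of some intermediate state z_i.
sig0 = sign(0)
assert verify(0, sig0)

# Equivocation: signing the SAME position as both 0 and 1 exposes
# both preimages -- the pair itself is a fraud proof anyone can use.
sig1 = sign(1)
fraud_proof = (sig0, sig1)
is_fraud = verify(0, fraud_proof[0]) and verify(1, fraud_proof[1])
assert is_fraud
```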
Balanced Solution
We can significantly reduce the on-chain footprint by shifting the heavy work from the prover to the verifier's fraud proof. Now the prover only needs to commit once to x and y, as well as to all intermediate results z1, z2, ..., z42.
Any verifier can disprove any incorrect assertion. At setup time, we define a Taptree containing 43 scripts, one to disprove each of the computations f1, f2, ..., f43. If any assertion f_i(z_{i-1}) = z_i is false, anyone can spend from the corresponding script. This reduces the worst-case on-chain computation to a single step f_i, executed by the verifier. That step may still require a sizable script implementation. In theory, anything that fits into a block poses no real problem; better still is staying within the standard size of 400 KB. In practice, for a specific implementation of f, we look for an optimal trade-off between the size of the prover's commitments and the size of the verifier's scripts.
Essentially, this allows anyone to burn the prover's output whenever the prover makes any incorrect assertion. Conversely, if no one can disprove any of the computations, the honest prover can spend the output once the timelock expires. The protocol takes at most two rounds.
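The balanced mechanism can be sketched off-chain in a few lines. Assuming the decomposition into 43 sub-functions f_1 .. f_43 from above (the toy sub-functions below stand in for chunks of the real verifier), the prover commits to all intermediate states at once, and any verifier only needs to find a single inconsistent step:

```python
# Toy decomposition of f into 43 chained sub-functions (illustrative;
# the real f_i would be chunks of a SNARK verifier).
def make_steps(n: int):
    return [lambda z, i=i: (z + i) % 997 for i in range(1, n + 1)]

steps = make_steps(43)

def prove(x, cheat_at=None):
    """Prover commits to all intermediate results z_1 .. z_42 and y."""
    zs, z = [], x
    for i, f_i in enumerate(steps, start=1):
        z = f_i(z)
        if i == cheat_at:
            z += 1  # a malicious prover commits a wrong state here
        zs.append(z)
    return zs

def disprove(x, zs):
    """Any verifier re-checks one step at a time; returns the index of
    the first inconsistent step f_i (the Taptree leaf to spend from),
    or None if the assertion holds."""
    prev = x
    for i, (f_i, z) in enumerate(zip(steps, zs), start=1):
        if f_i(prev) != z:
            return i
        prev = z
    return None

assert disprove(5, prove(5)) is None            # honest prover survives
assert disprove(5, prove(5, cheat_at=17)) == 17 # one bad step is caught
```

The point of the structure is that the verifier's on-chain work is bounded by a single f_i, no matter where in the chain the prover cheats.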
This mechanism can serve as a basic building block for permissionless verification of bridge contracts.
Optimistic solution
The following protocol improves the happy path of the above design (hopefully the most commonly used path), at the cost of up to two additional rounds of interaction in the worst case:
1. The prover commits to the output state y
2. If incorrect, anyone can start a challenge
3. The prover commits to the intermediate results z1, z2, ... z42
4. If incorrect, anyone can disprove the assertion f_i
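The four steps above can be sketched as a small state machine (all names are illustrative, not taken from any real implementation). On the happy path only step 1 happens, and the prover settles after a timeout; the extra rounds only occur under challenge:

```python
from enum import Enum, auto

class State(Enum):
    ASSERTED = auto()    # 1. prover committed to the output y
    CHALLENGED = auto()  # 2. anyone started a challenge
    REVEALED = auto()    # 3. prover committed to z_1 .. z_42
    DISPROVED = auto()   # 4. a verifier disproved some step f_i
    SETTLED = auto()     # timeout expired, prover takes the output

class OptimisticClaim:
    def __init__(self, y):
        self.y = y
        self.state = State.ASSERTED

    def challenge(self):
        assert self.state is State.ASSERTED
        self.state = State.CHALLENGED

    def reveal(self, zs):
        assert self.state is State.CHALLENGED
        self.zs = zs
        self.state = State.REVEALED

    def disprove(self, i):
        # In the real protocol this spends the Taptree leaf for f_i.
        assert self.state is State.REVEALED
        self.state = State.DISPROVED

    def timeout(self):
        # With no (successful) challenge, the honest prover settles.
        assert self.state in (State.ASSERTED, State.REVEALED)
        self.state = State.SETTLED

# Happy path: a single commitment, then settlement after the timeout.
claim = OptimisticClaim(y=123)
claim.timeout()
assert claim.state is State.SETTLED
```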
Permissionless bridge contract design
Limitation: Fees
In the above design, the prover can steal some fees. In this case the fund reserve remains safe, but a verifier can lose some of its collateral.
The attack scenario is as follows:
- The prover is malicious.
- The prover initiates its own KickOff_Tx (without a valid PegOut_Tx).
- The prover waits for a challenger to execute Challenge_TX, which pays the prover for responding to the challenge.
- The prover never responds to the challenge and simply goes silent, keeping the payment.
The revised diagram below solves this problem. It requires two additional n-of-n pre-signed transactions.
Limitations: Honest Operators
This design requires at least one honest operator; otherwise the funds eventually become unspendable. In practice, a liveness failure can be combined with extortion to steal funds (for example: "your funds stay locked unless you pay me a 50% ransom").