Author: mo | Source: X, @no89thkey | Translation: Shan Ouba, Golden Finance
Let me try to answer this question with a chart:
Will we simply converge to some magical sweet spot on the trade-off plane? No: the future of off-chain verifiable computation is a continuous trade-off curve that blurs the line between specialized ZK and general-purpose ZK. Allow me to explain how these terms have evolved historically and how they will converge going forward.
Two years ago, "specialized" ZK infrastructure meant low-level circuit frameworks such as circom, Halo2, and arkworks. ZK applications built with them were essentially hand-written ZK circuits: fast and cheap for very specific tasks, but difficult to develop and maintain. They are analogous to the application-specific integrated circuits (ASICs, physical silicon) of today's IC industry, such as NAND flash chips and controller chips.
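To make "hand-written circuits" concrete, here is a minimal Rust sketch of the idea behind these frameworks (it deliberately does not use circom, Halo2, or arkworks): the developer flattens a statement into algebraic constraints over a prime field and wires every gate by hand. The toy modulus and single multiplication gate below are illustrative assumptions only.

```rust
// A minimal sketch of what "hand-written circuits" means: every statement is
// flattened into algebraic constraints of the form left * right = out over a
// prime field, and the developer wires each gate by hand. The tiny modulus
// and single gate are illustrative only; real circuits use large fields and
// millions of constraints.
const P: u64 = 65_537; // toy prime modulus, far smaller than real ZK fields

/// One R1CS-style multiplication gate: left * right = out (mod P).
struct MulGate {
    left: u64,
    right: u64,
    out: u64,
}

impl MulGate {
    fn is_satisfied(&self) -> bool {
        (self.left % P) * (self.right % P) % P == self.out % P
    }
}

fn main() {
    // Proving knowledge of x such that x * x = 9 means hand-wiring this gate;
    // a real application is built from a huge number of such constraints.
    let gate = MulGate { left: 3, right: 3, out: 9 };
    assert!(gate.is_satisfied());
    println!("constraint satisfied");
}
```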
However, over the past two years, "specialized" ZK infrastructure has evolved into a more "generalized" infrastructure.
We now have ZKML, ZK Coprocessor, and ZKSQL frameworks, which provide easy-to-use, highly programmable SDKs for building different categories of ZK applications without writing a single line of ZK circuit code. For example, a ZK Coprocessor allows smart contracts to trustlessly access historical blockchain states, events, and transactions, and to run arbitrary computations over that data. ZKML enables smart contracts to trustlessly leverage AI inference results from a wide range of machine learning models.
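As a rough illustration of the programming model these SDKs aim for, here is a hedged Rust sketch: the developer writes plain query logic over historical data, and the framework (not shown) is responsible for turning it into circuits and proofs. Every type and function name below is a hypothetical stand-in, not the API of any real ZK Coprocessor SDK.

```rust
// Hypothetical sketch of the ZK-coprocessor programming model: ordinary
// business logic over historical chain data, with no circuit code in sight.
// The types and names are invented stand-ins, not a real SDK.
struct HistoricalTransfer {
    block_number: u64,
    from: [u8; 20], // sender address
    amount: u128,
}

/// Developer-written logic: total volume sent by one address over a block
/// range. The coprocessor framework would prove this computation and make
/// the result consumable by a smart contract.
fn total_volume(
    transfers: &[HistoricalTransfer],
    sender: [u8; 20],
    from_block: u64,
    to_block: u64,
) -> u128 {
    transfers
        .iter()
        .filter(|t| t.from == sender && (from_block..=to_block).contains(&t.block_number))
        .map(|t| t.amount)
        .sum()
}

fn main() {
    let history = vec![
        HistoricalTransfer { block_number: 100, from: [1u8; 20], amount: 500 },
        HistoricalTransfer { block_number: 200, from: [1u8; 20], amount: 300 },
        HistoricalTransfer { block_number: 300, from: [2u8; 20], amount: 900 },
    ];
    // In a real system, a proof of this result would accompany the value.
    assert_eq!(total_volume(&history, [1u8; 20], 0, 250), 800);
}
```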
These evolving frameworks have significantly improved programmability within their target domains while still maintaining high performance and low cost because the abstraction layer (SDK/API) is thin and close to bare metal circuits. They are analogous to GPUs, TPUs, and FPGAs in the IC market: they are programmable domain experts.
ZKVMs have also made great strides over the past two years. It is worth noting that all general-purpose ZKVMs are built on top of low-level, specialized ZK frameworks. The idea is that you write ZK applications in a high-level language (even more developer-friendly than an SDK/API), and they compile down to an instruction set (RISC-V or WASM-like) whose instructions are implemented as a combination of specialized circuits. In our IC-industry analogy, these are the CPU chips.
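To show what "writing ZK applications in a high-level language" looks like, here is a hedged Rust sketch of a zkVM-style guest program. No real zkVM SDK is used; the entry point, the host/guest split, and the I/O conventions are simplified assumptions for illustration.

```rust
// Sketch of the zkVM programming model: the "ZK application" is just an
// ordinary program compiled to the VM's instruction set (RISC-V or
// WASM-like), and the prover attests to its correct execution. The
// host/guest split below is simplified and not tied to any real zkVM SDK.
fn guest_main(n: u32) -> u64 {
    // Arbitrary high-level logic; the developer never touches a circuit.
    let (mut a, mut b) = (0u64, 1u64);
    for _ in 0..n {
        let next = a.wrapping_add(b);
        a = b;
        b = next;
    }
    a
}

fn main() {
    // In a real zkVM, a host program would feed `n` as input, run the prover
    // over the compiled instruction trace, and emit (result, proof). Here we
    // only run the computation natively.
    let result = guest_main(10);
    assert_eq!(result, 55);
    println!("fib(10) = {result}");
}
```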
ZKVM is an abstraction layer on top of the low-level ZK framework, just like ZK coprocessors and the like, albeit a thicker layer.
As a wise man once said, every problem in computer science can be solved with another layer of abstraction, and every new layer creates a problem of its own. Trade-offs, my friend, are the name of the game here. Fundamentally, with a ZKVM we are making a trade-off between performance and generality.
Two years ago, the "bare metal" performance of ZKVM was really bad. However, in just two years, ZKVM's performance has improved dramatically. Why?
Because these "general" ZKVMs have become more "specialized"! One key area of performance improvement comes from "precompiles". These precompiles are specialized ZK circuits that can compute commonly used high-level procedures, such as SHA2 and various signature verifications, much faster than the normal process of breaking them down into instruction circuits.
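Here is a small, hedged Rust sketch of why precompiles matter: the VM can prove a SHA-256 call either by decomposing it into generic instruction circuits or by routing it to a dedicated circuit, and the proving cost per hashed block differs by orders of magnitude. The cycle counts below are made-up placeholders, not measurements from any particular ZKVM.

```rust
// Illustrative cost model for precompiles. The numbers are invented
// placeholders meant only to show the shape of the trade-off; they are not
// benchmarks of any real zkVM.
enum Sha256Path {
    GenericInstructions, // hash compiled down to base-ISA instruction circuits
    Precompile,          // hash handled by a specialized, hand-optimized circuit
}

fn proving_cost_estimate(path: Sha256Path, blocks: u64) -> u64 {
    match path {
        // Made-up per-block cost when every bit operation becomes its own
        // constrained instruction.
        Sha256Path::GenericInstructions => blocks * 100_000,
        // Made-up per-block cost when the VM exposes SHA-256 as a single
        // "syscall" backed by a dedicated circuit.
        Sha256Path::Precompile => blocks * 1_000,
    }
}

fn main() {
    let blocks = 64;
    println!(
        "generic: {} rows, precompile: {} rows",
        proving_cost_estimate(Sha256Path::GenericInstructions, blocks),
        proving_cost_estimate(Sha256Path::Precompile, blocks),
    );
}
```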
So the trend is clear now.
Specialized ZK infrastructure is becoming more general, and generalized ZKVM is becoming more specialized!
For both camps, the optimizations of the past few years were essentially free wins on the trade-off plane: getting better on one axis without sacrificing the other. That is why each side feels like "we are definitely the future."
However, computer-science wisdom tells us that at some point we will hit the "Pareto optimal wall" (the green dashed line in the chart), where we can no longer improve one property without sacrificing another.
So the million dollar question arises: will one of these completely replace the other in due time?
If an analogy from the IC industry is any guide: CPUs are a $126 billion market, while the entire IC industry, including all the "specialized" ICs, is a $515 billion market. I do believe that, at the micro level, history will rhyme here, and neither will replace the other.
That being said, no one today is saying, “Hey, I’m using a computer that’s completely powered by a general-purpose CPU,” or “Hey, look at this fancy robot that’s powered by a specialized IC.”
Yes, we should indeed look at this at a macro level, and the future is about providing a trade-off curve that gives developers the flexibility to choose based on their individual needs.
In the future, domain-specialist ZK infrastructure and general-purpose ZKVM can and will work together. This can happen in many forms.
The simplest form is already possible today. For example, you might use a ZK coprocessor to compute some results over a long history of blockchain transactions, while the business logic you want to run on top of that data is too complex to express easily in the SDK/API.
What you can do is obtain high-performance, low-cost ZK proofs of the data and the intermediate results, and then feed them into the general-purpose ZKVM through proof recursion, as sketched below.
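Below is a hedged Rust sketch of that composition pattern: a placeholder "inner proof" stands in for the coprocessor's output, a stub verifier stands in for recursive verification inside the VM, and the complex business logic runs as ordinary guest code. None of the types or functions corresponds to a real system.

```rust
// Sketch of composing a specialized prover with a general zkVM via proof
// recursion. `InnerProof` and `verify` are placeholders: a real system would
// verify an actual SNARK/STARK inside the VM's circuit.
struct InnerProof {
    claimed_result: u128, // e.g. total volume computed by a ZK coprocessor
    valid: bool,          // stand-in for real cryptographic verification
}

fn verify(proof: &InnerProof) -> Option<u128> {
    // Placeholder: a real recursive verifier re-checks the proof in-circuit,
    // so the outer proof transitively attests to the inner computation.
    if proof.valid { Some(proof.claimed_result) } else { None }
}

/// Business logic that was too awkward to express in the coprocessor's
/// SDK/API, now written as ordinary high-level guest code.
fn business_logic(volume: u128) -> u128 {
    // e.g. a tiered fee rebate that depends on historical trading volume
    match volume {
        0..=999 => 0,
        1_000..=9_999 => volume / 100,
        _ => volume / 50,
    }
}

fn main() {
    let inner = InnerProof { claimed_result: 5_000, valid: true };
    let volume = verify(&inner).expect("inner proof must verify");
    println!("rebate = {}", business_logic(volume));
}
```

The division of labor is the point: the specialized prover handles the data-heavy part cheaply, and the general-purpose VM only pays for verifying one succinct proof plus the logic that genuinely needs its flexibility.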
While I do think these types of debates are interesting, I know we are all building for a future of asynchronous computation on blockchains powered by off-chain verifiable computation. I believe this debate can be easily resolved when we see use cases with mass user adoption emerge in the coming years.