Author: rick awsb. Source: X, @rickawsb
The Twitter exchange between the founders of zkSync and Solana is a rare example of a genuine expert debate. How these two understand the underlying logic of their ecosystems will shape the direction of both the EVM ecosystem and the Solana ecosystem.
This is important information that every investor must read carefully!
The summary is as follows:
The discussion originated from a tweet by fund manager @Justin_Bons:
"No matter how you feel about SOL hitting its scaling limits today, at least SOL is trying, while ETH abandoned scaling a long time ago. SOL's actual daily usage already exceeds 1,000 TPS, while ETH has been stuck at around 100 TPS for the past three years. They have no leg to stand on!"
zkSync founder Alex (@gluk64) wrote a long reply:
"1k TPS for the global response to Web3 It’s just a drop in the bucket when it comes to demand. The Internet cannot run on a single server. The Internet of Value cannot run on a single massive blockchain, no matter how fast it is or how much decentralization you are willing to sacrifice.
The ultimate goal is the singularity of zero-knowledge proof:
⧫ Thousands of permissionless super chains, each with a number of Thousands of TPS; ⧫ Zero-knowledge proofs of each block of each chain are recursively aggregated into one block; ⧫ Final confirmation is completed on the most decentralized, fair and neutral settlement layer (gm Ethereum!);
Every one of millions of transactions is verified by every user on every smartphone, in less than 1 second (if every user couldn't Verification, is that really a blockchain?)
TLDR: Zero-knowledge proofs enable infinite scalability without compromise (yes, in the long run Look, we can also solve the data availability problem; most data will be hosted by end users).
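To make the aggregation picture above concrete, here is a minimal conceptual sketch in Python. `Proof`, `prove_block`, and `aggregate` are hypothetical placeholders standing in for a real proving system, not zkSync's actual interfaces:

```python
# Conceptual sketch only: illustrates the recursive aggregation Alex describes above.
# Proof, prove_block and aggregate are hypothetical placeholders, not zkSync's actual API.
from dataclasses import dataclass

@dataclass
class Proof:
    claim: str  # what the proof attests to

def prove_block(chain_id: int, txs: list[str]) -> Proof:
    # Each hyperchain produces a succinct proof for its own block.
    return Proof(claim=f"chain {chain_id}: {len(txs)} txs valid")

def aggregate(proofs: list[Proof]) -> Proof:
    # Recursively fold many chain proofs into one proof whose verification
    # cost does not grow with the number of chains or transactions.
    return Proof(claim=f"aggregate of {len(proofs)} chain proofs")

# Thousands of chains, each proving thousands of transactions...
chain_proofs = [prove_block(i, ["tx"] * 2_000) for i in range(1_000)]
# ...collapse into a single proof settled on the neutral settlement layer (Ethereum).
settlement_proof = aggregate(chain_proofs)
print(settlement_proof.claim)  # a phone only ever has to verify this one small proof
```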
But what about user experience and liquidity fragmentation?
Every user on any of these chains will be able to interact seamlessly with any contract or user on any other chain within seconds, with no additional costs or trust assumptions. A single wallet confirmation is enough, much like we can send an email today from any mailbox to any other mailbox.
Liquidity will flow freely between zero-knowledge hyperchains without having to trust the validators of those chains or the bridges between them.
Some of these chains will be private and privacy-preserving (operated by banks and financial institutions), yet still interoperate seamlessly with the rest of the ecosystem. This will create liquidity network effects many times greater than anything the on-chain world has seen before."
A commenter asked:
Alex, do you think the people at Solana can't see this? Why is that? Is it purely for economic reasons, or are there other factors we haven't articulated clearly?
Alex's reply:
@Justin_Bons and @aeyakovenko (Solana's founder), what do you think?
"Setting tribal loyalties aside, let's strengthen each other's arguments.
I admire Solana's deep commitment to its thesis: pushing the boundaries of what a single synchronized blockchain engine can handle. Its innovations are outstanding, especially parallel execution and local fee markets, and all Layer 2 solutions must embrace them.
There is some truth to the criticism: different Layer 2 solutions may fragment user experience and liquidity. We'll start with a self-contained, infinitely scalable zero-knowledge Layer 2 ecosystem. Initially, @zksync's hyperchains will scale without limit and interoperate seamlessly with each other, but not with Polygon or Scroll. A single successful zero-knowledge ecosystem is enough to realize this vision, though. Eventually we may end up building a single L2 bridge contract on Ethereum that connects all solutions."
toly (@aeyakovenko), Solana's founder, joined the discussion and replied to Alex:
"Zero-knowledge proofs (ZKPs) are really great! But they don't solve database hotspots; if they did, there would be roughly $100 billion in database revenue up for grabs. Nor can they move information around the world any faster. The two problems Solana is grappling with are synchronizing state at the speed of light and handling as many concurrent hotspots as possible in a single atomic state machine. If ZKPs help solve those problems, Solana will definitely use them. High TPS is just a by-product of efficient channel utilization; it is not the goal."
Alex replied:
"You are not strengthening my argument.
Yes, you can build an efficient database state-synchronization engine that handles 1,000,000 TPS.
But how are users of this system supposed to verify it?
With zero-knowledge proofs, they don't have to re-process 1,000,000 TPS in real time; asynchronous verification is perfectly fine. Verification time: under 1 second."
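A rough way to see the asymmetry Alex is pointing at: re-executing every transaction scales linearly with TPS, while verifying one aggregated proof is roughly constant. The per-transaction and per-proof costs below are illustrative assumptions, not measurements from either system.

```python
# Illustrative only: contrasts a verifier's work with and without succinct proofs.
# Both cost constants are assumptions made for the sake of the comparison.
US_PER_TX_REEXECUTION = 20.0   # assumed CPU cost to re-execute one transaction (microseconds)
MS_PER_PROOF_VERIFY = 200.0    # assumed CPU cost to verify one aggregated proof (milliseconds)

def full_node_cpu_seconds(tps: int) -> float:
    """CPU-seconds per second a full node spends re-executing every transaction."""
    return tps * US_PER_TX_REEXECUTION / 1e6

def zk_verifier_cpu_seconds(proofs_per_second: float = 1.0) -> float:
    """CPU-seconds per second spent verifying aggregated proofs, independent of TPS."""
    return proofs_per_second * MS_PER_PROOF_VERIFY / 1e3

for tps in (1_000, 100_000, 1_000_000):
    print(f"{tps:>9} TPS: re-execute {full_node_cpu_seconds(tps):6.2f} CPU-s/s "
          f"vs verify proof {zk_verifier_cpu_seconds():.2f} CPU-s/s")
```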
toly replied:
Run a full node. People who get more value out of the system than the cost of running a full node will do it.
Alex replied:
A full node processing 1,000,000 TPS would require a computing cluster (enormous computing power).
Thanks, but I would rather verify a zero-knowledge proof on my phone.
toly replied:
1 million TPS may require fewer than 16 cores. Being able to verify zero-knowledge proofs is really nice. But as I said, what I care about is maximizing concurrent hotspots and synchronizing all state at the speed of light. It is a high-bandwidth, high-availability system either way.
Alex replied:
"1 million TPS on 16 cores" is superficial, given that Solana's peak TPS today is only about 1,000. And with real-world usage at 1 million TPS, how many petabytes of state would I need to keep in RAM to sustain that speed?
toly replied:
Transactions can consume more or fewer compute units. The system only needs to keep in RAM the state it is processing concurrently. Once loaded, a batch of 64k transactions processed on 16 cores uses about 10 MB of state, well under 1 GB.
What exactly are we arguing about here? Solana aims to be the most efficient implementation of a single atomic state machine, globally synchronized as fast as possible. If zero-knowledge proofs help with that, they will be used. The purpose of a system is what it does.
If a zero-knowledge proof system can't do that specific job, it doesn't matter whether it can handle 10M TPS. And if there is functional overlap with other systems, that doesn't matter either; users will choose whichever solution works best for them.
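A back-of-the-envelope reading of the figures toly gives above (the batch size, core count, and per-batch state come from his reply; the number of batches in flight is my own assumption):

```python
# Rough reading of toly's numbers above; purely illustrative, not a benchmark.
TARGET_TPS = 1_000_000
BATCH_TXS = 64_000            # batch size cited by toly
STATE_PER_BATCH_MB = 10       # state touched per batch, cited by toly
BATCHES_IN_FLIGHT = 16        # assumption: roughly one batch per core on 16 cores

batches_per_second = TARGET_TPS / BATCH_TXS              # ~15.6 batches every second
hot_state_mb = BATCHES_IN_FLIGHT * STATE_PER_BATCH_MB    # ~160 MB resident at once

print(f"{batches_per_second:.1f} batches/s, ~{hot_state_mb} MB of hot state (well under 1 GB)")
```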
Alex replied:
The argument is this: if the most efficient implementation of a state-synchronization system tops out at roughly 1,000 TPS, then you absolutely need zero-knowledge proofs to reach truly globally verifiable computational scale (via proof parallelization and recursive proof aggregation).
I question the claim that throughput can be increased dramatically without placing unrealistic requirements on the users who run full nodes.
Another commenter asked:
If zkSync does not sustain 1,000 TPS over any 48-hour window in 2024 while using ETH L1 as the data availability layer, what will you do?
Alex replied:
@zksync Era, a single ZK Stack hyperchain instance, sustained 60 TPS during the inscription mania, more than any other Layer 2 solution.
At its peak it measured 200 TPS. Today you would only need to deploy 4 additional ZK Stack hyperchains to reach 1,000 TPS.
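The arithmetic behind that last claim, using the peak figure Alex cites:

```python
# Simple arithmetic behind the claim above: ~200 TPS peak per ZK Stack hyperchain.
PEAK_TPS_PER_CHAIN = 200
chains = 1 + 4                      # zkSync Era plus 4 additional hyperchains
print(chains * PEAK_TPS_PER_CHAIN)  # 1000 TPS in aggregate
```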