Author: Vitalik Buterin, founder of Ethereum; Translation: 0xjs@金财经
On March 13, the Dencun hard fork activated, enabling one of Ethereum's long-awaited features: proto-danksharding (aka EIP-4844, aka blobs). At first, the fork reduced transaction fees for rollups by more than 100x, as blobs were nearly free. Then, on the final day, we finally saw a surge in blob counts and an active blob fee market, as blobscription protocols started using them. Blobs are not free, but they remain far cheaper than calldata.
Left: with Blobscriptions, blob usage finally surged to the target of 3 per block.
Right: blob fees accordingly entered "price discovery mode". Source: https://dune.com/0xRob/blobs
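To make "price discovery mode" concrete: blob fees follow an EIP-1559-style exponential controller defined in EIP-4844. The sketch below renders the mechanism in Python; the constants and the fake_exponential helper follow the EIP-4844 specification, but this is an illustration, not consensus code.

```python
# Sketch of EIP-4844's blob fee mechanism (constants and fake_exponential
# follow the EIP; illustrative, not consensus code).

GAS_PER_BLOB = 131072                 # 128 KiB per blob
TARGET_BLOB_GAS_PER_BLOCK = 393216    # 3 blobs per block
MIN_BLOB_BASE_FEE = 1                 # wei
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator)."""
    i, output, accum = 1, 0, factor * denominator
    while accum > 0:
        output += accum
        accum = accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def next_excess_blob_gas(parent_excess: int, parent_blob_gas_used: int) -> int:
    # Excess accumulates whenever blocks run above the 3-blob target...
    if parent_excess + parent_blob_gas_used < TARGET_BLOB_GAS_PER_BLOCK:
        return 0
    return parent_excess + parent_blob_gas_used - TARGET_BLOB_GAS_PER_BLOCK

def blob_base_fee(excess_blob_gas: int) -> int:
    # ...and the fee grows exponentially in that excess: sustained demand
    # above target is what pushes blobs into "price discovery mode".
    return fake_exponential(MIN_BLOB_BASE_FEE, excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)
```

As long as usage stays at or below 3 blobs per block, excess blob gas drains to zero and the fee floors at 1 wei (effectively free); once usage persistently exceeds the target, the fee compounds upward until demand backs off.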
This milestone marks a key shift in Ethereum's long-term roadmap: blobs are the moment when Ethereum scaling stopped being a "zero to one" problem and became a "one to N" problem. From here, important scaling work will continue, both in increasing the blob count and in improving rollups' ability to make full use of each blob, but it will be more incremental. Changes to the basic paradigm of how Ethereum operates as an ecosystem are increasingly behind us. In addition, the emphasis is slowly shifting, and will continue to slowly shift, from L1 problems such as proof of stake and scaling toward problems closer to the application layer. The key question this article will discuss is: where does Ethereum go from here?
The future of Ethereum scaling
Over the past few years, we have watched Ethereum slowly transform into an L2-centric ecosystem. Major applications have begun moving from L1 to L2, payments are starting to default to L2, and wallets are building their user experience around the new multi-L2 environment.
From the beginning, a key part of the rollup-centric roadmap has been the idea of a separate data availability space: a dedicated section of the block that the EVM cannot access, but that can store data for layer-2 projects such as rollups. Because this data space is not accessible from the EVM, it can be broadcast and verified separately from the block. Eventually, it can be verified using a technique called data availability sampling (DAS), which lets each node confirm that the data was correctly published by randomly checking just a few small samples. Once that is implemented, blob space can be greatly expanded; the eventual goal is 16 MB per slot (approximately 1.33 MB per second).
Data availability sampling: each node needs to download only a small portion of the data to verify that the whole of it is available.
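To see why a handful of random samples suffices, here is a toy simulation (a minimal sketch, not a real DAS implementation). With 2x erasure coding, the data is reconstructible from any 50% of chunks, so a publisher who withholds enough to block reconstruction can satisfy each random check only with probability below 1/2.

```python
# Toy illustration of the DAS sampling argument (not a real implementation).
# Assumption: blob data is erasure-coded 2x, so ANY 50% of the extended
# chunks suffice to reconstruct the whole thing.
import random

def sampler_is_fooled(available_fraction: float, num_samples: int) -> bool:
    """One node checks num_samples random chunks. Returns True only if all
    its samples happen to be available even though the data as a whole is
    NOT reconstructible (less than 50% published)."""
    assert available_fraction < 0.5  # adversary withholds enough to block recovery
    return all(random.random() < available_fraction for _ in range(num_samples))

# With 30 samples, the chance of being fooled is below 0.5**30 ~ 1e-9:
trials = 100_000
fooled = sum(sampler_is_fooled(0.49, 30) for _ in range(trials))
print(f"fooled in {fooled}/{trials} trials")  # almost certainly 0
```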
EIP-4844 (aka "blobs") does not give us data availability sampling. But it sets up the basic scaffolding so that, from here, data availability sampling can be introduced and the blob count increased behind the scenes, all without any involvement from users or applications. In fact, the only "hard fork" required is a simple parameter change.
From here, two lines of development need to continue:
1. Gradually increase blob capacity, eventually reaching the full data availability sampling vision of 16 MB of data space per slot.
2. Improve L2s so they make better use of the data space we have.
Making DAS a reality
The next stage is likely a simplified version of DAS called PeerDAS. In PeerDAS, each node stores a significant fraction (e.g. 1/8) of all blob data, and nodes maintain connections to many peers in the p2p network. When a node needs to sample a particular piece of data, it asks one of the peers that it knows is responsible for storing that piece.
If every node needs to download and store 1/8 of all data, then PeerDAS theoretically lets us scale blobs 8x (actually 4x, because the redundancy of erasure coding costs us 2x). PeerDAS can be rolled out over time: we could have a phase where professional stakers keep downloading full blobs while solo stakers download only 1/8 of the data.
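A minimal sketch of the PeerDAS shape follows; the structure and names (NUM_COLUMNS, custody_columns) are hypothetical, and the real spec assigns custody differently, but the idea is the same: deterministic custody of a slice of the data, and sampling requests routed to peers by what they custody.

```python
# Hypothetical sketch of the PeerDAS idea (not the actual spec): data is
# split into columns, each node custodies a deterministic 1/8 slice, and
# samplers query peers known to hold the column they want.
import hashlib

NUM_COLUMNS = 128
CUSTODY_FRACTION = 8  # each node stores 1/8 of all columns

def custody_columns(node_id: bytes) -> set[int]:
    """Deterministically assign this node 1/8 of the column indices."""
    seed = int.from_bytes(hashlib.sha256(node_id).digest()[:8], "big")
    return {(seed + i * CUSTODY_FRACTION) % NUM_COLUMNS
            for i in range(NUM_COLUMNS // CUSTODY_FRACTION)}

def peers_for_column(peers: dict[bytes, set[int]], column: int) -> list[bytes]:
    """A sampling node asks only peers that custody the column it wants."""
    return [pid for pid, cols in peers.items() if column in cols]

peers = {node: custody_columns(node) for node in [b"node-a", b"node-b", b"node-c"]}
print(peers_for_column(peers, column=17))
```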
In addition, EIP-7623 (or an alternative such as 2D pricing) could be used to set a tighter bound on the maximum size of an execution block (i.e. the "regular transactions" in a block), which would make it safer to raise both the blob target and the L1 gas limit. In the long term, more sophisticated 2D DAS protocols will take us all the way and expand blob space even further.
Improve L2
There are four key areas where current layer-2 protocols can improve.
1. Use bytes more efficiently through data compression
I have previously written an overview of data compression. In short, a transaction today takes up about 180 bytes of data on-chain. However, a range of compression techniques can shrink this in several stages; with optimal compression, we could potentially get below 25 bytes per transaction.
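As a back-of-the-envelope illustration of where those savings come from (the byte counts below are indicative, not any specific rollup's format): addresses become short indices into an on-chain table, signatures are aggregated across the whole batch, and values get a compact encoding.

```python
# Illustrative byte accounting for rollup transaction compression.
# Numbers are rough, in the spirit of the techniques named above.

naive_tx = {
    "nonce": 3, "gasprice": 8, "gas": 3,
    "to": 21, "value": 9, "data": 68, "signature": 65,
}

compressed_tx = {
    "nonce": 0,        # omitted: recovered from prior state
    "gas": 0,          # handled at the L2 system level
    "to": 4,           # 4-byte index into an address table, not 20 bytes
    "value": 3,        # scientific-notation-style encoding of round amounts
    "data": 12,        # domain-specific encoding of a token transfer
    "signature": 0.5,  # amortized: one aggregated BLS signature per batch
}

print(sum(naive_tx.values()), "->", sum(compressed_tx.values()), "bytes")
# ~177 -> ~19.5 bytes, consistent with the "under 25 bytes" target
```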
2. Optimistic data techniques that use L1 to secure L2 only in exceptional circumstances
Plasma is a class of techniques that lets you keep data on L2 in the normal case while obtaining rollup-equivalent security for certain applications. For the EVM, Plasma cannot protect all coins, but Plasma-inspired constructions can protect most of them. And constructions much simpler than Plasma could considerably improve on what is deployed today. L2s that are unwilling to put all their data on-chain should explore such techniques.
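A heavily simplified toy sketch of the underlying pattern (real Plasma-style designs also need exit games, challenge periods, and operator-misbehavior proofs): the operator posts only a state commitment to L1, users hold their own Merkle branches off-chain, and an L1 contract can verify an exit claim against the commitment.

```python
# Toy "data off-chain, security from L1" sketch. Hypothetical and
# heavily simplified; assumes a power-of-two number of leaves.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    layer = [h(leaf) for leaf in leaves]
    while len(layer) > 1:
        layer = [h(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[bytes]:
    """The branch a user keeps off-chain to later prove their balance."""
    layer = [h(leaf) for leaf in leaves]
    proof = []
    while len(layer) > 1:
        proof.append(layer[index ^ 1])
        layer = [h(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return proof

def verify_exit(root: bytes, leaf: bytes, index: int, proof: list[bytes]) -> bool:
    """What an L1 exit contract would check: the claimed balance is in the
    committed state root, even though the data itself never touched L1."""
    node = h(leaf)
    for sibling in proof:
        node = h(node, sibling) if index % 2 == 0 else h(sibling, node)
        index //= 2
    return node == root

# The operator posts only the root on-chain; users keep their own proofs.
balances = [b"alice:100", b"bob:250", b"carol:40", b"dave:7"]
root = merkle_root(balances)
assert verify_exit(root, b"bob:250", 1, merkle_proof(balances, 1))
```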
3. Continued improvements to execution-related constraints
Once the Dencun hard fork activated, rollups set up to use the blobs it introduced suddenly became 100x cheaper to use. Usage of the Base rollup spiked immediately:
This, in turn, caused Base to hit its internal gas limit, leading to an unexpected fee spike. It prompted a broader realization that Ethereum's data space is not the only thing that needs to scale: rollups need to scale internally as well.
Part of this is parallelization; rollups could implement something like EIP-648, as sketched below. But just as important are storage, and the interaction between compute and storage. This is a significant engineering challenge for rollups.
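Here is a minimal sketch of the scheduling idea behind EIP-648-style parallelization (illustrative logic only; EIP 648 itself was an L1 proposal): transactions that declare disjoint sets of touched state can be packed into waves that execute concurrently.

```python
# Illustrative scheduler: pack transactions with pairwise-disjoint access
# lists into waves; each wave could be executed in parallel.

def schedule(txs: list[dict]) -> list[list[dict]]:
    waves: list[tuple[set, list]] = []
    for tx in txs:
        touched = set(tx["access_list"])
        for locked, wave in waves:
            if locked.isdisjoint(touched):  # no state conflict with this wave
                locked |= touched
                wave.append(tx)
                break
        else:
            waves.append((touched, [tx]))   # conflicts everywhere: new wave
    return [wave for _, wave in waves]

txs = [
    {"id": 1, "access_list": ["0xAlice", "0xDEX"]},
    {"id": 2, "access_list": ["0xBob", "0xNFT"]},   # disjoint from tx 1
    {"id": 3, "access_list": ["0xDEX", "0xCarol"]}, # conflicts with tx 1
]
print([[tx["id"] for tx in wave] for wave in schedule(txs)])  # [[1, 2], [3]]
```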
4. Continued improvements to security
We are still far from a world where rollups are genuinely protected by code. In fact, according to l2beat, only five rollups (of which only Arbitrum is full-EVM) have even reached what I call "stage 1".
This needs to be tackled head-on. While we are not yet confident enough in the complex code of an optimistic or SNARK-based EVM verifier, we can absolutely get halfway there: security councils that can override the code only at a high signature threshold (for example, I have suggested 6-of-8; Arbitrum is doing 9-of-12).
The ecosystem's standards need to become stricter: so far, we have been lenient toward any project that claims to be "on the path to decentralization". By the end of this year, I think our standards should rise, and we should treat a project as a rollup only once it has actually reached at least stage 1.
After this, we can move cautiously toward stage 2: a world where rollups are truly backed by code, and a security council can intervene only if the code "provably disagrees with itself" (for example, by accepting two incompatible state roots, or by two different implementations giving different answers). One way to do this safely is to use multiple prover implementations.
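A toy model of how those two mechanisms could compose (hypothetical structure and names): the council's high signature threshold is necessary but not sufficient; under stage 2, intervention also requires that independent provers actually contradict each other.

```python
# Hypothetical sketch of stage-2 intervention rules with multiple provers.

COUNCIL_SIZE = 8
OVERRIDE_THRESHOLD = 6  # the 6-of-8 style threshold mentioned above

def provers_disagree(state_roots: list[bytes]) -> bool:
    """Independent implementations returning different roots for the same
    batch is the 'code provably disagrees with itself' condition."""
    return len(set(state_roots)) > 1

def council_may_intervene(state_roots: list[bytes], signatures: int) -> bool:
    # Stage 2: even a supermajority of the council is powerless unless
    # the provers themselves are in contradiction.
    return provers_disagree(state_roots) and signatures >= OVERRIDE_THRESHOLD

assert not council_may_intervene([b"root_a", b"root_a"], signatures=8)
assert council_may_intervene([b"root_a", b"root_b"], signatures=6)
```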
What does this mean for the broader development of Ethereum?
At ETHCC in the summer of 2022, I gave a talk describing Ethereum's development as an S-curve: we were entering a period of very rapid transition, and once that transition was over, development would slow down again as the L1 consolidates and work refocuses on the user and application layers.
Today, I think we are decisively on the decelerating right side of that S-curve. As of two weeks ago, the two biggest changes to the Ethereum blockchain (the switch to proof of stake and the re-architecting around blobs) are behind us. Further changes are still significant (e.g. Verkle trees, single-slot finality, in-protocol account abstraction), but they are not as drastic as proof of stake and sharding. In 2022, Ethereum was like a plane replacing its engines mid-flight. In 2023, it was replacing its wings. The Verkle tree transition is the last truly major remaining piece (and we already have testnets for it); the rest are more like replacing a tail fin.
The goal of EIP-4844 was to make a single large one-off change that sets rollups up for long-term stability. Now that blobs have shipped, a future upgrade to full danksharding with 16 MB blobs, and even switching the cryptography over to STARKs over a 64-bit goldilocks field, can happen without any further action from rollups or users. It also reinforces an important precedent: the Ethereum development process executes according to a long-standing, well-understood roadmap, and applications built with the "new Ethereum" in mind (including L2s) get a long-term stable environment.
What does this mean for applications and users?
The first decade of Ethereum has largely been a training-wheels phase: the goal was to get Ethereum L1 off the ground, and applications happened mostly among a small group of enthusiasts. Many argue that the lack of mass adoption over the past decade proves that crypto is useless. I have always argued against this: nearly every crypto application that is not financial speculation depends on low fees. So while we had high fees, we should not be surprised that we mostly saw financial speculation!
Now that we have blobs, the key constraint that has been holding us back is starting to dissolve. Fees are finally much lower; I said seven years ago that the internet of money should not cost more than five cents per transaction, and now that is finally happening. We are not completely out of the woods: if usage grows too quickly, fees may still rise, and we will need to keep working over the next few years to scale blobs (and rollups themselves) further. But we can see the... uh... light at the end of the dark forest.
What this means for developers: we no longer have any excuses. Until a few years ago, we set a low bar for ourselves, building applications that were clearly not usable at scale, as long as they worked as prototypes and were reasonably decentralized. Today, we have all the tools we need, and indeed most of the tools we will ever have, to build applications that are simultaneously cypherpunk and user-friendly. So we should go out and do it.
Many people are rising to the challenge. The Daimo wallet describes itself explicitly as Venmo on Ethereum, aiming to combine Venmo's convenience with Ethereum's decentralization. In decentralized social, Farcaster combines genuine decentralization with an excellent user experience. Unlike previous waves of "social network" hype, the average Farcaster user is not there to gamble, which is a key test of whether a crypto application can be genuinely sustainable.
The post above was published on the leading Farcaster client Warpcast; this screenshot was taken from the alternative Farcaster + Lens client Firefly.
We need to build on these successes and extend them to other application areas, including identity, reputation, and governance.
Applications built or maintained today should be designed with the Ethereum of the 2020s in mind
The Ethereum ecosystem still has a large number of applications operating around an essentially "2010s Ethereum" workflow. Most ENS activity is still on layer 1. Most token issuance happens on layer 1, with no serious thought given to making bridged versions of the token available on layer 2 (e.g. see fans of the ZELENSKYY memecoin appreciating the token's ongoing donations to Ukraine, but complaining that L1 fees make it too expensive). Beyond scalability, we are also behind on privacy: POAPs are all publicly on-chain, which is probably the right choice for some use cases but highly suboptimal for others. Most DAOs, and Gitcoin Grants, still use fully transparent on-chain voting, which makes them highly vulnerable to bribery (including retroactive airdrops), and this has been shown to significantly distort contribution patterns. ZK-SNARKs have existed for years now, yet many applications still have not started using them properly.
These are hard-working teams that have to deal with large existing user bases, so I do not blame them for not upgrading to the latest wave of technology all at once. But soon, this upgrade will need to happen. Here are some key differences between a "2010s Ethereum" workflow and a "2020s Ethereum" workflow:
Basically, Ethereum is no longer just a financial ecosystem. It is a full-stack replacement for large parts of "centralized tech", and even offers some things that centralized tech does not (such as governance-related applications). We need to build with this wider ecosystem in mind.
Conclusion
Ethereum is making a decisive shift from an era of "very rapid L1 progress" to one where L1 progress will still be very significant, but more mature and less disruptive to applications.
We still need to finish scaling. That work will happen more in the background, but it remains important.
Application developers are no longer building prototypes; we are building tools used by millions of people.
Ethereum has upgraded from being "just" a financial ecosystem to a much more thorough independent decentralized tech stack. Across the entire ecosystem, we need to adjust our mindset accordingly.