Author: Mahesh Ramakrishnan, Vinayak Kurup, CoinDesk; Translator: Tao Zhu, Golden Finance
In late July, Mark Zuckerberg published a letter arguing that “open source is essential for a positive AI future,” extolling the necessity of open source AI development. The once-nerdy teenage founder, now a water-skiing, gold-chain-wearing, jiu-jitsu-practicing Zuckerberg, has been hailed as the savior of open source model development.
But so far, he and the Meta team have said little about how these models will be deployed. As models grow more computationally demanding, if their deployment is controlled by a handful of players, do we succumb to a similar form of centralization? Decentralized AI promises to address this challenge, but it requires advances in cutting-edge cryptography, along with unique hybrid solutions.
Unlike centralized cloud providers, decentralized AI (DAI) distributes the computation behind AI inference and training across many systems, networks, and locations. Implemented correctly, these networks, a type of decentralized physical infrastructure network (DePIN), offer benefits in censorship resistance, access to compute, and cost.
DAI faces challenges in two main areas: the AI environment itself and the decentralized infrastructure. Compared to centralized systems, DAI requires additional safeguards to prevent unauthorized access to model details and the theft or copying of proprietary information. For that reason, there is an underexplored opportunity for teams that embrace open source models, where there is nothing proprietary to protect, while accepting their potential performance gap relative to closed source models.
Decentralized systems face particular obstacles around network integrity and resource overhead. For example, client data distributed across many nodes exposes more attack vectors: an attacker can spin up a node and inspect its computation, attempt to intercept data in transit between nodes, or even introduce biases that degrade system performance. Even a secure decentralized inference system needs mechanisms to audit the computation itself, since nodes can cut resource costs by submitting incomplete computations, and verification is complicated by the absence of a trusted central actor.
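To make the auditing problem concrete, here is a minimal sketch of random spot-checking (hypothetical; the function and parameters are illustrative, not any team’s actual protocol): an auditor re-executes a sample of a node’s claimed results against a trusted reference and flags mismatches.

```python
import random

def audit(claimed_results, compute_fn, sample_rate=0.1, seed=None):
    """Re-execute a random sample of a node's tasks and flag mismatches.

    claimed_results: dict mapping task input -> output the node reported.
    compute_fn: trusted reference implementation of the computation.
    """
    rng = random.Random(seed)
    tasks = list(claimed_results)
    sample = rng.sample(tasks, max(1, int(len(tasks) * sample_rate)))
    return [t for t in sample if compute_fn(t) != claimed_results[t]]

# A node claims to have squared 100 inputs but fabricated one of them.
claimed = {x: x * x for x in range(100)}
claimed[42] = 0  # incomplete/fabricated work to save resources
print(audit(claimed, lambda x: x * x, sample_rate=1.0))  # -> [42]
```

Auditing everything defeats the purpose of outsourcing, of course; realistic designs sample a small fraction and pair detection with penalties large enough to make cheating unprofitable.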
Zero-Knowledge Proofs
Zero-knowledge proofs (ZKPs), while currently computationally prohibitive, are one potential solution to some of DAI’s challenges. A ZKP is a cryptographic mechanism that lets one party (the prover) convince another (the verifier) that a statement is true without revealing anything beyond its validity. Such a proof can be verified quickly by other nodes, giving each node a way to demonstrate that it acted according to the protocol. The technical differences between proof systems and their implementations (explored in depth later) matter to investors in this space.
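For intuition, the classic Schnorr identification protocol below shows the prover/verifier pattern in miniature: the prover demonstrates knowledge of a secret x without revealing it. This is a toy sketch with small illustrative parameters; production proof systems for AI workloads are vastly more elaborate.

```python
import secrets

# Schnorr identification: prove knowledge of x with y = g^x mod p,
# without revealing x. Toy parameters for illustration only.
p, q, g = 2039, 1019, 4           # p = 2q + 1; g generates the order-q subgroup

x = secrets.randbelow(q)          # prover's secret
y = pow(g, x, p)                  # public key, published by the prover

# 1. Commit: prover picks a random nonce r and sends t = g^r mod p.
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Challenge: verifier replies with a random c.
c = secrets.randbelow(q)

# 3. Response: prover sends s = r + c*x mod q; the uniformly random r
#    masks x, so s leaks nothing about the secret.
s = (r + c * x) % q

# 4. Verify: g^s == t * y^c (mod p) holds iff the prover knows x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted; secret x was never revealed")
```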
Centralized computation limits model training to a small number of well-positioned, resource-rich actors. ZKPs could be part of unlocking idle compute on consumer hardware; a MacBook, for example, could use its spare compute to help train a large language model while earning tokens for its owner.
Deploying decentralized training or inference on consumer hardware is the focus of teams like Gensyn and Inference Labs. Unlike decentralized compute networks such as Akash or Render, sharding the computation itself adds complexity, notably floating-point non-determinism. Utilizing idle distributed compute opens the door for small developers to test and train their own networks, provided they have tools that address these challenges.
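The floating-point complication is easy to demonstrate: floating-point addition is not associative, so two honest nodes that combine the same numbers in different orders can produce bitwise-different answers, and naive equality checks cannot distinguish rounding drift from cheating. A minimal illustration:

```python
import random

# Floating-point addition is not associative, so reduction order matters.
random.seed(0)
values = [random.uniform(-1, 1) for _ in range(100_000)]

sequential = sum(values)                        # one node, left-to-right
shards = [sum(values[i::4]) for i in range(4)]  # four nodes, then combine
sharded = sum(shards)

print(sequential == sharded)        # usually False
print(abs(sequential - sharded))    # tiny but nonzero discrepancy
```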
Currently, ZKP systems appear to cost four to six orders of magnitude more than running the computation locally, and are prohibitively slow for tasks that demand heavy compute (like model training) or low latency (like model inference). For perspective, a six-order-of-magnitude slowdown means that a cutting-edge system like a16z’s Jolt, running on an M3 Max chip, proves a program roughly 150 times slower than running it on a TI-84 graphing calculator.
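As a sanity check on that comparison, the arithmetic below uses assumed round numbers (a 10^6x proving overhead, and a hypothetical ~7,000x native speed gap between an M3 Max and a TI-84’s processor); the point is the order of magnitude, not the exact figures.

```python
# Back-of-the-envelope check of the TI-84 comparison.
# Both numbers below are illustrative assumptions, not benchmarks.
proving_overhead = 1e6      # prover work vs. running the program natively
m3_vs_ti84_speedup = 7e3    # assumed native speed ratio (hypothetical)

# Proving on the M3 Max, measured against native execution on the TI-84:
slowdown_vs_ti84 = proving_overhead / m3_vs_ti84_speedup
print(f"proving on M3 Max is ~{slowdown_vs_ti84:.0f}x slower than a TI-84")
# -> ~143x, the same order as the ~150x figure quoted above
```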
AI’s ability to process large amounts of data makes it compatible with ZKPs, but more progress in cryptography is needed before they can be widely used. Ongoing work by teams such as Irreducible (which designed the Binius proof system and commitment scheme), Gensyn, TensorOpera, Hellas, and Inference Labs will be an important step toward this vision. However, timelines remain overly optimistic; true innovation takes time and mathematical progress.
In the meantime, other approaches and hybrid solutions are worth noting. HellasAI and others are developing new ways of representing models and computation that enable optimistic challenge games, in which only the disputed subset of the computation needs to be processed in zero-knowledge. Optimistic proofs work only when there is collateral at stake, the ability to prove wrongdoing, and a credible threat that other nodes in the system are checking the computation. Another approach, developed by Inference Labs, validates a sample of queries: a node commits, with a bond, to generating a ZKP on demand, but produces the proof only if a client challenges it first.
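Here is a minimal sketch of that bond-and-challenge pattern (hypothetical; the class, names, and flow are illustrative, not Inference Labs’ or HellasAI’s actual protocol): a node posts a result backed by a bond, generates the expensive ZKP only if challenged, and forfeits the bond if the proof fails.

```python
class OptimisticResult:
    """A node's claimed result, backed by a bond; proof generated on demand."""
    def __init__(self, node, result, bond, prove_fn):
        self.node, self.result, self.bond = node, result, bond
        self._prove = prove_fn          # expensive ZKP, only run if challenged

    def challenge(self, verify_fn):
        """A client disputes the result; the node must now prove it."""
        proof = self._prove(self.result)
        if verify_fn(self.result, proof):
            return "proof valid: result stands, bond returned"
        self.bond = 0                   # slashed: the economic deterrent
        return "proof invalid: bond slashed, result rejected"

# On the happy path no one calls challenge() and the proof is never built;
# here a client disputes the claim, forcing the node to prove it.
claim = OptimisticResult("node-7", result=42, bond=100,
                         prove_fn=lambda r: f"zkp({r})")   # stand-in "proof"
print(claim.challenge(lambda r, p: p == f"zkp({r})"))
```

The appeal of this pattern is that prover cost is paid only on the unhappy path: if no one challenges, no proof is ever generated, and the credible threat of slashing keeps nodes honest.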
Summary
Decentralized AI training and inference will serve as a safeguard against a few dominant players consolidating power, while unlocking previously inaccessible computation. ZKPs will be integral to achieving this vision. Your computer could quietly earn real money for you by putting its spare processing power to work in the background. Succinct proofs of correctly executed computation would make the trust that the largest cloud providers rely on unnecessary, enabling compute networks made up of smaller providers to attract enterprise customers.
While zero-knowledge proofs will enable this future and become an essential component of far more than compute networks (Ethereum’s vision of single-slot finality, for example), their computational overhead remains a barrier. Hybrid solutions that combine the game-theoretic mechanics of optimistic games with the selective use of zero-knowledge proofs are the better answer today, and will likely serve as a ubiquitous bridge until ZKPs become faster.
For both crypto-native and non-native investors, understanding the value and challenges of decentralized AI systems is critical to deploying capital efficiently. Teams should have answers to questions about how node computation is proven and how the network handles redundancy. Furthermore, as we have observed across many DePIN projects, decentralization happens over time, and it is critical that teams have a clear plan for achieving it. Solving the challenges of DePIN compute is essential to returning control to individuals and small developers, an important part of keeping our systems open, free, and censorship-resistant.