Source: BeWater Community
On June 14, the AO Foundation officially launched the token economics of AO, the decentralized supercomputer. On the evening of June 12, on the eve of the release of AO's token economics and of NEAR's DA solution, we invited Sam Williams, founder of Arweave and AO, and Illia Polosukhin, co-founder of NEAR Protocol, for an in-depth conversation on the integration of AI and blockchain. Sam elaborated on the underlying architecture of AO, which is based on the actor-oriented paradigm and a decentralized Erlang model, and aims to create a decentralized computing network that can scale without limit and supports interaction between heterogeneous processes. Sam also looked ahead to AO's potential applications in DeFi: by introducing trusted AI strategies, AO is expected to enable true "agent finance". Illia shared NEAR Protocol's latest progress on scalability and AI integration, including the chain abstraction and chain signature features it has introduced, as well as the peer-to-peer payments and AI inference router under development. The two guests also shared the current priorities and research focuses of their respective ecosystems, and the innovative projects they are most excited about. Thanks to @0xLogicrw of @BlockBeatsAsia for promptly compiling and organizing the content and bringing the Chinese version of this great conversation to the community.
Lulu: First, please each give a brief introduction of yourselves and talk about how you got into the two fields of AI and blockchain.
Illia: My background is in machine learning and artificial intelligence, and I worked in the field for about 10 years before entering crypto. I am best known for the paper "Attention Is All You Need", which introduced the Transformer model now used throughout modern machine learning, AI, and deep learning. Before that, I worked on many projects, including TensorFlow, the machine learning framework Google open-sourced in 2014-2015, as well as question-answering systems and machine translation, applying some of that research in Google.com and other Google products.
Later, Alex and I co-founded NEAR.ai, which was originally an AI company dedicated to teaching machines to program. We believed that in the future, people would communicate with computers through natural language, and the computers would program themselves. In 2017 this sounded like science fiction, but we did a lot of research. We crowdsourced training data: students from China, Eastern Europe, and elsewhere completed small tasks for us, such as writing code and writing code comments. But we ran into challenges paying them; for example, PayPal could not transfer money to users in China.
Someone suggested using Bitcoin, but Bitcoin's transaction fees were already high at the time. So we started to study the space in depth. We had a background in scalability: at Google, everything is about scale, and my co-founder Alex had founded a sharded database company that served Fortune 500 companies. The state of blockchain technology at the time seemed strange to us. Almost everything actually ran on a single machine and was limited by the power of that one machine.
So we set out to build a new protocol, which became NEAR Protocol. It is a sharded Layer 1 protocol focused on scalability, ease of use, and ease of development. We launched our mainnet in 2020 and have been growing the ecosystem since. In 2022, Alex joined OpenAI, and in 2023 he founded an AI company focused on foundation models. Recently, we announced that he is returning to lead the NEAR.ai team to continue the work we started in 2017: teaching machines to program.
Lulu: This is a really interesting story. I didn't know that NEAR started as an AI company and is now refocusing on AI. Next, Sam, please tell us about yourself and your project.
Sam: We started getting involved in this space about seven years ago, and I had been following Bitcoin for a long time. We discovered this exciting but underexplored idea: you could store data on a network that would be replicated around the world with no single centralized point of failure. This inspired us to create an archive that would never forget, replicated in multiple places, so that no single organization or even government could censor the content.
So our mission became to scale Bitcoin, or to enable Bitcoin-style on-chain data storage to any scale so that we could create a knowledge base for humanity that would store all of history, a kind of immutable, trustless historical log, so that we would never forget the important context of how we got to where we are today.
We started this work 7 years ago, and it's been more than 6 years since we launched on mainnet. In the process, we realized that permanent on-chain storage could provide far more capabilities than we originally imagined. Initially, the idea was to store newspaper articles. But shortly after the mainnet launched, we realized that if you can store all of this content around the world, you've actually planted the seeds of a permanent decentralized network. Not only that, but we realized around 2020 that if you have a deterministic virtual machine and a permanent ordered log of program interactions, you can basically create smart contract systems.
We first experimented with this system in 2020, when we called it SmartWeave. We borrowed the concept of lazy evaluation from computer science, popularized by the programming language Haskell. The concept had long been used in production environments, but it had not really been applied in the blockchain space. Usually in blockchain, smart contracts are executed at the moment messages are written. But a blockchain is really an append-only data structure with certain rules for including new information; the code does not have to execute at the same time the data is written. Since we had an arbitrarily scalable data log, this was a natural way for us to think, but it was rare at the time. The only other team doing it was the one now called Celestia (then called LazyLedger).
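A minimal sketch of the lazy-evaluation idea Sam describes (all names and types here are illustrative, not the actual SmartWeave interface): instead of executing a contract when each message is written, any node can derive the state on demand by folding a deterministic transition function over the ordered message log.

```typescript
// Illustrative sketch of lazy evaluation over an ordered message log.
// Names and types are hypothetical, not the real SmartWeave/AO API.

interface Message {
  from: string;
  action: "mint" | "transfer";
  to?: string;
  amount: number;
}

type State = { balances: Record<string, number> };

// Deterministic state transition: the same log always yields the same state.
function transition(state: State, msg: Message): State {
  const balances = { ...state.balances };
  if (msg.action === "mint") {
    balances[msg.from] = (balances[msg.from] ?? 0) + msg.amount;
  } else if (msg.action === "transfer" && msg.to) {
    if ((balances[msg.from] ?? 0) >= msg.amount) {
      balances[msg.from] = (balances[msg.from] ?? 0) - msg.amount;
      balances[msg.to] = (balances[msg.to] ?? 0) + msg.amount;
    } // insufficient balance: message is ignored, state unchanged
  }
  return { balances };
}

// Lazy evaluation: state is computed at read time from the permanent log,
// not at write time.
function evaluate(log: Message[]): State {
  return log.reduce(transition, { balances: {} });
}

const log: Message[] = [
  { from: "alice", action: "mint", amount: 100 },
  { from: "alice", action: "transfer", to: "bob", amount: 40 },
];
console.log(evaluate(log)); // { balances: { alice: 60, bob: 40 } }
```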
This led to a Cambrian explosion of computational systems on Arweave. There were about three or four major projects, some of which developed their own unique communities, feature sets, and security trade-offs. In the process, we discovered that we needed not only the data availability of the base layer to store these logs, but also a mechanism to delegate data availability guarantees. Specifically, you submit data to a packing node, or someone else acting on your behalf (now called a scheduler unit), which uploads the data to the Arweave network and gives you an economically incentivized guarantee that the data will be written. Once this mechanism is in place, you have a system that can scale computation horizontally. Essentially, you have a series of processes, which can be thought of like rollups on Ethereum, that share the same data set and can communicate with each other.
AO (Actor-Oriented) takes its name from a paradigm in computer science, and we built a system that combines all of these components: a native messaging system, data availability providers, and a decentralized computing network. The lazy-evaluation component becomes a distributed collection of nodes, and anyone can start a node to resolve the state of a contract. Put these together and what you get is a decentralized supercomputer. At its core is an arbitrarily scalable message log that records all the messages involved in the computation. I find this particularly interesting because you can run parallel computations without your process affecting the scalability or utilization of my process, which means you can do arbitrarily deep computation, such as running large-scale AI workloads inside the network. Right now there is a big push in our ecosystem to explore what happens when you introduce market intelligence into the base-layer smart contract system. You essentially have intelligent agents working for you, and they are trusted and verifiable, just like the underlying smart contracts.
Lulu: As we know, NEAR Protocol and Arweave are now driving the intersection of AI and cryptocurrency. I want to dig deeper, and since Sam has touched on some of the underlying concepts and architecture of AO, I'll start with AO and move to AI later. The concepts you described suggest agents running autonomously and coordinating, allowing AI agents or applications to work on top of AO. Can you elaborate on the parallel execution, or the independent autonomous agents, inside the AO infrastructure? Is the metaphor of building a decentralized Erlang accurate?
Sam: Before I start, I want to mention that during my PhD I built an operating system based on the Erlang model; we ran it on bare metal. What's exciting about Erlang is that it's a simple and expressive environment in which every piece of computation is expected to run in parallel, in contrast to the shared-state model that has become the norm in crypto.
The elegance of this is that it maps beautifully to the real world. Just as we're having this conversation right now, we're actually independent actors, doing computations in our own heads, then listening, thinking, and talking. Erlang's agent-oriented, or actor-oriented, architecture is really brilliant. The talk right after mine at the AO Summit was from one of the founders of Erlang, who talked about how they arrived at this architecture around 1989. They weren't even aware of the term "actor-oriented" at the time. But the concept was compelling enough that many people arrived at the same idea independently, because it simply made sense.
To me, if you want to build truly scalable systems, you have to have them pass messages rather than share state. When components share state, as happens in Ethereum, Solana, and pretty much every other blockchain, scaling breaks down. NEAR is actually an exception: NEAR has shards, so they don't share global state; they have local state.
When we built AO, the goal was to combine these concepts. We wanted processes that execute in parallel and can do arbitrarily large computations, while decoupling the interactions of those processes from their execution environment, ultimately forming a decentralized version of Erlang. For those less familiar with distributed systems, the easiest way to think about it is as a decentralized supercomputer. With AO, you can start a terminal inside the system. As a developer, the most natural way to use it is to start your own local process and talk to it just as you would a local command-line interface. As we move toward consumer adoption, people are building UIs and all the things you would expect. Fundamentally, it lets you run personal computations in this decentralized cloud of computing devices and interact with them using a unified message format. We designed this part with reference to the TCP/IP protocol that runs the Internet, trying to create something like a TCP/IP for computation itself.
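To make the "decentralized Erlang" framing concrete, here is a toy actor-model sketch: isolated processes that share no state and interact only through messages. This is a TypeScript stand-in for the model Sam describes, not AO's real runtime.

```typescript
// Minimal actor sketch: isolated processes, no shared state, message passing only.

type Mail = { from: string; body: string };

class Process {
  private mailbox: Mail[] = [];
  private state: string[] = []; // local state, never shared with other processes
  constructor(public id: string, private network: Map<string, Process>) {}

  send(to: string, body: string) {
    this.network.get(to)?.mailbox.push({ from: this.id, body });
  }

  // Each process drains its own mailbox independently; processes could run
  // on different machines with no coordination over state.
  step() {
    for (const mail of this.mailbox.splice(0)) {
      this.state.push(`${mail.from}: ${mail.body}`);
      console.log(`[${this.id}] received from ${mail.from}: ${mail.body}`);
    }
  }
}

const net = new Map<string, Process>();
const a = new Process("a", net);
const b = new Process("b", net);
net.set("a", a).set("b", b);

a.send("b", "hello");
b.send("a", "world");
[a, b].forEach((p) => p.step());
```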
AO's data protocol does not force the use of any specific virtual machine. You can use any VM you want: we have implemented WASM in 32- and 64-bit versions, and others in the ecosystem have implemented the EVM. If you have this shared messaging layer, which for us is Arweave, then all of these highly heterogeneous processes can interact in a shared environment, like an internet of computation. Once that infrastructure is in place, the natural next step is to explore what you can do with intelligent, verifiable, trusted computation. The obvious application is AI in smart contracts: agents making intelligent decisions in the market, potentially against each other, or on behalf of humans against other humans. Looking at the global financial system, about 83% of trades on Nasdaq are executed by bots. That's just how the world works.
In the past we couldn't put the smart part on-chain and make it trustworthy. But in the Arweave ecosystem there's a parallel workstream we call RAIL, the Responsible AI Ledger. It's essentially a way to record the inputs and outputs of different models and store those records in a public, transparent way, so that you can query and ask, "Hey, is this piece of data I'm seeing the output of an AI model?" If we can generalize this, we think it solves a fundamental problem we see today. For example, someone sends you a news article from a website you don't trust, with what appears to be a picture or video of a politician doing something stupid. Is it real? RAIL provides a ledger that many competing companies can use, transparently and neutrally, to store records of the outputs they generate, just as they use the internet, and at very low cost.
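A rough sketch of the RAIL idea (the interface and names here are illustrative assumptions, not RAIL's actual design): providers log a hash of each generated artifact to a permanent public ledger, and anyone can later check whether a piece of content matches a logged model output.

```typescript
// Sketch of a provenance ledger for AI-generated content.
// All structures are hypothetical; "stable-diffusion-xl" is an example value.
import { createHash } from "node:crypto";

interface RailRecord {
  contentHash: string; // hash of the generated image/text bytes
  model: string;
  timestamp: number;
}

const ledger: RailRecord[] = []; // stands in for a permanent public log

function logGeneration(content: Buffer, model: string) {
  ledger.push({
    contentHash: createHash("sha256").update(content).digest("hex"),
    model,
    timestamp: Date.now(),
  });
}

// "Is this piece of data I'm seeing the output of an AI model?"
function provenance(content: Buffer): RailRecord | undefined {
  const h = createHash("sha256").update(content).digest("hex");
  return ledger.find((r) => r.contentHash === h);
}

const img = Buffer.from("...generated image bytes...");
logGeneration(img, "stable-diffusion-xl");
console.log(provenance(img)?.model); // "stable-diffusion-xl"
```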
Lulu: I'm curious about Illia's thoughts on AO's approach, and on model scalability. You worked on the Transformer model, which was designed to solve the bottleneck of sequential processing. What is NEAR's approach to scalability? In a previous AMA you mentioned a direction you're working on where multiple small models form a system, which may be one solution.
Illia: Scalability applies in many different ways in blockchain, and we can follow on from Sam's topic. What we see now is that a single large language model (LLM) has limits to its reasoning. You need to prompt it in a specific way to keep it running productively. Over time, models will keep improving and become more general. But either way, you're training these models, which you can think of as raw intelligence, to perform specific functions and tasks and to reason better in specific contexts.
If you want them to do more general work and processes, you need multiple models running in different contexts, performing different aspects of the task. Take a concrete example: we're developing an end-to-end flow right now. You can say, "Hey, I want to build this application," and the final output is a fully built application with correct, formally verified smart contracts and a well-tested user experience. In real life there's usually no single person who builds all of this, and the same idea applies here: you actually want AI to play different roles at different times, right?
First, you need an AI agent who takes the role of a product manager to actually collect requirements and figure out what exactly you want, what are the trade-offs, what are the user stories and experiences. Then there might be an AI designer who is responsible for translating these designs into the front end. Then there might be an architect who is responsible for the architecture of the back end and the middleware. Then there are AI developers who write code and make sure the smart contracts and all the front end work is formally verified. Finally, there might be an AI tester who makes sure everything is working properly, testing it through a browser. So you have a group of AI agents that might use the same model but are fine-tuned for specific functions. They each play a role in the process independently, using prompts, structure, tools, and the observed environment to interact and build a complete flow.
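As a toy illustration of the pipeline Illia describes, the sketch below runs the same base model through role-specific prompts, each stage handing its output to the next. `callModel` is a hypothetical stub standing in for a real, possibly fine-tuned, LLM call.

```typescript
// Role-based multi-agent pipeline sketch; all names are illustrative.

type Role = "product-manager" | "designer" | "architect" | "developer" | "tester";

async function callModel(role: Role, systemPrompt: string, input: string): Promise<string> {
  // Stand-in for a real LLM call, prompted or fine-tuned per role.
  return `[${role}] processed: ${input.slice(0, 40)}...`;
}

const pipeline: { role: Role; prompt: string }[] = [
  { role: "product-manager", prompt: "Collect requirements and trade-offs." },
  { role: "designer", prompt: "Turn requirements into a front-end design." },
  { role: "architect", prompt: "Design the back end and middleware." },
  { role: "developer", prompt: "Write code; formally verify the contracts." },
  { role: "tester", prompt: "Exercise the app end to end in a browser." },
];

async function buildApp(request: string): Promise<string> {
  let artifact = request;
  for (const stage of pipeline) {
    // Each agent observes the prior stage's output and plays its own role.
    artifact = await callModel(stage.role, stage.prompt, artifact);
  }
  return artifact; // in this sketch, a string; in reality, the built application
}

buildApp("I want a token-swap app with a simple UI").then(console.log);
```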
This is what Sam was talking about: many different agents doing their work asynchronously, observing the environment and figuring out what to do. So you do need a framework, and systems to continuously improve them. From a user's perspective, you send a request and interact with different agents, but they act like a single system doing the work. Under the hood, they might actually pay each other to exchange information, or agents belonging to different owners interact to get something done. This is a new version of APIs: smarter and driven by natural language. All of this requires a lot of framework structure and payment and settlement systems.
There is a new way of describing this, AI business: all these agents interacting with each other to get things done. This is the system we are all moving toward. If you think about the scalability of such a system, several problems need solving. As I mentioned, NEAR is designed to support billions of users, including humans, AI agents, and even cats, as long as they can transact. Every NEAR account or smart contract runs in parallel, allowing for continued scaling and transacting. At a lower level, you probably don't want to send an on-chain transaction every time you call an AI agent or an API; no matter how cheap NEAR is, it doesn't make sense. So we are developing a peer-to-peer protocol that lets agent nodes and clients (human or AI) connect to each other and pay for API calls, data fetching, and so on, with cryptoeconomic rules guaranteeing that they respond or lose part of their collateral.
This is a new system that allows scaling beyond NEAR itself, providing micropayments denominated in yoctoNEAR, which is 10^-24 NEAR. This way you can exchange messages at the network level with payments attached, so that all operations and interactions can be settled through this payment system. It addresses a fundamental gap in blockchain: we have never had a payment system with the bandwidth and latency of the network itself, so there are a lot of free riders. This is a very interesting aspect of scalability, not limited to blockchain scalability, but applicable to a future world of billions of agents, where even on your own device multiple agents may be running at once, performing tasks in the background.
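A minimal sketch of the per-call micropayment idea, under stated assumptions: amounts are in yoctoNEAR (1 NEAR = 10^24 yoctoNEAR, hence bigint), many off-chain calls settle in one on-chain step, and a node that fails to respond forfeits part of its collateral. The channel and slashing logic here are hypothetical, not NEAR's actual protocol.

```typescript
// Peer-to-peer metered payments with collateral; all structures illustrative.

const YOCTO_PER_NEAR = 10n ** 24n;

interface AgentNode {
  id: string;
  collateral: bigint;   // staked; slashed if the node fails to respond
  pricePerCall: bigint; // in yoctoNEAR
}

function respond(node: AgentNode, request: string): string | undefined {
  return `response from ${node.id} for "${request}"`;
}

class PaymentChannel {
  private spent = 0n;
  constructor(private client: string, private node: AgentNode, private deposit: bigint) {}

  // Attach a payment to each request instead of an on-chain transaction per call.
  call(request: string): string | undefined {
    if (this.spent + this.node.pricePerCall > this.deposit) return undefined;
    this.spent += this.node.pricePerCall;
    const response = respond(this.node, request);
    if (response === undefined) {
      // Cryptoeconomic guarantee: no response forfeits part of the collateral.
      this.node.collateral -= this.node.pricePerCall * 10n;
    }
    return response;
  }

  settle(): bigint {
    return this.spent; // single on-chain settlement for many off-chain calls
  }
}

const node: AgentNode = {
  id: "inference-1",
  collateral: 5n * YOCTO_PER_NEAR,
  pricePerCall: 10n ** 18n,
};
const channel = new PaymentChannel("alice", node, YOCTO_PER_NEAR);
console.log(channel.call("summarize this article"));
console.log(`settled: ${channel.settle()} yoctoNEAR`);
```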
Lulu: This use case is very interesting. I believe that for AI payments, there is generally a need for high-frequency payments and complex strategies, which have not been realized due to performance limitations. So I am looking forward to seeing how these needs can be realized based on better scalability options. In our hackathon, Sam and the team mentioned that AO is also exploring the use of new AI infrastructure to support DeFi use cases. Sam, can you elaborate on how your infrastructure can be applied in new DeFi scenarios?
Sam: We call it agent finance. We see two sides to a market. DeFi did a very good job in the first phase of decentralizing various economic primitives and bringing them on-chain, letting users use them without trusting any intermediary. But when we think about markets, we think about numbers moving up and down, and about the intelligence that drives those decisions. When you can bring that intelligence itself on-chain, you get a trustless financial instrument, like a fund.
A simple example: let's say we want to build a meme-coin trading hedge fund. Our strategy is to buy Trump coins when we see Trump mentioned and Biden coins when we see Biden mentioned. In AO, you can use an oracle service like 0rbit to fetch the entire content of a web page, such as the Wall Street Journal or the New York Times, feed it into your agent, and have it process the data and count how many times Trump is mentioned. You can also run sentiment analysis to understand market trends. Your agent then buys and sells these assets based on that information.
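A toy version of that strategy, with `fetchPage`, `buy`, and `sell` as hypothetical stubs rather than the real 0rbit or AO interfaces; on AO, a step like this could run trustlessly inside a process instead of on a server someone must trust.

```typescript
// Sketch of the mention-counting trading agent; all names illustrative.

async function fetchPage(url: string): Promise<string> {
  return "Trump Trump Biden ..."; // an oracle (e.g. 0rbit) would return real content
}

function mentions(text: string, word: string): number {
  return (text.match(new RegExp(word, "gi")) ?? []).length;
}

async function buy(token: string, amount: number) {
  console.log(`buy ${amount} ${token}`); // stand-in for an on-chain swap
}

// One evaluation step of the agent.
async function step() {
  const text = await fetchPage("https://www.wsj.com");
  const trump = mentions(text, "Trump");
  const biden = mentions(text, "Biden");
  if (trump > biden) await buy("TRUMP-MEME", trump - biden);
  else if (biden > trump) await buy("BIDEN-MEME", biden - trump);
}

step();
```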
What's interesting is that we can make the agent execution itself trustless. So you have a hedge fund that can execute strategies, and you can put money into it without having to trust a fund manager. This is a side of finance the DeFi world hasn't really touched: making intelligent decisions and then acting on them. If those decision processes can be made trustworthy, you can unify the whole system into something that looks like a truly decentralized economy, not just a settlement layer for primitives in different economic games.
We think this is a huge opportunity, and some people in the ecosystem are already building these components. One team has created a trustless portfolio manager that buys and sells assets in the proportions you want. For example, you want 50% in Arweave tokens and 50% in stablecoins; when prices change, it automatically executes the trades. There is an interesting concept behind this: a feature in AO we call cron messages. Processes can wake up on their own and decide to act autonomously in their environment. You can set your hedge-fund smart contract to wake every five seconds or five minutes, fetch data from the network, process it, and take actions. This makes it fully autonomous: it can interact with its environment and, in a sense, it is "alive".
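A sketch of that cron-driven rebalancer: every tick the process wakes, reads a price, and rebalances toward a 50/50 target. On AO the wake-up would be a cron message delivered to the process; here `setInterval` stands in for it, and `getPrice` is a hypothetical oracle stub.

```typescript
// Self-waking 50/50 portfolio rebalancer sketch; all names illustrative.

interface Portfolio { AR: number; USDC: number } // units held of each asset

async function getPrice(asset: "AR"): Promise<number> {
  return 30 + Math.random() * 2; // stub for an oracle read
}

async function rebalance(p: Portfolio, target = 0.5) {
  const arPrice = await getPrice("AR");
  const total = p.AR * arPrice + p.USDC;
  const desiredArValue = total * target;
  const deltaAr = desiredArValue / arPrice - p.AR; // >0 buy AR, <0 sell AR
  p.AR += deltaAr;
  p.USDC -= deltaAr * arPrice;
  console.log(`rebalanced: AR=${p.AR.toFixed(4)}, USDC=${p.USDC.toFixed(2)}`);
}

const portfolio: Portfolio = { AR: 10, USDC: 500 };
// "Wake up every five seconds": stand-in for AO's cron messages.
const timer = setInterval(() => rebalance(portfolio), 5000);
// clearInterval(timer) would stop the agent.
```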
Smart contracts on Ethereum require external triggers, and people have built a lot of infrastructure to work around this, but it is not smooth. In AO, these functions are built in. So you will see a market of agents constantly competing with each other on-chain. This will drive an increase in network usage in a way the crypto space has never seen.
Lulu: NEAR.ai is advancing some promising use cases. Can you tell us more about the other layers, the overall strategy, and some areas of focus?
Illia: There is indeed a lot going on at every level, with various products and projects that can be integrated. It all starts with the NEAR blockchain itself. Many projects need a scalable blockchain, some form of identity, payments, and coordination. NEAR smart contracts are written in Rust or JavaScript, which is convenient for many use cases. One interesting thing: NEAR's recent protocol upgrade introduced what are called yield/resume precompiles. They allow a smart contract to pause execution, wait for an external event to happen, whether another smart contract or an AI inference, and then resume execution. This is very useful for smart contracts that need input from LLMs (such as ChatGPT) or from verifiable inference.
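To illustrate the control flow only, here is a conceptual yield/resume sketch: a contract call pauses until an external result arrives, then continues. The real feature is a NEAR protocol-level mechanism; this TypeScript mimic and its names (`askModel`, `resumeWithResult`) are assumptions, not the actual API.

```typescript
// Conceptual yield/resume: pause a "contract method" until an external
// event (e.g. an AI inference result) arrives, then resume.

type Resume = (result: string) => void;
const pending = new Map<string, Resume>();

// "Yield": execution stops here until some off-chain worker responds.
function askModel(requestId: string, prompt: string): Promise<string> {
  console.log(`yielded, waiting for inference on: "${prompt}"`);
  return new Promise<string>((resolve) => pending.set(requestId, resolve));
}

// "Resume": called later, e.g. by an AI inference node, to continue execution.
function resumeWithResult(requestId: string, result: string) {
  pending.get(requestId)?.(result);
  pending.delete(requestId);
}

async function contractMethod() {
  const answer = await askModel("req-1", "Is this trade within policy?");
  console.log(`resumed with: ${answer}`); // contract logic continues here
}

contractMethod();
resumeWithResult("req-1", "yes"); // the external event arrives, contract resumes
```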
We also launched chain abstraction and chain signature capabilities, among the unique features NEAR has introduced in the past six months. Any NEAR account can transact on other chains. This is very useful for building agents, AI inference, or other infrastructure, because you can now transact cross-chain through NEAR without worrying about transaction fees, tokens, RPC, and the rest of the infrastructure; the chain signature infrastructure handles it for you. Regular users can use this too. There is HOT Wallet, built on NEAR inside Telegram, which just launched its Base integration on mainnet; about 140,000 users are using Base through that Telegram wallet.
Going further, we intend to develop a peer-to-peer network that lets agents, AI inference nodes, storage nodes, and so on participate in a more provable communication protocol. This matters because the current network stack is very limited and has no native payment functionality. Although we often call blockchain "internet money", we have not actually solved the problem of attaching money to packets at the network level. We are solving that, which is useful for all AI use cases and for broader Web3 applications.
In addition, we are developing what we call the AI inference router, essentially a place where all use cases, middleware, decentralized inference, and on-chain and off-chain data providers can plug in. The router can serve as a framework to interconnect all the projects being built in the NEAR ecosystem and make them available to the NEAR user base. NEAR has over 15 million monthly active users across different models and applications.
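A minimal sketch of the router concept, assuming a pluggable-provider design; the `Provider` interface and provider names are illustrative, not NEAR's actual router.

```typescript
// Inference router sketch: one entry point dispatching to pluggable providers.

interface Provider {
  name: string;
  canHandle(task: string): boolean;
  run(input: string): Promise<string>;
}

const providers: Provider[] = [
  {
    name: "decentralized-inference",
    canHandle: (t) => t === "chat",
    run: async (input) => `LLM answer to: ${input}`,
  },
  {
    name: "onchain-data",
    canHandle: (t) => t === "query",
    run: async (input) => `chain data for: ${input}`,
  },
];

// Applications call the router; the router picks a provider for the task.
async function route(task: string, input: string): Promise<string> {
  const provider = providers.find((p) => p.canHandle(task));
  if (!provider) throw new Error(`no provider for task: ${task}`);
  return provider.run(input);
}

route("chat", "summarize today's governance proposals").then(console.log);
```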
Some applications are exploring deploying models on user devices, so-called edge computing, storing data locally and operating through the relevant protocols and SDKs. This has great potential from a privacy perspective. In the future, many applications will run on user devices, generating or precompiling user experiences and using only local models to avoid data leakage. On the developer side, we have a lot of research underway to make it easy for anyone to build and publish applications on Web3 with formal verification on the back end. This will become an important topic as LLMs become increasingly capable of discovering codebase vulnerabilities.
In short, it's a complete technology stack: the underlying blockchain infrastructure; Web3 chain abstraction; peer-to-peer connections, which are well suited to linking off-chain and on-chain participants; then the AI inference routing hub and local data storage, particularly suited to cases where private data must be used without being leaked; and finally developer tooling that integrates all of this research, with the goal of letting future applications be built by AI. In the medium to long term, this will be a very important direction.
Lulu: I would like to ask Sam, what are AO's current priorities and research focuses?
Sam: One of the ideas I am particularly interested in is using AO's extension mechanism to build a deterministic CUDA subset, an abstract GPU driver. Normally, GPU computation is not deterministic, so it cannot be used for computation as safely as elsewhere on AO; without determinism, no one can trust these processes. If we can solve this, it is theoretically possible; we only need to handle the non-determinism at the device level. There is interesting research going on here, but it must be handled in a way that is always 100% deterministic, which is critical for smart contract execution. We already have a plugin system that supports this as a driver inside AO. The framework is there; we just need to figure out exactly how to implement it. There are a lot of technical details, but it basically comes down to making workload execution in a GPU environment predictable enough for this kind of computation.
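To show why GPU determinism is hard, here is a toy illustration, not AO's planned CUDA subset: floating-point addition is not associative, so a reduction whose thread ordering varies between runs can produce different results for the same input, while a fixed reduction tree stays reproducible.

```typescript
// Floating-point non-associativity: the root cause of non-deterministic
// GPU reductions. A fixed tree shape (or fixed-point math) restores
// reproducibility. Toy illustration only.

function sumSequential(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0);
}

// Fixed pairwise reduction tree: the same shape every run, on every node.
function sumPairwise(xs: number[]): number {
  if (xs.length === 1) return xs[0];
  const next: number[] = [];
  for (let i = 0; i < xs.length; i += 2) {
    next.push(i + 1 < xs.length ? xs[i] + xs[i + 1] : xs[i]);
  }
  return sumPairwise(next);
}

const xs = [1e16, 1, -1e16, 1]; // order-sensitive under IEEE 754 doubles
console.log(sumSequential(xs));                // 1 with this ordering
console.log(sumSequential([...xs].reverse())); // 0: same numbers, different order
console.log(sumPairwise(xs));                  // fixed tree: same result every run
```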
Another thing I'm interested in is whether we can use this on-chain AI capability to do decentralized, or at least open and distributed, model training, especially fine-tuning. The basic idea is that if you can set a clear criterion for a task, you can train models against that criterion. Can we create a system where people stake tokens to incentivize miners to compete to build better models? This may not attract a very diverse group of miners, but that doesn't matter, because it allows model training to happen in the open. Then, when miners upload their models, they can attach a Universal Data License tag stipulating that anyone can use the model, but commercial use owes a specific royalty, distributed to contributors via tokens. By combining all of these elements, you can create an incentive mechanism for training open-source models.
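A sketch of that incentive scheme under stated assumptions (a single winner, a fixed scoring criterion, royalty shares tracked per miner); no such AO contract is specified, so every name here is hypothetical.

```typescript
// Staked training bounty with license royalties; illustrative only.

interface Submission { miner: string; score: number } // score vs. the fixed criterion

class TrainingBounty {
  private pot = 0;
  private submissions: Submission[] = [];
  private royaltyShares = new Map<string, number>();

  stake(amount: number) { this.pot += amount; }

  submit(miner: string, score: number) { this.submissions.push({ miner, score }); }

  // The best-scoring model takes the staked pot and earns a royalty share.
  settle(): string | undefined {
    const best = [...this.submissions].sort((a, b) => b.score - a.score)[0];
    if (!best) return undefined;
    this.royaltyShares.set(best.miner, 1.0); // single winner in this toy version
    console.log(`${best.miner} wins ${this.pot} tokens (score ${best.score})`);
    this.pot = 0;
    return best.miner;
  }

  // Commercial use under the data license triggers a royalty payment.
  payRoyalty(amount: number) {
    for (const [miner, share] of this.royaltyShares) {
      console.log(`royalty: ${amount * share} tokens to ${miner}`);
    }
  }
}

const bounty = new TrainingBounty();
bounty.stake(1000);
bounty.submit("miner-a", 0.91);
bounty.submit("miner-b", 0.87);
bounty.settle();       // miner-a wins 1000 tokens (score 0.91)
bounty.payRoyalty(50); // royalty: 50 tokens to miner-a
```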
I also think the RAIL initiative mentioned earlier is very important. We have discussed supporting it with some major AI providers and inference providers, and they have shown a lot of interest. If we can get them to actually implement it and write this data to the network, then users could right-click any image on the Internet and query whether it was generated with Stable Diffusion or DALL·E. These are all very interesting areas we are currently exploring.
Lulu: Please each nominate a recent favorite AI or crypto project; it can be any project.
Illia: I'll be a little tricky here. We hold AI Office Hours every week and invite a few projects; recently we had Masa and Compute Labs. Both are very good, and I'll use Compute Labs as an example. Compute Labs basically turns actual computing resources (such as GPUs and other hardware) into real assets people can participate in economically, letting users earn income from these devices. Compute marketplaces in the crypto space are booming right now, and they seem like a natural market for crypto to facilitate. But the problem is that these marketplaces lack moats and network effects, leading to fierce competition and margin compression; a compute marketplace ends up being just a complement to other business models. Compute Labs instead offers a very crypto-native business model: capital formation and asset tokenization. It opens up participation in something that usually requires building data centers. The compute marketplace is just one part of it; the main purpose is to provide access to computing resources. The model also fits the broader decentralized AI ecosystem, giving a wider group of investors a way to participate in innovation by supplying the underlying compute.
Sam: There are a lot of great projects in the AO ecosystem, and I don't want to play favorites, but I think the underlying infrastructure Autonomous Finance is building is what makes "agent finance" possible. That's very cool, and they are really at the forefront of it. I also want to thank the broader open-source AI community, especially Meta for open-sourcing the Llama models, which pushed many others to open-source theirs. Without this trend, after OpenAI became "ClosedAI" following GPT-2, we might have been left in the dark, especially in the crypto space, because we would not have been able to access these models; people could only rent closed-source models from one or two major providers. It is great that this is not what happened. Ironic as it is, I have to give a thumbs up to Meta, the king of Web2.