Author: AYLO; Translator: Tao Zhu, Golden Finance
Last week, Aethir announced that their ARR (Annual Recurring Revenue) was $36 million, which would put them among the top 20 crypto protocols by revenue this year.
Now did I get your attention? Good, this article is worth reading.
As AI rapidly advances toward AGI, the demand for computing resources has skyrocketed, leading to a growing gap between those who can use powerful GPU chips and those who cannot. Aethir is an innovative decentralized physical infrastructure network (DePIN) that aims to democratize access to cloud computing resources.
Founded with the vision of making on-demand computing more accessible and affordable, Aethir has built a distributed network that aggregates enterprise-grade GPU chips from a variety of sources. The network is designed to support the growing demands of AI, cloud gaming, and other compute-intensive applications.
In this interview, I sat down with Aethir co-founder Mark Rydon to discuss Aethir's unique approach.
Given the recent rise in competition in the decentralized GPU space, how is Aethir different from other players in the space?
This is a very good question. I will answer it in two parts. First, I will explain the problem we are trying to solve because it is key to understanding. As I am sure you and your audience know, there is a global problem of compute scarcity. Tech giants are competing for critical GPU resources. It is a massive race to create smarter and smarter AI until we reach AGI or ASI, and then everything in the world will change.
What's interesting about this race is that it's driven by a simple principle: if you add more GPUs and data to the ecosystem, the AI gets smarter. Plot compute against intelligence and you get a line climbing from the bottom left to the top right. The type of GPU needed for this is critical. You can't do it on a consumer GPU or a low-power graphics card. All the big companies in the AI race, in both training and applications, use enterprise-grade GPUs. The specific model they've been chasing for the last one and a half to two years is Nvidia's H100.
The key point is that enterprises are the ones with the huge demand for AI computing. They're building their businesses on this computing infrastructure. So we have to ask what kind of compute they want, what type of GPUs, and what quality, performance, and uptime requirements they must meet for their internal metrics. It's just like how Netflix needs servers with high uptime guarantees to avoid service interruptions. The same applies to GPUs and any compute provider: they must meet strict service, quality, uptime, and performance requirements.
Unfortunately, most compute networks in the Web3 space aggregate consumer GPUs. This is the easiest way to build a network - offer tokens to a community of people contributing their idle gaming GPUs. This will quickly attract a lot of GPUs and build a strong community that is excited about the token rewards. This is why many of the compute networks that exist today aggregate consumer GPUs.
The challenge is that to have a real business, you need to sell aggregated compute. However, consumer GPU networks quickly hit a low ceiling because 99.9% of companies don't want to buy compute from consumer decentralized networks. They can't guarantee that the GPU won't be turned off at night, or that the bandwidth won't be limited by home activities like streaming Netflix. This leads to a considerable disconnect between enterprise needs and what consumer GPU networks can offer.
Source: Layer.gg
From day one, we decided not to aggregate any consumer-grade GPUs. Every GPU connected to Aethir is enterprise-grade, integrated through enterprise network infrastructure, and located in data centers suitable for enterprise workloads. The largest AI companies, telcos, and tech companies can use our network to do whatever they need without sacrificing performance or quality. In fact, they get higher performance and a better overall experience.
IO.net, for example, needs to overcome a lot of FUD about its massive network of consumer-grade GPUs. When they want to prove that their network can handle real business, they rent enterprise GPUs from Aethir. So all enterprise GPUs on IO.net are provided by Aethir. This is public knowledge within the ecosystem.
The critical point is that Aethir has been committed to serving enterprise customers from day one.
One more thing: when I explain what we do, this used to confuse people in the AI space. By definition, a distributed GPU network means we don't own any GPUs ourselves. There are about 43,000 GPUs in our data centers, none of which we own. Of those, more than 3,000 are H100s, which is by far the largest collection of H100s in Web3, almost 10x more than our closest competitor. That's why so many large AI companies use our infrastructure: we can actually serve them.
One thing that some AI companies are confused about is the importance of what we call "co-located machines". If you are doing large-scale training, like OpenAI or similar projects, and you need 500 or more H100s, then those GPUs must be in the same data center. You can't have one H100 in Japan, one in the United States, and 200 in India. AI can't be trained efficiently on hardware scattered like that. This is a big technical challenge that other DePIN companies have been working on, and it's still an unsolved problem. I think it's a huge opportunity, but it's very complex.
Since Aethir has been focused on the enterprise from the beginning, we understand that serving enterprise customers means more than just having a bunch of disconnected enterprise-grade GPUs. We also need to think about the colocation of machines in our network. So Aethir has a number of large colocated high-performance GPU clusters. What that means is that we're not just a distributed network with enterprise-class GPUs around the world; our network has large colocated clusters that enable us to handle those large AI jobs for companies that need colocated machines.
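The colocation constraint described above can be made concrete with a toy scheduler: a large training job must land in a single data center, so the scheduler filters clusters by per-site free capacity rather than summing capacity across sites. This is only an illustrative sketch, not Aethir's actual scheduler; the cluster names and GPU counts are invented.

```python
# Hypothetical inventory: free H100s per colocated cluster (one data center each).
CLUSTERS = {
    "us-east-dc1": 800,
    "tokyo-dc2": 300,
    "mumbai-dc3": 200,
}

def place_training_job(required_gpus, clusters=CLUSTERS):
    """Large-scale training needs all GPUs in one data center: pick a single
    cluster with enough free capacity, never stitch GPUs across sites."""
    for name, free in clusters.items():
        if free >= required_gpus:
            return name
    return None  # no single site can host the job, even if the network total could

print(place_training_job(500))   # fits in one site: us-east-dc1
print(place_training_job(1000))  # None: network holds 1300 GPUs, but no site has 1000
```

The second call is the whole point: summing capacity across sites (1300 GPUs network-wide) would accept the job, but a colocation-aware scheduler correctly rejects it.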
What inspired the creation of Aethir?
So actually, it was cloud gaming networks that got us excited about distributed GPU computing. The team met when I was living in Beijing for about seven years. I moved there to start my first company and eventually started working on scaling cloud gaming networks.
Long story short, we had an idea that we could solve the performance and scalability challenges of cloud gaming networks by distributing the hardware in a decentralized way. At a high level, the premise is that latency is the killer of these networks. The farther the user is from the compute, the worse the user experience is due to increased latency. The idea is that if you remove the incentive to centralize compute, you can have a more distributed network. As the network gets larger and more decentralized, the likelihood of users being closer to the compute increases, which reduces latency and improves performance.
Centralized solutions focus on bringing all resources to one location to achieve economies of scale, but this does not add value from a user perspective. It actually limits network performance. If you were to build a network that optimizes for user experience, you would distribute compute everywhere so that the user is always close to it. We thought that if we could solve the distribution and unit economics challenges, we could solve the problems that prevent services like Google Stadia from being deployed wherever they are needed.
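The latency argument above can be sketched in a few lines: a scheduler that routes each user to the geographically nearest node will, as more nodes join a decentralized network, shrink the expected user-to-compute distance and thus latency. This is a hypothetical illustration under simple assumptions (distance as a proxy for latency; invented node locations), not a description of Aethir's real routing.

```python
import math

# Hypothetical compute nodes: (name, latitude, longitude).
# A denser, more decentralized network means a nearer node for any given user.
NODES = [
    ("tokyo", 35.68, 139.69),
    ("frankfurt", 50.11, 8.68),
    ("virginia", 38.95, -77.45),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km, used here as a rough proxy for latency."""
    to_rad = math.radians
    dlat = to_rad(lat2 - lat1)
    dlon = to_rad(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(to_rad(lat1)) * math.cos(to_rad(lat2)) * math.sin(dlon / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def nearest_node(user_lat, user_lon, nodes=NODES):
    """Route the user to the closest node, minimizing distance-driven latency."""
    return min(nodes, key=lambda n: haversine_km(user_lat, user_lon, n[1], n[2]))

# A user in Singapore gets served from Tokyo, not Frankfurt or Virginia.
print(nearest_node(1.35, 103.82)[0])  # tokyo
```

Every node added to such a network can only reduce (never increase) the distance to the nearest node, which is the structural advantage over concentrating hardware in one region.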
That’s where we started, and we quickly realized our relevance to the AI space and started building products there.
Another positive piece of news worth noting is that the global gaming population is about 3.3 billion. The majority of them (around 2.8 billion) play on low-end devices, which means they can't play mainstream AAA games, and likewise, AAA developers can't reach these players.
The most viable way to unlock this market is to use cloud gaming to remove the hardware requirement from the user: take the technology we already have and make it cost-effective to scale. Now you have a technology that can reach those 2.8 billion players, no matter where they are. This is exactly what building the network in a decentralized way makes possible.
Current Aethir Gaming Metrics
We are bringing hardware-decoupled gaming experiences to billions of gamers around the world in a way that was previously impossible. This is why I am so bullish on the gaming space; it was our original vision.
Do you think there will be a significant increase in demand for decentralized computing? Or do you think we are already there but it will take more time for customers to adopt the technology?
I think, to be honest, whether we're talking about decentralized cloud solutions in general or just Aethir, it's mostly about education. You don't need to know that Aethir is a decentralized cloud provider to work with us. We do a great deal of business with Web2 companies: 90% of our customers are Web2 companies, and they're very happy with the service we provide.
Looking at the broader AI ecosystem, where is the inflection point? There are some crazy statistics.
A few weeks ago, I read a research paper that said that based on the predicted growth in computing demand, there will not be enough electricity on the planet to meet the computing needs of AI by 2030. This is crazy.
These macro numbers show a lot of capital being deployed into the ecosystem. They are almost too big to comprehend. But if you zoom in a little, there are two types of computing demand: training and inference. Training is the process of making the AI smarter, such as upgrading from GPT-4 to GPT-5. Inference is the AI doing its job, such as answering questions.
People like you and me mostly use ChatGPT or other large models, right? For example, ChatGPT through the Microsoft ecosystem or through Google's Gemini. Most of our interactions are with general-purpose large language models from a very small number of companies. But if we look ahead a year, given the exponential growth of the industry, my guess is that you will interact with AI in more places than you do today.
Soon we will interact with AI in more meaningful, more agentic ways. AI will do more things for us, like booking flights, providing assistance, and handling customer service calls. It will be much more than it is today.
If you look upstream at compute, unless a company is just using the ChatGPT API to create an application, they are most likely building their own AI product, which means they have their own compute infrastructure needs. So as the inference space grows, it will become more fragmented. Right now, most of the infrastructure used for training comes from a small number of large companies. While some companies are developing new competitors to these large language models, the explosion of AI applications on the inference side will lead to a more fragmented compute market.
This means we will see a lot of demand in that segment for competitive pricing and for contracts that are friendly to startups and small businesses. This is probably the most imminent turning point I see.
Another product in the Aethir ecosystem is the A-phone. Can you tell us more about this product and who its target audience is?
The A-Phone is built and scaled directly on our infrastructure. It uses our cloud gaming technology to stream real-time rendering to the device in a low latency manner. This is very cool because it's all about access. For example, you can have a $150 smartphone, download the Aethir app, and then open it up to access the equivalent of a $1,500 device. All of the hardware limitations of your local device are gone because you have the cloud power that powers the app.
You can open virtually unlimited apps on your Aethir phone without draining your battery. All of the computing, processing, and storage is done in the cloud, essentially giving you a super phone that you can call up at any time to run any app you want.
Whether it's a game or an educational platform with video conferencing, it's really cool. It removes the hardware barrier for people to access content, tools, or utilities, especially for mobile users, who make up the vast majority of Internet users.
What do you think has been the key to Aethir's success to date: your technology solutions, or your business development efforts?
I think there are two areas that we're focused on as a company. The first is the enterprise element that I mentioned earlier. That meant making some very hard decisions early on. As I said, it was much easier to aggregate consumer-grade GPUs. Aggregating these enterprise-grade GPUs, which we already knew were hard to find and access, was much harder. We took a harder path early on, which put us at risk in the early days of operations. But because of that, we did the hard work and are in a better position now. Not many companies have the resolve to do something so risky early on, and that means a lot to us.
Second, we have always been very focused on real business - real utilization, real contracts, real revenue. That focus has been very important to us from the beginning. That's why we chose the enterprise path. We want to fully leverage Web3 technology and provide industry-changing solutions, not just Web3 solutions, but best-in-class industry solutions in AI and gaming.
Our business development team played a crucial role in convincing partners to join our ecosystem, especially in the early days. On the technical side, we made the process of connecting compute resources seamless. Currently, more supply is looking to enter our ecosystem than we can accommodate. In the future, our goal is to be a truly permissionless, fully decentralized ecosystem, and we will get there. But at the beginning, we had to be pragmatic. It is not a good business move to open the floodgates of compute and have a large number of GPUs draining your token rewards.
We see ourselves as a supply-led organization. We always try to have more supply than demand. We don't want to turn down demand, but we also don't want a huge gap between supply and demand. We want to grow supply and demand sensibly and steadily. We're not going to throw in unlimited GPUs just to brag about our numbers; that's not the right approach.
We have some big announcements planned in the coming weeks that show our commitment to transparency. This will be really interesting for people to see and shows that Aethir is a company that people want to be a part of.
Can you tell us a little bit about the Aethir token? How does it fit into the ecosystem and how does it generate value?
This is actually a topic for a much bigger announcement that you will see very soon. I can't say more on it right now, but what I can say is that it has been difficult for a lot of projects to work with large Web2 entities in the past because of the need to deal with tokens.
This is an ongoing challenge in the space, and we have a very exciting and novel solution. I think people will be very optimistic when they see it, and it will allow us to drive a lot of volume to the token.
Our largest customers are Web2 customers and I don't think that will change. We need to make sure we engage in that business and allow that value to accrue to the Aethir token and the ecosystem that it supports. That's our commitment and I think you're going to see some very interesting stuff next week about how we achieve that.