Author: Paul Timofeev | Source: Shoal Research | Translation: Shan Ouba, Golden Finance
An exploration of the role decentralized compute infrastructure plays in supporting the decentralized GPU market, with a comprehensive analysis and complementary case studies.
With the rise of machine learning, and especially of generative artificial intelligence, which involves many compute-intensive workloads, computing resources have become increasingly sought after. However, because large companies and governments hoard these resources, startups and independent developers now face a GPU shortage in the market, leaving them with excessive costs or a lack of access.
Compute DePINs enable a decentralized market for computing resources by allowing people around the world to offer idle computing resources, such as GPUs, in exchange for monetary rewards. This is intended to help underserved GPU consumers access new supply streams to obtain the development resources they need for their workloads at a lower cost and overhead.
Today, Compute DePINs still face many economic and technical challenges in competing with traditional centralized service providers, some of which will resolve themselves over time, while others will require new solutions and optimizations in the future.
Since the Industrial Revolution, technology has propelled humanity forward at an unprecedented pace, and nearly every aspect of daily life has been impacted or completely transformed. The computer emerged as the culmination of the collective efforts of researchers, academics, and engineers. Originally designed to solve large arithmetic tasks in support of advanced military operations, computers have evolved into the backbone of modern life. As their impact on humanity continues to grow, so does the demand for these machines and the resources they require, outstripping available supply. This, in turn, creates a market dynamic in which most developers and businesses cannot access critical resources, leaving the development of machine learning and generative AI, today's most transformative technologies, in the hands of a few well-funded players. At the same time, the vast pool of idle computing resources presents a lucrative opportunity to alleviate the imbalance between computing supply and demand, heightening the need for adequate coordination mechanisms between participants on both sides of a transaction. As such, we believe decentralized systems powered by blockchain technology and digital assets are essential for developing broader, more democratic, and more accountable generative AI products and services.
Computing can be defined as any activity, application, or workload in which a computer produces a well-defined output from a given input. Ultimately, it refers to a computer's computational and processing power, which is fundamental to the core utility of these machines in the modern world; computers alone generated roughly $1.1 trillion in revenue last year.
Computing resources refer to the various hardware and software components that support computing and processing. As the number of applications and functions supported by these components continues to grow, they are becoming increasingly important in everyday life. This has led to a race among national powers and businesses to accumulate as many of these resources as possible as a means of survival. This is reflected in the market performance of companies that provide these resources (e.g., Nvidia, whose market capitalization has increased by more than 3,000% in the past 5 years).
Graphics Processing Units (GPUs) are one of the most important resources in modern high-performance computing. Their core function is to serve as a dedicated electronic circuit that accelerates computer graphics workloads through parallel processing. Initially serving the gaming and personal computer industries, GPUs have evolved to serve many of the emerging technologies that are shaping the future world (e.g., mainframe and personal computers, mobile devices, cloud computing, the Internet of Things). However, the rise of machine learning and artificial intelligence has particularly intensified the demand for these resources - GPUs accelerate machine learning and artificial intelligence operations by performing calculations in parallel, thereby enhancing the processing power and performance of the resulting technology.
At its core, artificial intelligence (AI) is a technology that enables computers and machines to simulate human intelligence and problem-solving abilities. An AI model typically operates as a neural network trained on large volumes of data. The model requires processing power to identify and learn the relationships within that data, and then references those learned relationships when producing outputs for a given input.
AI development and production is not new; in 1958, Frank Rosenblatt built the Mark I Perceptron, the first neural-network-based computer, which "learned" through trial and error. In addition, much of the academic research that laid the foundation for modern AI was published in the late 1990s and early 2000s, and the industry has continued to develop since then.
In addition to R&D efforts, "narrow" AI models power a variety of powerful applications in use today. Examples include social media algorithms, Apple's Siri and Amazon's Alexa, customized product recommendations, and many more. Notably, the rise of deep learning has transformed the development of generative AI. Deep learning algorithms use larger, or "deeper," neural networks than classical machine learning applications, offering a more scalable alternative with a broader range of performance capabilities. A generative AI model encodes a simplified representation of its training data and references it to produce new outputs that are similar, but not identical, to the original data.
Deep learning enables developers to scale generative AI models to images, speech, and other complex data types, and milestone applications like ChatGPT, which has set the record for the fastest growing user base in the modern era, are still just the early versions of what is possible with generative AI and deep learning.
With this in mind, it is no surprise that generative AI development involves multiple compute-intensive workloads that demand significant processing and computing power.
According to the report on the "triple whammy" of deep learning application demand, AI application development is constrained by several key workloads (see the sketch following this list):
Training - Models must process and analyze large data sets to learn how to respond to given inputs.
Tuning - Models go through a series of iterative processes where various hyperparameters are adjusted and optimized to improve performance and quality.
Simulation - Certain models, such as reinforcement learning algorithms, are run through a series of test simulations before being deployed.
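To make these workloads concrete, here is a minimal, purely illustrative Python sketch (the dataset, model, and hyperparameter grid are all hypothetical) showing why training and tuning are compute-intensive: each candidate hyperparameter setting requires re-running many full passes over the data, and real models multiply this cost across billions of parameters.

```python
import numpy as np

# Hypothetical toy dataset: 10,000 samples, 64 features (illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 64))
y = rng.normal(size=(10_000, 1))

def train(X, y, lr, epochs):
    """Train a single linear layer with plain gradient descent."""
    W = np.zeros((X.shape[1], 1))
    for _ in range(epochs):                      # training: repeated passes over the data
        preds = X @ W
        grad = X.T @ (preds - y) / len(X)
        W -= lr * grad
    return W, float(np.mean((X @ W - y) ** 2))   # final mean-squared error

# Tuning: a naive grid search multiplies the training cost by the number of settings tried.
best = None
for lr in (1e-3, 1e-2, 1e-1):
    for epochs in (100, 500):
        _, mse = train(X, y, lr, epochs)
        if best is None or mse < best[0]:
            best = (mse, lr, epochs)

print("best MSE %.4f with lr=%s, epochs=%s" % best)
```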
Over the past few decades, various technological advances have driven an unprecedented surge in demand for computing and processing power. As a result, today’s demand for computing resources, such as GPUs, far outstrips the available supply, creating a bottleneck in AI development that will only continue to worsen without effective solutions.
The broader constraints on supply are also driven by a large number of companies actively purchasing more GPUs than they actually need, both as a competitive advantage and as a means of survival in the modern global economy. Compute providers often use contract structures that require long-term capital commitments, providing customers with supply far in excess of their needs.
Epoch’s research shows that the overall number of compute-intensive AI models released has grown rapidly, indicating that demand for resources to drive these technologies will continue to grow rapidly.
As the complexity of AI models continues to increase, application developers’ demand for computing and processing power is also growing. In turn, the performance of GPUs and their availability will play an increasingly important role. This trend is already evident, with a surge in demand for high-end GPUs, such as those produced by Nvidia, which calls GPUs the “rare earth metals” or “gold” of the AI industry.
The rapid commercialization of AI has the potential to hand control to a handful of tech giants, similar to today’s social media industry, raising concerns about the ethical foundations of these models. A notable example is the recent controversy surrounding Google Gemini. While its many bizarre responses to various prompts did not pose any actual danger at the time, the incident demonstrated the inherent risks of a handful of companies dominating and controlling AI development.
Today’s tech startups face increasing challenges in acquiring computing resources to power their AI models. These applications require a large number of computationally intensive processes to be performed before the models can be deployed. For small businesses, amassing a large number of GPUs is an unsustainable endeavor, and while traditional cloud computing services such as AWS or Google Cloud offer a seamless and convenient developer experience, their limited capacity ultimately results in high costs that discourage many developers. At the end of the day, not everyone can come up with a plan to raise $7 trillion to spend on hardware costs.
Nvidia previously estimated that there are more than 40,000 companies using GPUs for AI and accelerated computing, with a global developer community of more than 4 million. Looking ahead, the global AI market is expected to grow from $515 billion in 2023 to $2.74 trillion in 2032, an average annual growth rate of 20.4%. Meanwhile, the GPU market is expected to reach $400 billion by 2032, an average annual growth rate of 25%.
However, the growing imbalance between the supply and demand of computing resources after the AI revolution may create a rather dystopian future in which a small number of well-funded giants dominate the development of many transformative technologies. Therefore, we believe that all roads lead to decentralized alternative solutions to help bridge the gap between AI developer needs and available resources.
DePIN is a term coined by the Messari research team that stands for Decentralized Physical Infrastructure Network. Breaking it down, decentralization refers to the absence of a single entity extracting rents and restricting access. Meanwhile, physical infrastructure refers to the "real life" physical resources that are utilized. A network refers to a group of participants working in a coordinated manner to achieve a predetermined goal or set of goals. Today, the total market value of DePINs is approximately $28.3 billion.
At the core of DePINs is a global network of nodes that connect physical infrastructure resources with blockchain to enable a decentralized market, connecting buyers and suppliers, where anyone can become a supplier and be compensated for their services and contributions to the network. In this case, the central intermediary that restricts access to the network through various legal and regulatory means and service fees is replaced by a decentralized protocol composed of smart contracts and code, governed by their respective token holders.
The value of DePINs is that they provide a decentralized, accessible, low-cost and scalable alternative to traditional resource networks and service providers. They enable decentralized markets designed to achieve a specific end goal; the cost of goods and services is determined by market dynamics, and anyone can participate at any time, naturally reducing unit costs as the number of suppliers increases and profit margins decrease.
The use of blockchain enables DePINs to build cryptoeconomic incentive systems that help ensure that network participants are appropriately compensated for their services, making key value providers stakeholders. However, it is important to note that network effects are achieved by turning small individual networks into larger production systems, which is critical to achieving many of the benefits of DePINs. In addition, while token rewards have proven to be a powerful means of bootstrapping networks, building sustainable incentives to help with user retention and long-term adoption remains a key challenge in the broader field of DePINs.
To better understand the value that DePINs provide in supporting decentralized computing markets, it is important to recognize the different structural components and how they work together to form a decentralized resource network. Let's consider the structure and participants of a DePIN.
A decentralized protocol, a set of smart contracts built on top of an underlying blockchain network, is used to facilitate trusted interactions between network participants. Ideally, the protocol will be governed by a diverse set of stakeholders who are actively committed to the long-term success of the network. These stakeholders then vote on proposed changes and developments using their holdings of protocol tokens. Given that successfully coordinating a distributed network is a huge challenge in itself, the core team will typically retain the power to implement these changes in the early stages, and then transition power to a decentralized autonomous organization (DAO).
The end users of a resource network are its most valuable participants and can be categorized based on their functionality.
Suppliers: Individuals or entities that provide resources to the network in exchange for monetary rewards paid in the DePIN's native token. Suppliers are "connected" to the network via a blockchain-native protocol, which may enforce a whitelisting process or remain permissionless. By receiving tokens, suppliers gain a stake in the network, similar to stakeholders in an equity-ownership context, enabling them to vote on proposals and network developments, such as those they believe will help drive demand and increase the network's value, thereby producing higher token prices over time. Of course, suppliers who receive tokens may also simply treat the DePIN as a source of passive income and sell the tokens as they receive them.
Consumers: These are individuals or entities that actively seek out the resources provided by DePINs, such as AI startups seeking GPUs, representing the demand side of the economic equation. Consumers are compelled to use DePINs if there are real advantages to using DePINs over traditional alternatives (such as lower costs and overhead requirements), thus representing organic demand for the network. DePINs typically require consumers to pay for resources in their native token as a means of creating value and maintaining a steady cash flow.
DePINs can serve different markets and allocate resources under different business models. Blockworks provides a useful framework for this: custom hardware DePINs, which provide specialized proprietary hardware for suppliers to distribute, and commodity hardware DePINs, which enable the allocation of existing idle resources, including but not limited to compute, storage, and bandwidth.
In an ideally functioning DePIN, value accrues from the revenue generated by consumers paying suppliers for their resources. Continued demand for the network means continued demand for the native token, which aligns with the economic incentives of suppliers and token holders. Generating sustainable organic demand in the early stages is a challenge for most startups, which is why DePINs offer inflationary token incentives to incentivize early suppliers and bootstrap the network’s supply, thereby generating demand and, therefore, more organic supply. This is very similar to how VCs subsidized Uber’s passenger costs in the early stages of the company to bootstrap the initial customer base, thereby further attracting drivers and strengthening its network effects.
DePINs need to manage token incentives as strategically as possible, as they play a key role in the overall success of the network. When demand and network revenues rise, token issuance should decrease. Conversely, when demand and revenues fall, token issuance should be used to incentivize supply again.
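As a rough illustration of such an issuance policy (a hypothetical rule for illustration only, not any specific protocol's emission schedule), the sketch below scales token emissions down as organic network revenue approaches a target level:

```python
def token_emissions(base_emission: float, network_revenue: float,
                    target_revenue: float, floor: float = 0.1) -> float:
    """Scale inflationary rewards down as organic revenue approaches the target.

    base_emission   -- emissions paid when the network has no organic revenue
    network_revenue -- revenue earned by suppliers from real demand this epoch
    target_revenue  -- revenue level at which subsidies are no longer needed
    floor           -- minimum fraction of base emissions kept as a safety margin
    """
    utilization = min(network_revenue / target_revenue, 1.0)
    return base_emission * max(1.0 - utilization, floor)

# Example: with demand at 60% of target, emissions drop to 40% of the base rate.
print(token_emissions(base_emission=1_000_000, network_revenue=60_000, target_revenue=100_000))
```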
To further illustrate what a successful DePIN network looks like, consider the “DePIN Flywheel,” a positively reflexive loop used to bootstrap DePINs. To summarize:
DePIN incentivizes suppliers to provide resources to the network by distributing inflationary token rewards and establishing a base level of supply available for consumption.
Assuming the number of suppliers begins to grow, competitive dynamics begin to form in the network, improving the overall quality of goods and services provided by the network to a level that is better than existing market solutions, thereby gaining a competitive advantage. This means that a decentralized system surpasses traditional centralized service providers, which is no small feat.
DePIN begins to form organic demand, providing legitimate cash flow for suppliers. This is a compelling opportunity for investors and suppliers, continuing to drive demand for the network and therefore the token price higher.
Growth in token price increases revenue for suppliers, attracting more suppliers, restarting the flywheel.
The framework provides a compelling growth strategy, but it is worth noting that it is largely theoretical and assumes that the network is providing competitive resources and remains relevant over a long period of time.
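The flywheel can also be expressed as a simple feedback loop. The toy simulation below (all coefficients and functional forms are invented purely for illustration) sketches how supplier count, demand, and token price could feed back into one another under the framework's assumptions:

```python
# Toy DePIN flywheel: purely illustrative dynamics with invented coefficients.
suppliers, demand, token_price = 100.0, 10.0, 1.0

for epoch in range(10):
    service_quality = suppliers ** 0.5                   # more suppliers -> better service (assumed)
    demand *= 1.0 + 0.02 * service_quality / 10          # organic demand follows quality (assumed)
    revenue = demand * token_price                       # consumers pay suppliers in the native token
    token_price *= 1.0 + 0.05 * (demand - 10) / 10       # price tracks demand growth (assumed)
    supplier_yield = revenue * token_price / suppliers
    suppliers *= 1.0 + 0.1 * supplier_yield / (supplier_yield + 1)  # higher yield attracts suppliers
    print(f"epoch {epoch}: suppliers={suppliers:.0f} demand={demand:.1f} price={token_price:.2f}")
```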
The decentralized computing market falls within the scope of a broader movement known as the "sharing economy," a peer-to-peer economic system built on consumers sharing goods and services directly with other consumers through online platforms. This model, pioneered by companies like eBay and dominated today by companies like Airbnb and Uber, is set to be disrupted as the next generation of transformative technologies sweeps across global markets. The sharing economy was valued at roughly $15 billion in 2023 and is expected to grow to nearly $80 billion by 2031, reflecting a broader trend in consumer behavior that we believe DePINs will benefit from and play a key role in enabling.
Compute DePINs are peer-to-peer networks that facilitate the allocation of computing resources through decentralized marketplaces connecting suppliers and buyers. A key differentiator of these networks is their focus on commodity hardware resources that many people already own. As discussed, the emergence of deep learning and generative AI has created a surge in demand for processing power due to their resource-intensive workloads, creating bottlenecks in access to the key resources needed for AI development. In short, decentralized computing markets aim to alleviate these bottlenecks by creating a new supply stream, one that spans the globe and that anyone can participate in.
In a compute DePIN, any individual or entity can instantly lend out its idle resources and receive appropriate compensation for its services. At the same time, any individual or entity can access the necessary resources from a global permissionless network at lower cost and with greater flexibility than existing market offerings. We can therefore frame the participants of compute DePINs with a simple economic model:
Supply side: individuals or entities that own computing resources and are willing to lend or sell them in exchange for compensation.
Demand side: individuals or entities that need computing resources and are willing to pay for them.
Key advantages of compute DePINs
Compute DePINs offer many advantages that make them a viable alternative to centralized service providers and marketplaces. First, permissionless, cross-border market participation unlocks a new supply stream, increasing the amount of critical resources available for compute-intensive workloads. Compute DePINs focus on hardware resources that most people already own; anyone with a gaming PC, for example, already has a GPU that can be rented out. This expands the range of developers and teams that can participate in building the next generation of goods and services, benefiting more people around the world.
Going deeper, the blockchain infrastructure that supports DePINs provides an efficient and scalable settlement channel for facilitating peer-to-peer transactions. Crypto-native financial assets (tokens) provide a shared unit of value that demand-side participants use to pay suppliers, leveraging a distribution mechanism consistent with today's increasingly globalized economy. Referring to the DePIN flywheel construct mentioned earlier, strategically managing economic incentives is highly beneficial to increasing the network effects of DePINs (on both the supply and demand sides), thereby increasing competition among suppliers. This dynamic reduces unit costs while improving service quality, creating a sustainable competitive advantage for DePINs, from which suppliers can benefit as token holders and key value providers.
DePINs function similarly to cloud computing service providers, aiming to provide a flexible user experience where resources can be accessed and paid for on demand. According to Grandview Research, the global cloud computing market size is expected to grow at an average annual rate of 21.2% to exceed $2.4 trillion by 2030, proving the viability of this business model given the future demand forecasts for computing resources. Modern cloud computing platforms utilize central servers to handle all communications between client devices and servers, creating a single point of failure in their operations. Built on blockchain, DePINs can provide greater censorship resistance and resilience than traditional service providers. While attacks on a single organization or entity (such as a central cloud service provider) can compromise the entire network of underlying resources, DePINs are designed to be resistant to such events through their distributed nature. First, the blockchain itself is a globally distributed network of dedicated nodes designed to resist centralized network authorities. In addition, computing DePINs also allows for permissionless network participation, bypassing legal and regulatory barriers. Due to the nature of the token distribution, DePINs can adopt a fair voting process to vote on proposed changes and developments to the protocol to eliminate the possibility of a single entity suddenly shutting down the entire network.
Render Network is a computational DePIN that connects GPU buyers and sellers through a decentralized computing marketplace, with transactions conducted through its native token. Render's GPU marketplace involves two key parties - creators looking for processing power and node operators who rent idle GPUs in exchange for compensation in native Render tokens. Node operators are ranked by a reputation-based system, and creators can choose GPUs from a multi-tiered pricing system. The Proof-of-Render (POR) consensus algorithm coordinates operations, and node operators commit their computing resources (GPUs) to process tasks, i.e., graphics rendering work. Once a task is completed, the POR algorithm updates the node operator's status, including changes to the reputation score based on the quality of the task. Render's blockchain infrastructure facilitates task payments, providing a transparent and efficient settlement channel for suppliers and buyers to transact through the network token.
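To illustrate how a reputation-weighted job market of this kind can work in principle (a schematic sketch only, not Render's actual POR implementation; all class and field names are invented), consider the following:

```python
from dataclasses import dataclass

@dataclass
class NodeOperator:
    node_id: str
    tier: int                 # pricing tier the operator advertises
    reputation: float = 0.5   # 0.0 to 1.0, updated after each completed job

@dataclass
class RenderJob:
    frames: int
    price_per_frame: float

def settle_job(node: NodeOperator, job: RenderJob, quality_score: float) -> float:
    """Pay the node for completed frames and nudge its reputation toward the
    observed quality of the work (a simple exponential moving average)."""
    payment = job.frames * job.price_per_frame
    node.reputation = 0.9 * node.reputation + 0.1 * quality_score
    return payment

node = NodeOperator("node-42", tier=2)
payment = settle_job(node, RenderJob(frames=240, price_per_frame=0.01), quality_score=0.95)
print(payment, node.reputation)   # 2.4 paid; reputation moves from 0.50 toward 0.95
```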
Render Network was conceived by Jules Urbach in 2009, and the network went live on Ethereum (RNDR) in September 2020, migrating to Solana (RENDER) about three years later to improve network performance and reduce operating costs.
As of this writing, Render Network has processed up to 33 million tasks (measured in rendered frames) and has grown to 5,600 nodes since its inception. Just under 60k RENDER has been burned, a process that occurs when work credits are distributed to node operators.
Io Net is launching a decentralized GPU network on Solana to serve as a coordination layer between the vast pool of idle computing resources and the growing number of individuals and entities that need the processing power those resources provide. Io Net's unique selling point is that it does not compete directly with other DePINs on the market; instead, it aggregates GPUs from a variety of sources, including data centers, miners, and other DePINs such as Render Network and Filecoin, while leveraging a proprietary DePIN, the Internet-of-GPUs (IoG), to coordinate operations and align incentives between market participants.
Io Net customers can customize a cluster on IO Cloud for their workloads by selecting processor type, location, communication speed, compliance requirements, and service term. Conversely, anyone with a supported GPU model (12 GB RAM, 256 GB SSD) can participate as an IO Worker, earning rewards by lending their idle computing resources to the network. While service payments are currently settled in fiat currency and USDC, the network will soon support payments in the native $IO token as well. The price paid for resources is determined algorithmically by supply and demand as well as GPU specifications and configuration. Io Net's ultimate goal is to become the preferred GPU marketplace by offering lower costs and better quality of service than modern cloud service providers.
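To illustrate the kind of cluster specification described above (a hypothetical request object for illustration; it does not reflect io.net's actual API or field names), a consumer-side configuration might look like this:

```python
# Hypothetical cluster request illustrating the selection criteria described above.
cluster_request = {
    "processor_type": "NVIDIA A100",   # GPU model requested
    "location": "us-east",             # preferred geography
    "connectivity": "high",            # required communication speed between nodes
    "compliance": ["SOC2"],            # compliance requirements, if any
    "cluster_size": 8,                 # number of GPUs in the cluster
    "service_term_hours": 72,          # how long the cluster is reserved
    "payment_currency": "USDC",        # settlement currency (fiat/USDC today, $IO planned)
}

def estimate_cost(request: dict, hourly_rate_per_gpu: float) -> float:
    """Rough cost estimate: GPUs x hours x market rate (rate set by supply and demand)."""
    return request["cluster_size"] * request["service_term_hours"] * hourly_rate_per_gpu

print(estimate_cost(cluster_request, hourly_rate_per_gpu=1.50))
```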
The multi-layer IO architecture can be mapped as follows:
UI Layer - consists of the public website, client area, and workspace.
Security Layer - This layer consists of a firewall for network protection, an authentication service for user verification, and a logging service for tracking activities.
API Layer - This layer acts as a communication layer and consists of public APIs, private APIs, and internal APIs for cluster management, analytics, and monitoring and reporting.
Backend Layer - The backend layer manages workspaces, cluster/GPU operations, customer interactions, billing and usage monitoring, analytics, and auto-scaling.
Database Layer - This layer is the data repository of the system, using primary storage for structured data and cache for frequently accessed temporary data.
Message Broker and Task Layer - This layer facilitates asynchronous communication and task management.
Infrastructure Layer - This layer contains GPU pools, orchestration tools, and manages task deployment.
Current Statistics/Roadmap:
As of the time of writing:
Total network revenue: $1.08 million
Total computing hours: 837.6k hours
Total number of GPUs in the prepared cluster: 20.4k
Total number of CPUs in the prepared cluster: 5.6k
Total number of on-chain transactions: 1.67 million
Total inferences: 335.7k
Total clusters created: 15.1k
Data from Io Net Explorer.
Aethir is a cloud computing DePIN that facilitates the sharing of high-performance computing resources for compute-intensive fields and applications. It leverages resource pooling to enable global GPU allocation at significantly reduced cost, and enables decentralized ownership through distributed resource ownership. Aethir has designed a distributed GPU framework specifically targeting high-performance workloads such as gaming and AI model training and inference. By unifying GPU clusters into a single network, Aethir aims to increase cluster size, thereby improving the overall performance and reliability of the services provided on its network.
The Aethir Network is a decentralized economy consisting of miners, developers, users, token holders, and the Aethir DAO. The three key roles that ensure the successful operation of the network are Containers, Indexers, and Checkers. Containers are the power nodes of the network, acting as dedicated nodes that perform the critical operations keeping it active, including validating transactions and rendering digital content in real time. Checkers serve as quality assurance, continuously monitoring the performance and quality of service of Containers to ensure reliable, efficient operation that meets the needs of GPU consumers. Indexers act as matchmakers, connecting users with the best available Containers. Underpinning this structure is the Arbitrum Layer 2 blockchain, which provides a decentralized settlement layer for payments for goods and services on the Aethir network in the native $ATH token.
Nodes in the Aethir network serve two key functions: Proof of Rendering Capacity, in which a group of nodes is randomly selected every 15 minutes to validate transactions, and Proof of Rendering Work, in which network performance is closely monitored to ensure users are well served, with resources adjusted based on demand and geography. Mining rewards are distributed in the native $ATH token to participants who run Aethir nodes, compensating them for the computing resources they provide.
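A minimal sketch of the periodic random-sampling idea (illustrative only, not Aethir's actual implementation) might look like the following, where a small committee of nodes is drawn for each 15-minute window:

```python
import random
import time

def select_checkers(nodes: list, sample_size: int, seed: int) -> list:
    """Deterministically sample a committee of nodes for this 15-minute window."""
    rng = random.Random(seed)
    return rng.sample(nodes, min(sample_size, len(nodes)))

nodes = [f"container-{i}" for i in range(100)]
window = int(time.time() // 900)           # 900 seconds = one 15-minute epoch
committee = select_checkers(nodes, sample_size=5, seed=window)
print(committee)
```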
Nosana is a decentralized GPU network built on Solana. Nosana allows anyone to contribute idle computing resources and be rewarded in $NOS tokens for doing so. The DePIN facilitates the allocation of cost-effective GPUs that can be used to run complex AI workloads without the overhead of traditional cloud solutions. Anyone can run a Nosana node by renting out idle GPUs, earning token rewards proportional to the GPU power they provide to the network.
The network connects two parties that allocate computing resources: users seeking access to computing resources and node operators who provide them. Important protocol decisions and upgrades are voted on by NOS token holders and governed by the Nosana DAO.
Nosana has a detailed roadmap for its future plans:
Galactica (v1.0, H1/H2 2024) - launch mainnet, release the CLI and SDK, and focus on network expansion through container nodes for consumer GPUs.
Triangulum (v1.X, H2 2024) - integrate major machine learning protocols and connectors for PyTorch, HuggingFace, and TensorFlow.
Whirlpool (v1.X, H1 2025) - expand support for different GPUs from AMD, Intel, and Apple Silicon.
Sombrero (v1.X, H2 2025) - add support for medium to large enterprises, fiat currency exchange, billing, and team features.
Akash Network is an open-source proof-of-stake network built on top of the Cosmos SDK, a decentralized cloud computing marketplace that allows anyone to join and contribute. The $AKT token is used to secure the network, facilitate resource payments, and coordinate economic alignment between network participants. Akash Network consists of several key components:
Blockchain layer, which provides consensus using Tendermint Core and the Cosmos SDK.
Application layer, which manages deployment and resource allocation.
Provider layer, which manages resources, bidding, and user application deployment.
User Layer, which allows users to interact with the Akash Network, manage resources, and monitor application status through a CLI, console, and dashboard.
Originally focused on storage and CPU rental services, the network later expanded to GPU rental and allocation through its AkashML platform in response to the growth of AI training and inference workloads and their demand for processing power. AkashML uses a "reverse auction" system where customers (called tenants) submit the price they want to pay for a GPU, and compute vendors (called providers) compete to supply the requested GPUs.
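The reverse-auction mechanic can be sketched as follows (a simplified illustration, not Akash's actual bidding engine): the tenant posts a maximum price, providers submit bids, and the lowest qualifying bid wins.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Bid:
    provider: str
    price_per_hour: float   # the price the provider is willing to accept

def match_reverse_auction(max_price: float, bids: List[Bid]) -> Optional[Bid]:
    """Return the cheapest bid at or below the tenant's maximum price, if any."""
    eligible = [b for b in bids if b.price_per_hour <= max_price]
    return min(eligible, key=lambda b: b.price_per_hour) if eligible else None

bids = [Bid("provider-a", 1.20), Bid("provider-b", 0.95), Bid("provider-c", 1.50)]
print(match_reverse_auction(max_price=1.00, bids=bids))   # provider-b wins at 0.95
```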
As of this writing, the Akash blockchain has seen over 12.9 million total transactions, over $535,000 has been spent to access compute resources, and over 189k unique deployments have been leased.
The compute DePIN space is still evolving, with many teams racing to bring innovative and efficient solutions to market. Other examples that warrant further investigation include: Hyperbolic is building a collaborative open-access platform for AI development resource pools, Exabits is building a distributed computing power network supported by computational miners, and Shaga is building a network on Solana that allows PC rental and monetization for server-side gaming.
Now that we understand the fundamentals of compute DePINs and have reviewed several live, complementary case studies, it is important to consider the impact of these decentralized networks, including their pros and cons.
Building distributed networks at scale often requires trade-offs in performance versus security, resiliency, etc. For example, training AI models on a globally distributed network of commodity hardware may be less cost-effective and time-efficient. As mentioned earlier, AI models and their workloads are becoming increasingly complex, requiring more high-performance GPUs rather than commodity GPUs.
This is why large companies hoard high-performance GPUs in large quantities, and it is an inherent challenge for compute DePINs, which attempt to solve the GPU shortage by establishing a permissionless market where anyone can lend out idle supply. Protocols can mitigate this problem in two main ways: by setting baseline requirements for GPU providers who want to contribute to the network, and by pooling the computing resources provided to the network into a larger whole. Even so, this model is inherently more challenging than that of centralized service providers, which can allocate more capital to deal directly with hardware vendors such as Nvidia. DePINs should consider this going forward. If a decentralized protocol holds a large enough treasury, its DAO could vote to allocate part of the funds to purchase high-performance GPUs, which could be managed in a decentralized manner and rented out at a higher price than commodity GPUs.
Another challenge specific to computational DePINs is managing the right amount of resource utilization. In their early stages, most computational DePINs will face a structural lack of demand, much like the situation many startups face today. In general, the challenge for DePINs is to build enough supply early on to achieve minimum viable product quality. Without supply, the network will not be able to generate sustainable demand and will not be able to serve its customers during peak demand. The other side of this equation is the concern of excess supply. Beyond a certain threshold, more supply is only beneficial when the network’s utilization is close to or at full capacity. Otherwise, DePINs run the risk of overpaying for supply, which in turn leads to underutilization of resources, and reduced revenue for suppliers unless the protocol increases token issuance to retain suppliers.
Just as a telecommunications network without broad geographic coverage is useless, a taxi network is useless if passengers have to wait too long for a ride. A DePIN is useless if it has to pay people to provide resources over a long period of time. While centralized service providers can predict resource demand and manage supply efficiently, computational DePINs lack a central authority to manage this utilization. Therefore, DePINs must be particularly strategic in building resource utilization.
A bigger picture issue for the decentralized GPU market is that the GPU shortage may be coming to an end. Mark Zuckerberg recently said in an interview that he believes the bottleneck in the future will be energy, not compute resources, because companies will now compete to build data centers in large numbers, rather than hoarding compute resources as they do today. Of course, this means that the cost of GPUs may decrease due to slowing demand, but it also raises the question of how AI startups will compete with large companies in terms of performance and quality of service if building proprietary data centers raises the AI model performance bar to unprecedented levels.
To reiterate, there is a growing gap between the complexity of AI models and their subsequent processing and computing requirements and the number of high-performance GPUs and other computing resources available.
Computational DePINs have the potential to innovate and disrupt the computing market space, which is dominated today by major hardware manufacturers and cloud computing service providers, based on several key capabilities:
Provide lower costs for goods and services.
Provide stronger censorship resistance and network resilience guarantees.
Benefit from potential regulatory guidelines for AI, requiring AI models to be as open as possible for fine-tuning and training, and easily accessible to anyone, anywhere.
The percentage of households with computers and internet access in the United States has grown exponentially, approaching 100%. It has also grown significantly in many parts of the world. This suggests that potential computing resource providers (GPU owners) may be willing to lend out idle supply if there is sufficient monetary incentive and a seamless transaction process. Of course, this is a very rough estimate, but it suggests that the foundation for building a sustainable computing resource sharing economy may already exist.
Beyond AI, future demand for computing will also come from many other industries, such as quantum computing. The quantum computing market size is expected to grow from $928.8 million in 2023 to $6,528.8 million in 2030, an average annual growth rate of 32.1%. Production in this industry will require different kinds of resources, but it will be interesting to see if any quantum computing DePINs start up and what they will look like.
"A strong ecosystem of open source models running on consumer hardware is an important countermeasure to protect future value from being captured by excessive concentration in AI, and at a much lower rate than corporate giants and the military." - Vitalik Buterin
Large enterprises are probably not the target audience for DePINs, and they may never be. Compute DePINs re-empower individual developers, small entrepreneurs, and startups with limited resources, allowing idle supply to be converted into the innovative ideas and solutions that more abundant computing resources make possible. AI will undoubtedly change the lives of billions of people. Instead of worrying that it will replace everyone's jobs, we should encourage the idea that AI can empower individuals, self-employed entrepreneurs, startups, and the wider public.