Author: ArweaveOasis, Source: PermaDAO
This article discusses the advantages of adopting a microservice architecture (or Actor model) and analyzes the logical complexity it brings to application development.
The release of @aoTheComputer has undoubtedly brought a new way of thinking and new practices to the @ArweaveEco ecosystem, and indeed to the entire Web3 industry. This is reflected not only in the attention it has drawn from investors, but also in the number of high-quality developers it has attracted to in-depth research.
What is hindering the large-scale adoption of Web3?
The answer is simple: there are too few decentralized applications worth using.
Given the current state of Web3 infrastructure, development tools, and software engineering practices, many types of decentralized applications are practically impossible to build today.
On the infrastructure side, I think the emergence of AO fills some major gaps. However, the engineering complexity of building large decentralized applications remains daunting. Under resource constraints - which is the usual situation in the early stages of development - this prevents us from building more diverse, larger, and often better, more feature-rich decentralized applications.
Don't believe the nonsense that "smart contracts / on-chain programs should be simple; there is no need to make them complicated"!
The reality is not "don't want to" but "can't" - we simply cannot do it.
AO is a computer system built on Arweave, designed to provide verifiable, unbounded computing power. The name is short for Actor Oriented. As it suggests, decentralized applications running on AO need to adopt a design and programming approach based on the Actor model.
In fact, AO is not the first to apply the Actor model to blockchain (or, more broadly, "decentralized infrastructure"). TON's smart contracts, for example, are built on the Actor model. Speaking of TON, I personally think it is quite similar to AO in some respects.
For Web2 developers who have not yet dug deeply into Web3, a convenient shortcut to the biggest difference between AO or TON and other "monolithic blockchains" is to think of the smart contracts (on-chain programs) running on them as "microservices". AO or TON is then the infrastructure that supports these microservices, playing a role similar to Kafka or Kubernetes.
As a veteran "CRUD boy" who has focused mainly on application development for more than 20 years, I am personally delighted to see the emergence of non-monolithic blockchains such as AO and TON, and I have high expectations for their development. Next, I would like to share my views on AO from an application developer's perspective. Many of these views may not be mature yet; if some application developers feel the same way, that is enough.
Is it really necessary to apply the Actor model to blockchain?
The answer is yes. Look at the Web2 applications that have achieved "mass adoption" and you will understand.
Plenty of architects already know how to "scale" Web2 applications: microservice architecture (MSA), event-driven architecture (EDA), message-based communication, the eventual consistency model, sharding... Whatever these things are called, they always go hand in hand with the Actor model; some of them can even be seen as different facets of the same thing. So in what follows I do not distinguish between "microservices" and actors - you can treat the two as synonyms.
The prosperity of today's Internet owes much to the wisdom of these architects. Through continuous exploration, practice, and reflection, they eventually distilled a complete body of engineering practices.
As Web3 infrastructure, AO has done a great job. At the very least, as what I consider the best decentralized message broker in the Web3 space today, AO has shown great potential. Traditional Web2 developers will immediately understand the significance of this: without Kafka or a Kafka-like message broker, can you imagine how many of today's large Internet applications could even be written?
Although the Actor model has theoretical advantages in many respects, in my opinion both the Actor model and the microservice architecture are more of a "pain" that developers have to endure in order to build certain applications (especially large ones).
Let me illustrate with a simple example for non-technical readers. Suppose all the banks in the world run their business on a single "world computer", and that this world computer is a monolithic system. Then, when Zhang San, a customer of ICBC, remits 100 yuan to Li Si, who holds an account at China Merchants Bank, the developer can write the transfer logic like this:
1. Start a database transaction;
2. Deduct 100 yuan from Zhang San's account;
3. Add 100 yuan to Li Si's account;
4. Commit the transaction.
If any of these steps goes wrong - say the third step, adding money to Li Si's account, fails for some reason - the whole operation is rolled back, as if nothing had happened. Incidentally, when a program is written this way, we say it uses the "strong consistency" model.
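For readers who do write code, a minimal sketch of this monolithic version might look like the following (in Lua, since that is the language we will meet again later in the AO context). The `db` module and its `begin_transaction`, `debit`, `credit`, `commit`, and `rollback` functions are hypothetical stand-ins for whatever data access layer such a world computer would expose:

```lua
-- A minimal sketch of the monolithic transfer, assuming a hypothetical `db`
-- module that exposes transactions over the single shared database.
local function transfer(from_account, to_account, amount)
  db.begin_transaction()                      -- 1. start a transaction
  local ok, err = pcall(function ()
    db.debit(from_account, amount)            -- 2. deduct 100 yuan from Zhang San
    db.credit(to_account, amount)             -- 3. add 100 yuan to Li Si
  end)
  if ok then
    db.commit()                               -- 4. commit the transaction
  else
    db.rollback()                             -- any failure rolls everything back
    error(err)
  end
end
```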
What if the world computer is a system that adopts the microservice architecture (MSA)?
Then the microservice (or Actor) that manages ICBC accounts and the one that manages China Merchants Bank accounts are unlikely to be the same; let's assume they are not. Call the former Actor ICBC and the latter Actor CMB. The developer now has to write the transfer code something like this:
1. Actor ICBC first records: "Zhang San transfers 100 yuan to Li Si"; it deducts 100 yuan from Zhang San's account and sends a message to Actor CMB: "Zhang San transfers 100 yuan to Li Si";
2. Actor CMB receives the message, adds 100 yuan to Li Si's account, and then sends a message to Actor ICBC: "Li Si has received 100 yuan from Zhang San";
3. Actor ICBC receives the message and records: "Transfer of 100 yuan from Zhang San to Li Si succeeded".
The above is only the happy path. But what if a step goes wrong - for example the second step, "add 100 yuan to Li Si's account", fails?
The developer then has to write handling logic for this possible failure as well:
Actor CMB sends a message back to Actor ICBC: "Li Si failed to receive 100 yuan from Zhang San"; Actor ICBC receives the message, adds the 100 yuan back to Zhang San's account, and records: "Zhang San transferred 100 yuan to Li Si, but the processing failed".
Writing the program like this is called adopting the eventual consistency model.
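To make the message flow concrete, here is a minimal sketch of the two actors written as AO processes in Lua, using the aos-style `Handlers` and `ao.send` API. The process id, tag names, fixed account names, and in-memory `Accounts`/`Transfers` tables are illustrative assumptions rather than any standard protocol; what matters is the record / send / confirm-or-compensate shape of eventual consistency:

```lua
-- Inside the Actor ICBC process: record the intent, debit Zhang San, notify CMB.
CMB_PROCESS_ID = CMB_PROCESS_ID or "<CMB process id>"  -- placeholder process id
Accounts  = Accounts  or { ZhangSan = 1000 }
Transfers = Transfers or {}

Handlers.add("transfer", Handlers.utils.hasMatchingTag("Action", "Transfer"),
  function (msg)
    local id, amount = msg.Tags.TransferId, tonumber(msg.Tags.Amount)
    Transfers[id] = "PENDING"                        -- "Zhang San transfers 100 yuan to Li Si"
    Accounts.ZhangSan = Accounts.ZhangSan - amount   -- deduct 100 yuan from Zhang San
    ao.send({ Target = CMB_PROCESS_ID,
              Tags = { Action = "Credit", TransferId = id, Amount = tostring(amount) } })
  end)

-- Still inside Actor ICBC: handle CMB's reply, then confirm or compensate.
Handlers.add("credit_result", Handlers.utils.hasMatchingTag("Action", "CreditResult"),
  function (msg)
    local id = msg.Tags.TransferId
    if msg.Tags.Result == "OK" then
      Transfers[id] = "SUCCEEDED"                    -- "..., successfully"
    else
      Accounts.ZhangSan = Accounts.ZhangSan + tonumber(msg.Tags.Amount)  -- give the money back
      Transfers[id] = "FAILED"                       -- "..., but the processing failed"
    end
  end)

-- Inside the Actor CMB process: credit Li Si and report the outcome to the sender.
Handlers.add("credit", Handlers.utils.hasMatchingTag("Action", "Credit"),
  function (msg)
    local ok = pcall(function ()
      Accounts.LiSi = (Accounts.LiSi or 0) + tonumber(msg.Tags.Amount)
    end)
    ao.send({ Target = msg.From,
              Tags = { Action = "CreditResult", TransferId = msg.Tags.TransferId,
                       Result = ok and "OK" or "FAILED", Amount = msg.Tags.Amount } })
  end)
```

Notice how Actor ICBC has to record the state of the in-flight transfer and write a compensation branch - bookkeeping that simply does not exist in the monolithic version.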
Even non-technical readers should be able to feel the huge difference in workload between developing a monolithic application and developing an MSA application, right? And keep in mind that the transfer above is a very simple example - more a single feature than an application. The features of large applications are often far more complicated than this.
How big should a microservice be?
In other words: "Is this microservice too big? Should it be split in two?"
Unfortunately, there is no standard answer to this question; it is an art. The smaller the microservices, the easier it is to optimize the system by creating new instances and relocating them as needed. But the smaller the microservices, the harder it is for developers to implement complex processes, as the example above shows.
Incidentally, from a database design perspective, splitting an application into multiple microservices corresponds to "sharding" the data - one best practice of microservice architecture is that each microservice owns and uses only its own local database. In simple terms, sharding enables horizontal scaling: when a data set becomes too large to handle by traditional means, there is no way to scale other than splitting it into smaller pieces.
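As a rough illustration (the shard list and the naive hashing scheme below are my own assumptions, not anything AO prescribes), key-based sharding can be as simple as routing each account id to one of N processes:

```lua
-- A minimal sketch of key-based sharding: route each account id to one of N
-- shard processes by hashing the id. The process ids below are placeholders.
SHARD_PROCESS_IDS = { "<shard process id 0>", "<shard process id 1>", "<shard process id 2>" }

local function shard_for(account_id)
  local sum = 0
  for i = 1, #account_id do
    sum = sum + account_id:byte(i)   -- cheap hash: sum of byte values
  end
  return SHARD_PROCESS_IDS[(sum % #SHARD_PROCESS_IDS) + 1]
end

-- All messages about the same account always go to the same shard, e.g.:
-- ao.send({ Target = shard_for("ZhangSan"), Tags = { Action = "Debit", Amount = "100" } })
```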
Back to the issue of how to split microservices. To practice this art well, we need to master certain thinking tools. The "aggregate" concept from DDD (domain-driven design) is one such "killer" tool you must have - by which I mean it can help you kill the "core complexity" of software design.
I consider the aggregate the most important concept in DDD at the tactical level.
What is an aggregate? An aggregate draws a boundary between objects, especially between entities. An aggregate contains exactly one aggregate root entity, and may contain any number of internal entities (non-root entities).
We can use the aggregate concept to analyze and model the domain an application serves; then, when coding, we can divide microservices along aggregate boundaries. The simplest approach is to implement each aggregate as its own microservice.
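For instance - a sketch under assumed names, not a standard interface - a BankAccount aggregate implemented as its own AO process would keep its state behind its message handlers and enforce its invariants there:

```lua
-- A minimal sketch of "one aggregate per process": the BankAccount aggregate
-- root owns its state and enforces its invariant (no negative balance) here;
-- other aggregates can only interact with it by sending messages.
BankAccounts = BankAccounts or {}   -- aggregate instances, keyed by account id

Handlers.add("debit", Handlers.utils.hasMatchingTag("Action", "Debit"),
  function (msg)
    local account = BankAccounts[msg.Tags.AccountId]
    local amount  = tonumber(msg.Tags.Amount)
    -- the invariant is checked at the aggregate boundary, never outside it
    local ok = account ~= nil and amount ~= nil and account.balance >= amount
    if ok then
      account.balance = account.balance - amount
    end
    ao.send({ Target = msg.From,
              Tags = { Action = "DebitResult", AccountId = msg.Tags.AccountId or "",
                       Result = ok and "OK" or "FAILED" } })
  end)
```

Whatever must remain consistent in a single step lives inside one aggregate (one process); anything that spans aggregates has to go through messages and eventual consistency, as in the transfer example above.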
However, no matter how skilled you are, you cannot guarantee you will get this right the first time. A tool that lets you validate your modeling results as early as possible - and start over cheaply when they don't work - is therefore invaluable.
What else might stand in the way of large Web2 applications migrating to the AO ecosystem?
Here I want to talk about the problem of programming languages and runtimes.
AO is a data protocol. You can think of it as a set of interface specifications that define how the various "units" in the AO network collaborate. At present, the official implementation of AO provides a WASM-based virtual machine environment, along with a Lua runtime (ao-lib) compiled to WASM that aims to simplify the development of AO processes.
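For completeness, roughly the smallest possible AO process handler in that Lua environment looks like this (the same aos-style API assumed in the sketches above; treat the details as illustrative):

```lua
-- A minimal AO process handler: reply "pong" to any message tagged Action = "Ping".
Handlers.add(
  "ping",
  Handlers.utils.hasMatchingTag("Action", "Ping"),
  function (msg)
    ao.send({ Target = msg.From, Data = "pong" })
  end
)
```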
Lua is a small and elegant language. Its strengths are generally considered to be its light weight and ease of embedding in other languages, which makes it particularly useful in certain scenarios (such as game development). For large-scale Internet application development, however, Lua is not a mainstream choice; such development usually favors languages like Java, C#, PHP, Python, JavaScript, or Ruby, because they offer more comprehensive ecosystems and toolchains, as well as broader community support.
Some may argue that these languages can be compiled to WASM bytecode and run in a WASM virtual machine. In practice, however, although WASM performs strongly in Web front-end development, using WASM as the back-end runtime is not yet a mainstream choice for Internet applications. Note that smart contracts (on-chain programs) are the "new back end" of the Web3 era.
Summary
To sum up, we have discussed the advantages of adopting a microservice architecture (or the Actor model) and the complexity it brings to application development. Some of that complexity is unavoidable. For example, even in the more mature Web2 world, achieving "eventual consistency" through message-based communication is already a challenge for many developers; the challenge is even more apparent when developing Dapps on the nascent AO platform - which is entirely understandable. The beginning of the article linked below shows an example.
https://github.com/dddappp/A-AO-Demo?tab=readme-ov-file#an-ao-dapp-development-demo-with-a-low-code-approach
We all know that the battle among public chains is really a battle for application developers. So how can AO win developers over?
I think we need to keep learning from Web2, which has already achieved "mass adoption" - not only from its infrastructure, but also from its development methodologies, development tools, and software engineering practices. In the next article, I will present a solution I firmly believe in: low-code development.