Interviewee: Andre Cronje, Translator: loxia.eth
This is an in-depth interview with Andre Cronje, with a total duration of 1 h 20 min.
It contains Andre Cronje's review of his career so far, along with hard-won experience, practical advice, and his personal views.
Andre Cronje is an OG across many areas of the industry. This is his most recent in-depth long-form interview, and it is well worth reading and referring back to.
The article is about 20,000 words in total and is divided into 9 parts.
Andre Cronje believes that it is foolish to regard any blockchain project as an "Ethereum killer". Even if you add up the total value locked (TVL) of all blockchain networks, including Bitcoin and Ethereum, the total is a tiny fraction of the overall financial world. If you think one blockchain project can solve global finance, you are simply crazy.
1. Introduction
Moderator 1: Hello everyone, welcome to the "Bright Speech" program! Today we are honored to have Andre Cronje, the founder of Yearn Finance, Fantom, and Keeper Network, and an important contributor to many DeFi projects. Andre, welcome to the show!
Andre Cronje: Thank you. Well, that introduction is a bit exaggerated. I'm just a person who likes to write code.
Moderator 1: When I was listening to Extraordinary Core back in 2020, they called you a builder. I'm not a developer myself, just an integrator, but I think you're selling yourself short. You have a very interesting story and we can learn a lot from it. Maybe we could start in 2017, when you first entered this space, right around the ICO era. It would be great to hear how you got into this, and hopefully give people who weren't around at the time a sense of just how crazy that era was.
2. The ICO Era: Andre’s Cryptocurrency Journey
Andre Cronje: Yeah, I mean, before I got involved in cryptocurrencies I was a very traditional crypto skeptic. I come from a traditional finance background and was the architect and CTO of a small financial company. We were doing high-throughput stuff, using Kafka and Scala at the time. So that's my background: high-throughput financial solutions.
That era in 2017 was very similar to now in many ways. There was far too much noise, with many teams claiming to solve industry-wide problems that traditional finance and traditional distributed systems had struggled with for decades. Then these 18- to 20-year-old guys with no work experience would launch an ICO, raise 20M or 40M, and claim they had solved some distributed systems problem.
So I initially got into this just to test my skepticism and make sure I wasn't missing something. A disruptive technology displacing the previous one is not a new thing; it happens. My concern was that the blockchain field lacked real research evidence. While there was a shortage of strong evidence, there were plenty of people claiming to have built something. So I got into the field and started reading white papers. On paper, many of the proofs seemed reasonable, but, and this is still a problem today, there are a lot of good-sounding proofs where you say, "That makes sense, it should work," and then when you actually implement it, you hit hard constraints that stop it from working the way you expect.
Even if the theory is correct, even if the concept is correct, it may not be feasible in practice. So after reading a lot of white papers I started looking at a lot of code and doing my own code reviews. I wasn't doing them from a value-creation or due-diligence perspective. It was purely: I read this white paper, it says it solves problem X, then I look at the code and ask whether it actually solves problem X. It was more a record for myself.
So when I was writing these reviews up on Medium, I would just note, well, this code doesn't match what they're saying here, or this code base has nothing to do with what they claim. I made them public for whatever reason, and they became very popular in the ICO era because there weren't many naysayers, not many people saying, "This won't work, because your code proves you don't have what you say you have." A problem arose at that point, and it's important: the reason I eventually stopped doing code reviews is that people started treating them as investment signals rather than anything about code fundamentals, whereas I had shared them so others could learn and go through the same learning journey I was on.
So I did my own public reviews and then ended up working with a company called Crypto Briefing, with Hana and John and those guys; they're still great and I'm still in touch with them today. I started doing some reviews for them, but then things shifted in a way I didn't like. I like reviewing publicly available code: if it's on GitHub, I can see it and everyone else can see it too, so people can verify whether what I said is factual or tell me if I made mistakes.
But as that influence grew, more and more teams wanted us to review their private code and then publish the results, which made me uncomfortable because it was purely an investment signal. Anyway, that's a parallel thread we could get into some other time. Going through all of it, you learn that 99.9% is garbage, but there is 1% of real value in there. The noise ratio is obviously extremely high, but that 1% always bothered me and attracted me.
So looking back, my focus shifted from trying to understand what was going on to catching up with the industry, and I think I managed that in about two years, somewhere around 2019, maybe a little earlier, maybe late 2018. It's hard to catch up in this field. There's so much new stuff coming out every day that you have to read the other 98% of what's published just to know what's actually going on, but the amount of real substance is very small, only 1% to 2%.
At that time I began to focus on one thing: proof of work (PoW) was obviously a bottleneck. When you look at a blockchain system you think, well, the speed is clearly limited. Under Bitcoin's longest-chain rule at the time, the standard was that a transaction should take somewhere between 10 and 30 minutes. Before that, I had been fascinated by cross-border payments, cross-border settlement, and instant online payments.
I am South African, and South Africa is not even a member of SWIFT or IBAN (systems that underpin global financial services). We are restricted by exchange controls and limits on online spending. Our banking system is very constrained, and that has always been a challenge. Seeing something free from control by a single entity really appealed to me, and it fit my background.
So I started focusing on consensus research. During that time, the research and code reviews I was doing led me to get to know Fantom and the team there and to become more deeply involved. The market was very frantic when they were raising money, and they managed to raise around $40 million in ETH. It's worth mentioning that they held onto that ETH, even through the bear market, and I remember they ended up selling when Ethereum was around $300. However, they had made a lot of promises that sounded good but weren't actually delivered. They seemed aware of this, but didn't actively wind things down, spend the money, or otherwise burn through the funds. Eventually they asked me whether they could use the research I had started publishing. I had been thinking about starting my own chain, and this was a good fit because I had no experience interacting with VCs or raising money or anything like that. It's not my expertise; it's a skill, and I don't have it.
You know, that's the reason everything I've launched, whether it's Yearn or Keeper or anything else, has been without VC investment or any of that. A lot of people think it's some kind of statement I'm making about ethics. It's not; I'm just not good at it, so I figure out ways to work around it, that's all.
So in the end they had the funds and a team with a brand, and I pushed my research into that. The first piece was consensus. The original consensus was aBFT (asynchronous Byzantine fault tolerant); they call it Lachesis, but it's really based on a paper from the early 1990s, Concurrent Common Knowledge, which is essentially an aBFT point-to-point communication system. When we initially launched, in late 2019 or early 2020, the consensus itself was great. It was one of the first aBFT solutions, and it jumped transaction speed well past the roughly 7 TPS maximum of the time. We didn't have a VM connected yet, we were just doing raw transactions, a pure payment network, and we could easily hit between 30,000 and 50,000 pure payment transactions per second, depending on validator connectivity and participation in the network.
But we wanted to support a virtual machine, because smart contracts are powerful. At the time we chose the EVM, which was really our only viable option. We considered WASM, we considered RISC-based compilers and so on, but then, and even now, for a blockchain to actually become viable and usable you need a lot of service providers on top of you, otherwise doing anything on the base chain is difficult. Everyone says, well, we're just doing EVM, and people just fork the EVM, so we decided to stick with the EVM and connect our consensus underneath as the base layer. Consensus is just an ordering system; that's all it does. It takes transactions, orders them, and then those transactions are handed to the virtual machine and executed into state.
Then we noticed that our TPS would drop to at most between 180 and 200, which was purely a limitation of the EVM, and for the next three years we were essentially just trying to improve the EVM. We made some progress, but I have to say, if I could go back and change that decision, I definitely would.
I think we took what was easiest at the time and went the EVM route, knowing it would be easier to integrate with all those third-party vendors. That was a pragmatic choice, because we didn't have the capacity to build our own wallet, our own RPC node provider, our own deployment tooling, and so on. But anyway, that's a topic we can discuss in depth later.
3. Establish Yearn Finance
Andre Cronje: On the topic I mentioned before: they raised $40 million and kept all of it in ETH, but by the time it was finally converted to US dollars only about $2.5 million was left. I want to touch on this because that was our operating capital as a team. To manage that money, I started looking into the many lending protocols that existed at the time, such as Compound, bZx (Fulcrum), and many more. With the exception of Compound, basically all of the others have disappeared. I looked at these protocols every day; remember, fees on Ethereum back then were only three to six cents, so daily operations were feasible. Every morning I would check these sites to see which one offered the highest annual percentage yield (APY) and then manually move funds between them. Over time I realized that checking these websites every day was annoying. They all had smart contracts on chain that exposed the interest rates, so I could collect all the data and display it myself.
The first smart contract I wrote and deployed on Ethereum was just an APY aggregator. It pulled data from all these different places and displayed it. The reason I did it that way was that at the time I couldn't figure out the RPC infrastructure, Web3.js and the rest of it, to get data from nodes and perform operations. So for me the easier path was to deploy on chain and read from there.
So that's how I started my Solidity development journey. With that smart contract I could at least see which interest rate was highest every morning and then move the funds. Then I realized, hey, I could actually write a smart contract to do this for me. That's where Yearn comes from. It has since gotten much smarter; it's rocket science now compared to the code I wrote. But that's the basic idea: automate the things I was doing manually every day so it could handle the money I was managing. I eventually opened it up so others could use the same system. I no longer had to click a button every morning to reallocate funds between protocols, because it reallocates every time someone interacts with it, whether that's a deposit or a withdrawal. That automated the whole process, and that's how Yearn came about.
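To make the mechanism concrete, here is a minimal sketch in Python of the rebalance-on-interaction idea he describes; the protocol names, rate numbers, and the YieldRouter class are purely illustrative placeholders, not Yearn's actual contracts.

```python
# Minimal sketch (not Yearn's actual code) of the idea described above:
# every deposit or withdrawal triggers a check of current lending rates,
# and funds move to whichever protocol pays the highest APY.

class YieldRouter:
    def __init__(self, rate_sources):
        # rate_sources: mapping of protocol name -> callable returning current APY
        self.rate_sources = rate_sources
        self.current_protocol = None
        self.balance = 0.0

    def _best_protocol(self):
        rates = {name: get_apy() for name, get_apy in self.rate_sources.items()}
        return max(rates, key=rates.get)

    def _rebalance(self):
        best = self._best_protocol()
        if best != self.current_protocol:
            # in the real system this would be an on-chain withdraw + deposit
            print(f"moving {self.balance} from {self.current_protocol} to {best}")
            self.current_protocol = best

    def deposit(self, amount):
        self.balance += amount
        self._rebalance()          # rebalancing piggybacks on user interactions

    def withdraw(self, amount):
        self.balance -= amount
        self._rebalance()


# Illustrative rates only; real APYs would be read from on-chain contracts.
router = YieldRouter({
    "compound": lambda: 0.031,
    "aave":     lambda: 0.027,
    "dydx":     lambda: 0.035,
})
router.deposit(1_000)   # -> funds routed to the highest-APY protocol
```

The key design point is the last one he mentions: no scheduled job is needed, because every user interaction doubles as the trigger for reallocation.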
Then, as Yearn grew, came the token launch, the so-called fair launch, which was never really planned. I was mostly being sarcastic about these worthless governance tokens: I said as long as you provide liquidity, I'll give this garbage away for free. In my head that seemed like the stupidest thing, but apparently I was wrong. It attracted a lot of attention, people started joining, and things got more complicated, with strategies, infrastructure, and so on.
As the strategies deepened, we spent a lot of energy on harvesting: like any protocol, we sell the reward tokens, and that became a thing of its own. I used to do this manually with scripts. So I thought, there must be a way to do this in a public space where anyone can call it and be incentivized to call it. That's where jobs and keepers came in. Eventually this evolved into the Keeper network, which worked well for Yearn, so we decided to open it up so that anyone can register a job and have keepers execute it. I don't know who these keepers are, but they do the work. The first job I launched on chain was really fascinating, because we didn't promote it, we didn't announce anything, we just activated the job and bots started calling it. It's genuinely hectic to watch these things happening on chain, which is probably why it used to be called the Dark Forest; now I guess it's just the MEV forest.
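A rough sketch of the job-and-keeper pattern being described, assuming a hypothetical HarvestJob with a workable/work pair; this mirrors the general idea of a publicly callable, incentivized task rather than the actual Keep3r interface.

```python
# Sketch of the keeper pattern: a job exposes a cheap "is there work?" check
# and a "do the work" function, and any anonymous bot (keeper) can poll and
# execute it for a reward. Names and numbers are illustrative.
import time

class HarvestJob:
    def __init__(self, interval_seconds):
        self.interval = interval_seconds
        self.last_run = 0.0

    def workable(self) -> bool:
        # analogous to an on-chain view function that keepers poll
        return time.time() - self.last_run >= self.interval

    def work(self) -> float:
        # the actual upkeep (e.g. harvesting a vault strategy);
        # returns the reward paid to whoever called it
        self.last_run = time.time()
        return 0.01  # placeholder reward

def keeper_loop(job, max_iterations=3):
    # any bot can run this loop and collect rewards when work is available
    for _ in range(max_iterations):
        if job.workable():
            reward = job.work()
            print(f"work executed, reward: {reward}")
        time.sleep(1)

keeper_loop(HarvestJob(interval_seconds=2))
```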
4. Mistakes and testing in production
Andre Cronje: Then there were a lot of... mistakes, for lack of a better word. Before Yearn I was known in this space, but I had no public reputation, fame, or eyeballs on me, so I had developed a lot of bad development habits. For example, I often test in production, meaning I put experimental stuff into an actually running system. Another is the complete disconnect between intention and perception. Mixing test and production is where I think a lot of the risk lies. It's like telling someone, "Hey, I'm testing in production; if you interact with this, you shouldn't, because the chance of something going wrong is very high." I say that to warn you: if you interact here, you need to understand that the risks are extremely high.
Testing in production ended up looking like a sloppy, throw-your-own-money-at-it approach, although that was not my intention. Bottom line, I was still following my old development habits when I built Eminence. I was very unhappy with the NFT culture at the time, and I think it has improved now, but the way people were using NFTs back then was pretty stupid: turn a painting into an NFT and sell it for $100k. I like the idea of NFTs because I'm an avid gamer, and I think games are a perfect use case for them. So I licensed the IP for Eminence from a gaming company. We planned to build some silly games to show how NFTs could work. I think IP will always be an issue for NFTs because an asset can't just exist in one game; the whole plan was to build a series of different games that all use the same base layer.
But anyway, I deployed a bunch of test contracts, people interacted with them, a serious exploit occurred, and I lost about $60M. I took a big step back, because that's when I realized how dangerous this field actually is and how easily things can go very wrong very quickly without the right safeguards. At the same time, because of Yearn, I was facing quite a bit of pressure from regulators at the time to characterize it as a financial instrument, which I guess is fair, but I also wanted to distance myself from that a little. I eventually came back determined, because one thing had bothered me for a long time: how to improve the AMM curve. At that time there was only one standard stable-swap curve, and that was Curve Finance, founded by Michael Egorov, who is an absolutely genius developer, founder, and architect; I still consider him one of the smartest people I know in this field. But I was obsessed with making something as simple as Uniswap's x*y=k, and I ended up designing the whole x-cubed-y plus y-cubed-x curve, and it worked really well: you can define the curve and it stays simple.
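For reference, here is the invariant he is describing, written out next to Uniswap's constant-product formula (k denotes the pool invariant; this is just the equation as stated, not any particular implementation):

```latex
\begin{aligned}
\text{Uniswap-style pool:} \quad & x \, y = k \\
\text{Stable curve described here:} \quad & x^{3}y + x\,y^{3} = k
\end{aligned}
```

The cubic terms keep the curve flat (low slippage) while the two reserves stay close to balance, and make it steepen sharply as the pool becomes one-sided.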
At the same time I also added a bunch of things. Back then you had TWAP (time-weighted average price); I also added what I called RWAP (reserve-weighted average price). As for how these x*y pools work, I don't even need to explain it; you just need to know that TWAP is a fixed price point over time that completely ignores the amount of liquidity. It's effectively saying, hey, you can sell a billion of this thing at this fixed price, and that was a big problem for me.
Note: time-weighted average price (TWAP) and reserve-weighted average price (RWAP) are different methods of calculating asset prices; price feeds like these are an integral part of almost all DeFi primitives.
Many liquidation bots, liquidation engines, lending protocols, and even fully decentralized stablecoins need slippage to be part of the calculation. Take a liquidation bot as an example. The way it works is simple: it checks whether it can pay off someone's debt, take their 1 million in ETH as collateral, sell it into a Uniswap pool, and still make a profit. If I use TWAP, my bot says: no problem, good profit, execute. But if the slippage when actually selling is large, I take a loss. So what I needed was a way to take liquidity into account so I could really check, and it's specifically time-weighted so you know someone isn't flash-loaning liquidity in at that instant to make it look like I can sell, while front-running my bot at the same time.
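A toy illustration of why a liquidity-blind TWAP quote can mislead a liquidation bot; the pool sizes, prices, and the constant-product execution model below are illustrative assumptions, not any specific protocol's numbers.

```python
# TWAP pretends any size sells at the average price; a pool quote accounts
# for depth. With illustrative numbers, the same liquidation looks profitable
# under TWAP but loses money once slippage is included.

def twap_quote(amount, twap_price):
    # fixed price point: ignores how much liquidity is actually available
    return amount * twap_price

def pool_quote(amount_in, reserve_in, reserve_out):
    # constant-product execution (x * y = k): includes slippage
    k = reserve_in * reserve_out
    new_reserve_in = reserve_in + amount_in
    new_reserve_out = k / new_reserve_in
    return reserve_out - new_reserve_out

debt_to_repay = 900_000          # USD owed by the position being liquidated
collateral_eth = 1_000           # ETH seized by the liquidator
twap_price = 1_000               # USD per ETH over the window

# TWAP says the collateral is worth 1,000,000 -> looks like a 100k profit
print("TWAP proceeds:", twap_quote(collateral_eth, twap_price))

# But selling 1,000 ETH into a pool holding only 5,000 ETH moves the price
proceeds = pool_quote(collateral_eth, reserve_in=5_000, reserve_out=5_000_000)
print("actual proceeds:", round(proceeds))       # ~833,333 -> a loss
print("profitable?", proceeds > debt_to_repay)   # False
```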
So I needed to go back, check that everything was there, and then build that method. The launch on Fantom also faced some chaos, because I left after a week or two. But beyond Fantom, I've always felt that this is what founders of decentralized protocols should do: if your protocol is completely immutable, nothing updates, nothing changes, you need to leave, because you can't be the figurehead associated with that thing. I think Yearn and Keeper do a good job because they are managed in a very decentralized way; with neither protocol can you really say whose it is. It was certainly a huge mess on Fantom, though. It has since become one of the main AMM designs behind many newer exchanges like Velodrome, Aerodrome, and many more that I don't even know about.
So it achieved what I wanted, albeit not through the iteration I built. After that I decided my smart contract development days were over; I didn't have the necessary infrastructure around me, so I went back to Fantom full-time. Sorry, that's a very long history and I've been rambling for a while.
5. Fantom L1: Making software as efficient as possible
Andre Cronje: I think databases definitely have their place, and I think the FVM is currently the best standard; I don't think there's a better virtual machine out there right now. From a data-structure perspective, with Carmen, the new database, we went through a fairly normal progression. Initially we were using Badger, then we did a lot of research on various databases and switched to Pebble, which gave us a nice throughput improvement, but not a huge change. One problem with all of these existing databases is that they are designed for general-purpose data: they can store anything in any way. And if you put SQL, a structured query language, on top, there is a lot going on behind the scenes. They're building their own indexes, creating their own B-trees, and so on, which adds a lot of extra overhead.
Even when you switch to key-indexed storage, you might think it maps more naturally to EVM or smart-contract data, and that it would improve throughput. It does, but again there is a lot of work behind the scenes to support query languages like GraphQL, SQL, or whatever else. In fact, when we switched from a standard database structure to Pebble and then to a key-value store, our throughput increased quite dramatically, whereas now we simply use a flat file on disk without any of that complexity.
Our lookup pattern is very simple: as with any smart contract, the address is the first part of your index, and the rest is the storage slot where the data you're looking for lives, which is really just one, two, three, four, five, six. So if I want to look up, say, the name of the smart contract, that's the address plus slot one and I have it. I don't need anything more; I don't need a complex query language. To be fair, this addresses a real limitation of the EVM, because the MPT data structures the EVM uses, with all the index building and everything else, are very data-intensive.
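A minimal conceptual sketch of that flat lookup pattern, assuming a simple address-plus-slot key; this is a mental model of the idea, not Fantom's Carmen implementation.

```python
# The key is just the contract address concatenated with the storage slot,
# so a read is a single dictionary/disk lookup with no tree traversal or
# query planner in between.

flat_store = {}  # stand-in for a flat file / key-value store on disk

def storage_key(address: str, slot: int) -> bytes:
    # address (20 bytes) || slot (32 bytes, big-endian)
    return bytes.fromhex(address.removeprefix("0x")) + slot.to_bytes(32, "big")

def sstore(address, slot, value: bytes):
    flat_store[storage_key(address, slot)] = value

def sload(address, slot) -> bytes:
    return flat_store.get(storage_key(address, slot), b"\x00" * 32)

token = "0x00000000000000000000000000000000000000aa"
sstore(token, 0, b"MyToken".ljust(32, b"\x00"))   # slot 0: e.g. a name field
print(sload(token, 0).rstrip(b"\x00"))            # one lookup, no index to walk
```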
What I mean is, you have your actual data at the leaf nodes, and then layers of intermediate nodes stacked on top of each other until you reach the root, so there is a lot of heavy lifting in that data structure, and most of it is hashing. It also means that every read and write is very intensive. For our VM, if we just look at Carmen, which is the data store, it increased peak capacity by 8.2x, roughly an 820% throughput increase from that alone, plus a lot of other incremental changes, but I think that's a huge gain in itself.
One of the things I've always preached is that a lot of blockchains and development teams have accepted the current limitations as if they were fixed laws of physics. If you ask Bitcoin people, they'll tell you PoW is the fastest consensus mechanism in the world. Anyway, I've interrupted you, sorry, I'll shut up.
Moderator 2: No, it's actually very consistent with Solana's view of the world. If you look at Kevin Bowers, who leads the new Firedancer client, all of his work is about making the software as efficient as possible. Because, like you said, there are a lot of weird abstractions that lead to weird performance problems, and then those problems just compound. I mean, they even did things like speed up the hashing algorithms. Based on what you're saying, there's a lot of gain to be had by scaling vertically first and optimizing the software to take full advantage of the physics, and only then adding all this complexity. I won't go on, or I'll end up on a big rant about this. What I'd actually like, for people who are new to Fantom, and we have a lot of developers, investors, and researchers listening, maybe people more familiar with Solana or Ethereum, is for you to briefly explain at a high level how Fantom works and what makes it different, and then what the obvious takeaways are.
Andre Cronje: Our first priority was consensus. At the time, proof of work (PoW) was the dominant system, and, as an aside, I don't like it when people describe blockchains like Fantom as "PoS chains". PoS by itself is only a Sybil-resistance (anti-fraud) mechanism, not a consensus mechanism, whereas proof of work combines consensus and Sybil resistance. The core concept of consensus is shared synchronous knowledge: all participants agree on an event and know that everyone else knows it too.
For example, suppose I'm wearing headphones right now. I tell you I'm wearing headphones and prove it to you. Now you know I'm wearing headphones and I know I'm wearing headphones. Then you tell Garrett, "Hey, Andre has headphones on," and confirm it to him. Now Garrett knows, and you know that Garrett knows that I know. I may not yet know that Garrett knows that I know, but through third-party confirmation we reach consensus, and participants across the whole network become aware of the event. That is Fantom's consensus mechanism: we want all validators in the network to be constantly communicating with each other. They send ping packets to check whether each other is online, and they send the transactions they know about but others don't, to stay in sync. So communication is continuous, and we try to leverage this peer-to-peer communication to reach consensus by sharing messages and sharing knowledge of messages.
In our network, messages spread like a virus. It starts slowly, one node to two, but the exponential growth means it quickly spreads through the whole network. We use a DAG (directed acyclic graph) structure that doesn't produce blocks like a traditional blockchain but reaches consensus purely through communication. We divide time into what we call epochs, and when 2/3 of the nodes in the network have reached consensus, a new epoch starts. This does mean we rely heavily on the peer-to-peer communication network, which we are continuously improving and optimizing to enable faster information dissemination and faster consensus.
In a blockchain, the dissemination of information is the key. You tell me something, I tell other people, and they tell more people. As more and more participants learn about it, like on Fantom, the news eventually becomes known to enough of them to be finalized. For smart contracts (the EVM) on top, we introduced the concept of epochs: whenever 2/3 of the network knows about something, we seal an epoch. It may sound a bit strange, but it really just means that more than two thirds of the validators have reached consensus. That epoch is then treated like a block and handed to the EVM for processing. Technically we don't even have real blocks, just communication going on and consensus forming.
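A toy model of the gossip-and-epoch idea from the last few paragraphs; the validator count, fanout, and 2/3 threshold below are illustrative, and real Lachesis is a DAG-based aBFT protocol rather than this simple loop.

```python
# Validators gossip an event to random peers; once at least 2/3 of them have
# seen it, the event can be treated as part of the next epoch.
import random

VALIDATORS = list(range(30))
THRESHOLD = (2 * len(VALIDATORS)) // 3 + 1   # strictly more than two thirds

def gossip_until_epoch(fanout=3, seed=0):
    random.seed(seed)
    have_seen = {VALIDATORS[0]}              # the node that originated the event
    rounds = 0
    while len(have_seen) < THRESHOLD:
        rounds += 1
        newly_informed = set()
        for _ in have_seen:
            # every informed node tells a few random peers (virus-like spread)
            newly_informed.update(random.sample(VALIDATORS, fanout))
        have_seen |= newly_informed
    return rounds, len(have_seen)

rounds, reached = gossip_until_epoch()
print(f"event reached {reached}/{len(VALIDATORS)} validators "
      f"in {rounds} gossip rounds -> epoch can be sealed")
```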
This means we are very dependent on the topology of the network, and we've identified a number of areas in the P2P layer to focus on next. Communication plays a key role here, and even today communication latency is not a big issue, even at a global scale. Especially if you use a broadcast protocol, meaning that instead of one-to-one communication I send a message to everyone I know, processing the information is a little slower, but in a network like this you can still share information very quickly.
Now, let's talk about the consensus layer. Initially we were just doing simple transfers from one wallet to another, nothing fancy, no virtual machine. Then we wanted to introduce a virtual machine, so we added epochs. These two independent components cooperate: consensus orders transactions, and the virtual machine processes them and updates state. Despite our optimizations, throughput is still limited. Our maximum throughput on the raw transaction network is roughly between 50k and 180k, depending on hardware and network constraints. Our goal at the time was to find the true limit, and we can extend throughput further through hardware and engineering.
Our research is now focused on the virtual machine side, alongside consensus, and we've had some of it peer reviewed. We've received a lot of help from the University of Sydney and built a close relationship with Professor Bernhard Scholz. He is a truly great person, and although people like that can sometimes make you feel a little unintelligent, it's never intentional. They're just on a different level, which can be frustrating at times, but it's also a great opportunity to learn, and I learned a lot from it. He is one of the pioneers in programming languages and virtual machines, and he brought many fantastic ideas and a full team to the table.
Carmen and Tosca are his team's creations; I can keep up with them, but I can't claim ownership of that work. Let me explain briefly: Tosca is a new virtual machine. Because we have dApp developers and an ecosystem, we have to take them into account. The choice we faced was either to start over, which meant abandoning everything that came before, including developers and the community, and rewriting everything, or to find a compromise that met their needs. We chose to preserve compatibility at the bytecode level. Simply put, previously deployed code still runs in the new system, so even if we fork the network, existing contracts keep working. Whether to eventually drop that compatibility is still an internal discussion; at some point a decision will have to be made to reduce the technical cost. Currently the Tosca virtual machine is bytecode-compatible with the EVM, which means you don't need to recompile things like Solidity contracts, but you certainly can if you want to. Recompiling can bring additional optimizations, since we have new interpreters for high-level languages like Solidity and Vyper. Nothing is in production yet; it's part of the ongoing work as we migrate to the new system.
In a blockchain, the opcodes (the instruction set) we use affect system performance. For example, the Ethereum Virtual Machine uses 8-bit opcodes, while we use 16-bit opcodes. That may not sound like a big deal, but when replaying from the first to the 50-millionth transaction, the interpreter using 8-bit opcodes took about 40 hours while the one using 16-bit opcodes took 27 hours, roughly a 30% improvement. Of course many other factors affect how the system runs, but this is a genuinely important one.
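As a quick sanity check, the "roughly 30%" figure follows directly from the two runtimes quoted above:

```latex
\frac{40\,\mathrm{h} - 27\,\mathrm{h}}{40\,\mathrm{h}} \approx 0.325
\quad\Rightarrow\quad
\text{about } 30\%\ \text{less wall-clock time to replay the same 50 million transactions.}
```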
In blockchain there is a huge body of existing research on distributed systems, virtual machines, and so on, yet for some reason many people choose to ignore it. They seem to think they can solve the problem on their own and don't need to lean on decades of results. We think we can improve our system by applying proven techniques, and that's one reason we decided to move to 16-bit opcodes.
Let me explain the opcode work a bit. In a traditional EVM, performing basic math like a plus b, then multiplying by c, takes separate steps: first a plus b, then multiply by c. By profiling we found that patterns like this add-then-multiply sequence account for the large majority of cases (over 95%). So why not combine them into one super-instruction? That reduces the number of operations dispatched and improves the efficiency of the system.
A super-instruction combines two operations into one. Instead of executing the addition and the multiplication separately, a single fused opcode performs both by default. Normally you read, modify, and write a target before you can perform the second operation; introducing super-instructions roughly halves the number of operations that have to be dispatched. This matters especially in today's virtual machines: DeFi and NFT operations like an ERC-20 transfer look like a single standard operation but are actually a series of steps. First read your balance, then check whether the balance is sufficient, then subtract the amount from your balance, then update the other party's balance, check that everything is consistent, and finally commit.
You can imagine how many times these operations occur on the blockchain, each requiring many different opcodes, but by introducing the super-instruction mechanism we can simplify the process. It should be pointed out that this is not a new concept; it's the result of research done decades ago. It just hadn't been applied in the blockchain space before, and bringing it into the virtual machine is an improvement.
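A tiny stack-machine sketch of the super-instruction idea, with an illustrative fused ADD_MUL opcode; the opcode names are made up for the example and are not EVM or Tosca opcodes.

```python
# Instead of dispatching ADD and then MUL as two opcodes, a fused ADD_MUL
# opcode computes (a + b) * c in a single dispatch.

def run(program):
    stack = []
    for op in program:
        if isinstance(op, int):
            stack.append(op)                  # PUSH literal
        elif op == "ADD":
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        elif op == "ADD_MUL":                 # fused super-instruction
            c, b, a = stack.pop(), stack.pop(), stack.pop()
            stack.append((a + b) * c)         # one dispatch, one stack round-trip
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack[-1]

# (2 + 3) * 4 -> 20, both ways
print(run([2, 3, "ADD", 4, "MUL"]))   # two arithmetic dispatches
print(run([2, 3, 4, "ADD_MUL"]))      # one fused dispatch
```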
We also spent a lot of time on parallel execution, and the concept is intuitive. For example, if I'm sending USDC to G, then M buying an NFT shouldn't have to wait for my transfer to commit to the same state. In practice, though, many interactions within a block are highly correlated: during periods of high activity, multiple interactions touch the same state, with significant activity both before and after a given transaction.
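A small sketch of the underlying scheduling idea: transactions that touch disjoint state can run in parallel, while overlapping ones stay ordered. The grouping heuristic and the account keys are illustrative assumptions, not the actual Fantom scheduler.

```python
# Greedily assign each transaction to the first batch whose touched state
# does not overlap; each batch can then be executed in parallel.

def group_for_parallel_execution(transactions):
    batches = []  # each batch: (set of touched keys, list of txs)
    for tx, touched_keys in transactions:
        for keys, txs in batches:
            if keys.isdisjoint(touched_keys):
                keys |= touched_keys
                txs.append(tx)
                break
        else:
            batches.append((set(touched_keys), [tx]))
    return [txs for _, txs in batches]

block = [
    ("andre sends USDC to G",  {"usdc:andre", "usdc:G"}),
    ("M buys an NFT",          {"weth:M", "nft:42"}),       # independent of tx 1
    ("G swaps USDC for ETH",   {"usdc:G", "weth:pool"}),    # conflicts with tx 1
]
for i, batch in enumerate(group_for_parallel_execution(block), start=1):
    print(f"batch {i} (can run in parallel): {batch}")
```

With these toy inputs, the USDC transfer and the NFT purchase land in the same parallel batch, while the swap that reuses G's USDC balance has to wait, which is exactly the dependency problem described above.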
After a lot of optimization and parallelization work, we explored an enhancement we call clairvoyance. Simply put, we brute-force reordered all the transactions in the first 50 million blocks and tried to find the best order in which to write state. We ended up with the best ordering, which we call optimal clairvoyance. That gave us about a 30% performance improvement, which is a nice boost, but we found gains elsewhere, some as high as 800% or 400%, that make clairvoyance less important.
The next big improvement came once the new virtual machine work spun up. We developed a simulation environment, which we call Substrate, that lets us make small changes and run the whole system very quickly. It's like a container in which we can test small changes and understand their impact. Without tooling like that, testing the impact of these changes is difficult; we'd been burned before by spending a long time building a system only to find out the answer doesn't work. It was actually the first tool we built. We plan to open-source it, and since it's compatible with any EVM network it should be useful to other blockchain teams that want to take the same approach.
It's also the tool we've been using to test all these different theories and incremental changes. By making small changes to the codebase and running it quickly, we can test large data sets in hours instead of days. Part of it is our profiler, which shows where we spend the most time while executing these transactions. After we introduced a lot of VM-level improvements like the opcode and super-opcode work, we also introduced hot-spot caching: if a piece of state is accessed frequently, we keep it in cache and serve reads from there instead of reading it again. This was basically a foundation of web development even decades ago, but for some reason the optimization hadn't caught on here. We also implemented a hash cache, because we didn't want to keep recomputing hashes; that consumes a lot of resources, and many hash values, such as state tree roots, change constantly as transactions are processed.
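A small sketch of the two caching ideas just mentioned, a hot-state cache in front of disk and a memoized hash function; the structure is illustrative, not the client's actual code.

```python
# Frequently-read state is kept in memory instead of hitting disk, and hashes
# of unchanged data are memoized instead of being recomputed.
import hashlib
from functools import lru_cache

class HotStateCache:
    def __init__(self, disk_read):
        self.disk_read = disk_read   # expensive fallback (disk / database read)
        self.cache = {}

    def get(self, key):
        if key not in self.cache:    # only touch disk on a miss
            self.cache[key] = self.disk_read(key)
        return self.cache[key]

    def put(self, key, value):
        self.cache[key] = value      # writes keep the hot copy current

@lru_cache(maxsize=100_000)
def cached_hash(data: bytes) -> bytes:
    # recomputing hashes of unchanged nodes is wasted work; memoize them
    return hashlib.sha3_256(data).digest()

store = HotStateCache(disk_read=lambda key: b"\x00" * 32)
store.get("balance:0xabc")     # first access: disk
store.get("balance:0xabc")     # hot path: served from memory
print(cached_hash(b"same leaf") == cached_hash(b"same leaf"))  # second call is free
```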
However, we found the next biggest bottleneck was the disk, that is, the database. We track all of these read and write operations, and a lot of them happen in the background without you even realizing it, like the index building I mentioned earlier. So we went from Badger to Pebble and finally to Carmen, a key-value store. It really has only two main components and a new schema. I'm trying to keep this high-level; we can go into detail if you want.
So this is about addresses and address spaces, the basic way you look up smart-contract data when you look up values. The other feature is live pruning, which matters particularly for Fantom. Because Fantom uses an aBFT (asynchronous Byzantine fault tolerant) system, it doesn't actually need the longest-chain rule: once you have 2/3 confirmations, you can truncate all the earlier records, since you only care about the final state. Of course, you can still retain historical data for archival and proof-of-history purposes, but you don't need the longest chain; you have true finality.
This is a long-standing topic of debate because of the risk in some scenarios. Suppose someone had a secret lab of quantum computers somewhere that could crack cryptographic hashes extremely quickly; they could build a new Bitcoin chain and present it. That's probably unlikely to matter in a system like Ethereum, but it is a risk for a system like Bitcoin: they could create a new longest chain and declare, "Hey, this is the chain now, we have everything, thank you."
So relying on the assumption of probabilistic finality carries risk. So many blocks and so much work have accumulated by now that rewriting it is nearly impossible, and even a system like Bitcoin could add a phase where all the validators sign off, agree that this is the new state, and proceed from there, so only the recent part might be affected. But the possibility is real, even if, as scary as it sounds, it probably won't happen in the next 50 years.
So one purpose of live pruning is to keep disk usage low. State bloat is a huge problem in any longest-chain system. It's important to address it before it becomes serious, because if your blockchain attracts a lot of economic activity or attention, your state grows quickly, your validators start to suffer, and they have to keep upgrading. So the state area is always an important focus for us.
With flat storage and address-index lookups, even without live pruning, we can shrink what we keep. For example, storage on the Sonic chain we currently run is reduced by 98% on disk, because you no longer need all those background indexes and lookups and everything else that comes with a database. This matters because as activity increases, so do hardware requirements. You constantly have to trade off raising specifications against lowering requirements; it's an ongoing process that can't be avoided. Whether in Web2 or traditional finance, eventually you have to scale these systems, and that gets very expensive, so you have to find new ways to change the code. In the traditional world you'd go through this cycle every six months or so.
Overall, Carmen, the new data store, solves the major bottleneck we hit previously and delivers roughly 8x the throughput. The two main things we're working on now are the P2P layer and the transaction pool, which is a standard optimization project. We take the traditional approach: profile the stack, identify the biggest bottleneck, fix it, run the profiler again, and repeat until the optimizations get so tiny it makes you feel suicidal.
6. Ethereum's Road to Scaling
Moderator 2: You said some very interesting things; let me pick up on one briefly as a transition. You mentioned that you don't entirely buy into parallel processing, because in a blockchain a lot of things are interconnected, whereas, by contrast, SUI and MOVE take an optimistic approach to parallelism. We're not sure how that plays out in practice, but I wanted to use it as a transition, since you're a pioneer in the cryptocurrency space and on Ethereum in particular. What are your thoughts on Ethereum's new scaling approach?
Andre Cronje: Ethereum hasn't really done anything to address scaling directly, and they're scared, because past attempts didn't succeed. I don't entirely agree with that stance, but I think about it a lot, even with Fantom. It has relatively less economic activity and less value, yet every time we do a deployment or a change I get nervous, even with the Sonic system that we've tested so many times, because an error in an opcode or a bytecode can lead to huge losses. Especially after Eminence, I worry even more, because even the smallest detail you fail to account for can have catastrophic consequences.
Out of respect for that fear of financial loss, I respect Ethereum's choice. I think the transition to proof of stake was a major historic moment and a huge success for Ethereum and everyone involved, and I want to congratulate them on it. But in my view, and I've seen this happen at every company I've worked for, when you're new you're willing to take risks and push boundaries, and as your reputation and value grow, you become less willing to take those risks. Ethereum's strategy right now is to be conservative and focus on L2s. People seem to forget that the Lightning Network was the first L2, on Bitcoin, and nobody really built anything there. This may be the right approach, but it signals being content with the status quo, with little appetite for improvement. Historically, every time that happens, a new competitor emerges that is willing to take risks and outperform the previous generation. Ethereum did that to Bitcoin, and I think the next generation of Layer 1s is doing the same to Ethereum. Calling anything an "Ethereum killer" is silly, though. The economic activity, the TVL, everything from networks like Bitcoin, Ethereum, Solana, and the rest combined is still an insignificant part of the financial world. If you think this is the ceiling of global financial solutions, that tribalism is just crazy.
Anyway, I say this because, you know, you don't use Ethereum to buy a cup of coffee at Starbucks. It's more for portfolios, or for the kind of older banks that only change things once a year. For those uses, Ethereum is the perfect choice because it is built around things like its security budget. It's a bit of a joke, but that's why it exists. Likewise, if you wanted to pay someone, you wouldn't use Bitcoin, because almost no one uses it for that anymore. I still remember when I entered this field my first salary was paid in Bitcoin, because it was the main payment network at the time. But now I can't imagine waiting an hour for someone to send me Bitcoin; I would rather use Ethereum.
7. Fantom’s marketing plan
Moderator 1: Andre, I have a question here that I think ties into a lot of what we've discussed. It's really a marketing question about Fantom, but to discuss it properly I want to go back to Yearn. At Yearn you did the so-called fair launch, where the founder takes no tokens, and that basically created a cult of personality around you; someone even wrote an e-book called "The Blue Pill" describing you as almost a god because you ran that fair-launch project. I know you've talked about this before; at one point you said "I test in prod," and then a whole cult formed around you, which made it very hard to operate, because everything you did was under intense scrutiny no matter what. Then all of a sudden you left Yearn and the price of YFI collapsed practically overnight. Then after a while you joined Fantom, Fantom went up a lot because of that, I think that was 2022, and then you left again.
At that time you actually wrote a blog post explaining why you were leaving DeFi because of this toxic culture. Many people started accusing you of rug-pulling everyone; even though you were just one person who worked on these projects, it was "you rug pulled us, and that's why the token dropped 50%." You've said that in crypto, if founders build stable protocols, they need to step away. But one of the reasons those projects were so successful was not just that you were first, like with Yearn, the first real aggregator on Ethereum, but that you built such a reputation that it followed you to Fantom, and that's a big part of why there was so much speculation behind the project.
My question is, now that you're at Fantom, how do you think about go-to-market? All of these blockchains are, to a greater or lesser extent, converging: they share more or less the same core with only subtle differences. Now that you're no longer relying on a cult of personality to attract speculation and funding, how do you think about differentiating Fantom from other projects when so much of the technology seems to be converging on the same point?
Andre Cronje: Yeah, so I think we've taken the right steps in that direction, but the first thing we needed to do was identify who our users really are. Is it the people interacting through Rabby or MetaMask or whatever wallet you like on a Ledger, or is it the dApp developers? We quickly realized it's the dApp developers. They're our customers, they're our users, and they're the people we should serve.
Then we started thinking about how to improve the lives of dApp developers. There are a lot of small things. Just to give one aBFT example: when you issue an RPC call to submit a transaction, by the time you receive your 200 OK reply the transaction has been confirmed; it's final, completed. You don't have to wait in the background and poll to see whether it was confirmed, whether the block came out, whether there are enough blocks on top. That's a lot of extra boilerplate code you'd otherwise have to write as an integrator building a website. There are many small examples like that; we're just trying to improve the coding experience and the overall development process. The next thing we considered is that most of the fees, gas and so on, go to the validators for security, and we can take 20% of that and route it to support the economic activity of the dApps, so they actually have a revenue stream, rather than raising money once at the start and then slowly petering out and dying because they have no other revenue and don't want to charge users, since someone would just copy their code and give it away for free until that dies too, but that's another story.
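A conceptual sketch of the boilerplate difference described here, using hypothetical helper functions (submit_tx, get_receipt) rather than a real RPC client: on most chains the dApp polls for confirmations, whereas with finality-on-acknowledgement that loop disappears.

```python
# Hypothetical helpers, purely to contrast the two integration patterns.
import time

def submit_tx(raw_tx):          # hypothetical: returns a tx hash immediately
    return "0xabc123"

def get_receipt(tx_hash):       # hypothetical: None until the tx is included
    return {"status": 1, "confirmations": 12}

# Typical pattern: submit, then poll until "enough" confirmations.
def send_and_wait(raw_tx, confirmations_needed=12, timeout=60):
    tx_hash = submit_tx(raw_tx)
    deadline = time.time() + timeout
    while time.time() < deadline:
        receipt = get_receipt(tx_hash)
        if receipt and receipt["confirmations"] >= confirmations_needed:
            return receipt
        time.sleep(1)
    raise TimeoutError("transaction not confirmed in time")

# Pattern described for an aBFT chain: the acknowledgement *is* finality,
# so the dApp can treat the response as done and move on.
def send_final(raw_tx):
    return submit_tx(raw_tx)    # 200 OK -> final, no polling loop needed
```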
Our gas incentive program has been a very successful project. Gas subsidies are another thing: when new users come to use someone's dApp, they don't have to pay gas fees; the dApp developer can fund a small subsidy, drawn from their gas incentives, to pay for those interactions. It acts like a built-in relayer, which makes onboarding much simpler for their users.
Also, we're focusing more on school and university-level projects. We hold hackathons every three months and work hard with students to teach them, not because we expect them to become the next batch of builders right away, but because hopefully in five to ten years they will be, and if they're already familiar with our tech stack and our ecosystem, that's great. So this is a longer-term approach, because we realized that competing for mindshare in the current environment is pointless and a complete waste of time. In the past we actually invested in teams through incubation and seed rounds, starting from their original concept stage, and then they would typically go on to a Series A or B and find lead VCs. Fantom doesn't have venture backing; it was just a public sale, so we don't have big-name VCs behind us. And when these teams went to lead investors, the lead investors would always tell them, "No, you're going to launch on chain ABC, not on Fantom, because that's where we're invested." So I can't blame them, but it does feel a bit like being cheated. So we basically stopped engaging with those teams.
I also believe most of the developers currently building decentralized applications are just copying what we see in the traditional world, which isn't wrong, but the next important decentralized applications will be built by people who grew up with the decentralized web, not by thirty-something developers who are only now crossing over and trying to think of something new, because that kind of thinking isn't encoded in our DNA. So we also want to cultivate that new energy. I mean, that's probably not what token holders and the community want right now, but I don't think we're going to win that game at all in the next two to four years, and I don't think our technology stack is yet where it needs to be to achieve it. But we're lucky in terms of spending and funding, so we can take on some fairly long-term things, and that's what we're trying to do.
Moderator 1: We have a pretty deep community here. I saw a tweet where someone asked you whether rollups were coming to Fantom, and you said rollups weren't on the roadmap but that you were working on something around the SVM. Can you give us some alpha on that?
8. SVM is the best virtual machine
Andre Cronje: Yes, as I mentioned before, I think the SVM is currently the best virtual machine. Our consensus mechanism is decoupled from the virtual machine, which means we can integrate other virtual machines, such as the SVM, into our system fairly easily. The goal is to use the consensus mechanism purely as an ordering system and then feed its output into the VM for processing, which is not complicated. People might expect something more exciting, but we're convinced the SVM is the best virtual machine technology available, and we're working on combining it with our consensus system.
9. Airdrops, Regulations and Bull Market Predictions
Moderator 1: If you follow Solana, you know they recently did the JTO airdrop; to some extent it's airdrop season. There weren't many tokens before, but now there are a lot. You've been through the airdrop process, and I know it's pretty interesting for people who aren't familiar with it. Yearn released all its tokens at once during the DeFi craze of 2020, and a year later the community actually voted to increase the token supply by about 30% to reward contributors and put some funds into a treasury. I think people are realizing now that maybe you shouldn't release all the tokens at once and shouldn't neglect setting up a treasury, but you were very early. I'm just curious, do you have any advice or lessons learned about this? How do you think airdrops can be done well? Would you still do it this way, or do you think it's just business?
Andre Cronje: No, I wouldn't do it that way now. Airdrops have been gamed by bots and turned into liquidity farms, losing their real value. With Keeper, for example, the tokens weren't free and you couldn't simply farm them. I called it OLM at the time, options liquidity mining: you mine options rather than tokens. If I stake my liquidity and earn 10 tokens, I don't actually receive those 10 tokens; I just get the right to buy them at a discount. We were doing 50% or so, so I could buy those tokens at 50% of the current market price.
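A tiny worked example of the options-liquidity-mining payoff he describes, with illustrative numbers: the farmer still profits, but the protocol captures the strike payment instead of giving the full value away.

```python
# Instead of receiving 10 reward tokens outright, the farmer gets the right
# to buy them at 50% of the market price. All numbers are illustrative.
tokens_earned = 10
market_price  = 4.00          # USD per token
discount      = 0.50          # option strike = 50% of market price

strike             = market_price * discount
cost_to_exercise   = tokens_earned * strike          # paid to the protocol
value_if_exercised = tokens_earned * market_price
farmer_profit      = value_if_exercised - cost_to_exercise

print(f"farmer pays {cost_to_exercise:.2f}, receives tokens worth "
      f"{value_if_exercised:.2f}, nets {farmer_profit:.2f}; "
      f"the protocol keeps {cost_to_exercise:.2f} instead of giving it away")
```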
We were testing that approach at the time, and I think there were some nice things about it, although it wasn't adopted as widely as I expected. One of the best systems I've seen, and I think Inverse is a good example, is doing it manually: the first person says, "Well, I like you three, you three get some tokens," and now the four of us form a committee and decide who the next contributors to receive tokens are. I think that's one of the best approaches, because airdrops are mercenary now and that's hard to avoid. The other thing is that no matter what you do, a large portion of people will be dissatisfied, so if they're going to be dissatisfied anyway, at least capture some value in exchange; don't give it away for free. An airdrop attracts a lot of attention and eyeballs, but I also think it's bad for a lot of teams, because their initial ICO, airdrop, fair launch, whatever you want to call it, becomes their peak, their best moment, and from that point on they slowly, silently decay until they die.
I understand it's because you're in a highly competitive field, and if you don't do it you have to put in more effort. If you choose the long-term path, you won't have as much hype and you won't have as many users right away, but I can assure you those airdrops and the like don't have real hype or real users either, just some financial value for mercenaries to extract, which is why there's a lot of activity. Go forward a year, though, and they'll probably have lower usage and value than the projects that didn't do it and just grew slowly. In this space, where you're competing with so many lies and hoaxes that people believe and will defend relentlessly, taking the long, slow approach is really tough, but in the long run it works out better.
You know, to me a perfect example is Solana. They could have taken the EVM route and said, "Fuck it, let's bolt the EVM onto proof of history," but they didn't. The moment we integrated the EVM was, I think, one of our biggest mistakes, because at that point we made a quick trade-off. It means people can come to Fantom and deploy things very quickly, but it's also why 99% of what you see on new chains is forks: people copy and paste from A to B, run some liquidity mining, launch the token, and then the team leaves and everyone else leaves, because it's so easy to migrate back. Whereas if you build something new, like around the SVM, it does take longer to accumulate users and you have to do more education, but the people you do win can't just give up and jump away so quickly.
Even if they do take those steps, they realize, "Oh, it's actually not that great over here, I want to go back." So Solana proves the point to me: the long-term approach is usually the better option with better returns, but you have to live with it for a longer period of time.
Moderator 1: That's a very insightful point. For everyone listening, Fantom is the second-oldest EVM chain after Ethereum, and obviously now you have what I'd call the FVM coming soon as well. Okay, I promise this is the last question as we wrap up. You were on a podcast, I think back in February, and they asked whether you knew when the next bull market would come. You said you actually appreciate the bear market because that's when you can focus and build, but you did say you thought things would turn around toward the end of the year. Now prices are up a lot, and I'm curious whether you still feel that way: do you think the tide has turned, or are people just bored and this is a temporary bump?
Andre Cronje: There are so many factors at play right now, and I don't think the world changes that quickly. Big events like Covid, Ukraine and Russia, Israel and Palestine, so much has happened this year that it's hard to make any real prediction. I do think there's been a change in sentiment, and sentiment changes generally coincide with price action; the shift now is positive, people are happy again, people are enjoying themselves again. I don't know if it was Buffett or Munger who said you have to wait for the tide to go out to see who's been swimming naked. That has happened; we've seen who was swimming naked, and they've run away or been locked up or whatever. The people still left have been through the depression and everything else, so it's like we're in the "it can't get worse, so it has to get better" stage.
But I feel like we're there now, and I hope that's true and it can't get worse. I mean, there have been so many catastrophic events over the years that you never know what will happen next, but assuming there are no new once-in-a-lifetime events, I think things are moving in a positive direction. I don't think the pain is over in the traditional finance world yet; I don't know how much of an impact that will have, and I think there's going to be more pain there for at least another year or two. But I do think cryptocurrencies will probably decouple from that, I optimistically hope, because cryptocurrencies are more interesting. So unless something catastrophic happens, I think there's reason to be a little more optimistic.
Moderator 1: Like you mentioned before, it's hard to imagine going back to a job in private equity or investment banking after getting involved in cryptocurrencies. I think if someone did leave, they would probably move to another frontier, like artificial intelligence. Now people may be coming back into crypto, so that's very interesting. I'm sure everyone listening can tell how smart Andre Cronje is and how much of an impact he has had on the ecosystem. Also, I didn't know you were a lawyer until preparing for this interview, or at least that you had a law degree, is that right?
Andre Cronje: I can't practice law; I've never taken the bar exam. Law was one of the first things I studied, probably around 2001, so definitely don't trust me with legal advice.
Moderator 1: I imagine having some legal knowledge is also beneficial when you enter the world of crypto.
Andre Cronje: Yeah, that's a big driver for me on the whole crypto regulation topic, though I've also been heavily criticized for it. The argument I've been trying to make is why I differentiate between regulated crypto and crypto regulation, because crypto regulation is impossible. You know, we're going to see all these countries say, "Okay, but you can't do ETH transfers to XYZ." You can't stop that; I can run my own node and do it anyway, and there's no way to really stop it. But you can have useful legislation for regulated crypto companies like Circle, Tether, and, as we're seeing now, Coinbase and Binance; they can comply to some degree. I've always been an advocate of regulating the things you can and should regulate, while acknowledging that things that are on-chain cannot be regulated. We should take the lessons learned from FTX, Three Arrows, and so on, but we also have to acknowledge that you can't really touch the cryptocurrencies themselves. That subtlety gets lost in the space, because all everyone hears is "Andre Cronje wants to regulate cryptocurrencies," and they pull up the documents behind it to show everyone, but that's not actually the case. There's subtlety here, but subtlety doesn't lend itself to headlines, and no one has time to read more than a line. Everything gets oversimplified.
Moderator 1: Yeah, I think it's a little fanciful to expect no regulation at all, and a lot of people say what we want most is clarity. It also points to L2s like Base, which is run by Coinbase, a single entity that may be regulated, and maybe in the long run that's actually bullish, because maybe you'll have many more users who are KYC-verified and can use Coinbase's credentials and so on. Then you might have something like Fantom or Solana that doesn't require KYC, or certain applications that don't, because now you can build hooks that only fire when a program is running behind the scenes. So I don't know what the future is going to look like, but regulation is definitely going to be a part of a lot of crypto.
Andre Cronje: I agree. As we saw on Ethereum, a lot of block producers won't include transactions involving addresses on the list, although those transactions still go through because someone else is running block producers that will include them. If I'm a validator running in the United States, where the OFAC list is a big deal, I need to comply with it because I live under that jurisdiction, it's as simple as that. But if I'm in some remote, faceless country where the list doesn't matter, I can run a validator without it, and I'm not under your jurisdiction. So to me, entities should regulate crypto where they can identify it: "Hey, person X is doing activity Y in my jurisdiction, that's mine to regulate." But if you try to prevent something globally, it becomes the base-layer developers who bear the responsibility, and even if they want to change things, the validators don't have to adopt it, so they're stuck. Then the regulators go after those unlucky guys, which we've seen in a few cases.
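To make that mechanism concrete, here is a minimal sketch in Python (entirely hypothetical names and placeholder addresses, not any actual client's code): a producer that chooses to comply with a local sanctions list filters transactions out of its own blocks, while a producer outside that jurisdiction skips the filter, which is why the censored transactions still end up on chain.

```python
# Hypothetical illustration only: per-producer transaction filtering against a local list.
SANCTIONED_ADDRESSES = {
    # Illustrative placeholder entries, not real sanctioned addresses.
    "0x1111111111111111111111111111111111111111",
    "0x2222222222222222222222222222222222222222",
}

def select_transactions(mempool, must_comply: bool):
    """Return the transactions this particular block producer is willing to include."""
    if not must_comply:
        # A validator outside the relevant jurisdiction includes everything.
        return list(mempool)
    # A complying validator drops transactions touching a listed address.
    return [
        tx for tx in mempool
        if tx["from"].lower() not in SANCTIONED_ADDRESSES
        and tx["to"].lower() not in SANCTIONED_ADDRESSES
    ]

# The same mempool as seen by two producers in different jurisdictions.
mempool = [
    {"from": "0xaaaa000000000000000000000000000000000000",
     "to":   "0x1111111111111111111111111111111111111111", "value": 1},
    {"from": "0xbbbb000000000000000000000000000000000000",
     "to":   "0xcccc000000000000000000000000000000000000", "value": 2},
]

print(len(select_transactions(mempool, must_comply=True)))   # 1: the listed tx is dropped locally
print(len(select_transactions(mempool, must_comply=False)))  # 2: a non-complying producer still includes it
```

The point of the sketch is that the filtering decision lives with each individual producer, so compliance is jurisdictional rather than protocol-wide.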
That's a lost opportunity. What I hope our entire industry understands is that we have to help educate. If we just keep burning flags and telling these guys to fuck off, the harder you resist, the harder they're going to fight back, and they're going to make your life miserable. And at a certain point, you can only say "there's nothing I can do" so many times before someone gives in and says, "Well, I might as well do something," and puts in a backdoor or whatever, which is even worse.
Host 1: Does this still scare you? I know you mentioned, around Yearn's token launch and so on, being aware that the CIA or someone might come after you. Is that still in the back of your mind?
Andre Cronje: Yeah, look, I've had my fair share of discussions with several three-letter agencies, and I admit that when I first received that letter, I was very scared. But as the process goes on, you begin to understand that it's discovery and education, and I think it went a lot better than I originally expected. And you know, I found it funny at the time, because everyone on crypto Twitter was saying, "Ah, these guys don't know what's going on, it's going to take them years to understand and catch up." But the questions those guys asked me were anything but low-tech; like, they know more about this space than most people on Twitter, no offense. That was my interaction with them, and overall it was actually a little more positive than I initially thought, to be honest. It's still scary, although I think that will change in time. I could have made a lot of stupid mistakes. One thing that helped was not holding a fundraiser: I just issued the token, everyone could come, a fair launch. The other was leaving when the time was right. If it weren't for those two things, I think I would have a lot more problems now.
Moderator 1: Yeah, well said. That's actually some of the pushback I've heard against what I think you've described as airdrops as options. Some people worry that it actually puts protocol operators into the clutches of regulators, because you're selling something, right?
Andre Cronje: Yes, yes.
Moderator 1: Anyway, Andre, thank you for joining us and sharing your story; it's been fascinating. I've been following your story for a while and never thought I would actually get to talk to you, so this is really cool. Thank you for participating.
Andre Cronje: Of course. It's my pleasure, thank you for the invitation.
Host 1: Okay, see you next time.