In response to OpenAI and recent AGI-related product developments in the AI industry, Ethereum founder Vitalik Buterin posted on the X platform: "My definition of AGI (artificial general intelligence) is an AI powerful enough that, if all humans suddenly disappeared one day and the AI were uploaded into robot bodies, it could continue civilization on its own. Obviously this is a very hard definition to measure, but I think it captures the intuitive distinction many people have in mind between 'the AI we are used to' and 'AGI'. It marks the transition from a tool that constantly depends on human input to a self-sufficient form of life. ASI (artificial superintelligence) is a different matter entirely; my definition is the point at which humans in the loop no longer add any value to production (as in chess, a point we only actually reached in the past decade). Yes, ASI frightens me, and even AGI as I define it frightens me, because it carries obvious risks of loss of control. I support focusing our work on building intelligence-augmenting tools for humans, rather than building superintelligent life forms."