Eric Schmidt, former CEO of Google and, as he now stresses, "no longer a Google employee," recently gave a talk at Stanford.
The talk was recorded and uploaded to the official Stanford Online YouTube account, and included a 40-minute Q&A session between Schmidt and students.
Because his views were so blunt and his words so candid, the talk made the news. Stanford's official account then hid the video, and Schmidt ultimately apologized for his "wrong remarks" in an email interview.
The well-known technology blogger Lan Xi summarized the key points of Schmidt's talk; a TL;DR follows. The full Q&A transcript is attached at the end of the article.
Why is Google now being outpaced by OpenAI in AI? Because Google decided that letting employees go home early and work-life balance mattered more than winning the competition. If your employees only come into the office one day a week, how can you compete with OpenAI or Anthropic?
Look at Musk, look at TSMC: these companies succeed because they can push their employees hard. You have to push hard enough to win. TSMC puts first-year physics PhDs to work on the factory floor. Can you imagine American PhD students going to the assembly line?
I have made many mistakes. For example, I used to think NVIDIA's CUDA was a very stupid programming language, but CUDA is now NVIDIA's most powerful moat. All the large models must run on CUDA, and only NVIDIA's GPUs support CUDA, a combination no other chipmaker can shake.
When Microsoft partnered with OpenAI, I also found it unbelievable: how could Microsoft outsource its most important AI business to such a small company? Wrong again. And look at Apple's lukewarm attitude toward AI. Big companies really are bureaucratic, while the hungry strivers are all off founding startups.
TikTok has taught Americans a lesson. If the young people here want to start a business, grab the music or whatever you need as fast as you can (this appears to be a jab at TikTok for tolerating pirated background music in its early days). If you succeed, you will have the money to hire top lawyers to clean up the mess; if you fail, no one will bother to sue you.
OpenAI's Stargate was said to cost 100 billion US dollars, but in fact even 300 billion might not cover it. The energy gap is too big. I have suggested to the White House that the United States either build good relations with Canada, which has abundant hydropower, cheap labor, and is close by, or court the Arab countries and let them make sovereign investments.
Europe is out of the game. Brussels (the seat of the European Union) has been destroying opportunities for technological innovation. France may still have some hope; Germany does not, let alone the rest of Europe. India is the most important swing state among US allies, and the United States has already lost China.
Open source is good, and most of Google's historical infrastructure has benefited from it, but honestly, costs in the AI industry are too high for open source to bear. Mistral, the French large-model company I invested in, will turn to a closed-source route. Not every company is willing and able to be a sucker the way Meta is.
AI will make the rich richer and the poor poorer, and so will countries. This is a game between powerful countries. Countries without technical resources need to get tickets to join the supply chain of powerful countries, otherwise they will miss the feast.
AI chips are high-end manufacturing with high output value, but they are unlikely to drive employment. Few of you have probably been inside a chip fab: it is all mechanized production that barely needs people, because by clean-room standards people are clumsy and dirty. So don't expect a manufacturing revival. Apple moved the MacBook production line back to Texas not because wages there are low, but because there is no need to hire people at scale.
Historically, when electricity was first introduced into factories, it created no more productivity than steam engines had. Only about 30 years later, when distributed power sources transformed workshop layouts and enabled the assembly system, did productivity leap. Today's AI is as valuable as electricity was then, but it still needs organizational innovation to yield really huge returns. For now, everyone is just picking the low-hanging fruit.
1. Three AI technologies that will change the future
Host: What do you think about the development of AI in the short term? In your opinion, the short term should be the next one or two years, right?
Eric Schmidt: Things are moving so fast that it feels like I have to give a new speech about the future every six months. Is there anyone here majoring in computer science? Can anyone explain to everyone what a million token context window is?
Audience: The basic meaning is that the question prompt can use a million tokens or a million words, or something similar.
Eric Schmidt: So a million tokens means you can ask a question that is a million words long.
Audience: Yes, I know this is the general direction of Gemini at present.
(Screenshot: Gemini official website introduction)
Eric Schmidt: No, their goal is to reach 10 million. Anthropic has reached 200,000 and is still growing. The goal is one million and above, and it is conceivable that OpenAI has a similar goal. Next, can anyone give us a technical definition and explain what an AI agent is?
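To give a sense of why million-token context windows are hard to serve, here is a back-of-the-envelope sketch of the memory an attention cache needs; the model dimensions below are illustrative assumptions, not any vendor's published numbers.

```python
# Back-of-the-envelope: KV-cache memory for a long context window.
# All model dimensions here are illustrative assumptions.
def kv_cache_bytes(tokens, layers, kv_heads, head_dim, bytes_per_value=2):
    # Each token stores one key and one value vector per layer
    # (hence the factor of 2); fp16 uses 2 bytes per value.
    return tokens * layers * kv_heads * head_dim * 2 * bytes_per_value

gb = kv_cache_bytes(tokens=1_000_000, layers=80, kv_heads=8, head_dim=128) / 1e9
print(f"~{gb:.0f} GB of KV cache for a 1M-token context")
```

Even under these modest assumptions, a single million-token conversation needs hundreds of gigabytes of fast memory, which is why Schmidt calls the serving side "very complex."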
Audience Member: AI agents are things that perform tasks online, buy things on your behalf, and all kinds of things like that.
Eric Schmidt: So an agent is something that performs a task, and another definition is a large language model with memory. Another question, computer science students, can anyone explain what Text-to-Action is?
Audience Member: It's about expanding text into more text, inputting text, and then AI triggering actions based on the text.
Eric Schmidt: Another definition is converting language into Python, a programming language that I never thought would survive. But now everything in AI is done in Python. There's a new language that was just released called Mojo, which seems to have finally solved the problem of AI programming, but we'll have to see if it can survive in the face of Python dominance.
Another technical question, why is Nvidia worth two trillion dollars while other companies are struggling?
Audience: Technical reasons. I think it mainly comes down to optimized code execution. Most machine-learning code now needs to run in a highly optimized environment, and only Nvidia's GPUs deliver that. Other companies can develop all kinds of technology and may have a decade of software experience, but they don't have a team dedicated to optimizing for machine learning.
Eric Schmidt: I like to think of CUDA as the C language for GPUs; that's the way I think of it. It was born in 2008, and I always thought it was a terrible language, yet it became mainstream. Now there is a whole set of open-source libraries highly optimized for CUDA, something everyone building on these technology stacks tends to overlook. One of them is called vLLM, and there are other open-source libraries like it, all optimized for CUDA. This is very difficult for competitors to replicate.
What does all this mean?
In the next year, you will see large context windows, agents, and text-to-action capabilities. When they are applied at scale, the impact will be greater than the huge impact we are now seeing from social media, at least in my opinion. A context window can serve as short-term memory, and at this scale it is amazing; serving and computing over it is technically very complex.
The interesting thing about short-term memory is that you can ask it to read 20 books, input the text of these books as a query, and ask it to tell you what the books are about. The human brain forgets the middle part. There are people building basic LLM agents now. The way they work is that they read, for example, chemistry content, discover the chemical principles in it, and then test it and add the results to their understanding. This is very powerful.
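The read, hypothesize, test, and remember cycle Schmidt describes can be sketched as a minimal agent loop; the functions below are hypothetical placeholders for an LLM call and a lab experiment, not any real system's API.

```python
# Minimal agent loop: "a large language model with memory."
# `propose` and `run_experiment` are hypothetical stand-ins.
def propose(memory):
    # Stand-in for an LLM call that proposes a new hypothesis
    # conditioned on everything remembered so far.
    return f"hypothesis #{len(memory) + 1}"

def run_experiment(hypothesis):
    # Stand-in for an overnight lab test or simulation.
    return f"result for {hypothesis}"

def agent_loop(steps):
    memory = []  # the memory that turns an LLM into an agent
    for _ in range(steps):
        h = propose(memory)
        memory.append((h, run_experiment(h)))  # fold results back in
    return memory

print(agent_loop(3))
```

The point of the sketch is the feedback edge: each experiment's result is appended to memory, so later hypotheses are conditioned on earlier findings.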
The third point is what I mentioned about text-to-action. For example, the government is now considering banning TikTok. We don't know if it will actually happen. If TikTok is banned, I suggest you say to your LLM: copy a TikTok, get all the users, get all the music, add my preferences, generate it in 30 seconds and publish it. If it doesn't take off in an hour, change it to something similar, and that's the command. Bang, bang, it works immediately.
Do you understand? If you can go from arbitrary language to arbitrary digital commands, that is essentially what Python does in this scenario. Imagine that everyone has a programmer who actually does what you want, instead of programmers who work for me but don't listen to me. (Laughs) Programmers know what I'm talking about. Imagine a programmer who is not arrogant and actually does what you ask, and you don't have to pay so much. And these programmers are in unlimited supply. And these…
Host: They will all be available in the next year or two.
Eric Schmidt: They will be available soon. I’m very confident that they will happen in the next wave of technology.
Audience: You mentioned that the combination of extended context windows, agents, and Text-to-Action will have an incredible impact. First of all, why is this combination important? Secondly, I know you can’t predict the future, but why do you think this will be beyond our current imagination?
Eric Schmidt: I think it's mainly because the extended context window solves the problem of recency. Current AI models take about 18 months to build: 6 months of preparation, 6 months of training, and 6 months of fine-tuning, so they are always a little out of date. But an extended context window lets you feed in the latest information, and that capability is very powerful, much like Google updating in real time.
About agents, let me give you an example. I set up a foundation and funded a nonprofit, which started a project around a tool called ChemCrow, a large-language-model-based system for doing chemistry. They used it to generate chemical hypotheses about proteins, the lab ran tests overnight, and the system kept learning. This greatly accelerated research in chemistry and materials science.
I think "Text-to-Action" can be understood as an unlimited supply of cheap programmers. But I don't think we really understand what happens when everyone has their own programmer working in their area of expertise, not just doing simple things like turning lights on and off.
You can imagine a scenario, for example, you don't like Google. Just say, help me build a competitor to Google, search the web, build an interface, add generative AI, and do it in 30 seconds, and let's see the effect. These old companies, such as Google, are likely to be threatened by this attack, so let's wait and see.
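A toy sketch of the text-to-action idea: a natural-language command routed to executable calls. The keyword table is a deliberate oversimplification of the step a real system would delegate to an LLM, and every function name here is hypothetical.

```python
# Toy text-to-action dispatcher. In a real system an LLM would emit
# the Python directly; a keyword table stands in for that step here.
def search_web(query):
    return f"searching for: {query}"

def build_interface(name):
    return f"scaffolding UI: {name}"

ACTIONS = {"search": search_web, "interface": build_interface}

def text_to_action(command):
    # Match the command against known actions and execute the first hit.
    for keyword, fn in ACTIONS.items():
        if keyword in command.lower():
            return fn(command)
    return "no matching action"

print(text_to_action("Search the web for Google competitors"))
```

The gap between this sketch and Schmidt's "copy a TikTok in 30 seconds" scenario is exactly the gap the coming model generation is supposed to close: replacing the keyword table with a model that writes and runs arbitrary code.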
2. "I am no longer a Google employee"
Host: You worked at Google for many years. They invented the Transformer architecture, and Peter (Peter Norvig, former director of research at Google) was one of the leaders there, thanks to smart people like Peter and Jeff Dean. But now Google seems to have lost the initiative, and OpenAI has caught up. In the latest ranking I saw, Claude from Anthropic ranked first. I asked Sundar (Sundar Pichai), and he didn't give me a clear answer. Maybe you have a clearer or more objective explanation of what happened there.
Eric Schmidt: I am no longer an employee of Google. Frankly speaking, Google decided that work-life balance, leaving work early, and working from home were more important than winning. The secret of startup success is that employees work hard. I'm sorry to be so blunt, but it's the truth. If you start a company after graduation, you won't let employees come in only one day a week and work from home the rest of the time. If you want to compete with other startups, that won't work.
Host: The early situation of Google was very similar to that of Microsoft at the time...
Eric Schmidt: Yes.
In our industry, there is a common phenomenon: Some companies win the market in a very innovative way and completely dominate a field, but they cannot smoothly transition to the next stage.
There are many such cases. I think founders are important, and this is a very important issue. They are at the helm of the company. Although founders are often difficult to get along with and demanding of employees, they also drive the company forward.
Although we may not like some of Elon's personal behaviors, look at what he does at work. The day I had dinner with him, he was flying back and forth. I was in Montana at the time, and he had to fly to attend a meeting with xAI at 10 o'clock in the evening that day.
When I went to Taiwan, I felt that different places have different cultures. What impressed me was that TSMC had a rule that new physics PhDs had to work in the basement of the factory first. Can you imagine asking American PhDs to do this kind of work? Almost impossible.
The results differ accordingly. The reason I am so harsh about work ethic is that these systems have network effects, so time is critical. In most industries time is not so important and there is plenty of it: Coke and Pepsi will always be there, and the competition between them shifts as slowly as a glacier.
When I worked with telecom companies, it took 18 months to sign a typical telecom contract. I don't think it needs to be that long, and things should be done as soon as possible. We are now at the peak of growth and revenue, and this is when some crazy ideas are needed.
For example, when Microsoft decided to work with OpenAI, I thought it was one of the stupidest ideas. It was incredible that Microsoft handed over the leadership of AI to OpenAI and Sam's team. Yet today, they are gradually becoming one of the most valuable companies, competing with Apple. Apple has no good solution in AI, and it seems that Microsoft's strategy has worked.
3. The gap between models is widening
Eric Schmidt: You asked me what will happen next. Every six months my view swings; we are in an even-odd oscillation. Right now the gap between the frontier models (there are only three of them) and everyone else seems to be widening. Six months ago I thought the gap was narrowing, so I invested a lot of money in some small companies; now I am not so sure.
I started talking to big companies, and the big companies told me that they needed 10 billion, 20 billion, 50 billion, or even 100 billion.
Host: The goal is 100 billion, right?
Eric Schmidt: Yes, and it's very difficult. I'm good friends with Sam Altman, and he thinks it might take $300 billion or even more. I told him I've calculated the amount of electricity needed. I went to the White House last Friday and told them frankly that we need a good relationship with Canada, because Canada is not only a good country but also helped invent AI and has a lot of hydropower, and we don't have enough electricity to support this development. Another option is to let the Arab countries pay for it. I personally like the Arab countries and have spent a lot of time there, but they won't abide by our national security rules, whereas Canada and the United States can work together.
Host: That's right. So for these data centers worth $100 billion or $300 billion, electricity will become a scarce resource.
Eric Schmidt: Yes. Following this line of thought, if 300 billion is invested in Nvidia, you know what stocks to buy, right? (Laughter) Of course, I am not recommending stocks.
Host: That's right. We will need more chips. Intel is getting a lot of money from the US government, and AMD is working hard to build chip factories.
Eric Schmidt: If there are devices using Intel chips here, please raise your hands (the audience raises their hands). Its monopoly seems to end here.
Host: Intel used to be a monopoly. And now it is Nvidia's monopoly. So, are there other companies that can do technical barriers like CUDA? I was talking to another entrepreneur the other day who switches between TPUs and Nvidia chips depending on the resources available.
Eric Schmidt: Because he has no other choice. If he had unlimited money, he would choose Nvidia's B200 architecture today because it's faster. I'm not implying anything; competition is certainly a good thing. I've discussed this in detail with Lisa Su of AMD. They have developed a system that can translate the CUDA architecture to their own, called ROCm. It's not fully functional yet, and they're still working on it.
4. We'll go through a huge bubble, and then the market will adjust
Audience: You're very optimistic about the prospects for AI. What do you think is driving this progress? Is it more money? More data? Or a breakthrough in technology?
Eric Schmidt: I basically invest in any project I see, because I can't say for sure which one will succeed. And now a lot of money is following me in. I think part of the reason is that early investments have made money, and the big-money investors, though they don't understand AI very well, now think every project must add some AI element, so almost all investment has become AI investment. They can't tell good from bad. My definition of AI is a system that can really learn; I think that counts.
In addition, there are some very advanced new algorithms now, which are no longer limited to the Transformer architecture. A friend of mine, who is also my long-term partner, has created a completely new non-Transformer architecture. A team I funded in Paris also said that they have similar innovations, and there are many new trends at Stanford.
Finally, the market generally believes that developing intelligent technology will bring huge returns. For example, if you invest $50 billion in a company, you certainly hope to make a lot of money back through smart technology. So we may experience a huge investment bubble, and then the market will correct itself. It has always been like this in the past, and it may not be an exception now.
Host: You mentioned earlier that the leading companies are now pulling further and further apart.
Eric Schmidt: Yes, that is indeed the case now. There is a company in France called Mistral; they are doing very well, and I have invested in them. They released the second version of their model, but the third version may be closed source because the cost is too high. They need revenue and can no longer give the model away for free.
The debate between open source and closed source is very intense in our industry. My entire career has been built on people's willingness to share open source software. The technical work I do is open source, and many of Google's core technologies are also open source. But now, maybe because the cost of capital is so high, the way software is developed may change fundamentally.
I personally think that the productivity of software programmers will at least double. There are three or four software companies working on this goal, and I have invested in these companies. Their goal is to make software programmers more efficient. One interesting company I met recently is called Augment. I always think about individual programmers, but their target is actually those large software teams, which may have millions of lines of code, but no one can figure out the details of how all the code runs. This problem is very suitable for AI to solve. Can they make money? I hope so.
Host: So, there are still many questions to discuss.
Audience: Regarding non-Transformer architectures, I feel that architectures such as state-space models are not discussed much, but they are now making real progress. What new developments have you seen in this area?
Eric Schmidt: I don't know enough about the math, and it's very complex. But basically, they're just different ways of doing gradient descent and matrix multiplication, faster and better. Transformers are a systematic way of doing multiplication at the same time, that's how I understand it. It's similar to that, but the math is different.
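Schmidt's "ways of doing matrix multiplication" framing can be made concrete: the core of a Transformer, scaled dot-product attention, really is a few matrix multiplies plus a softmax. A minimal NumPy sketch (shapes and values are arbitrary):

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: matmul, softmax, matmul.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # one output vector per query row
```

The non-Transformer architectures the audience asks about (state-space models, for instance) replace this quadratic all-pairs multiply with different recurrences, which is what "the math is different" points at.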
Audience: You're an engineer, given the capabilities that these models may have in the future, do we still need to spend time learning programming?
Eric Schmidt: It's like if you already know how to speak English, why do you need to continue learning English? Learning always makes you better. You have to understand how these systems work.
5. Distributed computing cannot solve the computing power problem of AI
Audience: Two quick questions: One is that the economic impact of large language models has been slower than you initially expected? Second, do you think academia should get AI subsidies? Or should it work with big companies?
Eric Schmidt: I have been working hard to promote the establishment of data centers for universities. If I were a professor in the computer science department here, I would be very dissatisfied because I could not develop those algorithms with graduate students and was forced to work with those big companies. In my opinion, these companies are not doing enough in this regard. I have talked to some professors, and many of them have to spend a lot of time waiting for Google Cloud usage quotas. This is a booming field, and the right thing to do is to provide resources to universities, and I am working hard to promote this.
As for the labor market impact you mentioned, I basically believe that high-skilled college education and related jobs should be fine because people will work with these systems. I think these systems are no different from previous technology waves, and those dangerous jobs and jobs that don't require much human judgment will eventually be replaced.
Audience: Have you studied distributed environments? I ask this because it's difficult to build large clusters, but MacBooks are still very powerful. There are many small machines around the world. Do you think ideas like Folding@home can be used for training?
Note: "Folding@home" is a distributed-computing project that uses the idle computing resources of participants' computers around the world to perform protein-folding calculations.
Eric Schmidt: Distributed environments are indeed a challenge. It's indeed not easy to build large clusters, but each MacBook has its own computing power. There are so many small machines around the world, and the idea of combining them does have potential. This can be used for training, but there are a lot of technical details that need to be worked out.
We've looked at this in depth. The way these algorithms work is this: you have a very large matrix and you basically do multiplications, repeated over and over again. The performance of these systems depends entirely on how fast data can be moved from memory to the CPU or GPU. In fact, Nvidia's next generation of chips integrates all of these functions into a single package; the chips are now very large, the functions are integrated together, and the delicate packaging is all done in clean rooms. So at present, supercomputer-class hardware and near-light-speed transmission, especially the interconnect between memories, are the key factors. I think it is unlikely that what you describe can be achieved in the short term.
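Schmidt's point that speed "depends entirely on how fast the data can be transferred from memory" can be quantified with a rough bound: every generated token must stream the model's weights out of memory. The numbers below are illustrative assumptions, not any specific chip's spec sheet.

```python
# Rough memory-bandwidth bound on decoding speed. Each generated token
# must read all model weights from memory once, so bandwidth caps
# tokens per second. Numbers are illustrative assumptions.
def tokens_per_second(params_billions, bytes_per_param, bandwidth_tb_s):
    weight_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_tb_s * 1e12 / weight_bytes

# A 70B-parameter model in fp16 on roughly 3 TB/s of HBM bandwidth:
print(f"~{tokens_per_second(70, 2, 3.0):.0f} tokens/s upper bound")
```

A fleet of MacBooks linked over the internet has orders of magnitude less interconnect bandwidth than on-package HBM, which is why the answer to the Folding@home question is "unlikely in the short term."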
Moderator: Is it possible to break up large language models?
Eric Schmidt: To do this, you need to have millions of these models. And the way you ask questions will become very slow.
6. In the future, we may not understand AI, but we need to know its boundaries
Host: I would like to change the topic and talk about something philosophical. Last year you co-wrote an article with Henry Kissinger and Daniel Huttenlocher about the nature of knowledge and how it is evolving. I recently discussed this with someone else as well. For most of history, our understanding of the universe was mysterious, until the Scientific Revolution and the Enlightenment. In your article you argue that models are becoming so complex and opaque that we no longer have a clear idea of their inner workings. Feynman once said, "What I cannot create, I do not understand." But it seems people are now creating things they don't understand. Does this mean our conception of knowledge is shifting? Do we need to start accepting the conclusions of these models even when they can't provide a clear explanation?
Eric Schmidt: Let me use an analogy. It's a bit like teenagers. If you have teenagers in your family, you know they are people, but you don't always know how they think. Yet our society has learned to adapt to their existence, knowing they will eventually mature. So we may end up with knowledge systems we don't fully understand, but whose boundaries we know. We know what they can and cannot do. That may be the best we can hope for.
Host: Do you think we can get around these limitations?
Eric Schmidt: I think we can. The small group that we discuss every week thinks that we may use adversarial AI in the future. Imagine that in the future there will be companies that specialize in this, and you pay them to test AI systems for vulnerabilities, just like the current "red team" but with AI. The entire industry will do this kind of AI vs. AI, especially the parts that we don't understand very well. I think this is very reliable. Stanford can also consider this direction. If there are graduate students who are interested in how to crack these large models and study how they work, this is a good skill for them. So I think these two things will progress together.
Audience: You just mentioned adversarial AI. Beyond the obvious work of improving model performance, what other problems do we need to solve? What are the main challenges in getting AI to actually do what we want?
Eric Schmidt: It's definitely about getting higher-performance models. You have to assume that as the technology improves, hallucinations will decrease, although I'm not saying they will disappear completely. You also have to assume there are ways to verify the results, so we know whether they are what we expect. Take the TikTok-competitor example from earlier. By the way, I'm not suggesting you illegally steal everyone's music. If you're a Silicon Valley entrepreneur, and I hope you all will be, and your product takes off, you'll hire a bunch of lawyers to clean up the problem; if no one uses your product, it doesn't matter that you stole everything. But don't quote me on that. Silicon Valley runs these tests and cleans up these problems; that's how we usually operate. So I believe we will see more and more high-performance systems, testing will become more and more sophisticated, and eventually there will be adversarial testing to keep AI within controllable limits. Technically, this relies on what we call chain-of-thought reasoning. People expect that in the next few years you will be able to generate 1,000 steps of chain-of-thought reasoning, like following a recipe: you execute it step by step and then verify that the final result is correct. That's how the system will work, unless of course you are playing a game.
7. False information seems to be unsolvable in the short term
Audience: How to prevent AI from creating false information in public opinion, especially in the upcoming election? Are there any solutions in the short and long term?
Eric Schmidt: In the upcoming election, and indeed globally, most false information will spread through social media, and the social media companies currently lack the capacity to manage it. Look at TikTok: some people have accused it of favoring one kind of misinformation over another. I think we're in a mess in this area, and we need to learn to think critically. That is a difficult challenge, but just because someone tells you something doesn't mean it's true.
Audience: Will it go to the other extreme? No one believes the real thing anymore? Some people have summarized this phenomenon as an "epistemological crisis."
Eric Schmidt: I think we're facing a crisis of trust right now. The biggest threat to society is false information, because we are getting better and better at creating it. When we were running YouTube, the biggest problem we faced was people uploading fake videos that got people killed, and we had a "no death policy" at the time, which sounds shocking.
Note: YouTube does not allow any content that encourages dangerous or illegal activities that could result in serious physical injury or death.
It was really painful to try to solve these problems, and there was no generative AI at that time. So to be honest, I didn't have a particularly good solution.
Host: Technology is not a universal solution, but one way that seems to alleviate this problem is public key authentication. For example, when Biden takes the stage to speak, why can't he add a digital signature to his words like SSL? Or when celebrities or public figures speak, can they have their own public keys? Just like when I give my credit card information to Amazon, I know that the recipient is indeed Amazon.
Eric Schmidt: This is indeed a form of public key authentication, combined with other verification methods to ensure that we know the source of the information.
I co-authored a paper supporting the argument you just made, but unfortunately it had no effect at all. So perhaps the system is not organized to solve this problem the way you describe.
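The host's signing idea can be sketched in a few lines. Python's standard library has no asymmetric signature primitive, so stdlib HMAC stands in here purely to illustrate the sign-and-verify flow; a real deployment would use a public-key scheme such as Ed25519 so anyone could verify a statement with the speaker's public key.

```python
import hashlib
import hmac

# Sketch of authenticated public statements. HMAC is a symmetric
# stand-in: real systems would use asymmetric signatures (e.g.
# Ed25519) so verification needs only the speaker's public key.
def sign(key: bytes, message: bytes) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, signature: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign(key, message), signature)

key = b"speaker-secret"
msg = b"Official statement text"
tag = sign(key, msg)
print(verify(key, msg, tag))          # genuine statement verifies
print(verify(key, b"tampered", tag))  # altered statement fails
```

The property being demonstrated is the one the host wants: any alteration of the message invalidates the signature, so a deepfaked "statement" cannot carry a valid tag.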
In general, CEOs are all looking to maximize revenue, and to do that, they have to maximize user engagement. To maximize engagement, that means stimulating more anger. The algorithm will prioritize content that makes people angry because that brings in more revenue. So, there is an overall bias toward extreme content, and it doesn't discriminate between camps. This is a problem that must be solved in our society.
We've talked privately about a TikTok solution before. When I was a kid, there was a rule called the "equal time rule." Because TikTok is not really social media, it's more like television, with programmers (in the broadcast sense) deciding what you see. Data shows the average American TikTok user watches 200 videos over about 90 minutes a day, which is a lot. The government probably won't impose an equal time rule, but some form of balance is necessary.
8. Big Models Are a Competition for Only a Few Countries
Audience Member: What role do you think AI will play in the competition with China in terms of national security or interests?
Eric Schmidt: I served as chairman of the National Security Commission on AI, which studied this issue in detail. The report is 752 pages long; you can go read it. I'll summarize briefly: we are ahead now, we need to stay ahead, and that will take a lot of money.
The general situation is this: if frontier AI models continue to advance and a few open-source models also participate, then only a few countries will be able to compete: those with a lot of money, a strong education system, and the determination to win. The United States is one of them, and so is China. There may be others. But it is certain that the contest over knowledge between the United States and China will be the great confrontation of your lifetime.
The U.S. government has effectively banned the export of Nvidia chips to China; although they are not supposed to say so, that is what they have done. We are about 10 years ahead of China in chip technology, and about 10 years ahead in lithography as well. I would guess we will stay ahead for a few more years. The CHIPS bill was a decision made by the Trump administration and approved by the Biden administration.
Moderator: Do you think the current administration and Congress will listen to your advice? Do you think they will make such a large investment? In addition to the CHIPS bill, will large-scale AI systems continue to be built?
Eric Schmidt: As you know, I led an informal group (not an official body) that includes all the usual AI players. Over the past year, the recommendations of its participants became the basis for the Biden administration's decisions in the field of AI, and the resulting order may be the longest presidential directive in history.
Note: Executive Order on Addressing United States Investments in Certain National Security Technologies and Products in Countries of Concern issued by President Biden on August 9 last year
Host: You are promoting the Special Competitive Studies Project.
Eric Schmidt: That is the group actually implementing the executive order. They are busy working out the details and have done a good job so far. For example, last year we discussed one problem: how to detect potential dangers in a system. Such a system may have learned something dangerous, but it won't tell you what it has learned, and you don't know what to ask. That is the core problem. There are many threats hidden here; for instance, it may have learned a way of mixing chemicals that you don't understand. So a lot of people are now working hard to solve this.
Finally, we set a threshold in the memorandum of 10^26 floating-point operations, a measure of training compute. When you exceed that threshold, you must report what you are doing to the government; that is part of the rules. The European Union set its threshold at 10^25, but the difference is not large, and I think these technical distinctions will eventually disappear. Current technology can do "federated training," in which separate parts are trained and then combined, so we may not be able to completely avoid the threats posed by these new techniques.
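To make the reporting thresholds concrete, here is a rough back-of-the-envelope sketch. The `6 * N * D` estimate (about 6 FLOPs per parameter per training token for dense transformers) is a common approximation, not something from the talk, and the model size and token count below are hypothetical:

```python
# Rough check of whether a training run crosses the reporting thresholds
# mentioned above. The 6*N*D FLOPs rule of thumb is an approximation,
# and the example run is hypothetical.

US_THRESHOLD = 1e26   # US executive-order reporting threshold (FLOPs)
EU_THRESHOLD = 1e25   # EU threshold mentioned in the talk (FLOPs)

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * params * tokens

# Hypothetical run: a 1-trillion-parameter model on 15 trillion tokens.
flops = training_flops(1e12, 15e12)
print(f"total: {flops:.2e} FLOPs")                     # 9.00e+25
print("exceeds EU threshold:", flops > EU_THRESHOLD)   # True
print("exceeds US threshold:", flops > US_THRESHOLD)   # False
```

Note that the two thresholds differ by a full order of magnitude, so a run like this one would be reportable in the EU but not in the US under these rules.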
Host: I heard that OpenAI has had to do this, partly because the power consumption is so large that no single site can host all the computing power on its own.
9. AI Is a Game for the Rich, and the Rich Get Richer
Audience: The New York Times sued OpenAI for training models on their work. What do you think this means for data usage?
Eric Schmidt: I have a lot of experience with music copyright. In the 1960s, there was a series of lawsuits that culminated in an agreement: every time your song is played, whether or not the listener knows who you are, you receive a certain royalty, deposited into your bank account. I'm guessing it will be similar here, with a lot of lawsuits and then some kind of agreement that a certain percentage of revenue must be paid when these works are used. You can look at ASCAP (the American Society of Composers, Authors and Publishers) and BMI (Broadcast Music, Inc., a U.S. performing-rights organization); it seems a bit dated, but I think that's what will happen eventually.
Audience: It seems like a few companies dominate, and will continue to dominate, the AI space, and they seem to be exactly the companies antitrust law focuses on. What do you make of these two trends? Do you think regulators will break up these companies, and what impact would that have on the industry?
Eric Schmidt: In my career I pushed for Microsoft to be broken up, and it wasn't. I pushed for Google not to be broken up, and it wasn't. So in my view, as long as these companies avoid becoming monopolies in the mold of John D. Rockefeller (founder of Standard Oil), they won't be broken up; that is what antitrust law is for. I don't think the government will act. The reason you see these big companies dominating the market is that only they have the money to build these data centers. My friends Reed Hastings (co-founder of Netflix) and Elon Musk are doing exactly that. So the rich get richer, and the poor do the best they can. The fact is, this is a game for rich countries: it requires huge capital, deep technical talent, and strong government support.
There are many other countries with various problems, and they don't have these resources, so they have to cooperate with other countries.
Audience Member: You spend a lot of time helping young people create wealth, and you're clearly passionate about it. What advice do you have for students at this stage of their careers and beyond?
Eric Schmidt: I'm really impressed by your generation's ability to demonstrate new ideas quickly. At one hackathon I attended, the winning team's task was to fly a drone between two towers. Working in a virtual drone space, they got the system to understand what "between" meant, wrote the code in Python, and successfully flew the drone through the gap in a simulator. A professional programmer would have taken a week or two to do that.
I would say the ability to prototype quickly is really important. One of the problems of being an entrepreneur is that everything happens very fast. If you can't build a prototype in a day with the tools available, you need to think hard about that, because your competitors can.
So my advice is this: when you start thinking about founding a company, writing a business plan is fine, but you should let the computer write it for you. What really matters is using these tools to turn your ideas into prototypes quickly, because you can be sure someone is working on the same thing at another company, another university, or somewhere you've never heard of.