Source: Quantum
MIT Technology Review has published its five major trends for artificial intelligence in 2025, leaving out agents and small language models on the grounds that they are already the obvious next big things. Beyond those, the publication believes there are five more hot trends you should pay attention to this year. Read on.
For the past few years, we have been predicting the future of artificial intelligence. Given the speed at which this industry moves, that can seem like a fool's errand. But we have kept at it, and the effort has earned us a reputation for foresight and reliability.
How did we do in our last round of predictions? Our four hot trends to watch for 2024 included what we called customized chatbots: interactive assistant apps powered by multimodal large language models. We didn't know it at the time, but what we were describing is what everyone now calls agents, the biggest buzz in AI right now. We also picked video generation (few technologies have advanced as quickly in the past 12 months, with OpenAI and Google DeepMind releasing their flagship video generation models, Sora and Veo, within a week of each other last December) and more general-purpose robots that can perform a wider range of tasks (gains from large language models continue to trickle down to other parts of the tech industry, with robotics at the leading edge).

We also said AI-generated election disinformation would be everywhere, and thankfully we were wrong. There was plenty to worry about this year, but political deepfakes turned out to be rare.

So what's in store for 2025? We'll ignore the obvious here: it's a safe bet that agents and smaller, more efficient language models will continue to shape the industry. Instead, here are five more hot trends you should watch this year.
1. Generative virtual playgrounds
If 2023 was the year of generative images and 2024 was the year of generative videos, what’s next? If you guessed generative virtual worlds (aka video games), let’s high-five.
We got a glimpse of this technology in February 2024, when Google DeepMind released a generative model called Genie that could take a still image and turn it into a side-scrolling, 2D platformer that players could interact with. In December, the company released Genie 2, a model that could turn an initial image into an entire virtual world.
Other companies are working on similar technology. In October, the AI startups Decart and Etched unveiled an unofficial Minecraft hack in which every frame of the game was generated on the fly as the player played. And World Labs, a startup cofounded by Fei-Fei Li, the renowned AI researcher often called the godmother of AI, is building what it calls large world models (LWMs). (Li is also the creator of ImageNet, the massive photo dataset that kicked off the deep learning boom.)

One obvious application is video games. These early experiments are fun, and generative 3D simulations could be used to explore design concepts for new games, turning sketches into playable environments on the fly. That could lead to entirely new types of games.

But they could also be used to train robots. World Labs wants to develop what it calls spatial intelligence: the ability of machines to interpret and interact with the everyday world. Robotics researchers lack high-quality data of real-world scenes on which to train such technology. Spinning up countless virtual worlds, dropping virtual robots into them, and letting them learn through trial and error could fill that gap, as the toy sketch below illustrates.
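To make that training pattern concrete, here is a minimal, self-contained sketch that makes no claim to resemble World Labs' or Google DeepMind's actual systems: each episode procedurally generates a fresh toy "world" (a random gridworld standing in for a generated 3D scene), drops an agent into it, and lets tabular Q-learning improve behavior through trial and error. Every name and number in it is invented for illustration.

```python
# Toy illustration only: procedurally generated "worlds" + trial-and-error
# learning. Real world models generate rich 3D scenes; a gridworld stands in.
import random

def make_world(size=5, n_obstacles=4, seed=None):
    """Generate one toy world: a grid with random obstacles and a goal corner."""
    rng = random.Random(seed)
    obstacles = set()
    while len(obstacles) < n_obstacles:
        cell = (rng.randrange(size), rng.randrange(size))
        if cell not in ((0, 0), (size - 1, size - 1)):  # keep start/goal clear
            obstacles.add(cell)
    return size, obstacles, (size - 1, size - 1)

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
Q = {}  # one shared value table: a crude stand-in for a policy that transfers

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.2):
    for ep in range(episodes):
        size, obstacles, goal = make_world(seed=ep)  # a fresh world each time
        pos = (0, 0)
        for _ in range(100):  # trial-and-error steps inside this world
            if random.random() < eps:  # explore...
                a = random.randrange(4)
            else:                      # ...or exploit what has been learned
                a = max(range(4), key=lambda i: Q.get((pos, i), 0.0))
            dr, dc = ACTIONS[a]
            nxt = (pos[0] + dr, pos[1] + dc)
            if nxt in obstacles or not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                nxt = pos  # bumped into a wall: the "error" in trial and error
            reward = 1.0 if nxt == goal else -0.01
            best_next = max(Q.get((nxt, i), 0.0) for i in range(4))
            q = Q.get((pos, a), 0.0)
            Q[(pos, a)] = q + alpha * (reward + gamma * best_next - q)
            pos = nxt
            if pos == goal:
                break

train()
print(f"learned values for {len(Q)} state-action pairs across 200 worlds")
```

The point is the shape of the loop (generate a world, act, fail, update, repeat), which is exactly what cheap synthetic environments would let robotics researchers do at a scale that real-world data collection cannot match.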
2. Large language models that can “reason”
The buzz is justified. When OpenAI released o1 in September, it introduced a new paradigm for how large language models work. Two months later, the company followed up with o3, a model that pushes the paradigm forward in almost every way and could completely reshape the technology.
Most models, including OpenAI’s flagship GPT-4, give the first answer they come up with. Sometimes it’s right; sometimes it’s not. But the company’s new models are trained to work through problems step by step, breaking tough problems into a series of simpler ones. When one approach isn’t working, they try another. This technique, known as “reasoning” (yes, we know how loaded that word is), can make the technology more accurate, especially on math, physics, and logic problems.
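OpenAI has not published the details of how o1 and o3 are trained, so take the following only as a structural analogy: the break-it-down-and-backtrack behavior described above has the same shape as a classic recursive search. This toy Python sketch solves a numbers puzzle by reducing it to smaller subproblems and backtracking whenever a branch dead-ends; everything in it is invented for illustration.

```python
# Toy analogy for break-down-and-backtrack problem solving (not how o1/o3
# actually work): combine a list of numbers with +, -, * to reach a target.

def solve(numbers, target, steps=()):
    """Reduce the problem to a simpler one (fewer numbers) at each step;
    when a branch can't reach the target, back up and try another operation."""
    if len(numbers) == 1:
        return list(steps) if numbers[0] == target else None
    for i in range(len(numbers)):
        for j in range(len(numbers)):
            if i == j:
                continue
            a, b = numbers[i], numbers[j]
            rest = [numbers[k] for k in range(len(numbers)) if k not in (i, j)]
            for op, val in (("+", a + b), ("-", a - b), ("*", a * b)):
                attempt = solve(rest + [val], target,
                                steps + (f"{a} {op} {b} = {val}",))
                if attempt is not None:  # this line of attack worked
                    return attempt
                # dead end: backtrack and try a different operation
    return None

print(solve([3, 7, 2], 17))  # -> ['7 * 2 = 14', '3 + 14 = 17']
```

A reasoning model does something loosely similar in natural language, spending extra compute at inference time to explore, check, and revise intermediate steps rather than committing to its first answer.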
It’s also critical for agents.
In December, Google DeepMind released an experimental new web-browsing agent called Mariner. In a preview demo provided by the company, Mariner appeared to get stuck. Megha Gore, a product manager at the company, had asked the agent to find her a recipe for Christmas cookies that looked like the ones in a photo she had given it. Mariner found a recipe online and began adding the ingredients to Gore’s online shopping cart.
Then it paused because it didn’t know which flour to choose. Gore watched as Mariner explained its next step in a chat window: “I will use the browser’s back button to return to the recipe.”
It was a remarkable moment. Instead of hitting a wall, the agent broke the task down into different actions and chose one that might solve the problem. Figuring out that you need to hit the back button might sound simple, but for a mindless robot, it’s rocket science. And it worked: Mariner returned to the recipe, confirmed the type of flour, and continued filling Gore’s cart.
Google DeepMind is also building an experimental version of Gemini 2.0, its latest large language model, that uses this step-by-step approach to problem solving. It’s called Gemini 2.0 Flash Thinking.
But OpenAI and Google are just the tip of the iceberg. Many companies are building large language models that use similar techniques, making them better at tasks ranging from cooking to programming. Expect more talk about reasoning this year (we know, we know).
3. AI is booming in science
One of the most exciting uses of AI is to accelerate discovery in the natural sciences. Perhaps the biggest demonstration of AI’s potential in this regard came last October, when the Royal Swedish Academy of Sciences awarded the Nobel Prize in Chemistry to Demis Hassabis and John M. Jumper of Google DeepMind for building AlphaFold, a tool that can solve the protein folding problem, and to David Baker for developing tools to help design new proteins.
Expect this trend to continue this year, with more datasets and models dedicated to scientific discovery. Proteins are a perfect target for AI because the field has excellent existing datasets that can be used to train AI models.
People are already looking for the next big thing. One possible area is materials science. Meta has released massive datasets and models that could help scientists use AI to discover new materials much faster. In December, Hugging Face teamed up with the startup Entalpic to launch LeMaterial, an open-source project that aims to simplify and accelerate materials research. Its first release is a dataset that unifies, cleans, and standardizes the most important materials datasets.
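For researchers who want to poke at such data, anything published on the Hugging Face Hub is typically one call away via the datasets library. Here is a minimal sketch; the dataset ID ("LeMaterial/LeMat-Bulk"), the split, and the field names are assumptions on our part, so check LeMaterial's Hub page for the published identifiers.

```python
# Minimal sketch: stream a few records from a Hub-hosted materials dataset.
# The dataset ID and any config/field names below are assumptions, not
# confirmed by LeMaterial's documentation; adjust to the published names.
from datasets import load_dataset

ds = load_dataset("LeMaterial/LeMat-Bulk", split="train", streaming=True)

# Streaming avoids downloading the full dataset just to inspect its schema.
for i, record in enumerate(ds):
    print(record)  # expect harmonized fields: composition, structure, source
    if i == 2:
        break
```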
AI model makers are also keen to pitch their generative products as research tools for scientists. OpenAI has let scientists test its latest o1 model to see how it can support their research, and the results have been encouraging.
Having an AI tool that can work the way a scientist does is one of the tech community’s dreams. In an essay published last October, Anthropic cofounder Dario Amodei singled out science, and especially biology, as one of the key areas where powerful AI could help. Amodei speculates that future AI could be not just a method of data analysis but a “virtual biologist that performs all the tasks of a biologist.” We are still a long way from that vision, but this year may bring an important step toward it.
4. AI companies cozy up to national security
There is a lot of money to be made by AI companies willing to lend their tools to border surveillance, intelligence gathering, and other national security tasks.
The U.S. military has launched a series of initiatives that signal its eagerness to adopt AI, from the Replicator program, which, spurred by the war in Ukraine, has pledged to invest $1 billion in small drones, to the Artificial Intelligence Rapid Capabilities Cell, a unit bringing AI into everything from battlefield decision-making to logistics. European militaries, meanwhile, are under pressure to increase their investment in the technology amid concerns that Donald Trump's administration will cut support for Ukraine. Rising tensions in other countries and regions are also worrying military planners.
In 2025, these trends will continue to be a boon for defense-tech companies such as Palantir and Anduril, which are now using classified military data to train AI models.
The defense industry's deep pockets will also draw in mainstream AI companies. In December, OpenAI announced that it would work with Anduril on a program to shoot down drones, reversing its year-old policy of not working with the military. It joins Microsoft, Amazon, and Google, which have worked with the Pentagon for years.
AI companies are spending billions of dollars to train and develop new models, and in 2025 they will face mounting pressure to think seriously about revenue. They may find enough non-defense customers willing to pay top dollar for AI agents that can handle complex tasks, or creative industries willing to spend on image- and video-generation tools.
But they will also be increasingly tempted to compete for lucrative Pentagon contracts. The big question for these companies is whether working on defense projects will be seen as at odds with their stated values. OpenAI’s rationale for its change of stance was that “democracies should continue to lead the development of artificial intelligence,” the company wrote, arguing that lending its models to the military would advance that goal. In 2025, watch for other companies to follow its lead.
5. Nvidia sees competition coming
For much of the current AI boom, if you were a tech startup trying to build AI models, Jensen Huang was your man. As CEO of the chip giant Nvidia, Huang has helped make the company the undisputed leader in the chips used both to train AI models and to perform “inference” when someone uses them.
In 2025, a variety of forces could change that. For one, giant rivals like Amazon, Broadcom, and AMD have been investing heavily in new chips, and there are early signs that those chips could compete aggressively with Nvidia’s — especially in inference, where Nvidia’s lead is less secure.
A growing number of startups are also attacking Nvidia from different angles. Rather than trying to make small improvements on Nvidia’s designs, startups like Groq are taking riskier bets on entirely new chip architectures that, given enough time, promise to offer more efficient or more effective training. In 2025, these experiments will still be in the early stages, but there’s a chance that a prominent competitor will emerge, changing the assumption that top AI models rely exclusively on Nvidia chips.
Underpinning this competition is the geopolitical chip war. So far, the war has relied on two main strategies. On the one hand, the West has sought to restrict exports of top chips and the technology that makes them to rival countries. On the other, initiatives like the US CHIPS Act aim to boost semiconductor production within the United States.
Donald Trump could escalate those export controls and has promised massive tariffs on imports from rival countries. In 2025, such moves could put Taiwan’s TSMC, on which US chipmakers rely heavily, at the center of the trade war.
It’s unclear how these forces will play out, but escalation would only strengthen chipmakers’ incentive to reduce their reliance on TSMC, which is the entire point of the CHIPS Act. As the act’s spending begins to circulate, this year should bring the first evidence of whether it is substantially boosting domestic chip production in the United States.