What’s next for AI in 2025

For the last couple of years we’ve had a go at predicting what’s coming next in AI. It’s a fool’s game given how fast this industry moves. But we’re on a roll, and we’re doing it again.

We also said that AI-generated election disinformation would be everywhere, but here, happily, we got it wrong. There were many things to wring our hands over this year, but political deepfakes were thin on the ground.

So what’s coming in 2025? We’re going to ignore the obvious here: you can bet that agents and smaller, more efficient language models will continue to shape the industry. Instead, here are five alternative picks from our AI team.

1. Generative Virtual Playgrounds

If 2023 was the year of generative images and 2024 was the year of generative video, what comes next? If you guessed generative virtual worlds (a.k.a. video games), high fives all round.

We got a tiny glimpse of this technology in February, when Google DeepMind revealed a generative model called Genie that can take a still image and turn it into a playable side-scrolling 2D platform game, spinning a single starter frame into an entire virtual world.

Other companies are building similar tech. In October, the AI startups Decart and Etched revealed an unofficial Minecraft hack in which every frame of the game gets generated on the fly as you play. And World Labs, a startup cofounded by Fei-Fei Li (creator of ImageNet, the vast data set of photos that kick-started the deep-learning boom), is building what it calls large world models, or LWMs.

One obvious application is video games. There’s a playful tone to these early experiments, and generative 3D simulations could be used to explore design concepts for new games, turning a sketch into a playable environment on the fly. This could lead to entirely new types of games.

But they could also be used to train robots. World Labs wants to develop so-called spatial intelligence: the ability for machines to interpret and interact with the everyday world. But robotics researchers lack good data about real-world scenarios with which to train such technology. Spinning up countless virtual worlds and dropping virtual robots into them to learn by trial and error could help make up for that.

2. Large language models that “reason”


The buzz was justified. When OpenAI revealed o1 in September, it introduced a new paradigm for how large language models work. Two months later, the firm pushed that paradigm forward in almost every way with o3, a model that just might reshape this technology for good.

Most models, including OpenAI’s flagship GPT-4, spit out the first response they come up with. Sometimes it’s correct; sometimes it’s not. But the firm’s new models are trained to work through their answers step by step, breaking down tricky problems into a series of simpler ones. When one approach isn’t working, they try another. This technique, known as reasoning (yes, we know exactly how loaded that term is), can make this technology more accurate, especially for math, physics, and logic problems.

It was a remarkable moment for Google DeepMind’s experimental web-browsing agent, Mariner. Instead of hitting a wall, the agent had broken the task down into separate actions and picked one that might resolve the problem. Figuring out that you need to click the Back button may sound basic, but for a mindless bot it’s akin to rocket science. And it worked: Mariner went back to the recipe, confirmed the type of flour, and carried on filling the shopping basket.

Google DeepMind is also building an experimental version of Gemini 2.0, its latest large language model, that uses this step-by-step approach to problem solving. It’s called Gemini 2.0 Flash Thinking.
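For readers who like to see an idea in code: the behavior described above, working step by step and switching tactics when one approach fails, can be sketched very loosely in a toy solver. This is purely an illustrative analogy, not how o1, o3, or Gemini actually work; all the function names here are invented for the example.

```python
# Toy analogy for "reasoning": try one approach, and when it doesn't
# apply, fall back to another that decomposes the problem into simpler steps.
# (Illustrative only; the strategies and names are invented for this sketch.)

def solve_stepwise(problem, strategies):
    """Try each strategy in turn. Each returns (list of intermediate
    steps, final answer) or raises ValueError if its approach doesn't apply."""
    for strategy in strategies:
        try:
            return strategy(problem)
        except ValueError:
            continue  # this approach isn't working; try another
    raise RuntimeError("no strategy solved the problem")

def direct_sum(problem):
    # Only handles plain "a + b" input; anything else is a dead end.
    a, op, b = problem.split()
    if op != "+":
        raise ValueError("not a sum")
    return [f"parse '{problem}'", "add the two numbers"], int(a) + int(b)

def decompose_product(problem):
    # Break "a * b" into repeated addition: a series of simpler sub-steps.
    a, op, b = problem.split()
    if op != "*":
        raise ValueError("not a product")
    steps = [f"rewrite {a} * {b} as {b} additions of {a}"]
    total = 0
    for _ in range(int(b)):
        total += int(a)  # one simple step at a time
    return steps, total

steps, answer = solve_stepwise("6 * 7", [direct_sum, decompose_product])
print(answer)  # 42
```

The first strategy hits a wall (a product is "not a sum"), so the solver falls back to one that decomposes the problem, much as the models described above abandon an approach that isn't working and try another.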

3. It’s boom time for AI in science


One of the most exciting uses for AI is speeding up discovery in the natural sciences. Perhaps the greatest vindication of AI’s potential on this front came last October, when the Royal Swedish Academy of Sciences awarded the Nobel Prize in chemistry to Demis Hassabis and John M. Jumper of Google DeepMind for building the AlphaFold tool, which can predict the structure of proteins, and to David Baker for building tools to help design new proteins.

Expect this trend to continue next year, and to see more data sets and models that are aimed specifically at scientific discovery. Proteins were the perfect target for AI, because the field had excellent existing data sets that AI models could be trained on.

AI model makers are also keen to pitch their generative products as research tools for scientists. OpenAI let scientists test its latest o1 model to see how it might support them in research. The results were encouraging.

Having an AI tool that can operate in a similar way to a scientist is one of the fantasies of the tech sector. In a manifesto published in October, Anthropic founder Dario Amodei highlighted science as a key beneficiary, speculating that in the future AI could be not only a method of data analysis but a “virtual biologist who performs all the tasks biologists do.” We’re still a long way from that scenario. But next year we might see important steps toward it.

4. AI companies get cozier with national security

There is a lot of money to be made by AI companies willing to lend their tools to border surveillance, intelligence gathering, and other national security tasks.

The US military has launched a number of initiatives that show it’s eager to adopt AI, from the Replicator program (which, inspired by the war in Ukraine, promises to spend $1 billion on small drones) to the Artificial Intelligence Rapid Capabilities Cell, a unit bringing AI into everything from battlefield decision-making to logistics.

European militaries are under pressure to up their tech investment, spurred by concerns that Donald Trump’s administration will cut spending to Ukraine. Rising tensions between Taiwan and China weigh heavily on the minds of military planners, too.

5. Nvidia sees legitimate competition

For much of the current AI boom, if you were a tech startup looking to try your hand at making an AI model, Jensen Huang was your man. As CEO of Nvidia, the world’s most valuable corporation, Huang helped the company become the undisputed leader in chips used both to train AI models and to run them when anyone queries a model, a process called “inference.”

A growing number of startups are also attacking Nvidia from a different angle. Rather than trying to marginally improve on Nvidia’s designs, startups like Groq are making riskier bets on entirely new chip architectures that, with enough time, promise to provide more efficient training or inference.

It’s unclear how these forces will play out, but they will only further incentivize chip makers to reduce their reliance on Taiwan, which is the entire purpose of the CHIPS Act. As spending from the bill begins to circulate, next year could bring the first evidence of whether it’s materially boosting domestic chip production.
