
What’s next for AI in 2025


MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

For the last couple of years we’ve had a go at predicting what’s coming next in AI. A fool’s game given how fast this industry moves. But we’re on a roll, and we’re doing it again.

How did we score last time round? Our four hot trends to watch out for in 2024 included what we called customized chatbots—interactive helper apps powered by multimodal large language models (check: we didn’t know it yet, but we were talking about what everyone now calls agents, the hottest thing in AI right now); generative video (check: few technologies have improved so fast in the last 12 months, with OpenAI and Google DeepMind releasing their flagship video generation models, Sora and Veo, within a week of each other this December); and more general-purpose robots that can do a wider range of tasks (check: the payoffs from large language models continue to trickle down to other parts of the tech industry, and robotics is top of the list). 

We also said that AI-generated election disinformation would be everywhere, but here—happily—we got it wrong. There were many things to wring our hands over this year, but political deepfakes were thin on the ground.

So what’s coming in 2025? We’re going to ignore the obvious here: You can bet that agents and smaller, more efficient language models will continue to shape the industry. Instead, here are five alternative picks from our AI team.

1. Generative virtual playgrounds 

If 2023 was the year of generative images and 2024 was the year of generative video—what comes next? If you guessed generative virtual worlds (a.k.a. video games), high fives all round.

We got a tiny glimpse of this technology in February, when Google DeepMind revealed a generative model called Genie that could take a still image and turn it into a side-scrolling 2D platform game that players could interact with. In December, the firm revealed Genie 2, a model that can spin a starter image into an entire virtual world.

Other companies are building similar tech. In October, the AI startups Decart and Etched revealed an unofficial Minecraft hack in which every frame of the game gets generated on the fly as you play. And World Labs, a startup cofounded by Fei-Fei Li—creator of ImageNet, the vast data set of photos that kick-started the deep-learning boom—is building what it calls large world models, or LWMs.

One obvious application is video games. There’s a playful tone to these early experiments, and generative 3D simulations could be used to explore design concepts for new games, turning a sketch into a playable environment on the fly. This could lead to entirely new types of games.

But they could also be used to train robots. World Labs wants to develop so-called spatial intelligence—the ability for machines to interpret and interact with the everyday world. But robotics researchers lack good data about real-world scenarios with which to train such technology. Spinning up countless virtual worlds and dropping virtual robots into them to learn by trial and error could help make up for that.   
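The trial-and-error idea above can be caricatured in a few lines. This is an illustrative sketch only, not World Labs’ actual method: the “virtual world” is a 1-D corridor whose goal cell is randomized every episode, and a “virtual robot” learns by Q-learning which way to walk.

```python
import random

random.seed(0)

# Toy stand-in for a generated virtual world: a 1-D corridor whose goal
# cell is randomized every episode. A "virtual robot" learns by trial
# and error to walk toward the goal. (Hypothetical sketch -- real world
# models generate rich 3-D scenes, not corridors.)

SIZE = 8
ACTIONS = (-1, +1)  # step left, step right

def train(episodes=3000, alpha=0.5, gamma=0.9, eps=0.2):
    # Q maps (signed distance to goal) -> estimated value of each action.
    Q = {d: [0.0, 0.0] for d in range(-SIZE + 1, SIZE)}
    for _ in range(episodes):
        goal = random.randrange(SIZE)      # spin up a fresh random world
        pos = random.randrange(SIZE)       # drop the robot in at random
        for _ in range(3 * SIZE):
            d = pos - goal
            if d == 0:
                break                      # reached the goal
            # Explore occasionally, otherwise exploit what's been learned.
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda i: Q[d][i])
            nxt = min(max(pos + ACTIONS[a], 0), SIZE - 1)
            nd = nxt - goal
            reward = 1.0 if nd == 0 else -0.05   # small cost per step
            best_next = 0.0 if nd == 0 else max(Q[nd])
            Q[d][a] += alpha * (reward + gamma * best_next - Q[d][a])
            pos = nxt
    return Q

Q = train()
# Learned policy: step left when right of the goal, right when left of it.
policy = {d: ACTIONS[max((0, 1), key=lambda i: Q[d][i])]
          for d in Q if d != 0}
```

The point of the sketch is the outer loop: every episode is a newly generated world, so the robot’s policy has to generalize across environments rather than memorize one.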

—Will Douglas Heaven

2. Large language models that “reason”

The buzz was justified. When OpenAI revealed o1 in September, it introduced a new paradigm in how large language models work. Two months later, the firm pushed that paradigm forward in almost every way with o3—a model that just might reshape this technology for good.

Most models, including OpenAI’s flagship GPT-4, spit out the first response they come up with. Sometimes it’s correct; sometimes it’s not. But the firm’s new models are trained to work through their answers step by step, breaking down tricky problems into a series of simpler ones. When one approach isn’t working, they try another. This technique, known as “reasoning” (yes—we know exactly how loaded that term is), can make this technology more accurate, especially for math, physics, and logic problems.
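The behavior pattern described above—decompose a hard problem into simpler ones, and switch approaches when one fails—can be caricatured in code. This is a hand-written sketch, not how o1 or o3 actually work internally (they learn this behavior during training rather than following hard-coded rules):

```python
# Caricature of "reasoning": break a hard problem into simpler steps
# and, when one approach fails, try another. (Illustrative only --
# reasoning models learn this behavior; it is not hard-coded.)

def solve(problem, strategies):
    """Try each strategy in turn; a strategy may decompose the problem
    into sub-problems, and signals failure by raising ValueError."""
    for strategy in strategies:
        try:
            return strategy(problem)
        except ValueError:
            continue  # that approach didn't work -- try another
    raise ValueError("no strategy worked")

# Example problem: evaluate a sum written as text, e.g. "12 + 30 + 0".
def direct(expr):
    # One-shot attempt: only handles a bare integer.
    return int(expr)  # raises ValueError on "12 + 30 + 0"

def decompose(expr):
    # Step-by-step attempt: split into simpler sub-problems, solve
    # each recursively, and combine the results.
    return sum(solve(part.strip(), [direct, decompose])
               for part in expr.split("+"))

print(solve("12 + 30 + 0", [direct, decompose]))  # -> 42
```

The one-shot strategy fails on anything nontrivial; the fallback succeeds by reducing the problem to pieces the simple strategy can handle—the same shape as Mariner backing up to the recipe page when it got stuck.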

It’s also crucial for agents.

In December, Google DeepMind revealed an experimental new web-browsing agent called Mariner. In the middle of a preview demo that the company gave to MIT Technology Review, Mariner seemed to get stuck. Megha Goel, a product manager at the company, had asked the agent to find her a recipe for Christmas cookies that looked like the ones in a photo she’d given it. Mariner found a recipe on the web and started adding the ingredients to Goel’s online grocery basket.

Then it stalled; it couldn’t figure out what type of flour to pick. Goel watched as Mariner explained its steps in a chat window: “It says, ‘I will use the browser’s Back button to return to the recipe.’”

It was a remarkable moment. Instead of hitting a wall, the agent had broken the task down into separate actions and picked one that might resolve the problem. Figuring out you need to click the Back button may sound basic, but for a mindless bot it’s akin to rocket science. And it worked: Mariner went back to the recipe, confirmed the type of flour, and carried on filling Goel’s basket.

Google DeepMind is also building an experimental version of Gemini 2.0, its latest large language model, that uses this step-by-step approach to problem solving, called Gemini 2.0 Flash Thinking.

But OpenAI and Google are just the tip of the iceberg. Many companies are building large language models that use similar techniques, making them better at a whole range of tasks, from cooking to coding. Expect a lot more buzz about reasoning (we know, we know) this year.

—Will Douglas Heaven

3. It’s boom time for AI in science 

One of the most exciting uses for AI is speeding up discovery in the natural sciences. Perhaps the greatest vindication of AI’s potential on this front came last October, when the Royal Swedish Academy of Sciences awarded the Nobel Prize for chemistry to Demis Hassabis and John M. Jumper from Google DeepMind for building AlphaFold, a tool that can predict the structure of proteins, and to David Baker for building tools to help design new proteins.

Expect this trend to continue next year, and to see more data sets and models that are aimed specifically at scientific discovery. Proteins were the perfect target for AI, because the field had excellent existing data sets that AI models could be trained on. 

The hunt is on to find the next big thing. One potential area is materials science. Meta has released massive data sets and models that could help scientists use AI to discover new materials much faster, and in December, Hugging Face, together with the startup Entalpic, launched LeMaterial, an open-source project that aims to simplify and accelerate materials research. Their first project is a data set that unifies, cleans, and standardizes the most prominent material data sets. 
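The unification step that LeMaterial describes—different databases recording the same physical quantities under different field names and units—can be sketched in a few lines. The field names, unit conversions, and source labels below are hypothetical, not LeMaterial’s actual schema:

```python
# Illustrative sketch of unifying materials records from different
# databases into one shared schema. (Field names, sources, and unit
# conversions here are hypothetical -- not LeMaterial's real schema.)

# Per-source mapping: (source field, unified field, unit scale factor).
SCHEMAS = {
    "source_a": [("formula", "formula", None),
                 ("band_gap_eV", "band_gap_ev", 1.0)],
    "source_b": [("composition", "formula", None),
                 ("gap_meV", "band_gap_ev", 1e-3)],  # meV -> eV
}

def standardize(record, source):
    """Rename fields and convert units into the unified schema."""
    out = {}
    for src_key, dst_key, scale in SCHEMAS[source]:
        value = record[src_key]
        out[dst_key] = value * scale if scale is not None else value
    return out

rows = [standardize({"formula": "NaCl", "band_gap_eV": 5.0}, "source_a"),
        standardize({"composition": "Si", "gap_meV": 1120.0}, "source_b")]
# Both rows now share one schema: {"formula": ..., "band_gap_ev": ...}
```

The payoff is that a model can be trained on `rows` without caring which database each record came from—which is the kind of cleanup that made protein data such a good target for AI.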

AI model makers are also keen to pitch their generative products as research tools for scientists. OpenAI let scientists test its latest o1 model and see how it might support them in research. The results were encouraging. 

Having an AI tool that can operate in a similar way to a scientist is one of the fantasies of the tech sector. In a manifesto published in October last year, Anthropic founder Dario Amodei highlighted science, especially biology, as one of the key areas where powerful AI could help. Amodei speculates that in the future, AI could be not only a method of data analysis but a “virtual biologist who performs all the tasks biologists do.” We’re still a long way away from this scenario. But next year, we might see important steps toward it. 

—Melissa Heikkilä

4. AI companies get cozier with national security

There is a lot of money to be made by AI companies willing to lend their tools to border surveillance, intelligence gathering, and other national security tasks. 

The US military has launched a number of initiatives that show it’s eager to adopt AI, from the Replicator program—which, inspired by the war in Ukraine, promises to spend $1 billion on small drones—to the Artificial Intelligence Rapid Capabilities Cell, a unit bringing AI into everything from battlefield decision-making to logistics. European militaries are under pressure to up their tech investment, triggered by concerns that Donald Trump’s administration will cut spending to Ukraine. Rising tensions between Taiwan and China weigh heavily on the minds of military planners, too. 

In 2025, these trends will continue to be a boon for defense-tech companies like Palantir, Anduril, and others, which are now capitalizing on classified military data to train AI models. 

The defense industry’s deep pockets will tempt mainstream AI companies into the fold too. OpenAI in December announced it is partnering with Anduril on a program to take down drones, completing a year-long pivot away from its policy of not working with the military. It joins the ranks of Microsoft, Amazon, and Google, which have worked with the Pentagon for years. 

Other AI competitors, which are spending billions to train and develop new models, will face more pressure in 2025 to think seriously about revenue. It’s possible that they’ll find enough non-defense customers who will pay handsomely for AI agents that can handle complex tasks, or creative industries willing to spend on image and video generators. 

But they’ll also be increasingly tempted to throw their hats in the ring for lucrative Pentagon contracts. Expect to see companies wrestle with whether working on defense projects will be seen as a contradiction to their values. OpenAI’s rationale for changing its stance was that “democracies should continue to take the lead in AI development,” the company wrote, reasoning that lending its models to the military would advance that goal. In 2025, we’ll be watching others follow its lead. 

—James O’Donnell

5. Nvidia sees legitimate competition

For much of the current AI boom, if you were a tech startup looking to try your hand at making an AI model, Jensen Huang was your man. As CEO of Nvidia, the world’s most valuable corporation, Huang helped the company become the undisputed leader of chips used both to train AI models and to run them whenever anyone queries them, a process called “inference.”

A number of forces could change that in 2025. For one, behemoth competitors like Amazon, Broadcom, AMD, and others have been investing heavily in new chips, and there are early indications that these could compete closely with Nvidia’s—particularly for inference, where Nvidia’s lead is less solid. 

A growing number of startups are also attacking Nvidia from a different angle. Rather than trying to marginally improve on Nvidia’s designs, startups like Groq are making riskier bets on entirely new chip architectures that, with enough time, promise to provide more efficient or effective training. In 2025 these experiments will still be in their early stages, but it’s possible that a standout competitor will change the assumption that top AI models rely exclusively on Nvidia chips.

Underpinning this competition, the geopolitical chip war will continue. That war thus far has relied on two strategies. On one hand, the West seeks to limit exports to China of top chips and the technologies to make them. On the other, efforts like the US CHIPS Act aim to boost domestic production of semiconductors.

Donald Trump may escalate those export controls and has promised massive tariffs on any goods imported from China. In 2025, such tariffs would put Taiwan—on which the US relies heavily because of the chip manufacturer TSMC—at the center of the trade wars. That’s because Taiwan has said it will help Chinese firms relocate to the island to help them avoid the proposed tariffs. That could draw further criticism from Trump, who has expressed frustration with US spending to defend Taiwan from China. 

It’s unclear how these forces will play out, but it will only further incentivize chipmakers to reduce reliance on Taiwan, which is the entire purpose of the CHIPS Act. As spending from the bill begins to circulate, next year could bring the first evidence of whether it’s materially boosting domestic chip production. 

—James O’Donnell

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

SASE 2025: Impact grows despite adoption hurdles

Complexity of managing access policies across multiple platforms: 23% Rising costs due to increased capacity and bandwidth needs: 16% Lack of visibility into use activity and traffic: 14% Inflexibility of technologies to support both remote and in-office work: 11% Excessive user privileges increasing security risk: 10% Lack of contextual data

Read More »

Nile dials-up AI to simplify network provisioning, operation

Other features in Nile Nav offer real-time deployment data and visibility as well as instant feedback during setup and activation ensures IT teams can monitor progress and address issues promptly, Kannan stated.  “Post-deployment, the app offers insights into network health and performance, enabling swift diagnostics and resolution,” Kannan stated. Nile

Read More »

USA Needs More Electricity to Win AI Race, Says Trump Energy Czar

The US risks forfeiting a global competition to dominate artificial intelligence if it doesn’t build more reliable, always-on electricity to supply the industry, President-elect Donald Trump’s pick to lead the Interior Department warned Thursday. Doug Burgum, the former North Dakota governor who has also been tapped to help chart Trump’s energy policy, cast the issue as critical to America’s national security during a Senate confirmation hearing that offered a preview of the incoming administration’s planned embrace of fossil fuels. Where renewable power supplies are intermittent and “unreliable,” Burgum said, AI’s growing energy demands will require more of the so-called baseload electricity that can be generated around-the-clock by burning coal and natural gas. “Without baseload, we’re going to lose the AI arms race to China,” Burgum told the Senate Energy and Natural Resources Committee. “AI is manufacturing intelligence, and if we don’t manufacture more intelligence than our adversaries, that affects every job, every company and every industry.” During a three-hour meeting marked by cordial exchanges — and none of the intense sparring that has dominated other confirmation hearings this week — Burgum sought to assure senators he would seek a “balanced approach” for oil drilling, conservation and even potentially housing on the federal land managed by the Interior Department.  The agency’s sprawling portfolio spans a fifth of US land, and it is the lead regulator for oil, gas and wind power development in the nation’s coastal waters. Burgum also made clear that a top priority is addressing what he called a “significant imbalance” in the nation’s electricity mix, as developers look to connect a host of low- and zero-emission power projects to the grid. “If the sun’s not shining and the wind’s not blowing, and we don’t have baseload, then we’ve got brownouts and blackouts, we have higher electric prices for

Read More »

Oil Notches Fourth Weekly Advance as Sanctions Threaten Supplies

Oil notched its fourth straight weekly gain, the longest run since July, as US sanctions posed growing risks to global supply in a market already tightened by cold weather. West Texas Intermediate was up almost 2% for the week, even after retreating below $78 a barrel on Friday. The Biden administration’s harshest ever curbs on Russian oil have shaken up markets, with freight costs rocketing and long-standing buyers of the country’s crude, including China and India, looking elsewhere for supplies. Market participants are also recalibrating their outlook three days ahead of President-elect Donald Trump’s inauguration. Prices whipsawed on Thursday as traders parsed clues on the incoming administration’s sanctions stance. Trump’s advisers were reportedly considering relaxing the curbs to enable a Russia-Ukraine accord, while Treasury secretary nominee Scott Bessent said he would support dialing up measures targeting the Russian oil industry.  “The Russian sanctions on 183 oil tankers have been the focus for crude prices,” said Dennis Kissler, senior vice president of trading at BOK Financial Securities. “The latest crude strength has been impressive, with tight near-term supplies, as buyers became aggressive once the sanctions on Russia were supported by both presidential administrations.” Trump has also threatened to impose tariffs on imports from Canada, including its oil. While the federal government is pushing back, the leader of its largest oil-producing province is resisting efforts to include curtailing or taxing crude shipments as potential countermeasures.  Meanwhile, traders weighed mixed economic signals out of China, the world’s largest crude importer. The nation hit the government’s growth goal last year after a late stimulus blitz and export boom turbocharged activity. At the same time, China’s oil refining volumes declined by 1.6% last year as the shift to electric vehicles gained pace. 
Looming US tariffs also threaten to take away a key driver of expansion. Crude has rallied almost 9% this year as cold weather in the

Read More »

Biden Makes Last-Minute Bid to Thwart Arctic Oil Drilling

The Biden administration advanced a plan to limit oil drilling and infrastructure across more of Alaska’s National Petroleum Reserve, a bid to lock in land protections and conservation requirements days before President-elect Donald Trump takes office. The Interior Department move Thursday represents the latest step by outgoing President Joe Biden to enshrine protections that could complicate Trump plans to rapidly expand oil and gas development across US federal lands and waters. In recent weeks, Biden also has designated new national monuments and ruled out the sale of drilling rights in more than 625 million acres of US coastal waters.  In the latest action, the Interior Department is proposing new “special area” designations that would restrict drilling and other activities across more than 3 million acres of the Indiana-sized reserve in northwest Alaska. The move comes on top of on an existing policy, finalized last year, that barred drilling across nearly half of the NPR-A.  The rugged terrain once earmarked for energy development contains an estimated 8.7 billion barrels of recoverable oil, but it’s also an important habitat for caribou, grizzly bears and migratory birds. And it’s a prized resource for Alaska Natives who have long relied on the land for subsistence hunting and fishing. The Interior Department immediately imposed measures meant to avoid damage to those areas even while they’re being considered for protection, effectively raising hurdles for building roads and other infrastructure across the tracts.  Although Trump could cast aside his predecessor’s proposed special areas and ignore the interim safeguards imposed in the meantime, the action could be challenged in federal court. The report and memo unveiled Thursday bolsters the government record for those safeguards, providing potential fodder for any future legal battle. 
Environmentalists said they hoped the effort would create a bulwark against Trump’s plan to unleash American oil development.  “The Biden administration clearly understands that the

Read More »

Japan Watching for Any Impact on LNG From New Russia Sanctions

Tokyo will closely monitor the rollout of new US sanctions on Moscow for any impact on shipments of liquefied natural gas from Russia’s Far East, a key source of supply for Japan. A week ago, the Biden administration imposed aggressive penalties on Russian energy, including restrictions on vessels that export oil from the Sakhalin-2 project just north of Japan. If those curbs end up halting crude production from the site, the gas that’s pumped out at the same time may be at risk. Japan is a big LNG buyer and sourced about 8% of its imports from Sakhalin-2 last year, according to ship-tracking data compiled by Bloomberg. “We’ll discuss with the relevant stakeholders” to ensure Japan gets the gas it needs, Shinichi Sasayama, the president of major importer Tokyo Gas Co., said Thursday. “It might require more investigation to determine how much impact this will actually have. I wouldn’t say there is no impact whatsoever.” One of Sakhalin-2’s three production platforms, Lunskaya, pumps both natural gas and gas condensate, a light version of crude oil, and the two fuels are then separated onshore. If curbs on exporting the oil lead to a buildup of crude on site, that may eventually prompt a halt in output, affecting gas in the process. “If oil and condensate shipments really stopped, then at some point — when the storage facilities were full — gas production would also have to halt as it’s impossible to produce gas without producing condensate,” said Sergey Vakulenko, an oil industry veteran who spent part of his career at Sakhalin-2. The US sanctions do not extend to the actual oil and gas from the development, just to the tankers needed to export the crude. Oil shipments are unlikely to cease immediately since the restrictions allow for a wind-down period. Ultimately, Lunskaya’s continued

Read More »

Labour appoints five members to GB Energy ‘start-up board’

The UK government has appointed five non-executive directors to the “start-up board” of proposed publicly-owned energy company GB Energy. The state-owned firm was a key election pledge of the Labour party, and officials unveiled former Siemens Energy chief executive Juergen Maier as chairman in July. Prime Minister Sir Keir Starmer later confirmed GB Energy will be based in Aberdeen; however, questions remain over the number of jobs it will provide in the Granite City. Scottish politicians also criticised Maier’s decision to remain based in Manchester rather than relocating to Aberdeen. Announcing the appointees, the Department for Energy Security and Net Zero said they bring a wide range of experience from their previous roles. “Together with the chair Juergen Maier, they will help to scale up Great British Energy and build its organisational structure and Aberdeen headquarters,” DESNZ said. UK energy secretary Ed Miliband said the GB Energy board will “hit the ground running” in its mission to “scale up clean, homegrown power”. Meanwhile, Maier said the appointments are an “important milestone” for the company as it seeks to “rapidly scale up” and “get to work”. “Their experience across the energy industry, government and trade unions will be crucial in shaping our strategy and organisation, ensuring we can back clean energy projects, bolster UK supply chains and create good jobs across the country,” Maier said. Who is the GB Energy start-up board? DESNZ said the five new start-up non-executive directors will join the GB Energy board on initial contracts of between 18 months and two years. They include former Trades Union Congress (TUC) general secretary and Labour peer Frances O’Grady, former SP Energy Networks chief executive Frank Mitchell, British Hydropower Association chief executive Kate Gilmartin, former Association for Renewable Energy and Clean Technology (REA) chief executive Dr Nina Skorupska, and former

Read More »

Westinghouse, KEPCO Settle Dispute over Nuclear Tech Rights

Korea Electric Power Corp. (KEPCO) and Westinghouse Electric Co. LLC have signed an agreement to resolve their intellectual property dispute over nuclear reactor designs and pursue collaboration. Cranberry Township, Pennsylvania-based Westinghouse said Thursday it would work with KEPCO and KEPCO’s Korea Hydro and Nuclear Power Co. Ltd. (KHNP) for the dismissal of all current legal actions. Litigation in the United States and international arbitration are pending over Westinghouse’s claim to sub-licensing and export rights against South Korea’s state-owned KEPCO. “This agreement allows both parties to move forward with certainty in the pursuit and deployment of new nuclear reactors”, Westinghouse said in an online statement. “The agreement also sets the stage for future cooperation between the parties to advance new nuclear projects globally”. Westinghouse president and chief executive Patrick Fragman said, “As the world demands more firm baseload power, we look forward to opportunities for cooperation to deploy nuclear power at even greater scale”. Details of the settlement deal are confidential, Westinghouse said. In a recent episode of the legal row, which dates back to 2022, Westinghouse protested in Czechia last year after the Central European country’s state-owned CEZ Group selected KHNP over Westinghouse for two nuclear reactors. Westinghouse argued that KHNP’s designs use Westinghouse technology and that the Korean company did not have clearance under U.S. tech export controls. Announcing the appeal before the Czech Anti-Monopoly Office on August 26, 2024, Westinghouse said, “The tender required vendors to certify they possess the right to transfer and sublicense the nuclear technology offered in their bids to CEZ and local suppliers”. “KHNP’s APR1000 and APR1400 plant designs utilize Westinghouse-licensed Generation II System 80 technology. 
KHNP neither owns the underlying technology nor has the right to sublicense it to a third party without Westinghouse consent. “Further, only Westinghouse has the legal right to obtain the

Read More »

Lenovo to acquire Infinidat to expand its storage portfolio

The company, which CEO Phil Bullinger currently leads, was founded by Moshe Yanai in 2011. It also has an office in Waltham, Massachusetts.

Lenovo eyes high-end enterprise storage market

The acquisition is part of Lenovo’s growth strategy to meet the evolving needs of modern data centers that are expected to handle AI and generative AI workloads, the company said, adding that Infinidat’s offering will find synergy with its Infrastructure Solutions Group and that together they will target the high-end enterprise storage market. Currently, Lenovo’s Infrastructure Solutions business operates in the entry and mid-range enterprise storage market, offering a portfolio of options such as flash and hybrid arrays, hyperconverged infrastructure (HCI), software-defined storage (SDS), and data management suites such as Lenovo TruScale. “This is a win-win for both companies. Lenovo fills a big void in its storage portfolio, while Infinidat is able to leverage a hardware design and manufacturing machine,” Matt Kimball, principal analyst at Moor Insights & Strategy, wrote on LinkedIn. Lenovo is expected to quickly train its sights on Infinidat’s storage software IP and look to where it can leverage it more broadly, Kimball explained, adding that “if Lenovo’s channels are properly leveraged, we can see real disruption in the enterprise storage market.”

Early focus on the enterprise storage market

According to analysts, Lenovo has been hyper-focused on the enterprise storage market since it acquired IBM’s x86 server business for about $2.3 billion in 2014. Another landmark deal for the company, aimed at competing more aggressively with Dell and HPE, the dominant players in the enterprise storage market, came in 2018 in the form of a partnership with NetApp, under which it also developed a joint venture in China to co-develop a new range of ThinkSystem infrastructure that incorporates NetApp’s data management expertise.

Read More »

Biden’s clean AI infrastructure plan could be hanging by a thread

“There is barely an aspect of our society that will remain untouched by this force of change,” said UK Prime Minister Keir Starmer in a foreword to the report. “This government will not sit back passively and wait for change to come. It is our responsibility to harness it and make it work for working people.” Litan described the UK plan as “farther reaching and addressing AI data and the workforce, so it is more comprehensive and seems more thoughtful.” Asked for comment on the two strategies, Phil Brunkard, executive counselor at Info-Tech Research Group UK, said, “the US plans to lead the global AI race by combining its national security goals with sustainable infrastructure. Under the new executive order, the DoD and DoE will lease federal land for the private sector to build out AI data centers powered by clean energy, like nuclear, solar, or wind. The gist of their plan is to lead the way in responsible AI development to keep the US as the technology leader while being mindful of the environmental impact.” Meanwhile, the UK’s AI Opportunities Action Plan, he said, “is heavily reliant on collaboration with academia and industry partners, backed by significant private sector investments in AI infrastructure. But its success will depend on how effectively it can solve energy and cooling challenges, especially in areas with limited resources.” Brunkard added, “by focusing on domestic AI production and ethical oversight, the UK is hoping to balance innovation with responsibility, which is an essential step in building long-term technological resilience.” Both plans, he said, “recognize that AI dominance requires more than just the latest and greatest cutting-edge technology; it’s about building solid infrastructure, securing data, and governing AI ethically. While the US emphasizes security and clean energy, the UK focuses on self-reliance and strong regulatory

Read More »

Qualcomm purloins Intel’s chief Xeon designer with eyes toward data center development

If Intel was hoping for a turnaround in 2025, it will have to wait at least a little bit longer. The chief architect for Intel’s Xeon server processors has defected to chip rival Qualcomm, which is making yet another run at entering the data center market. Sailesh Kottapalli, a 28-year Intel veteran and a senior fellow and chief architect for the company’s Xeon processors, made the announcement on LinkedIn on January 13, stating that he joined Qualcomm as a senior vice president. “My journey took me through roles as a validation engineer, logic designer, full-chip floor planner, post-silicon debug engineer, micro architect, and architect,” he wrote. “I worked on CPU cores, memory, IO, and platform aspects of the system, spanning multiple architectures across x86 and Itanium, and products including CPU and GPU, most importantly shaping the Xeon product line.”

Read More »

8 Trends That Will Shape the Data Center Industry In 2025

What lies ahead for the data center industry in 2025? At Data Center Frontier, our eyes are always on the horizon, and we’re constantly talking with industry thought leaders to get their take on key trends. Our Magic 8 Ball prognostications did pretty well last year, so now it’s time to look ahead at what’s in store for the industry over the next 12 months, as we identify eight themes that stand to shape the data center business going forward. We’ll be writing in more depth about many of these trends, but this list provides a view of the topics that we believe will be most relevant in 2025. A publication about the future frontiers of data centers and AI shouldn’t be afraid to put its money where its mouth is, and that’s why we used AI tools to help research and compose this year’s annual industry trends forecast. The article is meant to be somewhat encyclopedic, in the spirit of a digest rather than a strictly prescriptive forecast, although we try to go there as well. The piece contains some dark horse trends. Do we think immersion cooling is going to explode this year, suddenly giving direct-to-chip a run for its money? Not exactly. But do we think that, given the enormous and rapidly expanding parameters of the AI and HPC boom, the sector for immersion cooling could see some breakthroughs this year? Seems reasonable. Ditto for the trends forecasting natural gas and quantum computing advancements. Such topics are definitely on the horizon and highly visible on the frontier of data centers, so we’d better learn more about them, was our thought. Because as borne out by recent history, data center industry trends that start at the bleeding edge (pun intended – also, on the list) sometimes

Read More »

Podcast: Data Center and AI Sustainability Imperatives with iMasons Climate Accord Executive Director, Miranda Gardiner

Miranda was a featured speaker at last September’s inaugural Data Center Frontier Trends Summit. The call for speakers is now open for this year’s event, which will be held again in Reston, Virginia from Aug. 26-28.

DCF Show Podcast Quotes from Miranda Gardiner, Executive Director, iMasons Climate Accord

On Her Career Journey and Early Passion for Sustainability:
– “My goals have always been kind of sustainability, affordable housing. I shared a story last week on a panel that my mother even found a yearbook of me from my elementary school years. The question that year was like, what do you hope for the future? And mine was there’d be no pollution and everyone would have a home.”

On Transitioning to Data Centers:
– “We started to see this mission-critical focus in facilities like data centers, airports, and healthcare buildings. For me, connecting sustainability into the performance of the building made data centers the perfect match.”

Overview of the iMasons Climate Accord:
– “The iMasons Climate Accord is an initiative started in 2022. The primary focus is emission reductions, and the only requirement to join is having an emission reduction strategy.”
– “This year, we refined our roadmap to include objectives such as having a climate strategy, incentivizing low-GHG materials like green concrete, and promoting equity by supporting small, women-owned, and minority-owned businesses.”

On Industry Collaboration and Leadership:
– “This year, through the Climate Accord, we issued a call to action on the value of environmental product declarations (EPDs). It was signed by AWS, Digital Realty, Google, Microsoft, Schneider Electric, and Meta—talk about a big initiative and impact!”

On EPDs and Carbon Disclosure:
– “EPDs provide third-party verification of materials coming into buildings. Pairing that with the Open Compute Project’s carbon disclosure labels on equipment creates vast opportunities for transparency and

Read More »

Accelsius and iM Data Centers Demo Next-Gen Cooling and Sustainability at Miami Data Center

Miami Data Center Developments Update

Miami has recently witnessed several significant developments and investments in its data center sector, underscoring the city’s growing importance as a digital infrastructure hub. Notable projects include:

Project Apollo: A proposed 15-megawatt (MW), two-story, 75,000-square-foot data center in unincorporated Miami-Dade County. With an estimated investment of $150 million, construction is slated to commence between 2026 and 2027. The development team has prior experience with major companies such as Amazon, Meta, and Iron Mountain.

RadiusDC’s Acquisition of Miami I: In August 2024, RadiusDC acquired the Miami I data center located in the Sweetwater area. Spanning 170,000 square feet across two stories, the facility currently offers 3.2 MW of capacity, with plans to expand to 9.2 MW by the first half of 2026. The carrier-neutral facility provides connectivity to 11 fiber optic and network service providers.

Iron Mountain’s MIA-1 Data Center: Iron Mountain is developing a 150,000-square-foot, 16 MW data center on a 3.4-acre campus in Central North West Miami. The facility, known as MIA-1, is scheduled to open in 2026 and aims to serve enterprises, cloud providers, and large-scale users in South Florida. It will feature fiber connections to other Iron Mountain facilities and a robust pipeline of carriers and software-defined networks.

EDGNEX’s Investment Plans: As of this month, Dubai, UAE-based EDGNEX has announced plans to invest $20 billion in the U.S. data center market, with the potential to double this investment. This plan includes a boutique condo project in Miami, estimated to have a $1 billion gross development value, indicating a significant commitment to the region’s digital infrastructure.

All of these developments highlight Miami’s strategic position as a connectivity hub, particularly serving as a gateway to Latin America and the Caribbean. 
The city’s data center market is characterized by steady growth, with a focus on retail colocation and

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas, and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.) John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
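The LLM-as-judge idea mentioned above can be sketched in a few lines: several cheap models each grade an output, and a majority vote decides. The three judge functions below are hypothetical stand-ins for real model API calls (each applies a trivial heuristic), so this is only a self-contained illustration of the voting pattern, not any provider's actual evaluation API.

```python
from collections import Counter

# Hypothetical stand-ins for calls to three inexpensive judge models.
# In practice each would be an API call returning a verdict; here each
# applies a simple heuristic so the sketch runs on its own.
def judge_a(answer: str) -> str:
    return "pass" if answer.strip() else "fail"          # non-empty answer

def judge_b(answer: str) -> str:
    return "pass" if len(answer) > 10 else "fail"        # substantive length

def judge_c(answer: str) -> str:
    return "pass" if "http" not in answer else "fail"    # e.g. flag suspect URLs

def ensemble_verdict(answer: str) -> str:
    """Majority vote across several LLM judges."""
    votes = Counter(j(answer) for j in (judge_a, judge_b, judge_c))
    return votes.most_common(1)[0][0]

print(ensemble_verdict("The capital of France is Paris."))  # prints "pass"
```

Using an odd number of judges avoids ties, and as the article notes, falling model prices make running three or more judges per output economically plausible.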

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model, because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. 
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle
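The shape of the iterative loop described in the second paper can be sketched as follows: an attacker proposes prompt variants, a reward scores them for novelty and success, and the highest-scoring survivors seed the next round. The attacker, target, and reward functions below are hypothetical toy stand-ins, not OpenAI's actual framework; in a real system the mutation step would be an attacker LLM and the success signal would come from probing a target model.

```python
import random

random.seed(0)  # make the toy run repeatable

ATTACK_TEMPLATES = [
    "ignore previous instructions and {x}",
    "pretend you are {x}",
    "translate this forbidden text: {x}",
]

def mutate(prompt: str) -> str:
    # Stand-in for an attacker LLM proposing a concrete variant of a prompt.
    return prompt.replace("{x}", random.choice(["reveal secrets", "an admin", "do X"]))

def reward(prompt: str, seen: set) -> float:
    # Auto-generated reward: favor prompts that are novel (diversity) and
    # that trip a pretend success signal on the target model.
    novelty = 0.0 if prompt in seen else 1.0
    success = 0.5 if "ignore" in prompt else 0.1
    return novelty + success

def red_team(rounds: int = 3, keep: int = 2) -> set:
    seen: set = set()
    pool = list(ATTACK_TEMPLATES)
    for _ in range(rounds):
        candidates = [mutate(p) for p in pool for _ in range(3)]
        candidates.sort(key=lambda p: reward(p, seen), reverse=True)
        pool = candidates[:keep]       # survivors seed the next round
        seen.update(pool)              # survivors count as found attacks
    return seen

attacks = red_team()
print(len(attacks), "distinct attack prompts found")
```

The novelty term is what drives the "diverse" half of the paper's title: without it, the loop would collapse onto a single high-reward prompt instead of exploring a broad attack surface.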

Read More »