
Training Large Language Models: From TRPO to GRPO

DeepSeek has recently made quite a buzz in the AI community, thanks to its impressive performance at relatively low cost. I think this is a perfect opportunity to dive deeper into how Large Language Models (LLMs) are trained. In this article, we will focus on the Reinforcement Learning (RL) side of things: we will cover TRPO, PPO, and, more recently, GRPO (don’t worry, I will explain all these terms soon!).

I have aimed to keep this article relatively easy to read and accessible, by minimizing the math, so you won’t need a deep Reinforcement Learning background to follow along. However, I will assume that you have some familiarity with Machine Learning, Deep Learning, and a basic understanding of how LLMs work.

I hope you enjoy the article!

The 3 steps of LLM training

The 3 steps of LLM training [1]

Before diving into RL specifics, let’s briefly recap the three main stages of training a Large Language Model:

  • Pre-training: the model is trained on a massive dataset to predict the next token in a sequence based on preceding tokens.
  • Supervised Fine-Tuning (SFT): the model is then fine-tuned on more targeted data and aligned with specific instructions.
  • Reinforcement Learning (often called RLHF, for Reinforcement Learning from Human Feedback): this is the focus of this article. The main goal is to further align the model’s responses with human preferences by allowing it to learn directly from feedback.

Reinforcement Learning Basics

A robot trying to exit a maze! [2]

Before diving deeper, let’s briefly revisit the core ideas behind Reinforcement Learning.

RL is quite straightforward to understand at a high level: an agent interacts with an environment. The agent resides in a specific state within the environment and can take actions to transition to other states. Each action yields a reward from the environment: this is how the environment provides feedback that guides the agent’s future actions. 

Consider the following example: a robot (the agent) navigates (and tries to exit) a maze (the environment).

  • The state is the current situation of the environment (the robot’s position in the maze).
  • The robot can take different actions: for example, it can move forward, turn left, or turn right.
  • Successfully navigating towards the exit yields a positive reward, while hitting a wall or getting stuck in the maze results in negative rewards.

Easy! Now, let’s make an analogy to how RL is used in the context of LLMs.

RL in the context of LLMs

Simplified RLHF Process [3]

When used during LLM training, RL is defined by the following components:

  • Agent: the LLM itself.
  • Environment: everything external to the LLM, including user prompts, feedback systems, and other contextual information. This is basically the framework the LLM is interacting with during training.
  • Actions: these are responses to a query from the model. More specifically: these are the tokens that the LLM decides to generate in response to a query.
  • State: the current query being answered along with tokens the LLM has generated so far (i.e., the partial responses).
  • Rewards: this one is a bit trickier: unlike the maze example above, there is usually no simple binary reward. In the context of LLMs, rewards usually come from a separate reward model, which outputs a score for each (query, response) pair. This model is trained from human-annotated data (hence “RLHF”) where annotators rank different responses. The goal is for higher-quality responses to receive higher rewards.

Note: in some cases, rewards can actually be simpler. For example, in DeepSeekMath, rule-based approaches work because math responses tend to be more deterministic (the answer is either correct or wrong).
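As a toy illustration (a hypothetical sketch, not DeepSeek’s actual implementation), such a rule-based reward could simply compare the model’s final answer to a reference answer:

```python
def rule_based_reward(model_answer: str, reference_answer: str) -> float:
    """Toy rule-based reward: 1.0 if the final answer matches the reference, else 0.0.

    Real systems use more careful answer extraction and formatting checks;
    this is only a minimal sketch.
    """
    return 1.0 if model_answer.strip() == reference_answer.strip() else 0.0


# The response "4" to the query "What is 2+2?" would get the full reward.
print(rule_based_reward("4", "4"))  # 1.0
print(rule_based_reward("5", "4"))  # 0.0
```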

Policy is the final concept we need for now. In RL terms, a policy is simply the strategy for deciding which action to take. In the case of an LLM, the policy outputs a probability distribution over possible tokens at each step: in short, this is what the model uses to sample the next token to generate. Concretely, the policy is determined by the model’s parameters (weights). During RL training, we adjust these parameters so the LLM becomes more likely to produce “better” tokens, that is, tokens that yield higher reward scores.

We often write the policy as:

$$\pi_\theta(a \mid s)$$

where a is the action (a token to generate), s is the state (the query and tokens generated so far), and θ denotes the model’s parameters.

This idea of finding the best policy is the whole point of RL! Since we don’t have labeled data (like we do in supervised learning) we use rewards to adjust our policy to take better actions. (In LLM terms: we adjust the parameters of our LLM to generate better tokens.)
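To make the policy idea concrete, here is a minimal sketch (with made-up logits standing in for a real model’s output) of what “a probability distribution over possible tokens that we sample from” looks like in code:

```python
import torch

# Hypothetical logits over a tiny vocabulary, standing in for a real LLM's output head.
logits = torch.tensor([2.0, 0.5, -1.0, 0.1])

# The policy pi_theta(a | s): a probability distribution over the next token.
probs = torch.softmax(logits, dim=-1)

# The action: sample the next token from that distribution.
next_token = torch.multinomial(probs, num_samples=1)

print(probs)       # probabilities summing to 1
print(next_token)  # index of the sampled token
```

During RL training, we adjust θ (the weights that produce these logits) so that tokens leading to higher rewards become more likely.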

TRPO (Trust Region Policy Optimization)

An analogy with supervised learning

Let’s take a quick step back to how supervised learning typically works: you have labeled data and use a loss function (like cross-entropy) to measure how close your model’s predictions are to the true labels.

We can then use algorithms like backpropagation and gradient descent to minimize our loss function and update the weights θ of our model.

Recall that our policy also outputs probabilities! In that sense, it is analogous to the model’s predictions in supervised learning… We are tempted to write something like:

$$J(\theta) = \mathbb{E}\big[\log \pi_\theta(a \mid s)\, A(s, a)\big]$$

where s is the current state and a is a possible action.

A(s, a) is called the advantage function and measures how good the chosen action is in the current state, compared to a baseline. It plays a role very similar to the labels in supervised learning, but it is derived from rewards instead of explicit labeling. To simplify, we can write the advantage as:

$$A(s, a) = \text{reward obtained for the action} \; - \; \text{expected reward (baseline)}$$
In practice, the baseline is calculated using a value function. This is a common term in RL that I will explain later. What you need to know for now is that it measures the expected reward we would receive if we continue following the current policy from the state s.

What is TRPO?

TRPO (Trust Region Policy Optimization) builds on this idea of using the advantage function but adds a critical ingredient for stability: it constrains how far the new policy can deviate from the old policy at each update step (similar to what we do with batch gradient descent for example).

  • It introduces a KL divergence term (see it as a measure of similarity) between the current and the old policy, keeping the new policy within a “trust region” around the old one.
  • It also divides the new policy by the old policy. This ratio, multiplied by the advantage function, gives us a sense of how beneficial each update is relative to the old policy.

Putting it all together, TRPO tries to maximize a surrogate objective (which involves the advantage and the policy ratio) subject to a KL divergence constraint.
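Written out, the TRPO optimization problem looks like this (with δ the size of the trust region):

$$
\max_{\theta} \; \mathbb{E}\left[\frac{\pi_{\theta}(a \mid s)}{\pi_{\theta_{\text{old}}}(a \mid s)}\, A(s, a)\right]
\quad \text{subject to} \quad
\mathbb{E}\Big[D_{\mathrm{KL}}\big(\pi_{\theta_{\text{old}}}(\cdot \mid s)\,\|\,\pi_{\theta}(\cdot \mid s)\big)\Big] \le \delta
$$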

PPO (Proximal Policy Optimization)

While TRPO was a significant advancement, it’s no longer used widely in practice, especially for training LLMs, due to its computationally intensive gradient calculations.

Instead, PPO is now the preferred approach in most LLM training pipelines, including those behind ChatGPT, Gemini, and more.

It is actually quite similar to TRPO, but instead of enforcing a hard constraint on the KL divergence, PPO introduces a “clipped surrogate objective” that implicitly restricts policy updates, and greatly simplifies the optimization process.

Here is a breakdown of the PPO objective function we maximize to tweak our model’s parameters.

Image by the Author
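For reference, the clipped surrogate objective that PPO maximizes is usually written as:

$$
L^{\text{CLIP}}(\theta) = \mathbb{E}_t\Big[\min\Big(r_t(\theta)\, A_t,\; \mathrm{clip}\big(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\, A_t\Big)\Big],
\qquad r_t(\theta) = \frac{\pi_{\theta}(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}
$$

where ε is the clipping range (typically around 0.1–0.2). Whenever the ratio drifts too far from 1, the clip term caps the objective, so the update has no incentive to push the policy much further than that.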

GRPO (Group Relative Policy Optimization)

How is the value function usually obtained?

Let’s first talk more about the advantage and the value functions I introduced earlier.

In typical setups (like PPO), a value model is trained alongside the policy. Its goal is to predict the value of each action we take (each token generated by the model), using the rewards we obtain (remember that the value should represent the expected cumulative reward).

Here is how it works in practice. Take the query “What is 2+2?” as an example. Our model outputs “2+2 is 4” and receives a reward of 0.8 for that response. We then go backward and attribute discounted rewards to each prefix:

  • “2+2 is 4” gets a value of 0.8
  • “2+2 is” (1 token backward) gets a value of 0.8γ
  • “2+2” (2 tokens backward) gets a value of 0.8γ²
  • etc.

where γ is the discount factor (0.9 for example). We then use these prefixes and associated values to train the value model.
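A minimal sketch of how these training targets could be built (with a hypothetical reward of 0.8, γ = 0.9, and prefixes simplified to strings instead of token IDs):

```python
GAMMA = 0.9          # discount factor (illustrative value)
final_reward = 0.8   # score given by the reward model to the full response

# Prefixes of the generated response, from shortest to longest (toy "tokenization").
prefixes = ["2+2", "2+2 is", "2+2 is 4"]

# Walking backward from the full response, each earlier prefix gets a discounted target.
value_targets = {
    prefix: final_reward * GAMMA ** (len(prefixes) - 1 - i)
    for i, prefix in enumerate(prefixes)
}

print(value_targets)
# {'2+2': 0.648..., '2+2 is': 0.72..., '2+2 is 4': 0.8}
# These (prefix, target) pairs are what the value model would be trained on.
```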

Important note: the value model and the reward model are two different things. The reward model is trained before the RL process, on (query, response) pairs ranked by humans. The value model is trained concurrently with the policy, and aims to predict the expected future reward at each step of the generation process.

What’s new in GRPO

Even if, in practice, the reward model is often derived from the policy (by training only the “head”), we still end up maintaining many models and handling multiple training procedures (policy, reward, and value models). GRPO streamlines this by introducing a more efficient method.

Remember what I said earlier?

In PPO, we used the value function as the baseline. GRPO chooses something else: for each query, GRPO generates a group of responses (a group of size G) and uses their rewards to compute each response’s advantage as a z-score:

$$A_i = \frac{r_i - \mu}{\sigma}$$

where rᵢ is the reward of the i-th response and μ and σ are the mean and standard deviation of the rewards in that group.

This naturally eliminates the need for a separate value model. This idea makes a lot of sense when you think about it! It aligns with the value function we introduced before and also measures, in a sense, an “expected” reward we can obtain. Also, this new method is well adapted to our problem because LLMs can easily generate multiple, diverse outputs simply by sampling with a non-zero temperature (the parameter that controls the randomness of token generation).

This is the main idea behind GRPO: getting rid of the value model.
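Here is a minimal sketch of the group-relative advantage computation, with hypothetical rewards for a group of G = 4 responses sampled for the same query:

```python
import statistics

# Hypothetical rewards for G = 4 responses sampled for the same query.
group_rewards = [0.9, 0.4, 0.7, 0.2]

mu = statistics.mean(group_rewards)                # group mean
sigma = statistics.pstdev(group_rewards) or 1e-8   # group standard deviation (guard against zero)

# Each response's advantage is its z-score within its own group.
advantages = [(r - mu) / sigma for r in group_rewards]

print(advantages)
# Responses scored above the group average get a positive advantage,
# those below get a negative one -- no separate value model is needed.
```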

Finally, GRPO adds a KL divergence term (to be exact, GRPO uses a simple approximation of the KL divergence to improve the algorithm further) directly into its objective, comparing the current policy to a reference policy (often the post-SFT model).

See the final formulation below:

Image by the Author
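In the notation of the DeepSeekMath paper (slightly simplified here), the objective GRPO maximizes combines the clipped ratio term, averaged over the G sampled responses and their tokens, with the KL penalty against the reference policy:

$$
J_{\text{GRPO}}(\theta) = \mathbb{E}\left[\frac{1}{G}\sum_{i=1}^{G}\frac{1}{|o_i|}\sum_{t=1}^{|o_i|}
\Big(\min\big(r_{i,t}(\theta)\,\hat{A}_{i,t},\;\mathrm{clip}\big(r_{i,t}(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_{i,t}\big)
- \beta\, D_{\mathrm{KL}}\big(\pi_{\theta}\,\|\,\pi_{\text{ref}}\big)\Big)\right]
$$

where rᵢ,ₜ(θ) is the probability ratio between the current and old policies for token t of response oᵢ, Âᵢ,ₜ is the group-normalized advantage, and β controls the strength of the KL penalty.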

And… that’s mostly it for GRPO! I hope this gives you a clear overview of the process: it still relies on the same foundational ideas as TRPO and PPO but introduces additional improvements to make training more efficient, faster, and cheaper — key factors behind DeepSeek’s success.

Conclusion

Reinforcement Learning has become a cornerstone for training today’s Large Language Models, particularly through PPO, and more recently GRPO. Each method rests on the same RL fundamentals — states, actions, rewards, and policies — but adds its own twist to balance stability, efficiency, and human alignment:

  • TRPO introduced strict policy constraints via KL divergence.
  • PPO eased those constraints with a clipped surrogate objective.
  • GRPO took an extra step by removing the value model requirement and using group-based reward normalization.

Of course, DeepSeek also benefits from other innovations, like high-quality data and other training strategies, but that is for another time!

I hope this article gave you a clearer picture of how these methods connect and evolve. I believe that Reinforcement Learning will become the main focus in training LLMs to improve their performance, surpassing pre-training and SFT in driving future innovations. 

If you’re interested in diving deeper, feel free to check out the references below or explore my previous posts.

Thanks for reading, and feel free to leave a clap and a comment!


Want to learn more about Transformers or dive into the math behind the Curse of Dimensionality? Check out my previous articles:

Transformers: How Do They Transform Your Data?
Diving into the Transformers architecture and what makes them unbeatable at language tasks (towardsdatascience.com)

The Math Behind “The Curse of Dimensionality”
Dive into the “Curse of Dimensionality” concept and understand the math behind all the surprising phenomena that arise… (towardsdatascience.com)



References:
