A Visual Guide to How Diffusion Models Work

This article is aimed at those who want to understand exactly how Diffusion Models work, with no prior knowledge expected. I’ve tried to use illustrations wherever possible to provide visual intuitions on each part of these models. I’ve kept mathematical notation and equations to a minimum, and where they are necessary I’ve tried to define and explain them as they occur.

Intro

I’ve framed this article around three main questions:

  • What exactly is it that diffusion models learn?
  • How and why do diffusion models work?
  • Once you’ve trained a model, how do you get useful stuff out of it?

The examples will be based on the glyffuser, a minimal text-to-image diffusion model that I previously implemented and wrote about. The architecture of this model is a standard text-to-image denoising diffusion model without any bells or whistles. It was trained to generate pictures of new “Chinese” glyphs from English definitions. Have a look at the picture below — even if you’re not familiar with Chinese writing, I hope you’ll agree that the generated glyphs look pretty similar to the real ones!

Random examples of glyffuser training data (left) and generated data (right).

What exactly is it that diffusion models learn?

Generative AI models are often said to take a big pile of data and “learn” it. For text-to-image diffusion models, the data takes the form of pairs of images and descriptive text. But what exactly is it that we want the model to learn? First, let’s forget about the text for a moment and concentrate on what we are trying to generate: the images.

Probability distributions

Broadly, we can say that we want a generative AI model to learn the underlying probability distribution of the data. What does this mean? Consider the one-dimensional normal (Gaussian) distribution below, commonly written 𝒩(μ,σ²) and parameterized with mean μ = 0 and variance σ² = 1. The black curve below shows the probability density function. We can sample from it: drawing values such that over a large number of samples, the set of values reflects the underlying distribution. These days, we can simply write something like x = random.gauss(0, 1) in Python to sample from the standard normal distribution, although the computational sampling process itself is non-trivial!

Values sampled from an underlying distribution (here, the standard normal 𝒩(0,1)) can then be used to estimate the parameters of that distribution.

We could think of a set of numbers sampled from the above normal distribution as a simple dataset, like that shown as the orange histogram above. In this particular case, we can calculate the parameters of the underlying distribution using maximum likelihood estimation, i.e. by working out the mean and variance. The normal distribution estimated from the samples is shown by the dotted line above. To take some liberties with terminology, you might consider this as a simple example of “learning” an underlying probability distribution. We can also say that here we explicitly learnt the distribution, in contrast with the implicit methods that diffusion models use.
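This explicit “learning” can be sketched in a few lines of Python: draw samples from the standard normal, then estimate the distribution’s parameters by maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Draw a "dataset" of samples from the standard normal N(0, 1).
samples = rng.normal(loc=0.0, scale=1.0, size=100_000)

# Maximum likelihood estimates of the parameters: for a Gaussian,
# these are simply the sample mean and the sample variance.
mu_hat = samples.mean()
var_hat = samples.var()

print(f"estimated mean: {mu_hat:.3f}, estimated variance: {var_hat:.3f}")
```

With enough samples, the estimates converge on the true parameters μ = 0 and σ² = 1.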

Conceptually, this is all that generative AI is doing — learning a distribution, then sampling from that distribution!

Data representations

What, then, does the underlying probability distribution of a more complex dataset look like, such as that of the image dataset we want to use to train our diffusion model?

First, we need to know what the representation of the data is. Generally, a machine learning (ML) model requires data inputs with a consistent representation, i.e. format. For the example above, it was simply numbers (scalars). For images, this representation is commonly a fixed-length vector.

The image dataset used for the glyffuser model is ~21,000 pictures of Chinese glyphs. The images are all the same size, 128 × 128 = 16384 pixels, and greyscale (single-channel color). Thus an obvious choice for the representation is a vector x of length 16384, where each element corresponds to the color of one pixel: x = (x₁, x₂, …, x₁₆₃₈₄). We can call the domain of all possible images for our dataset “pixel space”.
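As a minimal illustration, producing this vector representation is just a flattening operation (here a blank array stands in for a real glyph image):

```python
import numpy as np

# A stand-in for one 128 x 128 greyscale glyph image (pixel values in [0, 1]).
image = np.zeros((128, 128))

# Flatten to the vector representation x = (x1, x2, ..., x16384):
# each element is the intensity of one pixel.
x = image.reshape(-1)
print(x.shape)  # (16384,)
```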

An example glyph with pixel values labelled (downsampled to 32 × 32 pixels for readability).

Dataset visualization

We make the assumption that our individual data samples, x, are actually sampled from an underlying probability distribution, q(x), in pixel space, much as the samples from our first example were sampled from an underlying normal distribution in 1-dimensional space. Note: the notation x ∼ q(x) is commonly used to mean: “the random variable x sampled from the probability distribution q(x).”

This distribution is clearly much more complex than a Gaussian and cannot be easily parameterized — we need to learn it with an ML model, which we’ll discuss later. First, let’s try to visualize the distribution to gain a better intuition.

As humans find it difficult to see in more than 3 dimensions, we need to reduce the dimensionality of our data. A small digression on why this works: the manifold hypothesis posits that natural datasets lie on lower dimensional manifolds embedded in a higher dimensional space — think of a line embedded in a 2-D plane, or a plane embedded in 3-D space. We can use a dimensionality reduction technique such as UMAP to project our dataset from 16384 to 2 dimensions. The 2-D projection retains a lot of structure, consistent with the idea that our data lie on a lower dimensional manifold embedded in pixel space. In our UMAP, we see two large clusters corresponding to characters in which the components are arranged either horizontally (e.g. 明) or vertically (e.g. 草). An interactive version of the plot below with popups on each datapoint is linked here.

Let’s now use this low-dimensional UMAP dataset as a visual shorthand for our high-dimensional dataset. Remember, we assume that these individual points have been sampled from a continuous underlying probability distribution q(x). To get a sense of what this distribution might look like, we can apply a KDE (kernel density estimation) over the UMAP dataset. (Note: this is just an approximation for visualization purposes.)
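A minimal sketch of this visualization step, using scipy’s `gaussian_kde` on synthetic 2-D points standing in for the actual UMAP projection (the two clusters loosely mimic the horizontally- and vertically-arranged glyph clusters):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(seed=0)

# Two synthetic 2-D clusters standing in for the UMAP projection.
cluster_a = rng.normal(loc=(-2.0, 0.0), scale=0.5, size=(500, 2))
cluster_b = rng.normal(loc=(2.0, 0.0), scale=0.5, size=(500, 2))
points = np.vstack([cluster_a, cluster_b])

# gaussian_kde expects an array of shape (n_dims, n_points).
kde = gaussian_kde(points.T)

# The estimated density is higher at a cluster centre than between clusters.
print(kde([[-2.0], [0.0]]), kde([[0.0], [0.0]]))
```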

This gives a sense of what q(x) should look like: clusters of glyphs correspond to high-probability regions of the distribution. The true q(x) lies in 16384 dimensions — this is the distribution we want to learn with our diffusion model.

We showed that for a simple distribution such as the 1-D Gaussian, we could calculate the parameters (mean and variance) from our data. However, for complex distributions such as images, we need to call on ML methods. Moreover, what we will find is that for diffusion models in practice, rather than parameterizing the distribution directly, they learn it implicitly through the process of learning how to transform noise into data over many steps.

Takeaway

The aim of generative AI such as diffusion models is to learn the complex probability distributions underlying their training data and then sample from these distributions.

How and why do diffusion models work?

Diffusion models have recently come into the spotlight as a particularly effective method for learning these probability distributions. They generate convincing images by starting from pure noise and gradually refining it. To whet your interest, have a look at the animation below that shows the denoising process generating 16 samples.

In this section we’ll only talk about the mechanics of how these models work but if you’re interested in how they arose from the broader context of generative models, have a look at the further reading section below.

What is “noise”?

Let’s first precisely define noise, since the term is thrown around a lot in the context of diffusion. In particular, we are talking about Gaussian noise: consider the samples we talked about in the section about probability distributions. You could think of each sample as an image of a single pixel of noise. An image that is “pure Gaussian noise”, then, is one in which each pixel value is sampled from an independent standard Gaussian distribution, 𝒩(0,1). For a pure noise image in the domain of our glyph dataset, this would be noise drawn from 16384 separate Gaussian distributions. You can see this in the previous animation. One thing to keep in mind is that we can choose the means of these noise distributions, i.e. center them, on specific values — the pixel values of an image, for instance.

For convenience, you’ll often find the noise distributions for image datasets written as a single multivariate distribution 𝒩(0,I) where I is the identity matrix, a covariance matrix with all diagonal entries equal to 1 and zeroes elsewhere. This is simply a compact notation for a set of multiple independent Gaussians — i.e. there are no correlations between the noise on different pixels. In the basic implementations of diffusion models, only uncorrelated (a.k.a. “isotropic”) noise is used. This article contains an excellent interactive introduction on multivariate Gaussians.

Diffusion process overview

Below is an adaptation of the somewhat-famous diagram from Ho et al.’s seminal paper “Denoising Diffusion Probabilistic Models” which gives an overview of the whole diffusion process:

Diagram of the diffusion process adapted from Ho et al. 2020. The glyph 锂, meaning “lithium”, is used as a representative sample from the dataset.

I found that there was a lot to unpack in this diagram and simply understanding what each component meant was very helpful, so let’s go through it and define everything step by step.

We previously used x ∼ q(x) to refer to our data. Here, we’ve added a subscript, xₜ, to denote the timestep t, indicating how many steps of “noising” have taken place. We refer to samples noised to a given timestep as xₜ ∼ q(xₜ). x₀ is clean data and xₜ (t = T) ∼ 𝒩(0, I) is pure noise.

We define a forward diffusion process whereby we corrupt samples with noise. This process is described by the distribution q(xₜ|xₜ₋₁). If we could access the hypothetical reverse process q(xₜ₋₁|xₜ), we could generate samples from noise. We cannot access it directly because that would require knowing x₀, so instead we use ML to learn the parameters, θ, of a model of this process, pθ(xₜ₋₁|xₜ).

In the following sections we go into detail on how the forward and reverse diffusion processes work.

Forward diffusion, or “noising”

Used as a verb, “noising” an image refers to applying a transformation that moves it towards pure noise by scaling down its pixel values toward 0 while adding proportional Gaussian noise. Mathematically, this transformation is a multivariate Gaussian distribution centered on the pixel values of the preceding image.

In the forward diffusion process, this noising distribution is written as q(xₜ|xₜ₋₁), where the vertical bar symbol “|” is read as “given” or “conditional on”, to indicate that the pixel means are passed forward from q(xₜ₋₁). At t = T, where T is a large number (commonly 1000), we aim to end up with images of pure noise (which, somewhat confusingly, is also a Gaussian distribution, as discussed previously).

The marginal distributions q(xₜ) represent the distributions that have accumulated the effects of all the previous noising steps (marginalization refers to integration over all possible conditions, which recovers the unconditioned distribution).

Since the conditional distributions are Gaussian, what about their variances? They are determined by a variance schedule that maps timesteps to variance values. Initially, an empirically determined schedule of linearly increasing values from 0.0001 to 0.02 over 1000 steps was presented in Ho et al. Later research by Nichol & Dhariwal suggested an improved cosine schedule. They state that a schedule is most effective when the rate of information destruction through noising is relatively even per step throughout the whole noising process.
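Both schedules can be sketched in NumPy from the formulas in the respective papers (the offset `s = 0.008` is the value Nichol & Dhariwal suggest):

```python
import numpy as np

T = 1000

# Linear schedule (Ho et al. 2020): variances rise from 1e-4 to 0.02.
betas_linear = np.linspace(1e-4, 0.02, T)

# Cosine schedule (Nichol & Dhariwal 2021): defined via the cumulative
# signal level alpha_bar(t), then converted back to per-step variances.
def cosine_betas(T, s=0.008):
    t = np.arange(T + 1)
    f = np.cos(((t / T) + s) / (1 + s) * np.pi / 2) ** 2
    alpha_bar = f / f[0]
    betas = 1 - alpha_bar[1:] / alpha_bar[:-1]
    return np.clip(betas, 0.0, 0.999)  # clip to avoid singularities near t = T

betas_cosine = cosine_betas(T)
```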

Forward diffusion intuition

As we encounter Gaussian distributions both as pure noise q(xₜ, t = T) and as the noising distribution q(xₜ|xₜ₋₁), I’ll try to draw the distinction by giving a visual intuition of the distribution for a single noising step, q(x₁∣x₀), for some arbitrary, structured 2-dimensional data:

Each noising step q(xₜ|xₜ₋₁) is a Gaussian distribution conditioned on the previous step.

The distribution q(x₁∣x₀) is Gaussian, centered around each point in x₀, shown in blue. Several example points x₀⁽ⁱ⁾ are picked to illustrate this, with q(x₁∣x₀ = x₀⁽ⁱ⁾) shown in orange.

In practice, the main usage of these distributions is to generate specific instances of noised samples for training (discussed further below). We can calculate the parameters of the noising distributions at any timestep t directly from the variance schedule, as the chain of Gaussians is itself also Gaussian. This is very convenient, as we don’t need to perform noising sequentially—for any given starting data x₀⁽ⁱ⁾, we can calculate the noised sample xₜ⁽ⁱ⁾ by sampling from q(xₜ∣x₀ = x₀⁽ⁱ⁾) directly.
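A sketch of this closed-form noising, using the linear schedule from earlier (the per-step quantities αₜ = 1 − βₜ and their cumulative products ᾱₜ are standard DDPM notation):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear variance schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # cumulative signal level at each timestep

def noise_to_timestep(x0, t):
    """Sample x_t ~ q(x_t | x_0) in a single shot.

    Because a chain of Gaussians is itself Gaussian:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I).
    """
    eps = rng.standard_normal(size=x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

x0 = np.ones((128, 128))             # stand-in for a clean glyph image
x_500 = noise_to_timestep(x0, 500)   # partially noised
x_999 = noise_to_timestep(x0, 999)   # essentially pure noise by t = T
```

No sequential loop over timesteps is needed: any xₜ is one draw away from x₀.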

Forward diffusion visualization

Let’s now return to our glyph dataset (once again using the UMAP visualization as a visual shorthand). The top row of the figure below shows our dataset sampled from distributions noised to various timesteps: xₜ ∼ q(xₜ). As we increase the number of noising steps, you can see that the dataset begins to resemble pure Gaussian noise. The bottom row visualizes the underlying probability distribution q(xₜ).

The dataset xₜ (above) sampled from its probability distribution q(xₜ) (below) at different noising timesteps.

Reverse diffusion overview

It follows that if we knew the reverse distributions q(xₜ₋₁∣xₜ), we could repeatedly subtract a small amount of noise, starting from a pure noise sample xₜ at t = T to arrive at a data sample x₀ ∼ q(x₀). In practice, however, we cannot access these distributions without knowing x₀ beforehand. Intuitively, it’s easy to make a known image much noisier, but given a very noisy image, it’s much harder to guess what the original image was.

So what are we to do? Since we have a large amount of data, we can train an ML model to accurately guess the original image that any given noisy image came from. Specifically, we learn the parameters θ of an ML model that approximates the reverse noising distributions, pθ(xₜ₋₁ ∣ xₜ), for t = 1, …, T. In practice, this is embodied in a single noise prediction model trained over many different samples and timesteps. This allows it to denoise any given input, as shown in the figure below.

The ML model predicts added noise at any given timestep t.

Next, let’s go over how this noise prediction model is implemented and trained in practice.

How the model is implemented

First, we define the ML model — generally a deep neural network of some sort — that will act as our noise prediction model. This is what does the heavy lifting! In practice, any ML model that inputs and outputs data of the correct size can be used; the U-net, an architecture particularly suited to learning images, is what we use here and is frequently chosen in practice. More recent models also use vision transformers.

We use the U-net architecture (Ronneberger et al. 2015) for our ML noise prediction model. We train the model by minimizing the difference between predicted and actual noise.

Then we run the training loop depicted in the figure above:

  • We take a random image from our dataset and noise it to a random timestep t. (In practice, we speed things up by doing many examples in parallel!)
  • We feed the noised image into the ML model and train it to predict the (known to us) noise in the image. We also perform timestep conditioning by feeding the model a timestep embedding, a high-dimensional unique representation of the timestep, so that the model can distinguish between timesteps. This can be a vector the same size as our image directly added to the input (see here for a discussion of how this is implemented).
  • The model “learns” by minimizing the value of a loss function, some measure of the difference between the predicted and actual noise. The mean square error (the mean of the squares of the pixel-wise difference between the predicted and actual noise) is used in our case.
  • Repeat until the model is well trained.
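The training loop above can be sketched in PyTorch. To keep it self-contained, a tiny MLP on random flattened “images” stands in for the U-net, and a simple scalar timestep encoding stands in for the high-dimensional timestep embedding — this is an illustrative toy, not the glyffuser’s actual code:

```python
import torch
from torch import nn

torch.manual_seed(0)

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

# A tiny MLP standing in for the U-net: input is a flattened 8x8 "image"
# concatenated with a scalar timestep encoding.
model = nn.Sequential(nn.Linear(64 + 1, 128), nn.ReLU(), nn.Linear(128, 64))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

data = torch.randn(256, 64)  # stand-in dataset of flattened images

for step in range(200):
    # 1. Take random images and noise them to random timesteps (in parallel).
    x0 = data[torch.randint(0, len(data), (32,))]
    t = torch.randint(0, T, (32,))
    eps = torch.randn_like(x0)
    a = alpha_bars[t].unsqueeze(1)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * eps

    # 2. Condition on the timestep and predict the (known to us) noise.
    t_enc = (t.float() / T).unsqueeze(1)
    eps_pred = model(torch.cat([xt, t_enc], dim=1))

    # 3. Minimize the mean squared error between predicted and actual noise.
    loss = ((eps_pred - eps) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```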

Note: A neural network is essentially a function with a huge number of parameters. Neural network ML models are trained by iteratively updating their parameters using backpropagation to minimize a given loss function over many training data examples. This is an excellent introduction. These parameters effectively store the network’s “knowledge”.

A noise prediction model trained in this way eventually sees many different combinations of timesteps and data examples. The glyffuser, for example, was trained over 100 epochs (runs through the whole dataset), so it saw around 2 million data samples. Through this process, the model implicitly learns the reverse diffusion distributions over the entire dataset at all different timesteps. This allows the model to sample the underlying distribution q(x₀) by stepwise denoising starting from pure noise. Put another way, given an image noised to any given level, the model can predict how to reduce the noise based on its guess of what the original image was. By doing this repeatedly, updating its guess of the original image each time, the model can transform any noise to a sample that lies in a high-probability region of the underlying data distribution.

Reverse diffusion in practice

We can now revisit this video of the glyffuser denoising process. Recall that a large number of steps from sample to noise, e.g. T = 1000, is used during training to make the noise-to-sample trajectory easy for the model to learn, as changes between steps will be small. Does that mean we need to run 1000 denoising steps every time we want to generate a sample?

Luckily, this is not the case. Essentially, we can run the single-step noise prediction but then rescale it to any given step, although it might not be very good if the gap is too large! This allows us to approximate the full sampling trajectory with fewer steps. The video above uses 120 steps, for instance (most implementations will allow the user to set the number of sampling steps).

Recall that predicting the noise at a given step is equivalent to predicting the original image x₀, and that we can compute any noised image directly from x₀ using only the variance schedule. Thus, from any denoising step we can calculate xₜ₋ₖ, jumping several steps at once. The closer the steps are, the better the approximation will be.
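A sketch of such a rescaled denoising step — a deterministic, DDIM-style jump; the model’s noise prediction `eps_pred` is assumed to be supplied from outside:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

def jump_step(xt, eps_pred, t, t_prev):
    """One deterministic jump from timestep t down to t_prev.

    From the predicted noise we recover an estimate of the original image,
    then analytically re-noise that estimate to the smaller timestep t_prev.
    """
    a_t, a_prev = alpha_bars[t], alpha_bars[t_prev]
    x0_hat = (xt - np.sqrt(1.0 - a_t) * eps_pred) / np.sqrt(a_t)
    return np.sqrt(a_prev) * x0_hat + np.sqrt(1.0 - a_prev) * eps_pred
```

Running this over, say, 120 decreasing timesteps approximates the full 1000-step trajectory.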

Too few steps, however, and the results become worse as the steps become too large for the model to effectively approximate the denoising trajectory. If we only use 5 sampling steps, for example, the sampled characters don’t look very convincing at all:

There is then a whole literature on more advanced sampling methods beyond what we’ve discussed so far, allowing effective sampling with far fewer steps. These often reframe the sampling as a differential equation to be solved deterministically, giving an eerie quality to the sampling videos — I’ve included one at the end if you’re interested. In production-level models, these are usually preferred over the simple method discussed here, but the basic principle of deducing the noise-to-sample trajectory is the same. A full discussion is beyond the scope of this article, but see e.g. this paper and its corresponding implementation in the Hugging Face diffusers library for more information.

Alternative intuition from score function

To me, it was still not 100% clear why training the model on noise prediction generalises so well. I found that an alternative interpretation of diffusion models known as “score-based modeling” filled some of the gaps in intuition (for more information, refer to Yang Song’s definitive article on the topic.)

The dataset xₜ sampled from its probability distribution q(xₜ) at different noising timesteps; below, we add the score function ∇ₓ log q(xₜ).

I try to give a visual intuition in the bottom row of the figure above: essentially, learning the noise in our diffusion model is equivalent (to a constant factor) to learning the score function, which is the gradient of the log of the probability distribution: ∇ₓ log q(x). As a gradient, the score function represents a vector field with vectors pointing towards the regions of highest probability density. Subtracting the noise at each step is then equivalent to following the directions in this vector field towards regions of high probability density.

As long as there is some signal, the score function effectively guides sampling, but in regions of low probability it tends towards zero as there is little to no gradient to follow. Using many steps to cover different noise levels allows us to avoid this, as we smear out the gradient field at high noise levels, allowing sampling to converge even if we start from low probability density regions of the distribution. The figure shows that as the noise level is increased, more of the domain is covered by the score function vector field.
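The score-following idea can be made concrete in one dimension, where the score of a Gaussian has a simple closed form:

```python
import numpy as np

# For a 1-D Gaussian N(mu, sigma^2) the score has a closed form:
# d/dx log q(x) = -(x - mu) / sigma^2.
mu, sigma = 0.0, 1.0

def score(x):
    return -(x - mu) / sigma**2

# The score field points towards the high-density region around the mean:
# positive to its left, negative to its right, zero at the mean itself.
print(score(-2.0), score(0.0), score(2.0))  # 2.0 0.0 -2.0

# Repeatedly following the score (gradient ascent on the log-density)
# moves a sample into the high-probability region.
x = 3.0
for _ in range(100):
    x += 0.1 * score(x)
print(x)  # very close to the mean, 0.0
```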

Summary

  • The aim of diffusion models is to learn the underlying probability distribution of a dataset and then be able to sample from it. This requires forward and reverse diffusion (noising) processes.
  • The forward noising process takes samples from our dataset and gradually adds Gaussian noise (pushes them off the data manifold). This forward process is computationally efficient because any level of noise can be added in closed form in a single step.
  • The reverse noising process is challenging because we need to predict how to remove the noise at each step without knowing the original data point in advance. We train an ML model to do this by giving it many examples of data noised at different timesteps.
  • Using very small steps in the forward noising process makes it easier for the model to learn to reverse these steps, as the changes are small.
  • By applying the reverse noising process iteratively, the model refines noisy samples step by step, eventually producing a realistic data point (one that lies on the data manifold).

Takeaway

Diffusion models are a powerful framework for learning complex data distributions. The distributions are learnt implicitly by modelling a sequential denoising process. This process can then be used to generate samples similar to those in the training distribution.

Once you’ve trained a model, how do you get useful stuff out of it?

Earlier uses of generative AI such as “This Person Does Not Exist” (ca. 2019) made waves simply because it was the first time most people had seen AI-generated photorealistic human faces. A generative adversarial network or “GAN” was used in that case, but the principle remains the same: the model implicitly learnt an underlying data distribution — in that case, human faces — then sampled from it. So far, our glyffuser model does a similar thing: it samples randomly from the distribution of Chinese glyphs.

The question then arises: can we do something more useful than just sample randomly? You’ve likely already encountered text-to-image models such as Dall-E. They are able to incorporate extra meaning from text prompts into the diffusion process — this is known as conditioning. Likewise, diffusion models for scientific applications like protein (e.g. Chroma, RFdiffusion, AlphaFold3) or inorganic crystal structure generation (e.g. MatterGen) become much more useful if they can be conditioned to generate samples with desirable properties such as a specific symmetry, bulk modulus, or band gap.

Conditional distributions

We can consider conditioning as a way to guide the diffusion sampling process towards particular regions of our probability distribution. We mentioned conditional distributions in the context of forward diffusion. Below we show how conditioning can be thought of as reshaping a base distribution.

A simple example of a joint probability distribution p(x, y), shown as a contour map, along with its two marginal 1-D probability distributions, p(x) and p(y). The highest points of p(x, y) are at (x₁, y₁) and (x₂, y₂). The conditional distributions p(x|y = y₁) and p(x|y = y₂) are shown overlaid on the main plot.

Consider the figure above. Think of p(x) as a distribution we want to sample from (i.e., the images) and p(y) as conditioning information (i.e., the text dataset). These are the marginal distributions of a joint distribution p(x, y). Integrating p(x, y) over y recovers p(x), and vice versa.

Sampling from p(x), we are equally likely to get x₁ or x₂. However, we can condition on y = y₁ to obtain p(x|y = y₁). You can think of this as taking a slice through p(x, y) at a given value of y. In this conditioned distribution, we are much more likely to sample at x₁ than x₂.
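This slicing picture can be reproduced numerically with a toy joint distribution — a mixture of two Gaussian bumps standing in for the figure’s p(x, y):

```python
import numpy as np

# A toy joint distribution p(x, y) on a grid: two Gaussian bumps,
# one at (x1, y1) = (-1, -1) and one at (x2, y2) = (1, 1).
xs = np.linspace(-3, 3, 121)
ys = np.linspace(-3, 3, 121)
X, Y = np.meshgrid(xs, ys)

def bump(cx, cy):
    return np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / 0.5)

p_xy = bump(-1, -1) + bump(1, 1)
p_xy /= p_xy.sum()

# Marginal p(x): sum (integrate) over y. Both peaks survive equally,
# so sampling from p(x) is equally likely to land near x1 or x2.
p_x = p_xy.sum(axis=0)

# Conditional p(x | y = y1): slice the joint at y = -1 and renormalize.
# Now the peak at x = x1 dominates.
row = np.argmin(np.abs(ys + 1.0))       # grid row closest to y1 = -1
p_x_given_y = p_xy[row] / p_xy[row].sum()

print(xs[np.argmax(p_x_given_y)])       # close to x1 = -1.0
```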

In practice, in order to condition on a text dataset, we need to convert the text into a numerical form. We can do this using large language model (LLM) embeddings that can be injected into the noise prediction model during training.

Embedding text with an LLM

In the glyffuser, our conditioning information is in the form of English text definitions. We have two requirements: 1) ML models prefer fixed-length vectors as input. 2) The numerical representation of our text must understand context — if we have the words “lithium” and “element” nearby, the meaning of “element” should be understood as “chemical element” rather than “heating element”. Both of these requirements can be met by using a pre-trained LLM.

The diagram below shows how an LLM converts text into fixed-length vectors. The text is first tokenized (LLMs break text into tokens, small chunks of characters, as their basic unit of interaction). Each token is converted into a base embedding, which is a fixed-length vector of the size of the LLM input. These vectors are then passed through the pre-trained LLM (here we use the encoder portion of Google’s T5 model), where they are imbued with additional contextual meaning. We end up with an array of n vectors of the same length d, i.e. an (n, d) sized tensor.

We can convert text to a numerical embedding imbued with contextual meaning using a pre-trained LLM.

Note: in some models, notably Dall-E, additional image-text alignment is performed using contrastive pretraining. Imagen seems to show that we can get away without doing this.

Training the diffusion model with text conditioning

The exact method that this embedding vector is injected into the model can vary. In Google’s Imagen model, for example, the embedding tensor is pooled (combined into a single vector in the embedding dimension) and added into the data as it passes through the noise prediction model; it is also included in a different way using cross-attention (a method of learning contextual information between sequences of tokens, most famously used in the transformer models that form the basis of LLMs like ChatGPT).

Conditioning information can be added via multiple different methods but the training loss remains the same.

In the glyffuser, we only use cross-attention to introduce this conditioning information. While a significant architectural change is required to introduce this additional information into the model, the loss function for our noise prediction model remains exactly the same.
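A minimal sketch of one such cross-attention step in PyTorch, with random tensors standing in for the image features and text embeddings (the real model wires this into the U-net at multiple resolutions):

```python
import torch
from torch import nn

torch.manual_seed(0)

d = 64          # shared feature dimension
n_pixels = 256  # flattened spatial positions of an image feature map
n_tokens = 12   # length of the text embedding sequence from the LLM encoder

image_feats = torch.randn(1, n_pixels, d)  # queries come from the image
text_embeds = torch.randn(1, n_tokens, d)  # keys/values come from the text

# Cross-attention: every image position attends over the text tokens,
# mixing conditioning information into the noise prediction pathway.
attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)
out, weights = attn(query=image_feats, key=text_embeds, value=text_embeds)

print(out.shape, weights.shape)  # output per pixel; attention over the tokens
```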

Testing the conditioned diffusion model

Let’s do a simple test of the fully trained conditioned diffusion model. In the figure below, we try to denoise in a single step with the text prompt “Gold”. As touched upon in our interactive UMAP, Chinese characters often contain components known as radicals which can convey sound (phonetic radicals) or meaning (semantic radicals). A common semantic radical is derived from the character meaning “gold”, “金”, and is used in characters that are in some broad sense associated with gold or metals.

Even with a single sampling step, conditioning guides denoising towards the relevant regions of the probability distribution.

The figure shows that even though a single step is insufficient to approximate the denoising trajectory very well, we have moved into a region of our probability distribution with the “金” radical. This indicates that the text prompt is effectively guiding our sampling towards a region of the glyph probability distribution related to the meaning of the prompt. The animation below shows a 120 step denoising sequence for the same prompt, “Gold”. You can see that every generated glyph has either the 釒 or 钅 radical (the same radical in traditional and simplified Chinese, respectively).

Takeaway

Conditioning enables us to sample meaningful outputs from diffusion models.

Further remarks

I found that with the help of tutorials and existing libraries, it was possible to implement a working diffusion model despite not having a full understanding of what was going on under the hood. I think this is a good way to start learning and highly recommend Hugging Face’s tutorial on training a simple diffusion model using their diffusers Python library (which now includes my small bugfix!).

I’ve omitted some topics that are crucial to how production-grade diffusion models function, but are unnecessary for core understanding. One is the question of how to generate high resolution images. In our example, we did everything in pixel space, but this becomes very computationally expensive for large images. The general approach is to perform diffusion in a smaller space, then upscale it in a separate step. Methods include latent diffusion (used in Stable Diffusion) and cascaded super-resolution models (used in Imagen). Another topic is classifier-free guidance, a very elegant method for boosting the conditioning effect to give much better prompt adherence. I show the implementation in my previous post on the glyffuser and highly recommend this article if you want to learn more.

Further reading

A non-exhaustive list of materials I found very helpful:

Fun extras

Diffusion sampling using the DPMSolverSDEScheduler developed by Katherine Crowson and implemented in Hugging Face diffusers—note the smooth transition from noise to data.


Read More »

Russian Oil Revenues Plunge to 5 Year Low

The Russian government’s oil revenues collapsed to the lowest in more than five years in January as weaker global prices, steeper discounts for the nation’s barrels, and a stronger currency took a toll on the budget. Oil-related taxes halved to 281.7 billion rubles ($3.7 billion) last month from a year earlier, according to Bloomberg calculations based on finance ministry data published Wednesday. Combined oil and gas revenue also declined by 50%, to 393.3 billion rubles.  Lower proceeds from the two industries, which between them contribute about a quarter of the budget, will put more strain on the nation’s coffers as the war in Ukraine drags toward a fifth year with little sign of ending.  Brent oil futures were 15% lower year on year for the fiscal period, but US sanctions made the market downturn even worse for Russia. January’s oil revenue was the lowest since June 2020. The nation’s flagship grade Urals traded at about $26 a barrel below Dated Brent, a benchmark for physical oil trades, at the point of export. That compares with over $12 below the same marker a year earlier, data from Argus Media show.  The discounts ballooned following the US blacklisting of Rosneft PJSC and Lukoil PJSC, Russia’s two largest producers, measures that were announced in October. This week, US President Donald Trump said the US would cut import tariffs for goods from India — a major buyer of Russian crude — in exchange for New Delhi halting purchases of oil from Moscow. It’s not clear the extent to which India will cut back in practice. Russia’s finance ministry calculated oil revenue based on the average price of Urals of $39.18 a barrel in December, a 38% drop from a year earlier. That’s much lower than the government assumed when planned nation’s budget for this year and expected crude

Read More »

Eneos to Expand Oil Trading Portfolio Outside Japan

Eneos Holdings Inc. plans to expand its team to handle more oil-derivative trading at its overseas offices including Singapore, as Japan’s largest refiner looks to increase its presence at major trading hubs. The company intends to trade more oil derivatives, arbitrages and time spreads, as well as other paper market instruments, according to people familiar with the matter. They asked not to be named as they aren’t authorized to speak to the media.  Eneos will hire traders, as well as other executives in middle and back office roles, said people with knowledge of those plans. Kenneth Quek, a former trader from Mercuria Energy Group, recently joined in Singapore to focus on crude and related derivatives.  A company spokesperson didn’t respond to a request for comment during office hours. Some of these roles may be filled by internal candidates. The beefing up of its trading presence is part of a broader push to create more value across business sectors, including a bid for overseas assets such as Chevron Corp.’s stake in a Singapore oil refinery. Bloomberg previously reported that Eneos was a frontrunner in the process, ahead of rivals including trading houses Glencore Plc and Vitol Group. Oil markets have kicked off the year with a high level of volatility as geopolitical risks ran ahead of market glut concerns. India’s state-owned refiner Bharat Petroleum Corp. is also planning to set up a trading arm in Singapore this month. Eneos has a market capitalization of 3.6 trillion yen ($23 billion), making it Japan’s largest oil processor following years of consolidation in the country’s wider petroleum sector. It acquired renewable energy assets in recent years, and sold off its copper mining assets. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review.

Read More »

ADNOC, TAQA Pen 27 Year TA’ZIZ Deal

In a statement posted on ADNOC’s website recently, ADNOC and Abu Dhabi National Energy Company PJSC (TAQA) announced the signing of a 27 year utilities purchase agreement to supply “critical utilities” to the TA’ZIZ Industrial Chemicals Zone in Ruwais Industrial City, Abu Dhabi. The value of the deal was not disclosed in the statement, which noted that the duration of the agreement includes the offtake of the utilities and construction of the plant. Under the deal, ADNOC and TAQA will jointly develop the central utilities project, including the electricity grid connection, steam production, process cooling, and a range of water and wastewater utilities required to enable TA’ZIZ’s chemicals and transition-fuels projects, the statement revealed. The statement said TA’ZIZ, which is a joint venture between ADNOC and ADQ, will set up and own a service management company which will be the sole offtaker of the utilities, “providing a stable foundation for efficient industrial activity within the TA’ZIZ Industrial Chemicals Zone”. The statement noted that the agreement “marks a significant milestone in the development of the TA’ZIZ ecosystem”. “TA’ZIZ is set to accelerate the UAE’s industrial diversification and is set to produce 4.7 million tons per annum (MTPA) commencing in 2028. This will include methanol, low-carbon ammonia, polyvinyl chloride (PVC), ethylene dichloride (EDC), vinyl chloride monomer (VCM), and caustic soda,” it added. “TAQA’s Generation business continues to expand its regional portfolio with several major projects, including the 1-gigawatt Al Dhafra Gas Turbine project in the UAE and 3.6 GW new high-efficiency power plants – Rumah 2 IPP and Al Nairyah 2 IPP – in Saudi Arabia, being developed alongside partners JERA and AlBawani,” it continued. In the statement, Farid Al Awlaqi, Chief Executive Officer of TAQA’s Generation business, said, “this agreement strengthens TAQA’s role in enabling industrial growth in the UAE by

Read More »

Texas Upstream Employment Rises

Employment in the Texas upstream sector increased between November and December 2025. That’s what the Texas Independent Producers and Royalty Owners Association (TIPRO) said in a statement sent to Rigzone on Friday, which cited the latest Current Employment Statistics (CES) report from the U.S. Bureau of Labor Statistics (BLS) at the time. TIPRO highlighted in the statement that oil and natural gas extraction jobs rose by 500, or 0.7 percent, month on month, to 70,200, and support activities employment grew by 1,500, or 1.1 percent month on month, to 133,200. TIPRO reported in the statement that combined upstream employment increased by 2,000 jobs, or 1.0 percent month on month, to 203,400. “From January to December 2025, employment in the Texas upstream sector showed early gains followed by later fluctuations,” TIPRO said in the statement. “Oil and Gas Extraction added a net 2,000 jobs (+2.9 percent), reaching a peak of 70,200 in June, July, and December, driven by robust Permian production despite market pressures,” it added. “Support Activities employment recorded a net loss of 2,100 jobs (-1.6 percent), with a February0May surge (+2,800) partially offset by mid-year declines (-3,400 in June-July) and subsequent volatility, reflecting rig count reductions and service sector adjustments,” it continued. “Combined, the sectors ended essentially flat, with a net change of -100 jobs (-0.05 percent), reaching 203,400 by December and underscoring the industry’s critical yet volatile role in sustaining Texas’ energy workforce,” TIPRO noted. In the statement, TIPRO said its workforce data “continues to indicate strong job postings for the Texas oil and natural gas industry in December” but added that analysis “revealed a continued decline in Q4 driven by lower oil prices, industry consolidation, and ongoing efficiency gains, which allow companies to maintain or increase production with reduced hiring activity”. 
There were 7,887 unique industry job postings in Texas during the

Read More »

Azure outage disrupts VMs and identity services for over 10 hours

After multiple infrastructure scale-up attempts failed to handle the backlog and retry volumes, Microsoft ultimately removed traffic from the affected service to repair the underlying infrastructure without load. “The outage didn’t just take websites offline, but it halted development workflows and disrupted real-world operations,” said Pareekh Jain, CEO at EIIRTrend & Pareekh Consulting. Cloud outages on the rise Cloud outages have become more frequent in recent years, with major providers such as AWS, Google Cloud, and IBM all experiencing high-profile disruptions. AWS services were severely impacted for more than 15 hours when a DNS problem rendered the DynamoDB API unreliable. In November, a bad configuration file in Cloudflare’s Bot Management system led to intermittent service disruptions across several online platforms. In June, an invalid automated update disrupted the company’s identity and access management (IAM) system, resulting in users being unable to use Google to authenticate on third-party apps. “The evolving data center architecture is shaped by the shift to more demanding, intricate workloads driven by the new velocity and variability of AI. This rapid expansion is not only introducing complexities but also challenging existing dependencies. So any misconfiguration or mismanagement at the control layer can disrupt the environment,” said Neil Shah, co-founder and VP at Counterpoint Research. Preparing for the next cloud incident This is not an isolated incident. For CIOs, the event only reinforces the need to rethink resilience strategies. In the immediate aftermath when a hyperscale dependency fails, waiting is not a recommended strategy for CIOs, and they should focus on a strategy of stabilize, prioritize, and communicate, stated Jain. 
“First, stabilize by declaring a formal cloud incident with a single incident commander, quickly determining whether the issue affects control-plane operations or running workloads, and freezing all non-essential changes such as deployments and infrastructure updates.”

Read More »

Intel sets sights on data center GPUs amid AI-driven infrastructure shifts

Supply chain reliability is another underappreciated advantage. Hyperscalers want a credible second source, but only if Intel can offer stable, predictable roadmaps across multiple product generations. However, the company runs into a major constraint at the software layer. “The decisive bottleneck is software,” Rawat said. “CUDA functions as an industry operating standard, embedded across models, pipelines, and DevOps. Intel’s challenge is to prove that migration costs are low, and that ongoing optimization does not become a hidden engineering tax.” For enterprise buyers, that software gap translates directly into switching risk. Tighter integration of Intel CPUs, GPUs, and networking could improve system-level efficiency for enterprises and cloud providers, but the dominance of the CUDA ecosystem remains the primary barrier to switching, said Charlie Dai, VP and principal analyst at Forrester. “Even with strong hardware integration, buyers will hesitate without seamless compatibility with mainstream ML/DL frameworks and tooling,” Dai added.

Read More »

8 hot networking trends for 2026

Recurring license fees may have dissuaded enterprises from adopting AIOps in the past, but that’s changing, Morgan adds: “Over the past few years, vendors have added features and increased the value of those licenses, including 24×7 support. Now, by paying the equivalent of a fraction of a network engineer’s salary in license fees, a mid-sized enterprise can reduce hours spent on operations and level-one support in order to allocate more of their valuable networking experts’ time to AI projects. Every enterprise’s business case will be different, but with networking expertise in high demand, we predict that in 2026, the labor savings will outweigh the additional license costs for the majority of mid-to-large sized enterprises.” 2. AI boosts data center networking investments Enterprise data centers, which not so long ago were on the endangered species list, have made a remarkable comeback, driven by the reality that many AI workloads need to be hosted on premises, either for privacy, security, regulatory, latency or cost considerations. The global market for data center networking technologies was estimated at around $46 billion in 2025 and is projected to reach $103 billion by the end of 2030, a growth rate of nearly 18%, according to BCC Research: “The data center networking technologies market is rapidly changing due to increasing use of AI-powered solutions across data centers and sectors like telecom, IT, banking, financial services, insurance, government and commercial industries.” McKinsey predicts that global demand for data center capacity could nearly triple by 2030, with about 70% of that demand coming from AI workloads. McKinsey says both training and inference workloads are contributing to data center growth, with inference expected to become the dominant workload by 2030. 3. Private clouds roll in Clearly, the hyperscalers are driving most of the new data center construction, but enterprises are

Read More »

Cisco: Infrastructure, trust, model development are key AI challenges

“The G200 chip was for the scale out, because what’s happening now is these models are getting bigger where they don’t just fit within a single data center. You don’t have enough power to just pull into a single data center,” Patel said. “So now you need to have data centers that might be hundreds of kilometers apart, that operate like an ultra-cluster that are coherent. And so that requires a completely different chip architecture to make sure that you have capabilities like deep buffering and so on and so forth… You need to make sure that these data centers can be scaled across physical boundaries.”  “In addition, we are reaching the physical limits of copper and optics, and coherent optics especially are going to be extremely important as we go start building out this data center infrastructure. So that’s an area that you’re starting to see a tremendous amount of progress being made,” Patel said. The second constraint is the AI trust deficit, Patel said. “We currently need to make sure that these systems are trusted by the people that are using them, because if you don’t trust these systems, you’ll never use them,” Patel said. “This is the first time that security is actually becoming a prerequisite for adoption. In the past, you always ask the question whether you want to be secure, or you want to be productive. And those were kind of needs that offset each other,” Patel said. “We need to make sure that we trust not just using AI for cyber defense, but we trust AI itself,” Patel said. The third constraint is the notion of a data gap. AI models get trained on human-generated data that’s publicly available on the Internet, but “we’re running out,” Patel said. “And what you’re starting to see happen

Read More »

How Robotics Is Re-Engineering Data Center Construction and Operations

Physical AI: A Reusable Robotics Stack for Data Center Operations This is where the recent collaboration between Multiply Labs and NVIDIA becomes relevant, even though the application is biomanufacturing rather than data centers. Multiply Labs has outlined a robotics approach built on three core elements: Digital twins using NVIDIA Isaac Sim to model hardware and validate changes in simulation before deployment. Foundation-model-based skill learning via NVIDIA Isaac GR00T, enabling robots to generalize tasks rather than rely on brittle, hard-coded behaviors. Perception pipelines including FoundationPose and FoundationStereo, that convert expert demonstrations into structured training data. Taken together, this represents a reusable blueprint for data center robotics. Applying the Lesson to Data Center Environments The same physical-AI techniques now being applied in lab and manufacturing environments map cleanly onto the realities of data center operations, particularly where safety, uptime, and variability intersect. Digital-twin-first deployment Before a robot ever enters a live data hall, it needs to be trained in simulation. That means modeling aisle geometry, obstacles, rack layouts, reflective surfaces, and lighting variation; along with “what if” scenarios such as blocked aisles, emergency egress conditions, ladders left in place, or spill events. Simulation-first workflows make it possible to validate behavior and edge cases before introducing any new system into a production environment. Skill learning beats hard-coded rules Data centers appear structured, but in practice they are full of variability: temporary cabling, staged parts, mixed-vendor racks, and countless human exceptions. Foundation-model approaches to manipulation are designed to generalize across that messiness far better than traditional rule-based automation, which tends to break when conditions drift even slightly from the expected state. 
Imitation learning captures tribal knowledge Many operational tasks rely on tacit expertise developed over years in the field, such as how to manage stiff patch cords, visually confirm latch engagement, or stage a

Read More »

Applied Digital CEO Wes Cummins On the Hard Part of the AI Boom: Execution

Designing for What Comes After the Current AI Cycle Applied Digital’s design philosophy starts with a premise many developers still resist: today’s density assumptions may not hold. “We’re designing for maximum flexibility for the future—higher density power, lower density power, higher voltage delivery, and more floor space,” Cummins said. “It’s counterintuitive because densities are going up, but we don’t know what comes next.” That choice – to allocate more floor space even as rack densities climb – signals a long-view approach. Facilities are engineered to accommodate shifts in voltage, cooling topology, and customer requirements without forcing wholesale retrofits. Higher-voltage delivery, mixed cooling configurations, and adaptable data halls are baked in from the start. The goal is not to predict the future perfectly, Cummins stressed, but to avoid painting infrastructure into a corner. Supply Chain as Competitive Advantage If flexibility is the design thesis, supply chain control is the execution weapon. “It’s a huge advantage that we locked in our MEP supply chain 18 to 24 months ago,” Cummins said. “It’s a tight environment, and more timelines are going to get missed in 2026 because of it.” Applied Digital moved early to secure long-lead mechanical, electrical, and plumbing components; well before demand pressure fully rippled through transformers, switchgear, chillers, generators, and breakers. That foresight now underpins the company’s ability to make credible delivery commitments while competitors confront procurement bottlenecks. Cummins was blunt: many delays won’t stem from poor planning, but from simple unavailability. From 100 MW to 700 MW Without Losing Control The past year marked a structural pivot for Applied Digital. What began as a single, 100-megawatt “field of dreams” facility in North Dakota has become more than 700 MW under construction, with expansion still ahead. “A hundred megawatts used to be considered scale,” Cummins said. 
“Now we’re at 700

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. 
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »