A Visual Guide to How Diffusion Models Work

This article is aimed at those who want to understand exactly how Diffusion Models work, with no prior knowledge expected. I’ve tried to use illustrations wherever possible to provide visual intuitions on each part of these models. I’ve kept mathematical notation and equations to a minimum, and where they are necessary I’ve tried to define and explain them as they occur.

Intro

I’ve framed this article around three main questions:

  • What exactly is it that diffusion models learn?
  • How and why do diffusion models work?
  • Once you’ve trained a model, how do you get useful stuff out of it?

The examples will be based on the glyffuser, a minimal text-to-image diffusion model that I previously implemented and wrote about. The architecture of this model is a standard text-to-image denoising diffusion model without any bells or whistles. It was trained to generate pictures of new “Chinese” glyphs from English definitions. Have a look at the picture below — even if you’re not familiar with Chinese writing, I hope you’ll agree that the generated glyphs look pretty similar to the real ones!

Random examples of glyffuser training data (left) and generated data (right).

What exactly is it that diffusion models learn?

Generative AI models are often said to take a big pile of data and “learn” it. For text-to-image diffusion models, the data takes the form of pairs of images and descriptive text. But what exactly is it that we want the model to learn? First, let’s forget about the text for a moment and concentrate on what we are trying to generate: the images.

Probability distributions

Broadly, we can say that we want a generative AI model to learn the underlying probability distribution of the data. What does this mean? Consider the one-dimensional normal (Gaussian) distribution below, commonly written 𝒩(μ,σ²) and parameterized with mean μ = 0 and variance σ² = 1. The black curve below shows the probability density function. We can sample from it: drawing values such that over a large number of samples, the set of values reflects the underlying distribution. These days, we can simply write something like x = random.gauss(0, 1) in Python to sample from the standard normal distribution, although the computational sampling process itself is non-trivial!

Values sampled from an underlying distribution (here, the standard normal 𝒩(0,1)) can then be used to estimate the parameters of that distribution.

We could think of a set of numbers sampled from the above normal distribution as a simple dataset, like that shown as the orange histogram above. In this particular case, we can calculate the parameters of the underlying distribution using maximum likelihood estimation, i.e. by working out the mean and variance. The normal distribution estimated from the samples is shown by the dotted line above. To take some liberties with terminology, you might consider this as a simple example of “learning” an underlying probability distribution. We can also say that here we explicitly learnt the distribution, in contrast with the implicit methods that diffusion models use.
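As a minimal sketch of this explicit “learning” in Python (the exact numbers will vary from run to run):

```python
import random
import statistics

# Draw a simple "dataset" from the standard normal distribution N(0, 1)
samples = [random.gauss(0, 1) for _ in range(10_000)]

# For a Gaussian, the maximum likelihood estimates of the parameters
# are simply the sample mean and the (population) sample variance
mu_hat = statistics.fmean(samples)
sigma2_hat = statistics.pvariance(samples, mu=mu_hat)

print(f"mean = {mu_hat:.3f}, variance = {sigma2_hat:.3f}")  # close to 0 and 1
```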

Conceptually, this is all that generative AI is doing — learning a distribution, then sampling from that distribution!

Data representations

What, then, does the underlying probability distribution of a more complex dataset look like, such as that of the image dataset we want to use to train our diffusion model?

First, we need to know what the representation of the data is. Generally, a machine learning (ML) model requires data inputs with a consistent representation, i.e. format. For the example above, it was simply numbers (scalars). For images, this representation is commonly a fixed-length vector.

The image dataset used for the glyffuser model is ~21,000 pictures of Chinese glyphs. The images are all the same size, 128 × 128 = 16384 pixels, and greyscale (single-channel color). Thus an obvious choice for the representation is a vector x of length 16384, where each element corresponds to the color of one pixel: x = (x₁, x₂, …, x₁₆₃₈₄). We can call the domain of all possible images for our dataset “pixel space”.

An example glyph with pixel values labelled (downsampled to 32 × 32 pixels for readability).
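In code, going from an image to this vector representation is just a flattening operation; a sketch with NumPy, using a random array as a stand-in for a real glyph:

```python
import numpy as np

img = np.random.rand(128, 128)  # stand-in for a 128 x 128 greyscale glyph

# The vector representation is the flattened array of pixel values
x = img.reshape(-1)
print(x.shape)  # (16384,)
```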

Dataset visualization

We make the assumption that our individual data samples, x, are actually sampled from an underlying probability distribution, q(x), in pixel space, much as the samples from our first example were sampled from an underlying normal distribution in 1-dimensional space. Note: the notation x ∼ q(x) is commonly used to mean: “the random variable x sampled from the probability distribution q(x).”

This distribution is clearly much more complex than a Gaussian and cannot be easily parameterized — we need to learn it with an ML model, which we’ll discuss later. First, let’s try to visualize the distribution to gain a better intuition.

As humans find it difficult to see in more than 3 dimensions, we need to reduce the dimensionality of our data. A small digression on why this works: the manifold hypothesis posits that natural datasets lie on lower dimensional manifolds embedded in a higher dimensional space — think of a line embedded in a 2-D plane, or a plane embedded in 3-D space. We can use a dimensionality reduction technique such as UMAP to project our dataset from 16384 to 2 dimensions. The 2-D projection retains a lot of structure, consistent with the idea that our data lie on a lower dimensional manifold embedded in pixel space. In our UMAP, we see two large clusters corresponding to characters in which the components are arranged either horizontally (e.g. 明) or vertically (e.g. 草). An interactive version of the plot below with popups on each datapoint is linked here.

Let’s now use this low-dimensional UMAP dataset as a visual shorthand for our high-dimensional dataset. Remember, we assume that these individual points have been sampled from a continuous underlying probability distribution q(x). To get a sense of what this distribution might look like, we can apply a KDE (kernel density estimation) over the UMAP dataset. (Note: this is just an approximation for visualization purposes.)
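As a sketch of how such a visualization might be produced (assuming the glyph images are stacked into an array X of shape (21000, 16384), and using the umap-learn and SciPy packages):

```python
import numpy as np
import umap                       # pip install umap-learn
from scipy.stats import gaussian_kde

X = np.random.rand(21000, 16384)  # stand-in for the real glyph dataset

# Project from 16384-dimensional pixel space down to 2 dimensions
embedding = umap.UMAP(n_components=2).fit_transform(X)  # shape (21000, 2)

# Approximate the density q(x) in the 2-D projection with a KDE
kde = gaussian_kde(embedding.T)   # gaussian_kde expects shape (dims, n)
density = kde(embedding.T)        # density estimate at each projected point
```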

This gives a sense of what q(x) should look like: clusters of glyphs correspond to high-probability regions of the distribution. The true q(x) lies in 16384 dimensions — this is the distribution we want to learn with our diffusion model.

We showed that for a simple distribution such as the 1-D Gaussian, we could calculate the parameters (mean and variance) from our data. However, for complex distributions such as those of image datasets, we need to call on ML methods. Moreover, as we will see, diffusion models in practice do not parameterize the distribution directly; instead, they learn it implicitly through the process of learning how to transform noise into data over many steps.

Takeaway

The aim of generative AI such as diffusion models is to learn the complex probability distributions underlying their training data and then sample from these distributions.

How and why do diffusion models work?

Diffusion models have recently come into the spotlight as a particularly effective method for learning these probability distributions. They generate convincing images by starting from pure noise and gradually refining it. To whet your interest, have a look at the animation below that shows the denoising process generating 16 samples.

In this section we’ll only talk about the mechanics of how these models work but if you’re interested in how they arose from the broader context of generative models, have a look at the further reading section below.

What is “noise”?

Let’s first precisely define noise, since the term is thrown around a lot in the context of diffusion. In particular, we are talking about Gaussian noise: consider the samples we talked about in the section about probability distributions. You could think of each sample as an image of a single pixel of noise. An image that is “pure Gaussian noise”, then, is one in which each pixel value is sampled from an independent standard Gaussian distribution, 𝒩(0,1). For a pure noise image in the domain of our glyph dataset, this would be noise drawn from 16384 separate Gaussian distributions. You can see this in the previous animation. One thing to keep in mind is that we can choose the means of these noise distributions, i.e. center them, on specific values — the pixel values of an image, for instance.

For convenience, you’ll often find the noise distributions for image datasets written as a single multivariate distribution 𝒩(0,I) where I is the identity matrix, a covariance matrix with all diagonal entries equal to 1 and zeroes elsewhere. This is simply a compact notation for a set of multiple independent Gaussians — i.e. there are no correlations between the noise on different pixels. In the basic implementations of diffusion models, only uncorrelated (a.k.a. “isotropic”) noise is used. This article contains an excellent interactive introduction on multivariate Gaussians.
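Sampling such a pure-noise image is then a one-line operation; a sketch with NumPy:

```python
import numpy as np

# Each of the 128 x 128 pixels is an independent draw from N(0, 1),
# equivalently one sample from the 16384-dimensional multivariate N(0, I)
noise_image = np.random.randn(128, 128)
```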

Diffusion process overview

Below is an adaptation of the somewhat-famous diagram from Ho et al.’s seminal paper “Denoising Diffusion Probabilistic Models” which gives an overview of the whole diffusion process:

Diagram of the diffusion process adapted from Ho et al. 2020. The glyph 锂, meaning “lithium”, is used as a representative sample from the dataset.

I found that there was a lot to unpack in this diagram and simply understanding what each component meant was very helpful, so let’s go through it and define everything step by step.

We previously used x ∼ q(x) to refer to our data. Here, we’ve added a subscript, xₜ, to denote the timestep t, indicating how many steps of “noising” have taken place. Samples noised to a given timestep are written xₜ ∼ q(xₜ). x₀ is clean data and xₜ (t = T) ∼ 𝒩(0, I) is pure noise.

We define a forward diffusion process whereby we corrupt samples with noise. This process is described by the distribution q(xₜ|xₜ₋₁). If we could access the hypothetical reverse process q(xₜ₋₁|xₜ), we could generate samples from noise. As we cannot access it directly, because we would need to know x₀, we use ML to learn the parameters, θ, of a model of this process, pθ(xₜ₋₁|xₜ).

In the following sections we go into detail on how the forward and reverse diffusion processes work.

Forward diffusion, or “noising”

Used as a verb, “noising” an image refers to applying a transformation that moves it towards pure noise by scaling down its pixel values toward 0 while adding proportional Gaussian noise. Mathematically, this transformation is a multivariate Gaussian distribution centered on the pixel values of the preceding image.

In the forward diffusion process, this noising distribution is written as q(xₜ|xₜ₋₁), where the vertical bar symbol “|” is read as “given” or “conditional on”, indicating that the pixel means are passed forward from q(xₜ₋₁). At t = T, where T is a large number (commonly 1000), we aim to end up with images of pure noise (which, somewhat confusingly, is also a Gaussian distribution, as discussed previously).

The marginal distributions q(xₜ) represent the distributions that have accumulated the effects of all the previous noising steps (marginalization refers to integration over all possible conditions, which recovers the unconditioned distribution).

Since the conditional distributions are Gaussian, what determines their variances? They are set by a variance schedule that maps timesteps to variance values. Ho et al. initially presented an empirically determined schedule of values increasing linearly from 0.0001 to 0.02 over 1000 steps. Later research by Nichol & Dhariwal suggested an improved cosine schedule, noting that a schedule is most effective when the rate of information destruction through noising is relatively even per step throughout the whole noising process.
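A sketch of both schedules (the linear values are those given in Ho et al.; the cosine form follows Nichol & Dhariwal, with their suggested offset s = 0.008):

```python
import numpy as np

T = 1000

# Linear schedule (Ho et al. 2020): variances rise from 1e-4 to 0.02
betas_linear = np.linspace(1e-4, 0.02, T)

# Cosine schedule (Nichol & Dhariwal 2021): define a cosine curve on the
# cumulative signal level alpha_bar, then convert it back to per-step betas
s = 0.008
steps = np.arange(T + 1)
f = np.cos((steps / T + s) / (1 + s) * np.pi / 2) ** 2
alpha_bar = f / f[0]
betas_cosine = np.clip(1 - alpha_bar[1:] / alpha_bar[:-1], 0, 0.999)
```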

Forward diffusion intuition

As we encounter Gaussian distributions both as pure noise (q(xₜ) at t = T) and as the noising distribution q(xₜ|xₜ₋₁), I’ll try to draw the distinction by giving a visual intuition of the distribution for a single noising step, q(x₁∣x₀), for some arbitrary, structured 2-dimensional data:

Each noising step q(xₜ|xₜ₋₁) is a Gaussian distribution conditioned on the previous step.

The distribution q(x₁∣x₀) is Gaussian, centered around each point in x₀, shown in blue. Several example points x₀⁽ⁱ⁾ are picked to illustrate this, with q(x₁∣x₀ = x₀⁽ⁱ⁾) shown in orange.

In practice, the main usage of these distributions is to generate specific instances of noised samples for training (discussed further below). We can calculate the parameters of the noising distributions at any timestep t directly from the variance schedule, as the chain of Gaussians is itself also Gaussian. This is very convenient, as we don’t need to perform noising sequentially—for any given starting data x₀⁽ⁱ⁾, we can calculate the noised sample xₜ⁽ⁱ⁾ by sampling from q(xₜ∣x₀ = x₀⁽ⁱ⁾) directly.
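Concretely, in the standard DDPM formulation this closed form is xₜ = √ᾱₜ·x₀ + √(1−ᾱₜ)·ε with ε ∼ 𝒩(0, I), where ᾱₜ is the cumulative product of (1 − βₜ) over the variance schedule. A sketch:

```python
import numpy as np

betas = np.linspace(1e-4, 0.02, 1000)   # the linear variance schedule
alpha_bar = np.cumprod(1.0 - betas)     # cumulative "signal level" at each t

def noise_to_timestep(x0, t):
    """Sample x_t ~ q(x_t | x_0) in a single step, no sequential noising needed."""
    eps = np.random.randn(*x0.shape)    # fresh Gaussian noise
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps

x0 = np.random.rand(128, 128)           # stand-in for a clean glyph
x500 = noise_to_timestep(x0, 500)       # noised to t = 500 in one step
```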

Forward diffusion visualization

Let’s now return to our glyph dataset (once again using the UMAP visualization as a visual shorthand). The top row of the figure below shows our dataset sampled from distributions noised to various timesteps: xₜ ∼ q(xₜ). As we increase the number of noising steps, you can see that the dataset begins to resemble pure Gaussian noise. The bottom row visualizes the underlying probability distribution q(xₜ).

The dataset xₜ (above) sampled from its probability distribution q(xₜ) (below) at different noising timesteps.

Reverse diffusion overview

It follows that if we knew the reverse distributions q(xₜ₋₁∣xₜ), we could repeatedly subtract a small amount of noise, starting from a pure noise sample xₜ at t = T to arrive at a data sample x₀ ∼ q(x₀). In practice, however, we cannot access these distributions without knowing x₀ beforehand. Intuitively, it’s easy to make a known image much noisier, but given a very noisy image, it’s much harder to guess what the original image was.

So what are we to do? Since we have a large amount of data, we can train an ML model to accurately guess the original image that any given noisy image came from. Specifically, we learn the parameters θ of an ML model that approximates the reverse noising distributions, pθ(xₜ₋₁ ∣ xₜ), for t = 1, …, T. In practice, this is embodied in a single noise prediction model trained over many different samples and timesteps. This allows it to denoise any given input, as shown in the figure below.

The ML model predicts added noise at any given timestep t.

Next, let’s go over how this noise prediction model is implemented and trained in practice.

How the model is implemented

First, we define the ML model — generally a deep neural network of some sort — that will act as our noise prediction model. This is what does the heavy lifting! In practice, any ML model that inputs and outputs data of the correct size can be used; the U-net, an architecture particularly suited to learning images, is what we use here and is a frequent choice in practice. More recent models also use vision transformers.

We use the U-net architecture (Ronneberger et al. 2015) for our ML noise prediction model. We train the model by minimizing the difference between predicted and actual noise.

Then we run the training loop depicted in the figure above:

  • We take a random image from our dataset and noise it to a random timestep t. (In practice, we speed things up by doing many examples in parallel!)
  • We feed the noised image into the ML model and train it to predict the (known to us) noise in the image. We also perform timestep conditioning by feeding the model a timestep embedding, a high-dimensional unique representation of the timestep, so that the model can distinguish between timesteps. This can be a vector the same size as our image directly added to the input (see here for a discussion of how this is implemented).
  • The model “learns” by minimizing the value of a loss function, some measure of the difference between the predicted and actual noise. The mean square error (the mean of the squares of the pixel-wise difference between the predicted and actual noise) is used in our case.
  • Repeat until the model is well trained. (This loop is sketched in code below.)
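Here is a minimal sketch of this loop in PyTorch. The model(x_t, t) interface, taking the noised image and the timestep, is an assumption for illustration; real implementations such as the diffusers U-net work along similar lines:

```python
import torch
import torch.nn.functional as F

betas = torch.linspace(1e-4, 0.02, 1000)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, x0):
    """One step of the denoising diffusion training loop (sketch).

    x0: a batch of clean images, shape (B, C, H, W).
    """
    t = torch.randint(0, len(alpha_bar), (x0.shape[0],))   # random timestep per image
    eps = torch.randn_like(x0)                             # the noise we add (known to us)
    a = alpha_bar[t].view(-1, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps             # noise each image in one step
    eps_pred = model(x_t, t)                               # model predicts the added noise
    return F.mse_loss(eps_pred, eps)                       # mean square error loss
```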

Note: A neural network is essentially a function with a huge number of parameters (on the order of 10⁸ for the glyffuser). Neural network ML models are trained by iteratively updating their parameters using backpropagation to minimize a given loss function over many training data examples. This is an excellent introduction. These parameters effectively store the network’s “knowledge”.

A noise prediction model trained in this way eventually sees many different combinations of timesteps and data examples. The glyffuser, for example, was trained over 100 epochs (runs through the whole data set), so it saw around 2 million data samples. Through this process, the model implicitly learns the reverse diffusion distributions over the entire dataset at all different timesteps. This allows the model to sample the underlying distribution q(x₀) by stepwise denoising starting from pure noise. Put another way, given an image noised to any given level, the model can predict how to reduce the noise based on its guess of what the original image was. By doing this repeatedly, updating its guess of the original image each time, the model can transform any noise to a sample that lies in a high-probability region of the underlying data distribution.

Reverse diffusion in practice

We can now revisit this video of the glyffuser denoising process. Recall that a large number of steps from sample to noise (e.g. T = 1000) is used during training to make the noise-to-sample trajectory very easy for the model to learn, as changes between steps will be small. Does that mean we need to run 1000 denoising steps every time we want to generate a sample?

Luckily, this is not the case. Essentially, we can run the single-step noise prediction but then rescale it to any given step, although it might not be very good if the gap is too large! This allows us to approximate the full sampling trajectory with fewer steps. The video above uses 120 steps, for instance (most implementations will allow the user to set the number of sampling steps).

Recall that predicting the noise at a given step is equivalent to predicting the original image x₀, and that we can write the noised image at any timestep in closed form using only the variance schedule and x₀. Thus, given any denoising step, we can calculate xₜ₋ₖ. The closer the steps are, the better the approximation will be.
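A sketch of one such rescaled step. This is essentially the deterministic DDIM update (predict the noise, reconstruct an estimate of x₀, then re-noise that estimate to the earlier timestep), which is one simple way to skip steps; it is not necessarily the exact sampler the glyffuser uses:

```python
import torch

def denoise_step(model, x_t, t, t_prev, alpha_bar):
    """Jump from timestep t to an earlier timestep t_prev in one step (sketch)."""
    eps_pred = model(x_t, torch.tensor([t]))                    # predicted noise
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    x0_pred = (x_t - (1 - a_t).sqrt() * eps_pred) / a_t.sqrt()  # estimate of x0
    # Re-noise the x0 estimate to the (less noisy) timestep t_prev
    return a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps_pred
```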

Too few steps, however, and the results become worse as the steps become too large for the model to effectively approximate the denoising trajectory. If we only use 5 sampling steps, for example, the sampled characters don’t look very convincing at all:

There is then a whole literature on more advanced sampling methods beyond what we’ve discussed so far, allowing effective sampling with far fewer steps. These often reframe the sampling as a differential equation to be solved deterministically, giving an eerie quality to the sampling videos — I’ve included one at the end if you’re interested. In production-level models, these are usually preferred over the simple method discussed here, but the basic principle of deducing the noise-to-sample trajectory is the same. A full discussion is beyond the scope of this article but see e.g. this paper and its corresponding implementation in the Hugging Face diffusers library for more information.

Alternative intuition from score function

To me, it was still not 100% clear why training the model on noise prediction generalises so well. I found that an alternative interpretation of diffusion models known as “score-based modeling” filled some of the gaps in intuition (for more information, refer to Yang Song’s definitive article on the topic.)

The dataset xₜ sampled from its probability distribution q(xₜ) at different noising timesteps; below, we add the score function ∇ₓ log q(xₜ).

I try to give a visual intuition in the bottom row of the figure above: essentially, learning the noise in our diffusion model is equivalent (up to a constant factor) to learning the score function, which is the gradient of the log of the probability distribution: ∇ₓ log q(x). As a gradient, the score function represents a vector field with vectors pointing towards the regions of highest probability density. Subtracting the noise at each step is then equivalent to following the directions of this vector field towards regions of high probability density.
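To make the “constant factor” concrete: in the DDPM setup, the noising distribution is q(xₜ|x₀) = 𝒩(√ᾱₜ·x₀, (1−ᾱₜ)I). Taking the gradient of the log of this Gaussian density gives

∇ₓ log q(xₜ|x₀) = −(xₜ − √ᾱₜ·x₀)/(1 − ᾱₜ) = −ε/√(1 − ᾱₜ),

using xₜ = √ᾱₜ·x₀ + √(1−ᾱₜ)·ε. A model trained to predict the noise ε is therefore, up to the factor −1/√(1 − ᾱₜ), predicting the score.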

As long as there is some signal, the score function effectively guides sampling, but in regions of low probability it tends towards zero as there is little to no gradient to follow. Using many steps to cover different noise levels allows us to avoid this, as we smear out the gradient field at high noise levels, allowing sampling to converge even if we start from low probability density regions of the distribution. The figure shows that as the noise level is increased, more of the domain is covered by the score function vector field.

Summary

  • The aim of diffusion models is to learn the underlying probability distribution of a dataset and then be able to sample from it. This requires forward and reverse diffusion (noising) processes.
  • The forward noising process takes samples from our dataset and gradually adds Gaussian noise (pushes them off the data manifold). This forward process is computationally efficient because any level of noise can be added in closed form in a single step.
  • The reverse noising process is challenging because we need to predict how to remove the noise at each step without knowing the original data point in advance. We train an ML model to do this by giving it many examples of data noised at different timesteps.
  • Using very small steps in the forward noising process makes it easier for the model to learn to reverse these steps, as the changes are small.
  • By applying the reverse noising process iteratively, the model refines noisy samples step by step, eventually producing a realistic data point (one that lies on the data manifold).

Takeaway

Diffusion models are a powerful framework for learning complex data distributions. The distributions are learnt implicitly by modelling a sequential denoising process. This process can then be used to generate samples similar to those in the training distribution.

Once you’ve trained a model, how do you get useful stuff out of it?

Earlier uses of generative AI such as “This Person Does Not Exist” (ca. 2019) made waves simply because it was the first time most people had seen AI-generated photorealistic human faces. A generative adversarial network or “GAN” was used in that case, but the principle remains the same: the model implicitly learnt an underlying data distribution — in that case, human faces — then sampled from it. So far, our glyffuser model does a similar thing: it samples randomly from the distribution of Chinese glyphs.

The question then arises: can we do something more useful than just sample randomly? You’ve likely already encountered text-to-image models such as Dall-E. They are able to incorporate extra meaning from text prompts into the diffusion process — this is known as conditioning. Likewise, diffusion models for scientific applications like protein generation (e.g. Chroma, RFdiffusion, AlphaFold3) or inorganic crystal structure generation (e.g. MatterGen) become much more useful if they can be conditioned to generate samples with desirable properties such as a specific symmetry, bulk modulus, or band gap.

Conditional distributions

We can consider conditioning as a way to guide the diffusion sampling process towards particular regions of our probability distribution. We mentioned conditional distributions in the context of forward diffusion. Below we show how conditioning can be thought of as reshaping a base distribution.

A simple example of a joint probability distribution p(x, y), shown as a contour map, along with its two marginal 1-D probability distributions, p(x) and p(y). The highest points of p(x, y) are at (x₁, y₁) and (x₂, y₂). The conditional distributions p(x|y = y₁) and p(x|y = y₂) are shown overlaid on the main plot.

Consider the figure above. Think of p(x) as a distribution we want to sample from (i.e., the images) and p(y) as conditioning information (i.e., the text dataset). These are the marginal distributions of a joint distribution p(x, y). Integrating p(x, y) over y recovers p(x), and vice versa.

Sampling from p(x), we are equally likely to get x₁ or x₂. However, we can condition on y = y₁ to obtain p(x|y = y₁). You can think of this as taking a slice through p(x, y) at a given value of y. In this conditioned distribution, we are much more likely to sample at x₁ than x₂.
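A sketch of this slicing for a toy joint distribution built from two Gaussian bumps (all values here are illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Toy joint distribution p(x, y): a mixture of two Gaussian bumps,
# peaked at (-1, -1) and (1, 1)
xs = np.linspace(-3, 3, 200)
ys = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(xs, ys)
grid = np.dstack([X, Y])
p_xy = 0.5 * multivariate_normal([-1, -1], 0.2 * np.eye(2)).pdf(grid) \
     + 0.5 * multivariate_normal([1, 1], 0.2 * np.eye(2)).pdf(grid)

# Marginal p(x): integrate the joint over y -> two equal peaks in x
p_x = np.trapz(p_xy, ys, axis=0)

# Conditional p(x | y = 1): slice the joint at y = 1, then renormalize.
# The slice isolates the bump centered at (1, 1), so one peak dominates.
row = np.argmin(np.abs(ys - 1.0))
p_x_given_y = p_xy[row] / np.trapz(p_xy[row], xs)
```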

In practice, in order to condition on a text dataset, we need to convert the text into a numerical form. We can do this using large language model (LLM) embeddings that can be injected into the noise prediction model during training.

Embedding text with an LLM

In the glyffuser, our conditioning information is in the form of English text definitions. We have two requirements: 1) ML models prefer fixed-length vectors as input. 2) The numerical representation of our text must understand context — if we have the words “lithium” and “element” nearby, the meaning of “element” should be understood as “chemical element” rather than “heating element”. Both of these requirements can be met by using a pre-trained LLM.

The diagram below shows how an LLM converts text into fixed-length vectors. The text is first tokenized (LLMs break text into tokens, small chunks of characters, as their basic unit of interaction). Each token is converted into a base embedding, which is a fixed-length vector of the size of the LLM input. These vectors are then passed through the pre-trained LLM (here we use the encoder portion of Google’s T5 model), where they are imbued with additional contextual meaning. We end up with an array of n vectors of the same length d, i.e. an (n, d) sized tensor.

We can convert text to a numerical embedding imbued with contextual meaning using a pre-trained LLM.
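A sketch of this embedding step using the Hugging Face transformers library, assuming the small T5 variant (the glyffuser’s exact choice of encoder size may differ):

```python
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("t5-small")
encoder = T5EncoderModel.from_pretrained("t5-small")

# Tokenize the text and run it through the pre-trained encoder
tokens = tokenizer("lithium", return_tensors="pt")
embedding = encoder(**tokens).last_hidden_state

print(embedding.shape)  # (1, n, d): n tokens, each a d-dim contextual vector
```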

Note: in some models, notably Dall-E, additional image-text alignment is performed using contrastive pretraining. Imagen seems to show that we can get away without doing this.

Training the diffusion model with text conditioning

The exact way this embedding is injected into the model can vary. In Google’s Imagen model, for example, the embedding tensor is pooled (combined into a single vector in the embedding dimension) and added into the data as it passes through the noise prediction model; it is also included in a different way using cross-attention (a method of learning contextual information between sequences of tokens, most famously used in the transformer models that form the basis of LLMs like ChatGPT).

Conditioning information can be added via multiple different methods but the training loss remains the same.

In the glyffuser, we only use cross-attention to introduce this conditioning information. While a significant architectural change is required to introduce this additional information into the model, the loss function for our noise prediction model remains exactly the same.
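A sketch of a single cross-attention layer inside the noise prediction model: the image features supply the queries, and the text embeddings supply the keys and values (shapes and dimensions here are illustrative):

```python
import torch
import torch.nn as nn

d_model = 256
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=8, batch_first=True)

img_feats = torch.randn(1, 64, d_model)   # 64 spatial positions from the U-net
txt_embed = torch.randn(1, 12, d_model)   # 12 text tokens (projected to d_model)

# Queries come from the image; keys and values come from the text,
# letting each spatial position attend to relevant parts of the prompt
out, _ = attn(query=img_feats, key=txt_embed, value=txt_embed)
```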

Testing the conditioned diffusion model

Let’s do a simple test of the fully trained conditioned diffusion model. In the figure below, we try to denoise in a single step with the text prompt “Gold”. As touched upon in our interactive UMAP, Chinese characters often contain components known as radicals which can convey sound (phonetic radicals) or meaning (semantic radicals). A common semantic radical is derived from the character meaning “gold”, “金”, and is used in characters that are in some broad sense associated with gold or metals.

Even with a single sampling step, conditioning guides denoising towards the relevant regions of the probability distribution.

The figure shows that even though a single step is insufficient to approximate the denoising trajectory very well, we have moved into a region of our probability distribution with the “金” radical. This indicates that the text prompt is effectively guiding our sampling towards a region of the glyph probability distribution related to the meaning of the prompt. The animation below shows a 120 step denoising sequence for the same prompt, “Gold”. You can see that every generated glyph has either the 釒 or 钅 radical (the same radical in traditional and simplified Chinese, respectively).

Takeaway

Conditioning enables us to sample meaningful outputs from diffusion models.

Further remarks

I found that with the help of tutorials and existing libraries, it was possible to implement a working diffusion model despite not having a full understanding of what was going on under the hood. I think this is a good way to start learning and highly recommend Hugging Face’s tutorial on training a simple diffusion model using their diffusers Python library (which now includes my small bugfix!).

I’ve omitted some topics that are crucial to how production-grade diffusion models function, but are unnecessary for core understanding. One is the question of how to generate high resolution images. In our example, we did everything in pixel space, but this becomes very computationally expensive for large images. The general approach is to perform diffusion in a smaller space, then upscale it in a separate step. Methods include latent diffusion (used in Stable Diffusion) and cascaded super-resolution models (used in Imagen). Another topic is classifier-free guidance, a very elegant method for boosting the conditioning effect to give much better prompt adherence. I show the implementation in my previous post on the glyffuser and highly recommend this article if you want to learn more.

Further reading

A non-exhaustive list of materials I found very helpful:

Fun extras

Diffusion sampling using the DPMSolverSDEScheduler developed by Katherine Crowson and implemented in Hugging Face diffusers—note the smooth transition from noise to data.
