A Visual Guide to How Diffusion Models Work

This article is aimed at those who want to understand exactly how Diffusion Models work, with no prior knowledge expected. I’ve tried to use illustrations wherever possible to provide visual intuitions on each part of these models. I’ve kept mathematical notation and equations to a minimum, and where they are necessary I’ve tried to define and explain them as they occur.

Intro

I’ve framed this article around three main questions:

  • What exactly is it that diffusion models learn?
  • How and why do diffusion models work?
  • Once you’ve trained a model, how do you get useful stuff out of it?

The examples will be based on the glyffuser, a minimal text-to-image diffusion model that I previously implemented and wrote about. The architecture of this model is a standard text-to-image denoising diffusion model without any bells or whistles. It was trained to generate pictures of new “Chinese” glyphs from English definitions. Have a look at the picture below — even if you’re not familiar with Chinese writing, I hope you’ll agree that the generated glyphs look pretty similar to the real ones!

Random examples of glyffuser training data (left) and generated data (right).

What exactly is it that diffusion models learn?

Generative AI models are often said to take a big pile of data and “learn” it. For text-to-image diffusion models, the data takes the form of pairs of images and descriptive text. But what exactly is it that we want the model to learn? First, let’s forget about the text for a moment and concentrate on what we are trying to generate: the images.

Probability distributions

Broadly, we can say that we want a generative AI model to learn the underlying probability distribution of the data. What does this mean? Consider the one-dimensional normal (Gaussian) distribution below, commonly written 𝒩(μ,σ²) and parameterized with mean μ = 0 and variance σ² = 1. The black curve below shows the probability density function. We can sample from it: drawing values such that over a large number of samples, the set of values reflects the underlying distribution. These days, we can simply write something like x = random.gauss(0, 1) in Python to sample from the standard normal distribution, although the computational sampling process itself is non-trivial!

Values sampled from an underlying distribution (here, the standard normal 𝒩(0,1)) can then be used to estimate the parameters of that distribution.

We could think of a set of numbers sampled from the above normal distribution as a simple dataset, like that shown as the orange histogram above. In this particular case, we can calculate the parameters of the underlying distribution using maximum likelihood estimation, i.e. by working out the mean and variance. The normal distribution estimated from the samples is shown by the dotted line above. To take some liberties with terminology, you might consider this as a simple example of “learning” an underlying probability distribution. We can also say that here we explicitly learnt the distribution, in contrast with the implicit methods that diffusion models use.
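To make this concrete, here is a minimal Python sketch (not from the original article) that samples a small dataset from 𝒩(0, 1) and then “learns” the distribution explicitly by maximum likelihood estimation, which for a Gaussian reduces to computing the sample mean and variance:

```python
import random
import statistics

# Draw a simple "dataset" from the standard normal distribution N(0, 1).
samples = [random.gauss(0, 1) for _ in range(10_000)]

# Explicitly "learn" the underlying distribution by maximum likelihood:
# for a Gaussian, the MLE parameters are just the sample mean and variance.
mu_hat = statistics.fmean(samples)
var_hat = statistics.pvariance(samples, mu=mu_hat)

print(f"estimated mean ~ {mu_hat:.3f}, estimated variance ~ {var_hat:.3f}")
```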

Conceptually, this is all that generative AI is doing — learning a distribution, then sampling from that distribution!

Data representations

What, then, does the underlying probability distribution of a more complex dataset look like, such as that of the image dataset we want to use to train our diffusion model?

First, we need to know what the representation of the data is. Generally, a machine learning (ML) model requires data inputs with a consistent representation, i.e. format. For the example above, it was simply numbers (scalars). For images, this representation is commonly a fixed-length vector.

The image dataset used for the glyffuser model is ~21,000 pictures of Chinese glyphs. The images are all the same size, 128 × 128 = 16384 pixels, and greyscale (single-channel color). Thus an obvious choice for the representation is a vector x of length 16384, where each element corresponds to the color of one pixel: x = (x₁, x₂, …, x₁₆₃₈₄). We can call the domain of all possible images for our dataset “pixel space”.
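As a small illustration (the file name glyph.png is hypothetical and not part of the glyffuser dataset), flattening one 128 × 128 greyscale image into this length-16384 vector representation might look like:

```python
import numpy as np
from PIL import Image

# Load a single glyph image, forcing it to single-channel greyscale ("L" mode).
img = Image.open("glyph.png").convert("L")

# Flatten the 128 x 128 grid into a vector x of length 16384, one element per pixel.
x = np.asarray(img, dtype=np.float32).reshape(-1)
assert x.shape == (128 * 128,)
```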

An example glyph with pixel values labelled (downsampled to 32 × 32 pixels for readability).

Dataset visualization

We make the assumption that our individual data samples, x, are actually sampled from an underlying probability distribution, q(x), in pixel space, much as the samples from our first example were sampled from an underlying normal distribution in 1-dimensional space. Note: the notation x ∼ q(x) is commonly used to mean: “the random variable x sampled from the probability distribution q(x).”

This distribution is clearly much more complex than a Gaussian and cannot be easily parameterized — we need to learn it with a ML model, which we’ll discuss later. First, let’s try to visualize the distribution to gain a better intuition.

As humans find it difficult to see in more than 3 dimensions, we need to reduce the dimensionality of our data. A small digression on why this works: the manifold hypothesis posits that natural datasets lie on lower dimensional manifolds embedded in a higher dimensional space — think of a line embedded in a 2-D plane, or a plane embedded in 3-D space. We can use a dimensionality reduction technique such as UMAP to project our dataset from 16384 to 2 dimensions. The 2-D projection retains a lot of structure, consistent with the idea that our data lie on a lower dimensional manifold embedded in pixel space. In our UMAP, we see two large clusters corresponding to characters in which the components are arranged either horizontally (e.g. 明) or vertically (e.g. 草). An interactive version of the plot below with popups on each datapoint is linked here.

 Click here for an interactive version of this plot.

Let’s now use this low-dimensional UMAP dataset as a visual shorthand for our high-dimensional dataset. Remember, we assume that these individual points have been sampled from a continuous underlying probability distribution q(x). To get a sense of what this distribution might look like, we can apply a KDE (kernel density estimation) over the UMAP dataset. (Note: this is just an approximation for visualization purposes.)
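Here is a rough sketch of that visualization pipeline, assuming the flattened glyph vectors are already stacked into one array (the file name and parameters are illustrative; the umap-learn and SciPy libraries stand in for whatever the original analysis used):

```python
import numpy as np
import umap                              # from the umap-learn package
from scipy.stats import gaussian_kde

# Hypothetical pre-flattened dataset of shape (n_images, 16384).
X = np.load("glyphs.npy")

# Project from 16384-dimensional pixel space down to 2-D for visualization.
embedding = umap.UMAP(n_components=2).fit_transform(X)

# Approximate the underlying density q(x) in the 2-D projection with a KDE
# (purely for visualization -- this is not how the model learns q(x)).
kde = gaussian_kde(embedding.T)          # gaussian_kde expects (n_dims, n_points)
density = kde(embedding.T)               # density estimate at each projected point
```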

This gives a sense of what q(x) should look like: clusters of glyphs correspond to high-probability regions of the distribution. The true q(x) lies in 16384 dimensions — this is the distribution we want to learn with our diffusion model.

We showed that for a simple distribution such as the 1-D Gaussian, we could calculate the parameters (mean and variance) from our data. However, for complex distributions such as images, we need to call on ML methods. Moreover, what we will find is that for diffusion models in practice, rather than parameterizing the distribution directly, they learn it implicitly through the process of learning how to transform noise into data over many steps.

Takeaway

The aim of generative AI such as diffusion models is to learn the complex probability distributions underlying their training data and then sample from these distributions.

How and why do diffusion models work?

Diffusion models have recently come into the spotlight as a particularly effective method for learning these probability distributions. They generate convincing images by starting from pure noise and gradually refining it. To whet your interest, have a look at the animation below that shows the denoising process generating 16 samples.

In this section we’ll only talk about the mechanics of how these models work but if you’re interested in how they arose from the broader context of generative models, have a look at the further reading section below.

What is “noise”?

Let’s first precisely define noise, since the term is thrown around a lot in the context of diffusion. In particular, we are talking about Gaussian noise: consider the samples we talked about in the section about probability distributions. You could think of each sample as an image of a single pixel of noise. An image that is “pure Gaussian noise”, then, is one in which each pixel value is sampled from an independent standard Gaussian distribution, 𝒩(0,1). For a pure noise image in the domain of our glyph dataset, this would be noise drawn from 16384 separate Gaussian distributions. You can see this in the previous animation. One thing to keep in mind is that we can choose the means of these noise distributions, i.e. center them, on specific values — the pixel values of an image, for instance.

For convenience, you’ll often find the noise distributions for image datasets written as a single multivariate distribution 𝒩(0,I) where I is the identity matrix, a covariance matrix with all diagonal entries equal to 1 and zeroes elsewhere. This is simply a compact notation for a set of multiple independent Gaussians — i.e. there are no correlations between the noise on different pixels. In the basic implementations of diffusion models, only uncorrelated (a.k.a. “isotropic”) noise is used. This article contains an excellent interactive introduction on multivariate Gaussians.
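For instance, sampling one “pure noise” image for our glyph domain just means drawing 16384 independent values from 𝒩(0, 1); a minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Every one of the 128 x 128 = 16384 pixels is an independent draw from N(0, 1),
# i.e. the whole image is one sample from the multivariate N(0, I).
pure_noise = rng.standard_normal((128, 128))
```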

Diffusion process overview

Below is an adaptation of the somewhat-famous diagram from Ho et al.’s seminal paper “Denoising Diffusion Probabilistic Models” which gives an overview of the whole diffusion process:

Diagram of the diffusion process adapted from Ho et al. 2020. The glyph 锂, meaning “lithium”, is used as a representative sample from the dataset.

I found that there was a lot to unpack in this diagram and simply understanding what each component meant was very helpful, so let’s go through it and define everything step by step.

We previously used x ∼ q(x) to refer to our data. Here, we’ve added a subscript, xₜ, to denote the timestep t, indicating how many steps of “noising” have taken place. We refer to samples noised to a given timestep as xₜ ∼ q(xₜ). x₀ is clean data and xₜ (t = T) ∼ 𝒩(0, I) is pure noise.

We define a forward diffusion process whereby we corrupt samples with noise. This process is described by the distribution q(xₜ|xₜ₋₁). If we could access the hypothetical reverse process q(xₜ₋₁|xₜ), we could generate samples from noise. We cannot access it directly because we would need to know x₀, so instead we use ML to learn the parameters, θ, of a model of this process, pθ(xₜ₋₁∣xₜ).

In the following sections we go into detail on how the forward and reverse diffusion processes work.

Forward diffusion, or “noising”

Used as a verb, “noising” an image refers to applying a transformation that moves it towards pure noise by scaling down its pixel values toward 0 while adding proportional Gaussian noise. Mathematically, this transformation is a multivariate Gaussian distribution centered on the pixel values of the preceding image.

In the forward diffusion process, this noising distribution is written as q(xₜ|xₜ₋₁), where the vertical bar symbol “|” is read as “given” or “conditional on”, to indicate that the pixel means are passed forward from q(xₜ₋₁). At t = T, where T is a large number (commonly 1000), we aim to end up with images of pure noise (which, somewhat confusingly, also follow a Gaussian distribution, as discussed previously).
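For reference, in the formulation of Ho et al. (which this article otherwise keeps implicit), a single noising step with variance βₜ is the Gaussian

$$
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\right),
$$

i.e. the pixel values of the previous image are scaled down by √(1 − βₜ) and Gaussian noise with variance βₜ is added.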

The marginal distributions q(xₜ) represent the distributions that have accumulated the effects of all the previous noising steps (marginalization refers to integration over all possible conditions, which recovers the unconditioned distribution).

Since the conditional distributions are Gaussian, what about their variances? They are determined by a variance schedule that maps timesteps to variance values. Initially, an empirically determined schedule of linearly increasing values from 0.0001 to 0.02 over 1000 steps was presented in Ho et al. Later research by Nichol & Dhariwal suggested an improved cosine schedule. They state that a schedule is most effective when the rate of information destruction through noising is relatively even per step throughout the whole noising process.
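A minimal sketch of such a schedule (the linear version from Ho et al.; variable names are illustrative), along with the cumulative products ᾱₜ that the closed-form noising shortcut below relies on:

```python
import numpy as np

T = 1000

# Linear schedule from Ho et al. 2020: beta_t rises from 1e-4 to 0.02 over T steps.
betas = np.linspace(1e-4, 0.02, T)

# Derived quantities: alpha_t = 1 - beta_t and their running product
# \bar{alpha}_t, which lets us noise to any timestep in a single step.
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)
```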

Forward diffusion intuition

As we encounter Gaussian distributions both as pure noise (q(xₜ) at t = T) and as the noising distribution q(xₜ|xₜ₋₁), I’ll try to draw the distinction by giving a visual intuition of the distribution for a single noising step, q(x₁∣x₀), for some arbitrary, structured 2-dimensional data:

Each noising step q(xₜ|xₜ₋₁) is a Gaussian distribution conditioned on the previous step.

The distribution q(x₁∣x₀) is Gaussian, centered around each point in x₀, shown in blue. Several example points x₀⁽ⁱ⁾ are picked to illustrate this, with q(x₁∣x₀ = x₀⁽ⁱ⁾) shown in orange.

In practice, the main usage of these distributions is to generate specific instances of noised samples for training (discussed further below). We can calculate the parameters of the noising distributions at any timestep t directly from the variance schedule, as the chain of Gaussians is itself also Gaussian. This is very convenient, as we don’t need to perform noising sequentially—for any given starting data x₀⁽ⁱ⁾, we can calculate the noised sample xₜ⁽ⁱ⁾ by sampling from q(xₜ∣x₀ = x₀⁽ⁱ⁾) directly.
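A sketch of this shortcut, assuming the alpha_bars array from the schedule sketch above and the standard closed form xₜ = √ᾱₜ x₀ + √(1 − ᾱₜ) ε used by Ho et al.:

```python
import numpy as np

def noise_to_timestep(x0, t, alpha_bars, rng=np.random.default_rng()):
    """Sample x_t ~ q(x_t | x_0) in a single step using the variance schedule.

    Returns both the noised sample and the noise that was added, since the
    added noise is the training target for the noise prediction model.
    """
    eps = rng.standard_normal(x0.shape)
    abar_t = alpha_bars[t]
    x_t = np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * eps
    return x_t, eps
```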

Forward diffusion visualization

Let’s now return to our glyph dataset (once again using the UMAP visualization as a visual shorthand). The top row of the figure below shows our dataset sampled from distributions noised to various timesteps: xₜ ∼ q(xₜ). As we increase the number of noising steps, you can see that the dataset begins to resemble pure Gaussian noise. The bottom row visualizes the underlying probability distribution q(xₜ).

The dataset xₜ (above) sampled from its probability distribution q(xₜ) (below) at different noising timesteps.

Reverse diffusion overview

It follows that if we knew the reverse distributions q(xₜ₋₁∣xₜ), we could repeatedly subtract a small amount of noise, starting from a pure noise sample xₜ at t = T to arrive at a data sample x₀ ∼ q(x₀). In practice, however, we cannot access these distributions without knowing x₀ beforehand. Intuitively, it’s easy to make a known image much noisier, but given a very noisy image, it’s much harder to guess what the original image was.

So what are we to do? Since we have a large amount of data, we can train an ML model to accurately guess the original image that any given noisy image came from. Specifically, we learn the parameters θ of an ML model that approximates the reverse noising distributions, pθ(xₜ₋₁ ∣ xₜ), for t = 1, …, T. In practice, this is embodied in a single noise prediction model trained over many different samples and timesteps. This allows it to denoise any given input, as shown in the figure below.

The ML model predicts added noise at any given timestep t.

Next, let’s go over how this noise prediction model is implemented and trained in practice.

How the model is implemented

First, we define the ML model — generally a deep neural network of some sort — that will act as our noise prediction model. This is what does the heavy lifting! In practice, any ML model that inputs and outputs data of the correct size can be used; the U-net, an architecture particularly suited to learning images, is what we use here and is a frequent choice in practice. More recent models also use vision transformers.

We use the U-net architecture (Ronneberger et al. 2015) for our ML noise prediction model. We train the model by minimizing the difference between predicted and actual noise.

Then we run the training loop depicted in the figure above (a minimal code sketch follows the list):

  • We take a random image from our dataset and noise it to a random timestep t. (In practice, we speed things up by doing many examples in parallel!)
  • We feed the noised image into the ML model and train it to predict the (known to us) noise in the image. We also perform timestep conditioning by feeding the model a timestep embedding, a high-dimensional unique representation of the timestep, so that the model can distinguish between timesteps. This can be a vector the same size as our image directly added to the input (see here for a discussion of how this is implemented).
  • The model “learns” by minimizing the value of a loss function, some measure of the difference between the predicted and actual noise. The mean square error (the mean of the squares of the pixel-wise difference between the predicted and actual noise) is used in our case.
  • Repeat until the model is well trained.
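Here is a minimal sketch of one such training step in PyTorch. It assumes a noise prediction model that takes only the noised image and the timestep (text conditioning, discussed later, is omitted); names such as train_step and alpha_bars are illustrative rather than taken from the glyffuser code:

```python
import torch
import torch.nn.functional as F

def train_step(model, x0, alpha_bars, optimizer, T=1000):
    """One training step: noise a batch of clean images x0 to random timesteps,
    then train the model to predict the noise that was added.
    alpha_bars is a 1-D tensor of cumulative products from the variance schedule."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)    # random timesteps
    eps = torch.randn_like(x0)                                   # the "known" noise
    abar = alpha_bars[t].view(-1, 1, 1, 1)
    x_t = abar.sqrt() * x0 + (1 - abar).sqrt() * eps             # closed-form noising

    eps_pred = model(x_t, t)                                     # predict the noise
    loss = F.mse_loss(eps_pred, eps)                             # mean square error

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```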

Note: A neural network is essentially a function with a huge number of parameters. Neural network ML models are trained by iteratively updating their parameters using backpropagation to minimize a given loss function over many training data examples. This is an excellent introduction. These parameters effectively store the network’s “knowledge”.

A noise prediction model trained in this way eventually sees many different combinations of timesteps and data examples. The glyffuser, for example, was trained over 100 epochs (runs through the whole data set), so it saw around 2 million data samples. Through this process, the model implicitly learns the reverse diffusion distributions over the entire dataset at all different timesteps. This allows the model to sample from the underlying distribution q(x₀) by stepwise denoising starting from pure noise. Put another way, given an image noised to any given level, the model can predict how to reduce the noise based on its guess of what the original image was. By doing this repeatedly, updating its guess of the original image each time, the model can transform any noise into a sample that lies in a high-probability region of the underlying data distribution.

Reverse diffusion in practice

We can now revisit this video of the glyffuser denoising process. Recall that a large number of steps from sample to noise (e.g. T = 1000) is used during training to make the noise-to-sample trajectory very easy for the model to learn, as changes between steps will be small. Does that mean we need to run 1000 denoising steps every time we want to generate a sample?

Luckily, this is not the case. Essentially, we can run the single-step noise prediction but then rescale it to any given step, although it might not be very good if the gap is too large! This allows us to approximate the full sampling trajectory with fewer steps. The video above uses 120 steps, for instance (most implementations will allow the user to set the number of sampling steps).

Recall that predicting the noise at a given step is equivalent to predicting the original image x₀, and that the noised image at any timestep can be written in closed form using only the variance schedule and x₀. Thus, from any denoising step we can calculate xₜ₋ₖ directly. The closer the steps are, the better the approximation will be.
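A sketch of one such “large” denoising step under these assumptions: recover an estimate of x₀ from the predicted noise, then re-noise it to the lower level t − k (this is essentially a deterministic DDIM-style update; the sampler actually used may differ in its details):

```python
import numpy as np

def jump_denoise(x_t, t, t_next, eps_pred, alpha_bars):
    """Jump from timestep t to t_next < t using a single noise prediction."""
    abar_t, abar_next = alpha_bars[t], alpha_bars[t_next]

    # Estimate the original image implied by the predicted noise...
    x0_hat = (x_t - np.sqrt(1.0 - abar_t) * eps_pred) / np.sqrt(abar_t)

    # ...then re-noise that estimate to the (lower) noise level t_next.
    return np.sqrt(abar_next) * x0_hat + np.sqrt(1.0 - abar_next) * eps_pred
```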

Too few steps, however, and the results become worse as the steps become too large for the model to effectively approximate the denoising trajectory. If we only use 5 sampling steps, for example, the sampled characters don’t look very convincing at all:

There is then a whole literature on more advanced sampling methods beyond what we’ve discussed so far, allowing effective sampling with far fewer steps. These often reframe the sampling as a differential equation to be solved deterministically, giving an eerie quality to the sampling videos — I’ve included one at the end if you’re interested. In production-level models, these are usually preferred over the simple method discussed here, but the basic principle of deducing the noise-to-sample trajectory is the same. A full discussion is beyond the scope of this article but see e.g. this paper and its corresponding implementation in the Hugging Face diffusers library for more information.

Alternative intuition from score function

To me, it was still not 100% clear why training the model on noise prediction generalises so well. I found that an alternative interpretation of diffusion models known as “score-based modeling” filled some of the gaps in intuition (for more information, refer to Yang Song’s definitive article on the topic.)

The dataset xₜ sampled from its probability distribution q(xₜ) at different noising timesteps; below, we add the score function ∇ₓ log q(xₜ).

I try to give a visual intuition in the bottom row of the figure above: essentially, learning the noise in our diffusion model is equivalent (up to a constant factor) to learning the score function, which is the gradient of the log of the probability distribution: ∇ₓ log q(x). As a gradient, the score function represents a vector field with vectors pointing towards the regions of highest probability density. Subtracting the noise at each step is then equivalent to following this vector field towards regions of high probability density.
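Concretely, in the usual DDPM parameterization this equivalence (a standard identity from the score-based modeling literature, not derived here) reads

$$
\nabla_{x_t} \log q(x_t) \approx -\,\frac{\epsilon_\theta(x_t, t)}{\sqrt{1-\bar{\alpha}_t}},
$$

so a model trained to predict the noise εθ has, up to a known scaling factor, also learnt the score.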

As long as there is some signal, the score function effectively guides sampling, but in regions of low probability it tends towards zero as there is little to no gradient to follow. Using many steps to cover different noise levels allows us to avoid this, as we smear out the gradient field at high noise levels, allowing sampling to converge even if we start from low probability density regions of the distribution. The figure shows that as the noise level is increased, more of the domain is covered by the score function vector field.

Summary

  • The aim of diffusion models is to learn the underlying probability distribution of a dataset and then be able to sample from it. This requires forward and reverse diffusion (noising) processes.
  • The forward noising process takes samples from our dataset and gradually adds Gaussian noise (pushes them off the data manifold). This forward process is computationally efficient because any level of noise can be added in closed form in a single step.
  • The reverse noising process is challenging because we need to predict how to remove the noise at each step without knowing the original data point in advance. We train an ML model to do this by giving it many examples of data noised at different timesteps.
  • Using very small steps in the forward noising process makes it easier for the model to learn to reverse these steps, as the changes are small.
  • By applying the reverse noising process iteratively, the model refines noisy samples step by step, eventually producing a realistic data point (one that lies on the data manifold).

Takeaway

Diffusion models are a powerful framework for learning complex data distributions. The distributions are learnt implicitly by modelling a sequential denoising process. This process can then be used to generate samples similar to those in the training distribution.

Once you’ve trained a model, how do you get useful stuff out of it?

Earlier uses of generative AI such as “This Person Does Not Exist” (ca. 2019) made waves simply because it was the first time most people had seen AI-generated photorealistic human faces. A generative adversarial network or “GAN” was used in that case, but the principle remains the same: the model implicitly learnt an underlying data distribution — in that case, human faces — then sampled from it. So far, our glyffuser model does a similar thing: it samples randomly from the distribution of Chinese glyphs.

The question then arises: can we do something more useful than just sample randomly? You’ve likely already encountered text-to-image models such as Dall-E. They are able to incorporate extra meaning from text prompts into the diffusion process — this is known as conditioning. Likewise, diffusion models for scientific applications like protein structure generation (e.g. Chroma, RFdiffusion, AlphaFold3) or inorganic crystal structure generation (e.g. MatterGen) become much more useful if they can be conditioned to generate samples with desirable properties such as a specific symmetry, bulk modulus, or band gap.

Conditional distributions

We can consider conditioning as a way to guide the diffusion sampling process towards particular regions of our probability distribution. We mentioned conditional distributions in the context of forward diffusion. Below we show how conditioning can be thought of as reshaping a base distribution.

A simple example of a joint probability distribution p(x, y), shown as a contour map, along with its two marginal 1-D probability distributions, p(x) and p(y). The highest points of p(x, y) are at (x₁, y₁) and (x₂, y₂). The conditional distributions p(x∣y = y₁) and p(x∣y = y₂) are shown overlaid on the main plot.

Consider the figure above. Think of p(x) as a distribution we want to sample from (i.e., the images) and p(y) as conditioning information (i.e., the text dataset). These are the marginal distributions of a joint distribution p(x, y). Integrating p(x, y) over y recovers p(x), and vice versa.

Sampling from p(x), we are equally likely to get x₁ or x₂. However, we can condition on y = y₁ to obtain p(x∣y = y₁). You can think of this as taking a slice through p(x, y) at a given value of y. In this conditioned distribution, we are much more likely to sample near x₁ than x₂.
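Formally, this slicing is just the usual definition of a conditional density (stated here only for completeness):

$$
p(x \mid y = y_1) = \frac{p(x,\, y_1)}{p(y_1)}.
$$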

In practice, in order to condition on a text dataset, we need to convert the text into a numerical form. We can do this using large language model (LLM) embeddings that can be injected into the noise prediction model during training.

Embedding text with an LLM

In the glyffuser, our conditioning information is in the form of English text definitions. We have two requirements: 1) ML models prefer fixed-length vectors as input. 2) The numerical representation of our text must understand context — if we have the words “lithium” and “element” nearby, the meaning of “element” should be understood as “chemical element” rather than “heating element”. Both of these requirements can be met by using a pre-trained LLM.

The diagram below shows how an LLM converts text into fixed-length vectors. The text is first tokenized (LLMs break text into tokens, small chunks of characters, as their basic unit of interaction). Each token is converted into a base embedding, which is a fixed-length vector of the size of the LLM input. These vectors are then passed through the pre-trained LLM (here we use the encoder portion of Google’s T5 model), where they are imbued with additional contextual meaning. We end up with an array of n vectors of the same length d, i.e. an (n, d) sized tensor.
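A minimal sketch of this step using the Hugging Face transformers library (the t5-small checkpoint is used purely for illustration; the glyffuser may use a different T5 variant):

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

# Load the tokenizer and the encoder half of T5.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
encoder = T5EncoderModel.from_pretrained("t5-small")

# Tokenize an English definition and pass it through the encoder.
tokens = tokenizer("lithium; a soft silvery metallic element", return_tensors="pt")
with torch.no_grad():
    text_emb = encoder(**tokens).last_hidden_state   # shape (1, n tokens, d=512)
```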

We can convert text to a numerical embedding imbued with contextual meaning using a pre-trained LLM.

Note: in some models, notably Dall-E, additional image-text alignment is performed using contrastive pretraining. Imagen seems to show that we can get away without doing this.

Training the diffusion model with text conditioning

The exact method that this embedding vector is injected into the model can vary. In Google’s Imagen model, for example, the embedding tensor is pooled (combined into a single vector in the embedding dimension) and added into the data as it passes through the noise prediction model; it is also included in a different way using cross-attention (a method of learning contextual information between sequences of tokens, most famously used in the transformer models that form the basis of LLMs like ChatGPT).

Conditioning information can be added via multiple different methods but the training loss remains the same.

In the glyffuser, we only use cross-attention to introduce this conditioning information. While a significant architectural change is required to introduce this additional information into the model, the loss function for our noise prediction model remains exactly the same.
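As a rough illustration of where this embedding enters the model (not the glyffuser’s exact architecture), the diffusers library’s UNet2DConditionModel accepts the text embedding alongside the noised image and timestep and feeds it to its cross-attention layers; the sizes below are illustrative, with 512 matching the t5-small embedding width:

```python
import torch
from diffusers import UNet2DConditionModel

# A small conditional U-net; cross_attention_dim must match the LLM embedding width d.
unet = UNet2DConditionModel(
    sample_size=128,
    in_channels=1,                        # greyscale glyphs
    out_channels=1,
    block_out_channels=(64, 128, 256, 256),
    cross_attention_dim=512,
)

x_t = torch.randn(1, 1, 128, 128)         # a noised glyph
t = torch.tensor([999])                   # timestep
text_emb = torch.randn(1, 12, 512)        # (batch, n tokens, d) from the LLM encoder

# The text embedding is injected via cross-attention; the loss is still just
# MSE between this predicted noise and the true noise.
eps_pred = unet(x_t, t, encoder_hidden_states=text_emb).sample
```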

Testing the conditioned diffusion model

Let’s do a simple test of the fully trained conditioned diffusion model. In the figure below, we try to denoise in a single step with the text prompt “Gold”. As touched upon in our interactive UMAP, Chinese characters often contain components known as radicals which can convey sound (phonetic radicals) or meaning (semantic radicals). A common semantic radical is derived from the character meaning “gold”, “金”, and is used in characters that are in some broad sense associated with gold or metals.

Even with a single sampling step, conditioning guides denoising towards the relevant regions of the probability distribution.

The figure shows that even though a single step is insufficient to approximate the denoising trajectory very well, we have moved into a region of our probability distribution with the “金” radical. This indicates that the text prompt is effectively guiding our sampling towards a region of the glyph probability distribution related to the meaning of the prompt. The animation below shows a 120 step denoising sequence for the same prompt, “Gold”. You can see that every generated glyph has either the 釒 or 钅 radical (the same radical in traditional and simplified Chinese, respectively).

Takeaway

Conditioning enables us to sample meaningful outputs from diffusion models.

Further remarks

I found that with the help of tutorials and existing libraries, it was possible to implement a working diffusion model despite not having a full understanding of what was going on under the hood. I think this is a good way to start learning and highly recommend Hugging Face’s tutorial on training a simple diffusion model using their diffusers Python library (which now includes my small bugfix!).

I’ve omitted some topics that are crucial to how production-grade diffusion models function, but are unnecessary for core understanding. One is the question of how to generate high resolution images. In our example, we did everything in pixel space, but this becomes very computationally expensive for large images. The general approach is to perform diffusion in a smaller space, then upscale it in a separate step. Methods include latent diffusion (used in Stable Diffusion) and cascaded super-resolution models (used in Imagen). Another topic is classifier-free guidance, a very elegant method for boosting the conditioning effect to give much better prompt adherence. I show the implementation in my previous post on the glyffuser and highly recommend this article if you want to learn more.

Further reading

A non-exhaustive list of materials I found very helpful:

Fun extras

Diffusion sampling using the DPMSolverSDEScheduler developed by Katherine Crowson and implemented in Hugging Face diffusers—note the smooth transition from noise to data.
