
A Visual Guide to How Diffusion Models Work


This article is aimed at those who want to understand exactly how Diffusion Models work, with no prior knowledge expected. I’ve tried to use illustrations wherever possible to provide visual intuitions on each part of these models. I’ve kept mathematical notation and equations to a minimum, and where they are necessary I’ve tried to define and explain them as they occur.

Intro

I’ve framed this article around three main questions:

  • What exactly is it that diffusion models learn?
  • How and why do diffusion models work?
  • Once you’ve trained a model, how do you get useful stuff out of it?

The examples will be based on the glyffuser, a minimal text-to-image diffusion model that I previously implemented and wrote about. The architecture of this model is a standard text-to-image denoising diffusion model without any bells or whistles. It was trained to generate pictures of new “Chinese” glyphs from English definitions. Have a look at the picture below — even if you’re not familiar with Chinese writing, I hope you’ll agree that the generated glyphs look pretty similar to the real ones!

Random examples of glyffuser training data (left) and generated data (right).

What exactly is it that diffusion models learn?

Generative AI models are often said to take a big pile of data and “learn” it. For text-to-image diffusion models, the data takes the form of pairs of images and descriptive text. But what exactly is it that we want the model to learn? First, let’s forget about the text for a moment and concentrate on what we are trying to generate: the images.

Probability distributions

Broadly, we can say that we want a generative AI model to learn the underlying probability distribution of the data. What does this mean? Consider the one-dimensional normal (Gaussian) distribution below, commonly written 𝒩(μ,σ²) and parameterized with mean μ = 0 and variance σ² = 1. The black curve below shows the probability density function. We can sample from it: drawing values such that over a large number of samples, the set of values reflects the underlying distribution. These days, we can simply write something like x = random.gauss(0, 1) in Python to sample from the standard normal distribution, although the computational sampling process itself is non-trivial!

Values sampled from an underlying distribution (here, the standard normal 𝒩(0,1)) can then be used to estimate the parameters of that distribution.

We could think of a set of numbers sampled from the above normal distribution as a simple dataset, like that shown as the orange histogram above. In this particular case, we can calculate the parameters of the underlying distribution using maximum likelihood estimation, i.e. by working out the mean and variance. The normal distribution estimated from the samples is shown by the dotted line above. To take some liberties with terminology, you might consider this as a simple example of “learning” an underlying probability distribution. We can also say that here we explicitly learnt the distribution, in contrast with the implicit methods that diffusion models use.
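As a minimal sketch of this explicit “learning” (standard library only; the sample count is illustrative):

```python
import random

# Draw samples from the standard normal distribution N(0, 1)
samples = [random.gauss(0, 1) for _ in range(10_000)]

# Maximum likelihood estimates of the parameters: for a Gaussian these
# are simply the sample mean and the sample variance
mu_hat = sum(samples) / len(samples)
var_hat = sum((s - mu_hat) ** 2 for s in samples) / len(samples)

print(f"mean ≈ {mu_hat:.3f}, variance ≈ {var_hat:.3f}")  # close to 0 and 1
```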

Conceptually, this is all that generative AI is doing — learning a distribution, then sampling from that distribution!

Data representations

What, then, does the underlying probability distribution of a more complex dataset look like, such as that of the image dataset we want to use to train our diffusion model?

First, we need to know what the representation of the data is. Generally, a machine learning (ML) model requires data inputs with a consistent representation, i.e. format. For the example above, it was simply numbers (scalars). For images, this representation is commonly a fixed-length vector.

The image dataset used for the glyffuser model is ~21,000 pictures of Chinese glyphs. The images are all the same size, 128 × 128 = 16384 pixels, and greyscale (single-channel color). Thus an obvious choice for the representation is a vector x of length 16384, where each element corresponds to the color of one pixel: x = (x₁, x₂, …, x₁₆₃₈₄). We can call the domain of all possible images for our dataset “pixel space”.
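A concrete sketch of this representation (the filename is a placeholder, not a file from the actual dataset):

```python
import numpy as np
from PIL import Image

# Load one greyscale glyph image ("glyph.png" is a placeholder filename)
img = Image.open("glyph.png").convert("L").resize((128, 128))

# Flatten the 128 x 128 pixel grid into a single vector of length 16384
x = np.asarray(img, dtype=np.float32).flatten()
print(x.shape)  # (16384,)
```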

An example glyph with pixel values labelled (downsampled to 32 × 32 pixels for readability).

Dataset visualization

We make the assumption that our individual data samples, x, are actually sampled from an underlying probability distribution, q(x), in pixel space, much as the samples from our first example were sampled from an underlying normal distribution in 1-dimensional space. Note: the notation x ∼ q(x) is commonly used to mean: “the random variable x sampled from the probability distribution q(x).”

This distribution is clearly much more complex than a Gaussian and cannot be easily parameterized — we need to learn it with an ML model, which we’ll discuss later. First, let’s try to visualize the distribution to gain a better intuition.

As humans find it difficult to see in more than 3 dimensions, we need to reduce the dimensionality of our data. A small digression on why this works: the manifold hypothesis posits that natural datasets lie on lower dimensional manifolds embedded in a higher dimensional space — think of a line embedded in a 2-D plane, or a plane embedded in 3-D space. We can use a dimensionality reduction technique such as UMAP to project our dataset from 16384 to 2 dimensions. The 2-D projection retains a lot of structure, consistent with the idea that our data lie on a lower dimensional manifold embedded in pixel space. In our UMAP, we see two large clusters corresponding to characters in which the components are arranged either horizontally (e.g. 明) or vertically (e.g. 草). An interactive version of the plot below with popups on each datapoint is linked here.


Let’s now use this low-dimensional UMAP dataset as a visual shorthand for our high-dimensional dataset. Remember, we assume that these individual points have been sampled from a continuous underlying probability distribution q(x). To get a sense of what this distribution might look like, we can apply a KDE (kernel density estimation) over the UMAP dataset. (Note: this is just an approximation for visualization purposes.)
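A minimal sketch of this projection-plus-KDE pipeline, assuming the dataset is an (n_samples, 16384) array (random data stands in for the real glyphs here):

```python
import numpy as np
import umap                      # from the umap-learn package
from scipy.stats import gaussian_kde

# Stand-in for the real dataset: rows are flattened 128 x 128 images
X = np.random.rand(1000, 16384)

# Project from pixel space down to 2 dimensions
embedding = umap.UMAP(n_components=2).fit_transform(X)   # shape (1000, 2)

# Kernel density estimate as a visual approximation of q(x)
kde = gaussian_kde(embedding.T)
density = kde(embedding.T)   # higher values = denser, higher-probability regions
```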

This gives a sense of what q(x) should look like: clusters of glyphs correspond to high-probability regions of the distribution. The true q(x) lies in 16384 dimensions — this is the distribution we want to learn with our diffusion model.

We showed that for a simple distribution such as the 1-D Gaussian, we could calculate the parameters (mean and variance) from our data. However, for complex distributions such as those of image datasets, we need to call on ML methods. Moreover, as we will see, diffusion models in practice do not parameterize the distribution directly; they learn it implicitly through the process of learning how to transform noise into data over many steps.

Takeaway

The aim of generative AI such as diffusion models is to learn the complex probability distributions underlying their training data and then sample from these distributions.

How and why do diffusion models work?

Diffusion models have recently come into the spotlight as a particularly effective method for learning these probability distributions. They generate convincing images by starting from pure noise and gradually refining it. To whet your interest, have a look at the animation below that shows the denoising process generating 16 samples.

In this section we’ll only talk about the mechanics of how these models work, but if you’re interested in how they arose from the broader context of generative models, have a look at the further reading section below.

What is “noise”?

Let’s first precisely define noise, since the term is thrown around a lot in the context of diffusion. In particular, we are talking about Gaussian noise: consider the samples we talked about in the section about probability distributions. You could think of each sample as an image of a single pixel of noise. An image that is “pure Gaussian noise”, then, is one in which each pixel value is sampled from an independent standard Gaussian distribution, 𝒩(0,1). For a pure noise image in the domain of our glyph dataset, this would be noise drawn from 16384 separate Gaussian distributions. You can see this in the previous animation. One thing to keep in mind is that we can choose the means of these noise distributions, i.e. center them, on specific values — the pixel values of an image, for instance.

For convenience, you’ll often find the noise distributions for image datasets written as a single multivariate distribution 𝒩(0,I) where I is the identity matrix, a covariance matrix with all diagonal entries equal to 1 and zeroes elsewhere. This is simply a compact notation for a set of multiple independent Gaussians — i.e. there are no correlations between the noise on different pixels. In the basic implementations of diffusion models, only uncorrelated (a.k.a. “isotropic”) noise is used. This article contains an excellent interactive introduction on multivariate Gaussians.
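In code, generating one such pure noise “image” for our dataset is a one-liner (a sketch using NumPy):

```python
import numpy as np

# Each of the 128 x 128 pixels is an independent draw from N(0, 1);
# jointly, this is a single sample from the multivariate Gaussian N(0, I)
pure_noise = np.random.randn(128, 128)
```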

Diffusion process overview

Below is an adaptation of the somewhat-famous diagram from Ho et al.’s seminal paper “Denoising Diffusion Probabilistic Models” which gives an overview of the whole diffusion process:

Diagram of the diffusion process adapted from Ho et al. 2020. The glyph 锂, meaning “lithium”, is used as a representative sample from the dataset.

I found that there was a lot to unpack in this diagram and simply understanding what each component meant was very helpful, so let’s go through it and define everything step by step.

We previously used x ∼ q(x) to refer to our data. Here, we’ve added a subscript, xₜ, to denote the timestep t, indicating how many steps of “noising” have taken place. We refer to samples noised to a given timestep as xₜ ∼ q(xₜ). x₀ is clean data and xₜ (t = T) ∼ 𝒩(0, I) is pure noise.

We define a forward diffusion process whereby we corrupt samples with noise. This process is described by the distribution q(xₜ|xₜ₋₁). If we could access the hypothetical reverse process q(xₜ₋₁|xₜ), we could generate samples from noise. However, we cannot access it directly, because that would require knowing x₀; instead, we use ML to learn the parameters, θ, of a model of this process, pθ(xₜ₋₁|xₜ).

In the following sections we go into detail on how the forward and reverse diffusion processes work.

Forward diffusion, or “noising”

Used as a verb, “noising” an image refers to applying a transformation that moves it towards pure noise by scaling down its pixel values toward 0 while adding proportional Gaussian noise. Mathematically, this transformation is a multivariate Gaussian distribution centered on the pixel values of the preceding image.

In the forward diffusion process, this noising distribution is written as q(xₜ|xₜ₋₁), where the vertical bar symbol “|” is read as “given” or “conditional on”, to indicate that the pixel means are passed forward from q(xₜ₋₁). At t = T, where T is a large number (commonly 1000), we aim to end up with images of pure noise (which, somewhat confusingly, is also a Gaussian distribution, as discussed previously).
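A sketch of a single noising step in code, following the standard DDPM form, where βₜ is the step’s variance (set by the variance schedule discussed below):

```python
import numpy as np

def noising_step(x_prev, beta_t):
    """One forward step x_t ~ q(x_t | x_{t-1}): scale the previous image's
    pixel values toward 0 and add Gaussian noise with variance beta_t."""
    eps = np.random.randn(*x_prev.shape)
    return np.sqrt(1.0 - beta_t) * x_prev + np.sqrt(beta_t) * eps
```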

The marginal distributions q(xₜ) represent the distributions that have accumulated the effects of all the previous noising steps (marginalization refers to integration over all possible conditions, which recovers the unconditioned distribution).

The conditional distributions are Gaussian, so what determines their variances? They are set by a variance schedule that maps timesteps to variance values. Initially, an empirically determined schedule of linearly increasing values from 0.0001 to 0.02 over 1000 steps was presented in Ho et al. Later research by Nichol & Dhariwal suggested an improved cosine schedule. They state that a schedule is most effective when the rate of information destruction through noising is relatively even per step throughout the whole noising process.
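Both schedules are easy to write down; here is a sketch (the cosine form follows Nichol & Dhariwal’s equation, with their offset s = 0.008):

```python
import numpy as np

T = 1000

# Linear schedule (Ho et al. 2020): variances rise from 1e-4 to 0.02
betas_linear = np.linspace(1e-4, 0.02, T)

# Cosine schedule (Nichol & Dhariwal 2021): define the cumulative signal
# level alpha_bar(t), then recover per-step variances from its ratios
s = 0.008
t = np.arange(T + 1)
f = np.cos((t / T + s) / (1 + s) * np.pi / 2) ** 2
alpha_bar = f / f[0]
betas_cosine = np.clip(1 - alpha_bar[1:] / alpha_bar[:-1], 0, 0.999)
```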

Forward diffusion intuition

As we encounter Gaussian distributions both as pure noise q(xₜ, t = T) and as the noising distribution q(xₜ|xₜ₋₁), I’ll try to draw the distinction by giving a visual intuition of the distribution for a single noising step, q(x₁∣x₀), for some arbitrary, structured 2-dimensional data:

Each noising step q(xₜ|xₜ₋₁) is a Gaussian distribution conditioned on the previous step.

The distribution q(x₁∣x₀) is Gaussian, centered around each point in x₀, shown in blue. Several example points x₀⁽ⁱ⁾ are picked to illustrate this, with q(x₁∣x₀ = x₀⁽ⁱ⁾) shown in orange.

In practice, the main usage of these distributions is to generate specific instances of noised samples for training (discussed further below). We can calculate the parameters of the noising distributions at any timestep t directly from the variance schedule, as the chain of Gaussians is itself also Gaussian. This is very convenient, as we don’t need to perform noising sequentially—for any given starting data x₀⁽ⁱ⁾, we can calculate the noised sample xₜ⁽ⁱ⁾ by sampling from q(xₜ∣x₀ = x₀⁽ⁱ⁾) directly.
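This closed form is the standard one from Ho et al.: with ᾱₜ the cumulative product of (1 − βₜ) over the schedule, a noised sample is xₜ = √ᾱₜ·x₀ + √(1−ᾱₜ)·ε, where ε ∼ 𝒩(0, I). A sketch:

```python
import numpy as np

betas = np.linspace(1e-4, 0.02, 1000)      # variance schedule
alpha_bar = np.cumprod(1.0 - betas)        # cumulative signal level

def noise_to_timestep(x0, t):
    """Sample x_t ~ q(x_t | x_0) directly, skipping the sequential chain."""
    eps = np.random.randn(*x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps   # the noise is returned too — it becomes the training target
```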

Forward diffusion visualization

Let’s now return to our glyph dataset (once again using the UMAP visualization as a visual shorthand). The top row of the figure below shows our dataset sampled from distributions noised to various timesteps: xₜ ∼ q(xₜ). As we increase the number of noising steps, you can see that the dataset begins to resemble pure Gaussian noise. The bottom row visualizes the underlying probability distribution q(xₜ).

The dataset xₜ (above) sampled from its probability distribution q(xₜ) (below) at different noising timesteps.

Reverse diffusion overview

It follows that if we knew the reverse distributions q(xₜ₋₁∣xₜ), we could repeatedly subtract a small amount of noise, starting from a pure noise sample xₜ at t = T to arrive at a data sample x₀ ∼ q(x₀). In practice, however, we cannot access these distributions without knowing x₀ beforehand. Intuitively, it’s easy to make a known image much noisier, but given a very noisy image, it’s much harder to guess what the original image was.

So what are we to do? Since we have a large amount of data, we can train an ML model to accurately guess the original image that any given noisy image came from. Specifically, we learn the parameters θ of an ML model that approximates the reverse noising distributions, pθ(xₜ₋₁ ∣ xₜ) for t = 1, …, T. In practice, this is embodied in a single noise prediction model trained over many different samples and timesteps. This allows it to denoise any given input, as shown in the figure below.

The ML model predicts added noise at any given timestep t.

Next, let’s go over how this noise prediction model is implemented and trained in practice.

How the model is implemented

First, we define the ML model — generally a deep neural network of some sort — that will act as our noise prediction model. This is what does the heavy lifting! In practice, any ML model that inputs and outputs data of the correct size can be used; the U-net, an architecture particularly suited to learning from images, is what we use here and is a frequent choice in practice. More recent models also use vision transformers.

We use the U-net architecture (Ronneberger et al. 2015) for our ML noise prediction model. We train the model by minimizing the difference between predicted and actual noise.

Then we run the training loop depicted in the figure above:

  • We take a random image from our dataset and noise it to a random timestep t. (In practice, we speed things up by doing many examples in parallel!)
  • We feed the noised image into the ML model and train it to predict the (known to us) noise in the image. We also perform timestep conditioning by feeding the model a timestep embedding, a high-dimensional unique representation of the timestep, so that the model can distinguish between timesteps. This can be a vector the same size as our image directly added to the input (see here for a discussion of how this is implemented).
  • The model “learns” by minimizing the value of a loss function, some measure of the difference between the predicted and actual noise. The mean square error (the mean of the squares of the pixel-wise difference between the predicted and actual noise) is used in our case.
  • Repeat until the model is well trained — a minimal sketch of a single training step follows below.
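Here is that sketch in PyTorch. The tiny convolution stands in for a real U-net, and for brevity it ignores the timestep conditioning described above; all names and sizes are illustrative:

```python
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

model = torch.nn.Conv2d(1, 1, 3, padding=1)            # toy noise predictor
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
x0 = torch.rand(8, 1, 128, 128)                        # stand-in image batch

t = torch.randint(0, T, (x0.shape[0],))                # random timestep per image
eps = torch.randn_like(x0)                             # the known noise
ab = alpha_bar[t].view(-1, 1, 1, 1)
xt = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps          # closed-form noising
loss = F.mse_loss(model(xt), eps)                      # predicted vs actual noise
loss.backward()
optimizer.step()
optimizer.zero_grad()
```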

Note: A neural network is essentially a function with a huge number of parameters. Neural network ML models are trained by iteratively updating their parameters using backpropagation to minimize a given loss function over many training data examples. This is an excellent introduction. These parameters effectively store the network’s “knowledge”.

A noise prediction model trained in this way eventually sees many different combinations of timesteps and data examples. The glyffuser, for example, was trained over 100 epochs (runs through the whole data set), so it saw around 2 million data samples. Through this process, the model implicitly learns the reverse diffusion distributions over the entire dataset at all different timesteps. This allows the model to sample the underlying distribution q(x₀) by stepwise denoising starting from pure noise. Put another way, given an image noised to any given level, the model can predict how to reduce the noise based on its guess of what the original image was. By doing this repeatedly, updating its guess of the original image each time, the model can transform any noise to a sample that lies in a high-probability region of the underlying data distribution.

Reverse diffusion in practice

We can now revisit this video of the glyffuser denoising process. Recall that a large number of steps from sample to noise, e.g. T = 1000, is used during training to make the noise-to-sample trajectory very easy for the model to learn, as changes between steps will be small. Does that mean we need to run 1000 denoising steps every time we want to generate a sample?

Luckily, this is not the case. Essentially, we can run the single-step noise prediction but then rescale it to any given step, although the approximation might not be very good if the gap is too large! This allows us to approximate the full sampling trajectory with fewer steps. The video above uses 120 steps, for instance (most implementations will allow the user to set the number of sampling steps).

Recall that predicting the noise at a given step is equivalent to predicting the original image x₀, and that we can compute the noised image at any timestep in closed form using only the variance schedule and x₀. Thus, we can calculate xₜ₋ₖ from any denoising step. The closer the steps are, the better the approximation will be.
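A hedged sketch of one such rescaled step — a deterministic, DDIM-style jump; `model` and `alpha_bar` are as in the training sketch above, and all names are illustrative:

```python
import torch

def denoise_jump(model, xt, t, k, alpha_bar):
    """Jump from timestep t to t - k in one go: predict the noise, infer
    the implied clean image x0, then re-noise that estimate to t - k."""
    eps_pred = model(xt)
    x0_est = (xt - (1 - alpha_bar[t]).sqrt() * eps_pred) / alpha_bar[t].sqrt()
    ab_prev = alpha_bar[t - k]
    return ab_prev.sqrt() * x0_est + (1 - ab_prev).sqrt() * eps_pred
```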

Too few steps, however, and the results become worse as the steps become too large for the model to effectively approximate the denoising trajectory. If we only use 5 sampling steps, for example, the sampled characters don’t look very convincing at all:

There is then a whole literature on more advanced sampling methods beyond what we’ve discussed so far, allowing effective sampling with far fewer steps. These often reframe the sampling as a differential equation to be solved deterministically, giving an eerie quality to the sampling videos — I’ve included one at the end if you’re interested. In production-level models, these are usually preferred over the simple method discussed here, but the basic principle of deducing the noise-to-sample trajectory is the same. A full discussion is beyond the scope of this article, but see e.g. this paper and its corresponding implementation in the Hugging Face diffusers library for more information.

Alternative intuition from score function

To me, it was still not 100% clear why training the model on noise prediction generalises so well. I found that an alternative interpretation of diffusion models known as “score-based modeling” filled some of the gaps in intuition (for more information, refer to Yang Song’s definitive article on the topic.)

The dataset xₜ sampled from its probability distribution q(xₜ) at different noising timesteps; below, we add the score function ∇ₓ log q(xₜ).

I try to give a visual intuition in the bottom row of the figure above: essentially, learning the noise in our diffusion model is equivalent (to a constant factor) to learning the score function, which is the gradient of the log of the probability distribution: ∇ₓ log q(x). As a gradient, the score function represents a vector field with vectors pointing towards the regions of highest probability density. Subtracting the noise at each step is then equivalent to following the directions in this vector field towards regions of high probability density.
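To make the correspondence concrete (a standard result for Gaussian noising, stated here as a supplement): if xₜ = √ᾱₜ·x₀ + √(1−ᾱₜ)·ε with ε ∼ 𝒩(0, I), then ∇ₓ log q(xₜ|x₀) = −ε/√(1−ᾱₜ), where ᾱₜ is the cumulative signal level from the variance schedule. A model trained to predict ε is therefore, up to the scaling factor −1/√(1−ᾱₜ), estimating the score.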

As long as there is some signal, the score function effectively guides sampling, but in regions of low probability it tends towards zero as there is little to no gradient to follow. Using many steps to cover different noise levels allows us to avoid this, as we smear out the gradient field at high noise levels, allowing sampling to converge even if we start from low probability density regions of the distribution. The figure shows that as the noise level is increased, more of the domain is covered by the score function vector field.

Summary

  • The aim of diffusion models is to learn the underlying probability distribution of a dataset and then be able to sample from it. This requires forward and reverse diffusion (noising) processes.
  • The forward noising process takes samples from our dataset and gradually adds Gaussian noise (pushes them off the data manifold). This forward process is computationally efficient because any level of noise can be added in closed form in a single step.
  • The reverse noising process is challenging because we need to predict how to remove the noise at each step without knowing the original data point in advance. We train an ML model to do this by giving it many examples of data noised at different timesteps.
  • Using very small steps in the forward noising process makes it easier for the model to learn to reverse these steps, as the changes are small.
  • By applying the reverse noising process iteratively, the model refines noisy samples step by step, eventually producing a realistic data point (one that lies on the data manifold).

Takeaway

Diffusion models are a powerful framework for learning complex data distributions. The distributions are learnt implicitly by modelling a sequential denoising process. This process can then be used to generate samples similar to those in the training distribution.

Once you’ve trained a model, how do you get useful stuff out of it?

Earlier generative AI applications such as “This Person Does Not Exist” (ca. 2019) made waves simply because they were the first time most people had seen AI-generated photorealistic human faces. A generative adversarial network or “GAN” was used in that case, but the principle remains the same: the model implicitly learnt an underlying data distribution — in that case, human faces — then sampled from it. So far, our glyffuser model does a similar thing: it samples randomly from the distribution of Chinese glyphs.

The question then arises: can we do something more useful than just sample randomly? You’ve likely already encountered text-to-image models such as Dall-E. They are able to incorporate extra meaning from text prompts into the diffusion process — this is known as conditioning. Likewise, diffusion models for scientific applications like protein (e.g. Chroma, RFdiffusion, AlphaFold3) or inorganic crystal structure generation (e.g. MatterGen) become much more useful if they can be conditioned to generate samples with desirable properties such as a specific symmetry, bulk modulus, or band gap.

Conditional distributions

We can consider conditioning as a way to guide the diffusion sampling process towards particular regions of our probability distribution. We mentioned conditional distributions in the context of forward diffusion. Below we show how conditioning can be thought of as reshaping a base distribution.

A simple example of a joint probability distribution p(x, y), shown as a contour map, along with its two marginal 1-D probability distributions, p(x) and p(y). The highest points of p(x, y) are at (x₁, y₁) and (x₂, y₂). The conditional distributions p(x|y = y₁) and p(x|y = y₂) are shown overlaid on the main plot.

Consider the figure above. Think of p(x) as a distribution we want to sample from (i.e., the images) and p(y) as conditioning information (i.e., the text dataset). These are the marginal distributions of a joint distribution p(x, y). Integrating p(x, y) over y recovers p(x), and vice versa.

Sampling from p(x), we are equally likely to get x₁ or x₂. However, we can condition on y = y₁ to obtain p(x|y = y₁). You can think of this as taking a slice through p(x, y) at a given value of y. In this conditioned distribution, we are much more likely to sample at x₁ than x₂.
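A small numerical sketch of this “slicing” view, with a toy two-bump joint distribution (all values illustrative):

```python
import numpy as np

xs = np.linspace(-3, 3, 200)
ys = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(xs, ys)

# Toy joint p(x, y): Gaussian bumps at (x1, y1) = (-1, -1) and (x2, y2) = (1, 1)
p_xy = np.exp(-((X + 1) ** 2 + (Y + 1) ** 2)) + np.exp(-((X - 1) ** 2 + (Y - 1) ** 2))
p_xy /= p_xy.sum()

p_x = p_xy.sum(axis=0)                        # marginal p(x): integrate out y

row = np.argmin(np.abs(ys - 1))               # index of the slice at y = y1 = 1
p_x_given_y1 = p_xy[row] / p_xy[row].sum()    # conditional p(x | y = 1), renormalized
```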

In practice, in order to condition on a text dataset, we need to convert the text into a numerical form. We can do this using large language model (LLM) embeddings that can be injected into the noise prediction model during training.

Embedding text with an LLM

In the glyffuser, our conditioning information is in the form of English text definitions. We have two requirements: 1) ML models prefer fixed-length vectors as input. 2) The numerical representation of our text must understand context — if we have the words “lithium” and “element” nearby, the meaning of “element” should be understood as “chemical element” rather than “heating element”. Both of these requirements can be met by using a pre-trained LLM.

The diagram below shows how an LLM converts text into fixed-length vectors. The text is first tokenized (LLMs break text into tokens, small chunks of characters, as their basic unit of interaction). Each token is converted into a base embedding, which is a fixed-length vector of the size of the LLM input. These vectors are then passed through the pre-trained LLM (here we use the encoder portion of Google’s T5 model), where they are imbued with additional contextual meaning. We end up with an array of n vectors of the same length d, i.e. an (n, d) sized tensor.

We can convert text to a numerical embedding imbued with contextual meaning using a pre-trained LLM.
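A sketch of this embedding step using the Hugging Face transformers library (“t5-small” is an illustrative checkpoint, not necessarily the one the glyffuser uses):

```python
import torch
from transformers import T5EncoderModel, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
encoder = T5EncoderModel.from_pretrained("t5-small")

# Tokenize a definition and pass it through the pre-trained encoder
tokens = tokenizer("a soft silvery metallic element", return_tensors="pt")
with torch.no_grad():
    out = encoder(**tokens)

emb = out.last_hidden_state   # shape (1, n, d): n context-aware token vectors of length d
print(emb.shape)
```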

Note: in some models, notably Dall-E, additional image-text alignment is performed using contrastive pretraining. Imagen seems to show that we can get away without doing this.

Training the diffusion model with text conditioning

The exact method by which this embedding is injected into the model can vary. In Google’s Imagen model, for example, the embedding tensor is pooled (combined into a single vector in the embedding dimension) and added into the data as it passes through the noise prediction model; it is also included in a different way using cross-attention (a method of learning contextual information between sequences of tokens, most famously used in the transformer models that form the basis of LLMs like ChatGPT).

Conditioning information can be added via multiple different methods but the training loss remains the same.

In the glyffuser, we only use cross-attention to introduce this conditioning information. While a significant architectural change is required to introduce this additional information into the model, the loss function for our noise prediction model remains exactly the same.
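A sketch of cross-attention at one U-net layer (sizes and names are illustrative, not the glyffuser’s actual dimensions):

```python
import torch

# Image features act as queries; text embeddings supply keys and values
attn = torch.nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)

img_feats = torch.rand(1, 16 * 16, 256)   # flattened U-net feature map
txt_embed = torch.rand(1, 12, 256)        # projected text token embeddings

out, _ = attn(query=img_feats, key=txt_embed, value=txt_embed)
print(out.shape)   # (1, 256, 256) — same shape as img_feats
```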

Testing the conditioned diffusion model

Let’s do a simple test of the fully trained conditioned diffusion model. In the figure below, we try to denoise in a single step with the text prompt “Gold”. As touched upon in our interactive UMAP, Chinese characters often contain components known as radicals which can convey sound (phonetic radicals) or meaning (semantic radicals). A common semantic radical is derived from the character meaning “gold”, “金”, and is used in characters that are in some broad sense associated with gold or metals.

Even with a single sampling step, conditioning guides denoising towards the relevant regions of the probability distribution.

The figure shows that even though a single step is insufficient to approximate the denoising trajectory very well, we have moved into a region of our probability distribution with the “金” radical. This indicates that the text prompt is effectively guiding our sampling towards a region of the glyph probability distribution related to the meaning of the prompt. The animation below shows a 120 step denoising sequence for the same prompt, “Gold”. You can see that every generated glyph has either the 釒 or 钅 radical (the same radical in traditional and simplified Chinese, respectively).

Takeaway

Conditioning enables us to sample meaningful outputs from diffusion models.

Further remarks

I found that with the help of tutorials and existing libraries, it was possible to implement a working diffusion model despite not having a full understanding of what was going on under the hood. I think this is a good way to start learning and highly recommend Hugging Face’s tutorial on training a simple diffusion model using their diffusers Python library (which now includes my small bugfix!).

I’ve omitted some topics that are crucial to how production-grade diffusion models function, but are unnecessary for core understanding. One is the question of how to generate high resolution images. In our example, we did everything in pixel space, but this becomes very computationally expensive for large images. The general approach is to perform diffusion in a smaller space, then upscale it in a separate step. Methods include latent diffusion (used in Stable Diffusion) and cascaded super-resolution models (used in Imagen). Another topic is classifier-free guidance, a very elegant method for boosting the conditioning effect to give much better prompt adherence. I show the implementation in my previous post on the glyffuser and highly recommend this article if you want to learn more.

Further reading

A non-exhaustive list of materials I found very helpful:

Fun extras

Diffusion sampling using the DPMSolverSDEScheduler developed by Katherine Crowson and implemented in Hugging Face diffusers—note the smooth transition from noise to data.
