
Nine Rules for SIMD Acceleration of Your Rust Code (Part 1)

Thanks to Ben Lichtman (B3NNY) at the Seattle Rust Meetup for pointing me in the right direction on SIMD.

SIMD (Single Instruction, Multiple Data) operations have been a feature of Intel/AMD and ARM CPUs since the early 2000s. These operations enable you to, for example, add an array of eight i32 to another array of eight i32 with just one CPU operation on a single core. Using SIMD operations greatly speeds up certain tasks. If you’re not using SIMD, you may not be fully using your CPU’s capabilities.

Is this “Yet Another Rust and SIMD” article? Yes and no. Yes, I did apply SIMD to a programming problem and then feel compelled to write an article about it. No, I hope that this article also goes into enough depth that it can guide you through your project. It explains the newly available SIMD capabilities and settings in Rust nightly. It includes a Rust SIMD cheatsheet. It shows how to make your SIMD code generic without leaving safe Rust. It gets you started with tools such as Godbolt and Criterion. Finally, it introduces new cargo commands that make the process easier.

The range-set-blaze crate uses its RangeSetBlaze::from_iter method to ingest potentially long sequences of integers. When the integers are “clumpy”, it can do this 30 times faster than Rust’s standard HashSet::from_iter. Can we do even better if we use Simd operations? Yes!

See this documentation for the definition of “clumpy”. Also, what happens if the integers are not clumpy? RangeSetBlaze is 2 to 3 times slower than HashSet.

On clumpy integers, RangeSetBlaze::from_slice — a new method based on SIMD operations — is 7 times faster than RangeSetBlaze::from_iter. That makes it more than 200 times faster than HashSet::from_iter. (When the integers are not clumpy, it is still 2 to 3 times slower than HashSet.)

Over the course of implementing this speed-up, I learned nine rules that can help you accelerate your projects with SIMD operations.
The rules are:

  1. Use nightly Rust and core::simd, Rust’s experimental standard SIMD module.
  2. CCC: Check, Control, and Choose your computer’s SIMD capabilities.
  3. Learn core::simd, but selectively.
  4. Brainstorm candidate algorithms.
  5. Use Godbolt and AI to understand your code’s assembly, even if you don’t know assembly language.
  6. Generalize to all types and LANES with in-lined generics, (and when that doesn’t work) macros, and (when that doesn’t work) traits.

See Part 2 for these rules:

7. Use Criterion benchmarking to pick an algorithm and to discover that LANES should (almost) always be 32 or 64.

8. Integrate your best SIMD algorithm into your project with as_simd, special code for i128/u128, and additional in-context benchmarking.

9. Extricate your best SIMD algorithm from your project (for now) with an optional cargo feature.

Aside: To avoid wishy-washiness, I call these “rules”, but they are, of course, just suggestions.

Rule 1: Use nightly Rust and core::simd, Rust’s experimental standard SIMD module.

Rust can access SIMD operations either via the stable core::arch module or via nightly’s core::simd module. Let’s compare them:

core::arch

  • Stable.
  • Low-level and tied to a specific CPU architecture.

core::simd

  • Nightly.
  • Delightfully easy and portable.
  • Limits downstream users to nightly.

I decided to go with “easy”. If you decide to take the harder road, starting first with the easier path may still be worthwhile.

In either case, before we try to use SIMD operations in a larger project, let’s make sure we can get them working at all.
Here are the steps:

First, create a project called simd_hello:

cargo new simd_hello
cd simd_hello

Edit src/main.rs to contain (Rust playground):

// Tell nightly Rust to enable 'portable_simd'
#![feature(portable_simd)]
use core::simd::prelude::*;

// constant Simd structs
const LANES: usize = 32;
const THIRTEENS: Simd<u8, LANES> = Simd::from_array([13; LANES]);
const TWENTYSIXS: Simd<u8, LANES> = Simd::from_array([26; LANES]);
const ZEES: Simd<u8, LANES> = Simd::from_array([b'Z'; LANES]);

fn main() {
    // create a Simd struct from a slice of LANES bytes
    let mut data = Simd::<u8, LANES>::from_slice(b"URYYBJBEYQVQBUBCRVGFNYYTBVATJRYY");

    data += THIRTEENS; // add 13 to each byte

    // where a byte is now greater than 'Z', subtract 26
    let mask = data.simd_gt(ZEES); // compare each byte to 'Z'
    data = mask.select(data - TWENTYSIXS, data);

    let output = String::from_utf8_lossy(data.as_array());
    assert_eq!(output, "HELLOWORLDIDOHOPEITSALLGOINGWELL");
    println!("{}", output);
}

Next — full SIMD capabilities require the nightly version of Rust. Assuming you have Rust installed, install nightly (rustup install nightly). Make sure you have the latest nightly version (rustup update nightly). Finally, set this project to use nightly (rustup override set nightly).

You can now run the program with cargo run. The program applies ROT13 decryption to 32 bytes of upper-case letters. With SIMD, the program can decrypt all 32 bytes simultaneously.

Let’s look at each section of the program to see how it works. It starts with:

#![feature(portable_simd)]
use core::simd::prelude::*;

Rust nightly offers its extra capabilities (or “features”) only on request. The #![feature(portable_simd)] statement requests that Rust nightly make available the new experimental core::simd module. The use statement then imports the module’s most important types and traits.
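To see exactly what the SIMD program computes per lane, here is a scalar sketch of the same ROT13 arithmetic in stable Rust. The helper name rot13_byte is mine, not part of the program above:

```rust
// Scalar equivalent of the SIMD ROT13 above: add 13 to each upper-case
// byte, then subtract 26 wherever the result ran past b'Z'.
fn rot13_byte(b: u8) -> u8 {
    let shifted = b + 13; // safe: b'A'..=b'Z' plus 13 cannot overflow u8
    if shifted > b'Z' { shifted - 26 } else { shifted }
}

fn main() {
    let decrypted: Vec<u8> = b"URYYBJBEYQVQBUBCRVGFNYYTBVATJRYY"
        .iter()
        .map(|&b| rot13_byte(b))
        .collect();
    assert_eq!(decrypted, b"HELLOWORLDIDOHOPEITSALLGOINGWELL");
    println!("{}", String::from_utf8_lossy(&decrypted));
}
```

The SIMD version performs this same add-compare-subtract on all 32 lanes at once instead of one byte at a time.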
In the code’s next section, we define useful constants:

const LANES: usize = 32;
const THIRTEENS: Simd<u8, LANES> = Simd::from_array([13; LANES]);
const TWENTYSIXS: Simd<u8, LANES> = Simd::from_array([26; LANES]);
const ZEES: Simd<u8, LANES> = Simd::from_array([b'Z'; LANES]);

The Simd struct is a special kind of Rust array. (It is, for example, always memory aligned.) The constant LANES tells the length of the Simd array. The from_array constructor copies a regular Rust array to create a Simd. In this case, because we want const Simd’s, the arrays we construct from must also be const.

The next two lines copy our encrypted text into data and then add 13 to each letter.

let mut data = Simd::<u8, LANES>::from_slice(b"URYYBJBEYQVQBUBCRVGFNYYTBVATJRYY");
data += THIRTEENS;

What if you make an error and your encrypted text isn’t exactly length LANES (32)? Sadly, the compiler won’t tell you. Instead, when you run the program, from_slice will panic. What if the encrypted text contains non-upper-case letters? In this example program, we’ll ignore that possibility.

The += operator does element-wise addition between the Simd data and Simd THIRTEENS. It puts the result in data. Recall that debug builds of regular Rust addition check for overflows. Not so with SIMD. Rust defines SIMD arithmetic operators to always wrap. Values of type u8 wrap after 255.

Coincidentally, ROT13 decryption also requires wrapping, but after ‘Z’ rather than after 255. Here is one approach to coding the needed ROT13 wrapping. It subtracts 26 from any values beyond ‘Z’.

let mask = data.simd_gt(ZEES);
data = mask.select(data - TWENTYSIXS, data);

This says to find the element-wise places beyond ‘Z’. Then, subtract 26 from all values. At the places of interest, use the subtracted values. At the other places, use the original values. Does subtracting from all values and then using only some seem wasteful? With SIMD, this takes no extra computer time and avoids jumps. This strategy is, thus, efficient and common.
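The wrap-on-overflow and mask/select behavior described above can be sketched in stable scalar Rust with the wrapping_* methods. The helper select_sub26 is my name for illustration:

```rust
// SIMD arithmetic always wraps, like the scalar wrapping_* methods:
// u8 wraps after 255. ROT13 instead needs a wrap after b'Z', which the
// mask/select pattern supplies: subtract 26 everywhere, keep the result
// only where the mask is true.
fn select_sub26(b: u8) -> u8 {
    let sub = b.wrapping_sub(26);  // subtract 26 from every "lane"...
    if b > b'Z' { sub } else { b } // ...but select it only where b > b'Z'
}

fn main() {
    assert_eq!(250u8.wrapping_add(13), 7); // 263 wraps past 255 down to 7
    assert_eq!(select_sub26(93), b'C');    // 93 is past b'Z' (90): 93 - 26 = 67
    assert_eq!(select_sub26(b'H'), b'H');  // at or below b'Z': unchanged
    println!("wrapping and select demo ok");
}
```

In the scalar sketch the selection is a branch; the SIMD Mask::select does the same thing branchlessly across all lanes.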
The program ends like so:

let output = String::from_utf8_lossy(data.as_array());
assert_eq!(output, "HELLOWORLDIDOHOPEITSALLGOINGWELL");
println!("{}", output);

Notice the .as_array() method. It safely transmutes a Simd struct into a regular Rust array without copying.

Surprisingly to me, this program runs fine on computers without SIMD extensions. Rust nightly compiles the code to regular (non-SIMD) instructions. But we don’t just want to run “fine”, we want to run faster. That requires us to turn on our computer’s SIMD power.

Rule 2: CCC: Check, Control, and Choose your computer’s SIMD capabilities.

To make SIMD programs run faster on your machine, you must first discover which SIMD extensions your machine supports. If you have an Intel/AMD machine, you can use my simd-detect cargo command.

Run with:

rustup override set nightly
cargo install cargo-simd-detect --force
cargo simd-detect

On my machine, it outputs:

extension       width                   available       enabled
sse2            128-bit/16-bytes        true            true
avx2            256-bit/32-bytes        true            false
avx512f         512-bit/64-bytes        true            false

This says that my machine supports the sse2, avx2, and avx512f SIMD extensions. Of those, by default, Rust enables the ubiquitous twenty-year-old sse2 extension.

The SIMD extensions form a hierarchy with avx512f above avx2 above sse2. Enabling a higher-level extension also enables the lower-level extensions.

Most Intel/AMD computers also support the ten-year-old avx2 extension. You enable it by setting an environment variable:

# For Windows Command Prompt
set RUSTFLAGS=-C target-feature=+avx2

# For Unix-like shells (like Bash)
export RUSTFLAGS="-C target-feature=+avx2"

“Force install” and run simd-detect again and you should see that avx2 is enabled.
# Force install every time to see changes to 'enabled'
cargo install cargo-simd-detect --force
cargo simd-detect
extension       width                   available       enabled
sse2            128-bit/16-bytes        true            true
avx2            256-bit/32-bytes        true            true
avx512f         512-bit/64-bytes        true            false

Alternatively, you can turn on every SIMD extension that your machine supports:

# For Windows Command Prompt
set RUSTFLAGS=-C target-cpu=native

# For Unix-like shells (like Bash)
export RUSTFLAGS="-C target-cpu=native"

On my machine this enables avx512f, a newer SIMD extension supported by some Intel computers and a few AMD computers.

You can set SIMD extensions back to their default (sse2 on Intel/AMD) with:

# For Windows Command Prompt
set RUSTFLAGS=

# For Unix-like shells (like Bash)
unset RUSTFLAGS

You may wonder why target-cpu=native isn’t Rust’s default. The problem is that binaries created using avx2 or avx512f won’t run on computers missing those SIMD extensions. So, if you are compiling only for your own use, use target-cpu=native. If, however, you are compiling for others, choose your SIMD extensions thoughtfully and let people know which SIMD extension level you are assuming.

Happily, whatever level of SIMD extension you pick, Rust’s SIMD support is so flexible you can easily change your decision later. Let’s next learn details of programming with SIMD in Rust.

Rule 3: Learn core::simd, but selectively.

To build with Rust’s new core::simd module you should learn selected building blocks. Here is a cheatsheet with the structs, methods, etc., that I’ve found most useful. Each item includes a link to its documentation.

Structs

  • Simd – a special, aligned, fixed-length array of SimdElement. We refer to a position in the array and the element stored at that position as a “lane”. By default, we copy Simd structs rather than reference them.
  • Mask – a special Boolean array showing inclusion/exclusion on a per-lane basis.
SimdElements

  • Floating-point types: f32, f64
  • Integer types: i8, u8, i16, u16, i32, u32, i64, u64, isize, usize — but not i128, u128

Simd constructors

  • Simd::from_array – creates a Simd struct by copying a fixed-length array.
  • Simd::from_slice – creates a Simd struct by copying the first LANES elements of a slice.
  • Simd::splat – replicates a single value across all lanes of a Simd struct.
  • slice::as_simd – without copying, safely transmutes a regular slice into an aligned slice of Simd (plus unaligned leftovers).

Simd conversion

  • Simd::as_array – without copying, safely transmutes a Simd struct into a regular array reference.

Simd methods and operators

  • simd[i] – extracts a value from a lane of a Simd.
  • simd + simd – performs element-wise addition of two Simd structs. Also supported: -, *, /, % (remainder), bitwise-and, -or, -xor, -not, and shifts.
  • simd += simd – adds another Simd struct to the current one, in place. The other operators are supported, too.
  • Simd::simd_gt – compares two Simd structs, returning a Mask indicating which elements of the first are greater than those of the second. Also supported: simd_lt, simd_le, simd_ge, simd_eq, simd_ne.
  • Simd::rotate_elements_left – rotates the elements of a Simd struct to the left by a specified amount. Also, rotate_elements_right.
  • simd_swizzle!(simd, indexes) – rearranges the elements of a Simd struct based on the specified const indexes.
  • simd == simd – checks for equality between two Simd structs, returning a regular bool result.
  • Simd::reduce_and – performs a bitwise AND reduction across all lanes of a Simd struct. Also supported: reduce_or, reduce_xor, reduce_max, reduce_min, reduce_sum (but no reduce_eq).

Mask methods and operators

  • Mask::select – selects elements from two Simd structs based on a mask.
  • Mask::all – tells if the mask is all true.
  • Mask::any – tells if the mask contains any true.

All about lanes

  • Simd::LANES – a constant indicating the number of elements (lanes) in a Simd struct.
  • SupportedLaneCount – tells the allowed values of LANES. Used by generics.
  • simd.lanes() – const method that tells a Simd struct’s number of lanes.

Low-level alignment, offsets, etc.: when possible, use as_simd instead.

More, perhaps of interest.

With these building blocks at hand, it’s time to build something.

Rule 4: Brainstorm candidate algorithms.

What do you want to speed up? You won’t know ahead of time which SIMD approach (if any) will work best. You should, therefore, create many algorithms that you can then analyze (Rule 5) and benchmark (Rule 7).

I wanted to speed up range-set-blaze, a crate for manipulating sets of “clumpy” integers. I hoped that creating is_consecutive, a function to detect blocks of consecutive integers, would be useful.

Background: Crate range-set-blaze works on “clumpy” integers. “Clumpy”, here, means that the number of ranges needed to represent the data is small compared to the number of input integers. For example, these 1002 input integers

100, 101, …, 498, 499, 501, 502, …, 998, 999, 999, 100, 0

ultimately become three Rust ranges:

0..=0, 100..=499, 501..=999.

(Internally, the RangeSetBlaze struct represents a set of integers as a sorted list of disjoint ranges stored in a cache-efficient BTreeMap.)

Although the input integers are allowed to be unsorted and redundant, we expect them to often be “nice”. RangeSetBlaze’s from_iter constructor already exploits this expectation by grouping up adjacent integers. For example, from_iter first turns the 1002 input integers into four ranges

100..=499, 501..=999, 100..=100, 0..=0.

with minimal, constant memory usage, independent of input size. It then sorts and merges these reduced ranges.

I wondered if a new from_slice method could speed construction from array-like inputs by quickly finding (some) consecutive integers. For example, could it — with minimal, constant memory — turn the 1002 input integers into five Rust ranges:

100..=499, 501..=999, 999..=999, 100..=100, 0..=0.
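The adjacent-grouping step can be sketched in scalar, stable Rust. This is my sketch of the idea, not range-set-blaze’s actual implementation, and the function name group_adjacent is mine:

```rust
// Group a stream of integers into ranges of adjacent values, in one pass,
// with memory proportional to the number of ranges rather than the inputs.
fn group_adjacent(input: impl IntoIterator<Item = u32>) -> Vec<(u32, u32)> {
    let mut ranges: Vec<(u32, u32)> = Vec::new();
    for x in input {
        match ranges.last_mut() {
            // redundant value already inside the latest range: absorb it
            Some((start, end)) if *start <= x && x <= *end => {}
            // exactly one past the latest range: extend the clump
            Some((_, end)) if end.checked_add(1) == Some(x) => *end = x,
            // anything else starts a new clump
            _ => ranges.push((x, x)),
        }
    }
    ranges
}

fn main() {
    // The article's example input: runs 100..=499 and 501..=999,
    // then the stragglers 999, 100, 0.
    let input = (100..=499).chain(501..=999).chain([999, 100, 0]);
    let ranges = group_adjacent(input);
    // Four reduced ranges, as described for from_iter above.
    assert_eq!(ranges, [(100, 499), (501, 999), (100, 100), (0, 0)]);
    println!("{:?}", ranges);
}
```

The reduced ranges can then be sorted and merged cheaply, since there are few of them.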
If so, from_iter could then quickly finish the processing. Let’s start by writing is_consecutive with regular Rust:

pub const LANES: usize = 16;

pub fn is_consecutive_regular(chunk: &[u32; LANES]) -> bool {
    for i in 1..LANES {
        if chunk[i - 1].checked_add(1) != Some(chunk[i]) {
            return false;
        }
    }
    true
}

The algorithm just loops through the array sequentially, checking that each value is one more than its predecessor. It also avoids overflow. Looping over the items seemed so easy, I wasn’t sure if SIMD could do any better. Here was my first attempt:

Splat0

use std::simd::prelude::*;

const COMPARISON_VALUE_SPLAT0: Simd<u32, LANES> =
    Simd::from_array([15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]);

pub fn is_consecutive_splat0(chunk: Simd<u32, LANES>) -> bool {
    if chunk[0].overflowing_add(LANES as u32 - 1) != (chunk[LANES - 1], false) {
        return false;
    }
    let added = chunk + COMPARISON_VALUE_SPLAT0;
    Simd::splat(added[0]) == added
}

Here is an outline of its calculations:

[Figure outlining Splat0’s calculations. Source: This and all following images by author.]

It first (needlessly) checks that the first and last items are 15 apart. It then creates added by adding 15 to the 0th item, 14 to the next, etc. Finally, to see if all items in added are the same, it creates a new Simd based on added’s 0th item and then compares. Recall that splat creates a Simd struct from one value.

Splat1 & Splat2

When I mentioned the is_consecutive problem to Ben Lichtman, he independently came up with this, Splat1:

const COMPARISON_VALUE_SPLAT1: Simd<u32, LANES> =
    Simd::from_array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]);

pub fn is_consecutive_splat1(chunk: Simd<u32, LANES>) -> bool {
    let subtracted = chunk - COMPARISON_VALUE_SPLAT1;
    Simd::splat(chunk[0]) == subtracted
}

Splat1 subtracts the comparison value from chunk and checks if the result is the same as the first element of chunk, splatted. He also came up with a variation called Splat2 that splats the first element of subtracted rather than chunk. That would seemingly avoid one memory access.
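To convince yourself that Splat1’s algebra is right, you can emulate it lane by lane in stable scalar Rust. This is my sketch; the real code above operates on Simd structs:

```rust
// Scalar emulation of Splat1: the chunk is consecutive exactly when
// chunk[i] - i equals chunk[0] for every lane, using wrapping subtraction
// to match SIMD's always-wrapping arithmetic.
const LANES: usize = 16;

fn is_consecutive_splat1_scalar(chunk: &[u32; LANES]) -> bool {
    let first = chunk[0];
    (0..LANES).all(|i| chunk[i].wrapping_sub(i as u32) == first)
}

fn main() {
    let consecutive: [u32; LANES] = std::array::from_fn(|i| 100 + i as u32);
    let not_consecutive = [99u32; LANES];
    assert!(is_consecutive_splat1_scalar(&consecutive));
    assert!(!is_consecutive_splat1_scalar(&not_consecutive));
    println!("Splat1's logic checks out");
}
```

Note one subtlety the wrapping arithmetic introduces: unlike is_consecutive_regular, a sequence that wraps past u32::MAX (for example, ending …MAX, 0, 1) would also count as consecutive here.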
I’m sure you are wondering which of these is best, but before we discuss that let’s look at two more candidates.

Swizzle

Swizzle is like Splat2 but uses simd_swizzle! instead of splat. Macro simd_swizzle! creates a new Simd by rearranging the lanes of an old Simd according to an array of indexes.

pub fn is_consecutive_sizzle(chunk: Simd<u32, LANES>) -> bool {
    let subtracted = chunk - COMPARISON_VALUE_SPLAT1;
    simd_swizzle!(subtracted, [0; LANES]) == subtracted
}

Rotate

This one is different. I had high hopes for it.

const COMPARISON_VALUE_ROTATE: Simd<u32, LANES> =
    Simd::from_array([4294967281, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]);

pub fn is_consecutive_rotate(chunk: Simd<u32, LANES>) -> bool {
    let rotated = chunk.rotate_elements_right::<1>();
    chunk - rotated == COMPARISON_VALUE_ROTATE
}

The idea is to rotate all the elements one to the right. We then subtract rotated from the original chunk. If the input is consecutive, the result should be “-15” followed by all 1’s. (Using wrapped subtraction, -15 is 4294967281u32.)

Now that we have candidates, let’s start to evaluate them.

Rule 5: Use Godbolt and AI to understand your code’s assembly, even if you don’t know assembly language.

We’ll evaluate the candidates in two ways. First, in this rule, we’ll look at the assembly language generated from our code. Second, in Rule 7, we’ll benchmark the code’s speed.

Don’t worry if you don’t know assembly language; you can still get something out of looking at it. The easiest way to see the generated assembly language is with the Compiler Explorer, AKA Godbolt. It works best on short bits of code that don’t use outside crates. It looks like this:

[Figure: screenshot of the Godbolt Compiler Explorer.]

Referring to the numbers in the figure above, follow these steps to use Godbolt:

  1. Open godbolt.org with your web browser.
  2. Add a new source editor.
  3. Select Rust as your language.
  4. Paste in the code of interest. Make the functions of interest public (pub fn). Do not include a main or unneeded functions. The tool doesn’t support external crates.
  5. Add a new compiler.
  6. Set the compiler version to nightly.
  7. Set options (for now) to -C opt-level=3 -C target-feature=+avx512f.
  8. If there are errors, look at the output.
  9. If you want to share or save the state of the tool, click “Share”.

From the image above, you can see that Splat2 and Sizzle are exactly the same, so we can remove Sizzle from consideration. If you open up a copy of my Godbolt session, you’ll also see that most of the functions compile to about the same number of assembly operations. The exceptions are Regular — which is much longer — and Splat0 — which includes the early check.

In the assembly, 512-bit registers start with ZMM. 256-bit registers start with YMM. 128-bit registers start with XMM.

If you want to better understand the generated assembly, use AI tools to generate annotations. For example, here I ask Bing Chat about Splat2:

[Figure: Bing Chat annotating Splat2’s assembly.]

Try different compiler settings, including -C target-feature=+avx2 and then leaving target-feature completely off.

Fewer assembly operations don’t necessarily mean faster speed. Looking at the assembly does, however, give us a sanity check that the compiler is at least trying to use SIMD operations, inlining const references, etc. Also, as with Splat2 and Sizzle, it can sometimes let us know when two candidates are the same.

You may need disassembly features beyond what Godbolt offers, for example, the ability to work with code that uses external crates. B3NNY recommended the cargo tool cargo-show-asm to me. I tried it and found it reasonably easy to use.

The range-set-blaze crate must handle integer types beyond u32. Moreover, we must pick a number of LANES, but we have no reason to think that 16 LANES is always best. To address these needs, in the next rule we’ll generalize the code.

Rule 6: Generalize to all types and LANES with in-lined generics, (and when that doesn’t work) macros, and (when that doesn’t work) traits.

Let’s first generalize Splat1 with generics.
#[inline]
pub fn is_consecutive_splat1_gen<T, const N: usize>(
    chunk: Simd<T, N>,
    comparison_value: Simd<T, N>,
) -> bool
where
    T: SimdElement + PartialEq,
    Simd<T, N>: Sub<Output = Simd<T, N>>,
    LaneCount<N>: SupportedLaneCount,
{
    let subtracted = chunk - comparison_value;
    Simd::splat(chunk[0]) == subtracted
}

First, note the #[inline] attribute. It’s important for efficiency and we’ll use it on pretty much every one of these small functions.

The function defined above, is_consecutive_splat1_gen, looks great except that it needs a second input, called comparison_value, that we have yet to define.

If you don’t need a generic const comparison_value, I envy you. You can skip to the next rule if you like. Likewise, if you are reading this in the future and creating a generic const comparison_value is as effortless as having your personal robot do your household chores, then I doubly envy you.

We can try to create a comparison_value_splat_gen that is generic and const. Sadly, neither From nor the alternative T::One are const, so this doesn’t work:

// DOESN'T WORK BECAUSE From is not const
pub const fn comparison_value_splat_gen<T, const N: usize>() -> Simd<T, N>
where
    T: SimdElement + Default + From<usize> + AddAssign,
    LaneCount<N>: SupportedLaneCount,
{
    let mut arr: [T; N] = [T::from(0usize); N];
    let mut i_usize = 0;
    while i_usize < N {
        arr[i_usize] = T::from(i_usize);
        i_usize += 1;
    }
    Simd::from_array(arr)
}

Macros are the workaround. This pair of macros generates, for any concrete element type, a concrete function and a const comparison value:

#[macro_export]
macro_rules! define_is_consecutive_splat1 {
    ($function:ident, $type:ty) => {
        #[inline]
        pub fn $function<const N: usize>(chunk: Simd<$type, N>) -> bool
        where
            LaneCount<N>: SupportedLaneCount,
        {
            define_comparison_value_splat!(comparison_value_splat, $type);

            let subtracted = chunk - comparison_value_splat();
            Simd::splat(chunk[0]) == subtracted
        }
    };
}

#[macro_export]
macro_rules! define_comparison_value_splat {
    ($function:ident, $type:ty) => {
        pub const fn $function<const N: usize>() -> Simd<$type, N>
        where
            LaneCount<N>: SupportedLaneCount,
        {
            let mut arr: [$type; N] = [0; N];
            let mut i = 0;
            while i < N {
                arr[i] = i as $type;
                i += 1;
            }
            Simd::from_array(arr)
        }
    };
}

To offer a single generic entry point across element types, we define a trait and implement it for each type with one more macro:

pub trait IsConsecutive {
    fn is_consecutive<const N: usize>(chunk: Simd<Self, N>) -> bool
    where
        Self: SimdElement,
        Simd<Self, N>: Sub<Output = Simd<Self, N>>,
        LaneCount<N>: SupportedLaneCount;
}

macro_rules! impl_is_consecutive {
    ($type:ty) => {
        impl IsConsecutive for $type {
            #[inline] // very important
            fn is_consecutive<const N: usize>(chunk: Simd<Self, N>) -> bool
            where
                Self: SimdElement,
                Simd<Self, N>: Sub<Output = Simd<Self, N>>,
                LaneCount<N>: SupportedLaneCount,
            {
                define_is_consecutive_splat1!(is_consecutive_splat1, $type);
                is_consecutive_splat1(chunk)
            }
        }
    };
}

impl_is_consecutive!(i8);
impl_is_consecutive!(i16);
impl_is_consecutive!(i32);
impl_is_consecutive!(i64);
impl_is_consecutive!(isize);
impl_is_consecutive!(u8);
impl_is_consecutive!(u16);
impl_is_consecutive!(u32);
impl_is_consecutive!(u64);
impl_is_consecutive!(usize);

We can now call fully generic code (Rust Playground):

// Works on i32 and 16 lanes
let a: Simd<i32, 16> = black_box(Simd::from_array(array::from_fn(|i| 100 + i as i32)));
let ninety_nines: Simd<i32, 16> = black_box(Simd::from_array([99; 16]));
assert!(IsConsecutive::is_consecutive(a));
assert!(!IsConsecutive::is_consecutive(ninety_nines));

// Works on i8 and 64 lanes
let a: Simd<i8, 64> = black_box(Simd::from_array(array::from_fn(|i| 10 + i as i8)));
let ninety_nines: Simd<i8, 64> = black_box(Simd::from_array([99; 64]));
assert!(IsConsecutive::is_consecutive(a));
assert!(!IsConsecutive::is_consecutive(ninety_nines));

With this technique, we can create multiple candidate algorithms that are fully generic over type and LANES. Next, it is time to benchmark and see which algorithms are fastest.

Those are the first six rules for adding SIMD code to Rust. In Part 2, we look at rules 7 to 9. These rules will cover how to pick an algorithm and set LANES. Also, how to integrate SIMD operations into your existing code and (importantly) how to make it optional. Part 2 concludes with a discussion of when/if you should use SIMD and ideas for improving Rust’s SIMD experience. I hope to see you there.

Please follow Carl on Medium. I write on scientific programming in Rust and Python, machine learning, and statistics. I tend to write about one article per month.

Thanks to Ben Lichtman (B3NNY) at the Seattle Rust Meetup for pointing me in the right direction on SIMD.

SIMD (Single Instruction, Multiple Data) operations have been a feature of Intel/AMD and ARM CPUs since the early 2000s. These operations enable you to, for example, add an array of eight i32 to another array of eight i32 with just one CPU operation on a single core. Using SIMD operations greatly speeds up certain tasks. If you’re not using SIMD, you may not be fully using your CPU’s capabilities.

Is this “Yet Another Rust and SIMD” article? Yes and no. Yes, I did apply SIMD to a programming problem and then feel compelled to write an article about it. No, I hope that this article also goes into enough depth that it can guide you through your project. It explains the newly available SIMD capabilities and settings in Rust nightly. It includes a Rust SIMD cheatsheet. It shows how to make your SIMD code generic without leaving safe Rust. It gets you started with tools such as Godbolt and Criterion. Finally, it introduces new cargo commands that make the process easier.


The range-set-blaze crate uses its RangeSetBlaze::from_iter method to ingest potentially long sequences of integers. When the integers are “clumpy”, it can do this 30 times faster than Rust’s standard HashSet::from_iter. Can we do even better if we use Simd operations? Yes!

See this documentation for the definition of “clumpy”. Also, what happens if the integers are not clumpy? RangeSetBlaze is 2 to 3 times slower than HashSet.

On clumpy integers, RangeSetBlaze::from_slice — a new method based on SIMD operations — is 7 times faster than RangeSetBlaze::from_iter. That makes it more than 200 times faster than HashSet::from_iter. (When the integers are not clumpy, it is still 2 to 3 times slower than HashSet.)

Over the course of implementing this speed up, I learned nine rules that can help you accelerate your projects with SIMD operations.

The rules are:

  1. Use nightly Rust and core::simd, Rust’s experimental standard SIMD module.
  2. CCC: Check, Control, and Choose your computer’s SIMD capabilities.
  3. Learn core::simd, but selectively.
  4. Brainstorm candidate algorithms.
  5. Use Godbolt and AI to understand your code’s assembly, even if you don’t know assembly language.
  6. Generalize to all types and LANES with in-lined generics, (and when that doesn’t work) macros, and (when that doesn’t work) traits.

See Part 2 for these rules:

7. Use Criterion benchmarking to pick an algorithm and to discover that LANES should (almost) always be 32 or 64.

8. Integrate your best SIMD algorithm into your project with as_simd, special code for i128/u128, and additional in-context benchmarking.

9. Extricate your best SIMD algorithm from your project (for now) with an optional cargo feature.

Aside: To avoid wishy-washiness, I call these “rules”, but they are, of course, just suggestions.

Rule 1: Use nightly Rust and core::simd, Rust’s experimental standard SIMD module.

Rust can access SIMD operations either via the stable core::arch module or via nighty’s core::simd module. Let’s compare them:

core::arch

core::simd

  • Nightly
  • Delightfully easy and portable.
  • Limits downstream users to nightly.

I decided to go with “easy”. If you decide to take the harder road, starting first with the easier path may still be worthwhile.


In either case, before we try to use SIMD operations in a larger project, let’s make sure we can get them working at all. Here are the steps:

First, create a project called simd_hello:

cargo new simd_hello
cd simd_hello

Edit src/main.rs to contain (Rust playground):

// Tell nightly Rust to enable 'portable_simd'
#![feature(portable_simd)]
use core::simd::prelude::*;

// constant Simd structs
const LANES: usize = 32;
const THIRTEENS: Simd = Simd::::from_array([13; LANES]);
const TWENTYSIXS: Simd = Simd::::from_array([26; LANES]);
const ZEES: Simd = Simd::::from_array([b'Z'; LANES]);

fn main() {
    // create a Simd struct from a slice of LANES bytes
    let mut data = Simd::::from_slice(b"URYYBJBEYQVQBUBCRVGFNYYTBVATJRYY");

    data += THIRTEENS; // add 13 to each byte

    // compare each byte to 'Z', where the byte is greater than 'Z', subtract 26
    let mask = data.simd_gt(ZEES); // compare each byte to 'Z'
    data = mask.select(data - TWENTYSIXS, data);

    let output = String::from_utf8_lossy(data.as_array());
    assert_eq!(output, "HELLOWORLDIDOHOPEITSALLGOINGWELL");
    println!("{}", output);
}

Next — full SIMD capabilities require the nightly version of Rust. Assuming you have Rust installed, install nightly (rustup install nightly). Make sure you have the latest nightly version (rustup update nightly). Finally, set this project to use nightly (rustup override set nightly).

You can now run the program with cargo run. The program applies ROT13 decryption to 32 bytes of upper-case letters. With SIMD, the program can decrypt all 32 bytes simultaneously.

Let’s look at each section of the program to see how it works. It starts with:

#![feature(portable_simd)]
use core::simd::prelude::*;

Rust nightly offers its extra capabilities (or “features”) only on request. The #![feature(portable_simd)] statement requests that Rust nightly make available the new experimental core::simd module. The use statement then imports the module’s most important types and traits.

In the code’s next section, we define useful constants:

const LANES: usize = 32;
const THIRTEENS: Simd = Simd::::from_array([13; LANES]);
const TWENTYSIXS: Simd = Simd::::from_array([26; LANES]);
const ZEES: Simd = Simd::::from_array([b'Z'; LANES]);

The Simd struct is a special kind of Rust array. (It is, for example, always memory aligned.) The constant LANES tells the length of the Simd array. The from_array constructor copies a regular Rust array to create a Simd. In this case, because we want const Simd’s, the arrays we construct from must also be const.

The next two lines copy our encrypted text into data and then adds 13 to each letter.

let mut data = Simd::::from_slice(b"URYYBJBEYQVQBUBCRVGFNYYTBVATJRYY");
data += THIRTEENS;

What if you make an error and your encrypted text isn’t exactly length LANES (32)? Sadly, the compiler won’t tell you. Instead, when you run the program, from_slice will panic. What if the encrypted text contains non-upper-case letters? In this example program, we’ll ignore that possibility.

The += operator does element-wise addition between the Simd data and Simd THIRTEENS. It puts the result in data. Recall that debug builds of regular Rust addition check for overflows. Not so with SIMD. Rust defines SIMD arithmetic operators to always wrap. Values of type u8 wrap after 255.

Coincidentally, Rot13 decryption also requires wrapping, but after ‘Z’ rather than after 255. Here is one approach to coding the needed Rot13 wrapping. It subtracts 26 from any values on beyond ‘Z’.

let mask = data.simd_gt(ZEES);
data = mask.select(data - TWENTYSIXS, data);

This says to find the element-wise places beyond ‘Z’. Then, subtract 26 from all values. At the places of interest, use the subtracted values. At the other places, use the original values. Does subtracting from all values and then using only some seem wasteful? With SIMD, this takes no extra computer time and avoids jumps. This strategy is, thus, efficient and common.

The program ends like so:

let output = String::from_utf8_lossy(data.as_array());
assert_eq!(output, "HELLOWORLDIDOHOPEITSALLGOINGWELL");
println!("{}", output);

Notice the .as_array() method. It safely transmutes a Simd struct into a regular Rust array without copying.
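For comparison, the whole pipeline can be sketched in scalar, stable Rust (uppercase ASCII input assumed; the function name is illustrative only):

```rust
// Scalar sketch of the SIMD Rot13 pipeline above:
// add 13 to each letter, then subtract 26 wherever the result passes 'Z'.
fn rot13_upper(text: &[u8]) -> String {
    let decoded: Vec<u8> = text
        .iter()
        .map(|&b| {
            let shifted = b.wrapping_add(13); // data += THIRTEENS
            // mask.select(data - TWENTYSIXS, data), one lane at a time
            if shifted > b'Z' { shifted - 26 } else { shifted }
        })
        .collect();
    String::from_utf8_lossy(&decoded).into_owned()
}
```

The SIMD version performs the same two steps, but on all 32 lanes at once.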

Surprisingly to me, this program runs fine on computers without SIMD extensions. Rust nightly compiles the code to regular (non-SIMD) instructions. But we don’t just want to run “fine”, we want to run faster. That requires us to turn on our computer’s SIMD power.

Rule 2: CCC: Check, Control, and Choose your computer’s SIMD capabilities.

To make SIMD programs run faster on your machine, you must first discover which SIMD extensions your machine supports. If you have an Intel/AMD machine, you can use my simd-detect cargo command.

Run with:

rustup override set nightly
cargo install cargo-simd-detect --force
cargo simd-detect

On my machine, it outputs:

extension       width                   available       enabled
sse2            128-bit/16-bytes        true            true
avx2            256-bit/32-bytes        true            false
avx512f         512-bit/64-bytes        true            false

This says that my machine supports the sse2, avx2, and avx512f SIMD extensions. Of those, by default, Rust enables the ubiquitous twenty-year-old sse2 extension.

The SIMD extensions form a hierarchy with avx512f above avx2 above sse2. Enabling a higher-level extension also enables the lower-level extensions.
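You can also check which features a given build enables from inside Rust with the cfg! macro. A small sketch (the function name is illustrative only); note that cfg! reflects the features enabled at compile time via RUSTFLAGS, not everything the CPU could do:

```rust
// cfg!(target_feature = "...") is evaluated at compile time, so it reports
// the features this build was compiled with, not the machine's full capability.
fn enabled_features() -> Vec<&'static str> {
    let mut features = Vec::new();
    if cfg!(target_feature = "sse2") { features.push("sse2"); }
    if cfg!(target_feature = "avx2") { features.push("avx2"); }
    if cfg!(target_feature = "avx512f") { features.push("avx512f"); }
    features
}
```

Because of the hierarchy, a build with avx2 enabled always reports sse2 as well.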

Most Intel/AMD computers also support the ten-year-old avx2 extension. You enable it by setting an environment variable:

# For Windows Command Prompt
set RUSTFLAGS=-C target-feature=+avx2

# For Unix-like shells (like Bash)
export RUSTFLAGS="-C target-feature=+avx2"

“Force install” and run simd-detect again and you should see that avx2 is enabled.

# Force install every time to see changes to 'enabled'
cargo install cargo-simd-detect --force
cargo simd-detect
extension         width                   available       enabled
sse2            128-bit/16-bytes        true            true
avx2            256-bit/32-bytes        true            true
avx512f         512-bit/64-bytes        true            false

Alternatively, you can turn on every SIMD extension that your machine supports:

# For Windows Command Prompt
set RUSTFLAGS=-C target-cpu=native

# For Unix-like shells (like Bash)
export RUSTFLAGS="-C target-cpu=native"

On my machine this enables avx512f, a newer SIMD extension supported by some Intel computers and a few AMD computers.

You can set SIMD extensions back to their default (sse2 on Intel/AMD) with:

# For Windows Command Prompt
set RUSTFLAGS=

# For Unix-like shells (like Bash)
unset RUSTFLAGS

You may wonder why target-cpu=native isn’t Rust’s default. The problem is that binaries created using avx2 or avx512f won’t run on computers missing those SIMD extensions. So, if you are compiling only for your own use, use target-cpu=native. If, however, you are compiling for others, choose your SIMD extensions thoughtfully and let people know which SIMD extension level you are assuming.

Happily, whatever level of SIMD extension you pick, Rust’s SIMD support is so flexible you can easily change your decision later. Let’s next learn details of programming with SIMD in Rust.

Rule 3: Learn core::simd, but selectively.

To build with Rust’s new core::simd module you should learn selected building blocks. Here is a cheatsheet with the structs, methods, etc., that I’ve found most useful. Each item includes a link to its documentation.

Structs

  • Simd – a special, aligned, fixed-length array of SimdElement. We refer to a position in the array and the element stored at that position as a “lane”. By default, we copy Simd structs rather than reference them.
  • Mask – a special Boolean array showing inclusion/exclusion on a per-lane basis.

SimdElements

  • Floating-Point Types: f32, f64
  • Integer Types: i8, u8, i16, u16, i32, u32, i64, u64, isize, usize — but not i128, u128

Simd constructors

  • Simd::from_array – creates a Simd struct by copying a fixed-length array.
  • Simd::from_slice – creates a Simd struct by copying the first LANES elements of a slice.
  • Simd::splat – replicates a single value across all lanes of a Simd struct.
  • slice::as_simd – without copying, safely transmutes a regular slice into an aligned slice of Simd (plus unaligned leftovers).

Simd conversion

  • Simd::as_array – without copying, safely transmutes a Simd struct into a regular array reference.

Simd methods and operators

  • simd[i] – extract a value from a lane of a Simd.
  • simd + simd – performs element-wise addition of two Simd structs. Also supported: -, *, /, % (remainder), bitwise-and, -or, -xor, -not, -shift.
  • simd += simd – adds another Simd struct to the current one, in place. Other operators supported, too.
  • Simd::simd_gt – compares two Simd structs, returning a Mask indicating which elements of the first are greater than those of the second. Also supported: simd_lt, simd_le, simd_ge, simd_eq, simd_ne.
  • Simd::rotate_elements_left – rotates the elements of a Simd struct to the left by a specified amount. Also, rotate_elements_right.
  • simd_swizzle!(simd, indexes) – rearranges the elements of a Simd struct based on the specified const indexes.
  • simd == simd – checks for equality between two Simd structs, returning a regular bool result.
  • Simd::reduce_and – performs a bitwise AND reduction across all lanes of a Simd struct. Also supported: reduce_or, reduce_xor, reduce_max, reduce_min, reduce_sum (but no reduce_eq).

Mask methods and operators

  • Mask::select – selects elements from two Simd structs based on a mask.
  • Mask::all – tells if the mask is all true.
  • Mask::any – tells if the mask contains any true.

All about lanes

  • Simd::LANES – a constant indicating the number of elements (lanes) in a Simd struct.
  • SupportedLaneCount – tells the allowed values of LANES. Used by generics.
  • simd.lanes() – const method that tells a Simd struct’s number of lanes.

Low-level alignment, offsets, etc.

When possible, use as_simd instead.

More, perhaps of interest

With these building blocks at hand, it’s time to build something.

Rule 4: Brainstorm candidate algorithms.

What do you want to speed up? You won’t know ahead of time which SIMD approach (if any) will work best. You should, therefore, create many algorithms that you can then analyze (Rule 5) and benchmark (Rule 7).

I wanted to speed up range-set-blaze, a crate for manipulating sets of “clumpy” integers. I hoped that creating is_consecutive, a function to detect blocks of consecutive integers, would be useful.

Background: Crate range-set-blaze works on “clumpy” integers. “Clumpy”, here, means that the number of ranges needed to represent the data is small compared to the number of input integers. For example, these 1002 input integers

100, 101, …, 498, 499, 501, 502, …, 998, 999, 999, 100, 0

Ultimately become three Rust ranges:

0..=0, 100..=499, 501..=999.

(Internally, the RangeSetBlaze struct represents a set of integers as a sorted list of disjoint ranges stored in a cache efficient BTreeMap.)

Although the input integers are allowed to be unsorted and redundant, we expect them to often be “nice”. RangeSetBlaze’s from_iter constructor already exploits this expectation by grouping up adjacent integers. For example, from_iter first turns the 1002 input integers into four ranges

100..=499, 501..=999, 100..=100, 0..=0.

with minimal, constant memory usage, independent of input size. It then sorts and merges these reduced ranges.
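The grouping step can be sketched in a few lines of stable Rust (a hypothetical helper, not range-set-blaze’s actual code): each input integer either extends the most recent range by one or starts a new singleton range.

```rust
// Streaming "group adjacent integers" sketch: one pass, and the output
// grows only when a new range starts, matching the constant-overhead idea.
fn group_adjacent(iter: impl IntoIterator<Item = u32>) -> Vec<(u32, u32)> {
    let mut ranges: Vec<(u32, u32)> = Vec::new();
    for x in iter {
        match ranges.last_mut() {
            // x continues the current range (checked_add avoids overflow at u32::MAX)
            Some((_, end)) if end.checked_add(1) == Some(x) => *end = x,
            // otherwise, start a new singleton range
            _ => ranges.push((x, x)),
        }
    }
    ranges
}
```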

I wondered if a new from_slice method could speed construction from array-like inputs by quickly finding (some) consecutive integers. For example, could it — with minimal, constant memory — turn the 1002 input integers into five Rust ranges:

100..=499, 501..=999, 999..=999, 100..=100, 0..=0.

If so, from_iter could then quickly finish the processing.

Let’s start by writing is_consecutive with regular Rust:

pub const LANES: usize = 16;
pub fn is_consecutive_regular(chunk: &[u32; LANES]) -> bool {
    for i in 1..LANES {
        if chunk[i - 1].checked_add(1) != Some(chunk[i]) {
            return false;
        }
    }
    true
}

The algorithm just loops through the array sequentially, checking that each value is one more than its predecessor. It also avoids overflow.
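A couple of quick checks, using an equivalent slice-based formulation (windows is a convenient stand-in for the indexed loop; the function name is illustrative only):

```rust
// Equivalent check over any slice length: windows(2) walks overlapping
// pairs just as the indexed loop does, and checked_add rejects
// u32::MAX + 1 instead of overflowing.
pub fn is_consecutive_any(chunk: &[u32]) -> bool {
    chunk
        .windows(2)
        .all(|w| w[0].checked_add(1) == Some(w[1]))
}
```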

Looping over the items seemed so easy, I wasn’t sure if SIMD could do any better. Here was my first attempt:

Splat0

use std::simd::prelude::*;

const COMPARISON_VALUE_SPLAT0: Simd<u32, LANES> =
    Simd::from_array([15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]);

pub fn is_consecutive_splat0(chunk: Simd<u32, LANES>) -> bool {
    if chunk[0].overflowing_add(LANES as u32 - 1) != (chunk[LANES - 1], false) {
        return false;
    }
    let added = chunk + COMPARISON_VALUE_SPLAT0;
    Simd::splat(added[0]) == added
}

Here is an outline of its calculations:

Source: This and all following images by author.

It first (needlessly) checks that the first and last items are 15 apart. It then creates added by adding 15 to the 0th item, 14 to the next, etc. Finally, to see if all items in added are the same, it creates a new Simd based on added’s 0th item and then compares. Recall that splat creates a Simd struct from one value.

Splat1 & Splat2

When I mentioned the is_consecutive problem to Ben Lichtman, he independently came up with this, Splat1:

const COMPARISON_VALUE_SPLAT1: Simd<u32, LANES> =
    Simd::from_array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]);

pub fn is_consecutive_splat1(chunk: Simd<u32, LANES>) -> bool {
    let subtracted = chunk - COMPARISON_VALUE_SPLAT1;
    Simd::splat(chunk[0]) == subtracted
}

Splat1 subtracts the comparison value from chunk and checks if the result is the same as the first element of chunk, splatted.
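A scalar walk-through of the Splat1 idea, shrunk to four “lanes” (a sketch in stable Rust, not the SIMD code itself):

```rust
// Splat1, one lane at a time: subtract 0,1,2,3 lane-wise, then test
// whether every lane equals splat(chunk[0]).
fn is_consecutive_splat1_scalar(chunk: [u32; 4]) -> bool {
    let comparison_value = [0u32, 1, 2, 3];
    let mut subtracted = [0u32; 4];
    for i in 0..4 {
        subtracted[i] = chunk[i].wrapping_sub(comparison_value[i]);
    }
    subtracted == [chunk[0]; 4] // Simd::splat(chunk[0]) == subtracted
}
```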

He also came up with a variation called Splat2 that splats the first element of subtracted rather than chunk. That would seemingly avoid one memory access.

I’m sure you are wondering which of these is best, but before we discuss that let’s look at two more candidates.

Swizzle

Swizzle is like Splat2 but uses simd_swizzle! instead of splat. Macro simd_swizzle! creates a new Simd by rearranging the lanes of an old Simd according to an array of indexes.

pub fn is_consecutive_sizzle(chunk: Simd<u32, LANES>) -> bool {
    let subtracted = chunk - COMPARISON_VALUE_SPLAT1;
    simd_swizzle!(subtracted, [0; LANES]) == subtracted
}

Rotate

This one is different. I had high hopes for it.

const COMPARISON_VALUE_ROTATE: Simd<u32, LANES> =
    Simd::from_array([4294967281, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]);

pub fn is_consecutive_rotate(chunk: Simd<u32, LANES>) -> bool {
    let rotated = chunk.rotate_elements_right::<1>();
    chunk - rotated == COMPARISON_VALUE_ROTATE
}

The idea is to rotate all the elements one to the right. We then subtract the original chunk from rotated. If the input is consecutive, the result should be “-15” followed by all 1’s. (Using wrapped subtraction, -15 is 4294967281u32.)
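A scalar walk-through on four “lanes” (so the expected difference is “-3” followed by 1s; a sketch in stable Rust, not the SIMD code itself):

```rust
// Rotate, one lane at a time: rotate right by one, subtract with wrapping,
// and expect "-3" (wrapped) followed by all 1s when the chunk is consecutive.
fn is_consecutive_rotate_scalar(chunk: [u32; 4]) -> bool {
    let mut rotated = chunk;
    rotated.rotate_right(1); // [a, b, c, d] -> [d, a, b, c]
    let mut diff = [0u32; 4];
    for i in 0..4 {
        diff[i] = chunk[i].wrapping_sub(rotated[i]);
    }
    diff == [0u32.wrapping_sub(3), 1, 1, 1] // "-3" as a wrapped u32, then 1s
}
```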

Now that we have candidates, let’s start to evaluate them.

Rule 5: Use Godbolt and AI to understand your code’s assembly, even if you don’t know assembly language.

We’ll evaluate the candidates in two ways. First, in this rule, we’ll look at the assembly language generated from our code. Second, in Rule 7, we’ll benchmark the code’s speed.

Don’t worry if you don’t know assembly language, you can still get something out of looking at it.

The easiest way to see the generated assembly language is with the Compiler Explorer, AKA Godbolt. It works best on short bits of code that don’t use outside crates. It looks like this:

Referring to the numbers in the figure above, follow these steps to use Godbolt:

  1. Open godbolt.org with your web browser.
  2. Add a new source editor.
  3. Select Rust as your language.
  4. Paste in the code of interest. Make the functions of interest public (pub fn). Do not include a main or unneeded functions. The tool doesn’t support external crates.
  5. Add a new compiler.
  6. Set the compiler version to nightly.
  7. Set options (for now) to -C opt-level=3 -C target-feature=+avx512f.
  8. If there are errors, look at the output.
  9. If you want to share or save the state of the tool, click “Share”

From the image above, you can see that Splat2 and Sizzle are exactly the same, so we can remove Sizzle from consideration. If you open up a copy of my Godbolt session, you’ll also see that most of the functions compile to about the same number of assembly operations. The exceptions are Regular — which is much longer — and Splat0 — which includes the early check.

In the assembly, 512-bit registers start with ZMM. 256-bit registers start YMM. 128-bit registers start with XMM. If you want to better understand the generated assembly, use AI tools to generate annotations. For example, here I ask Bing Chat about Splat2:

Try different compiler settings, including -C target-feature=+avx2 and then leaving target-feature completely off.

Fewer assembly operations don’t necessarily mean faster speed. Looking at the assembly does, however, give us a sanity check that the compiler is at least trying to use SIMD operations, inlining const references, etc. Also, as with Splat2 and Sizzle, it can sometimes let us know when two candidates are the same.

You may need disassembly features beyond what Godbolt offers, for example, the ability to work with code that uses external crates. B3NNY recommended the cargo tool cargo-show-asm to me. I tried it and found it reasonably easy to use.

The range-set-blaze crate must handle integer types beyond u32. Moreover, we must pick a number of LANES, but we have no reason to think that 16 LANES is always best. To address these needs, in the next rule we’ll generalize the code.

Rule 6: Generalize to all types and LANES with in-lined generics, (and when that doesn’t work) macros, and (when that doesn’t work) traits.

Let’s first generalize Splat1 with generics.

#[inline]
pub fn is_consecutive_splat1_gen<T, const N: usize>(
    chunk: Simd<T, N>,
    comparison_value: Simd<T, N>,
) -> bool
where
    T: SimdElement + PartialEq,
    Simd<T, N>: Sub<Simd<T, N>, Output = Simd<T, N>>,
    LaneCount<N>: SupportedLaneCount,
{
    let subtracted = chunk - comparison_value;
    Simd::splat(chunk[0]) == subtracted
}

First, note the #[inline] attribute. It’s important for efficiency and we’ll use it on pretty much every one of these small functions.

The function defined above, is_consecutive_splat1_gen, looks great except that it needs a second input, called comparison_value, that we have yet to define.

If you don’t need a generic const comparison_value, I envy you. You can skip to the next rule if you like. Likewise, if you are reading this in the future and creating a generic const comparison_value is as effortless as having your personal robot do your household chores, then I doubly envy you.

We can try to create a comparison_value_splat_gen that is generic and const. Sadly, neither From<usize> nor the alternative T::one() is const, so this doesn’t work:

// DOESN'T WORK BECAUSE From is not const
pub const fn comparison_value_splat_gen<T, const N: usize>() -> Simd<T, N>
where
    T: SimdElement + Default + From<usize> + AddAssign,
    LaneCount<N>: SupportedLaneCount,
{
    let mut arr: [T; N] = [T::from(0usize); N];
    let mut i_usize = 0;
    while i_usize < N {
        arr[i_usize] = T::from(i_usize);
        i_usize += 1;
    }
    Simd::from_array(arr)
}
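The obstacle is the generic T. For a concrete element type, the same while-loop is const-compatible on stable Rust, which is exactly what a per-type macro can stamp out. A sketch for u32 (returning a plain array, since Simd itself is nightly-only; names are illustrative):

```rust
// For a concrete type, while-loops are allowed in const fn even though
// for-loops and From::from are not.
const fn comparison_value_splat_u32<const N: usize>() -> [u32; N] {
    let mut arr = [0u32; N];
    let mut i = 0;
    while i < N {
        arr[i] = i as u32;
        i += 1;
    }
    arr
}

const COMPARISON_VALUE_U32_4: [u32; 4] = comparison_value_splat_u32::<4>();
```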

Macros are the last refuge of scoundrels. So, let’s use macros:

#[macro_export]
macro_rules! define_is_consecutive_splat1 {
    ($function:ident, $type:ty) => {
        #[inline]
        pub fn $function<const N: usize>(chunk: Simd<$type, N>) -> bool
        where
            LaneCount<N>: SupportedLaneCount,
        {
            define_comparison_value_splat!(comparison_value_splat, $type);

            let subtracted = chunk - comparison_value_splat();
            Simd::splat(chunk[0]) == subtracted
        }
    };
}
#[macro_export]
macro_rules! define_comparison_value_splat {
    ($function:ident, $type:ty) => {
        pub const fn $function<const N: usize>() -> Simd<$type, N>
        where
            LaneCount<N>: SupportedLaneCount,
        {
            let mut arr: [$type; N] = [0; N];
            let mut i = 0;
            while i < N {
                arr[i] = i as $type;
                i += 1;
            }
            Simd::from_array(arr)
        }
    };
}

This lets us run on any particular element type and any number of LANES (Rust Playground):

define_is_consecutive_splat1!(is_consecutive_splat1_i32, i32);

let a: Simd<i32, 16> = black_box(Simd::from_array(array::from_fn(|i| 100 + i as i32)));
let ninety_nines: Simd<i32, 16> = black_box(Simd::from_array([99; 16]));
assert!(is_consecutive_splat1_i32(a));
assert!(!is_consecutive_splat1_i32(ninety_nines));

Sadly, this still isn’t enough for range-set-blaze. It needs to run on all element types (not just one) and (ideally) all LANES (not just one).

Happily, there’s a workaround that again depends on macros. It also exploits the fact that we only need to support a finite list of types, namely: i8, i16, i32, i64, isize, u8, u16, u32, u64, and usize. If you need to also (or instead) support f32 and f64, that’s fine.

If, on the other hand, you need to support i128 and u128, you may be out of luck. The core::simd module doesn’t support them. We’ll see in Rule 8 how range-set-blaze gets around that at a performance cost.

The workaround defines a new trait, here called IsConsecutive. We then use a macro (that calls a macro, that calls a macro) to implement the trait on the 10 types of interest.

pub trait IsConsecutive {
    fn is_consecutive<const N: usize>(chunk: Simd<Self, N>) -> bool
    where
        Self: SimdElement,
        Simd<Self, N>: Sub<Simd<Self, N>, Output = Simd<Self, N>>,
        LaneCount<N>: SupportedLaneCount;
}
}

macro_rules! impl_is_consecutive {
    ($type:ty) => {
        impl IsConsecutive for $type {
            #[inline] // very important
            fn is_consecutive<const N: usize>(chunk: Simd<Self, N>) -> bool
            where
                Self: SimdElement,
                Simd<Self, N>: Sub<Simd<Self, N>, Output = Simd<Self, N>>,
                LaneCount<N>: SupportedLaneCount,
            {
                define_is_consecutive_splat1!(is_consecutive_splat1, $type);
                is_consecutive_splat1(chunk)
            }
        }
    };
}

impl_is_consecutive!(i8);
impl_is_consecutive!(i16);
impl_is_consecutive!(i32);
impl_is_consecutive!(i64);
impl_is_consecutive!(isize);
impl_is_consecutive!(u8);
impl_is_consecutive!(u16);
impl_is_consecutive!(u32);
impl_is_consecutive!(u64);
impl_is_consecutive!(usize);

We can now call fully generic code (Rust Playground):

// Works on i32 and 16 lanes
let a: Simd<i32, 16> = black_box(Simd::from_array(array::from_fn(|i| 100 + i as i32)));
let ninety_nines: Simd<i32, 16> = black_box(Simd::from_array([99; 16]));

assert!(IsConsecutive::is_consecutive(a));
assert!(!IsConsecutive::is_consecutive(ninety_nines));

// Works on i8 and 64 lanes
let a: Simd<i8, 64> = black_box(Simd::from_array(array::from_fn(|i| 10 + i as i8)));
let ninety_nines: Simd<i8, 64> = black_box(Simd::from_array([99; 64]));

assert!(IsConsecutive::is_consecutive(a));
assert!(!IsConsecutive::is_consecutive(ninety_nines));
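The same trait-plus-macro shape can be sketched on stable Rust with plain arrays standing in for Simd and the scalar loop standing in for Splat1 (names here are illustrative only, not range-set-blaze’s actual API):

```rust
// Stable-Rust sketch of the trait-plus-macro pattern: a trait with a
// const-generic method, implemented per concrete type by a macro.
pub trait IsConsecutiveScalar: Sized + Copy {
    fn is_consecutive<const N: usize>(chunk: [Self; N]) -> bool;
}

macro_rules! impl_is_consecutive_scalar {
    ($type:ty) => {
        impl IsConsecutiveScalar for $type {
            #[inline]
            fn is_consecutive<const N: usize>(chunk: [Self; N]) -> bool {
                // scalar stand-in for the SIMD Splat1 body
                (1..N).all(|i| chunk[i - 1].checked_add(1) == Some(chunk[i]))
            }
        }
    };
}

impl_is_consecutive_scalar!(u32);
impl_is_consecutive_scalar!(i8);
```

As with the SIMD version, callers stay fully generic over element type and array length.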

With this technique, we can create multiple candidate algorithms that are fully generic over type and LANES. Next, it is time to benchmark and see which algorithms are fastest.


Those are the first six rules for adding SIMD code to Rust. In Part 2, we look at rules 7 to 9. These rules will cover how to pick an algorithm and set LANES. Also, how to integrate SIMD operations into your existing code and (importantly) how to make it optional. Part 2 concludes with a discussion of when/if you should use SIMD and ideas for improving Rust’s SIMD experience. I hope to see you there.

Please follow Carl on Medium. I write on scientific programming in Rust and Python, machine learning, and statistics. I tend to write about one article per month.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

NextDecade names Mott as interim CFO

NextDecade Corp. has appointed company senior vice-president Michael (Mike) Mott as interim chief financial officer, effective Oct. 20, 2025. Mott will take over from Brent Wahl, who resigns from the company as chief financial officer, effective Oct. 20. Wahl was named chief financial officer of NextDecade in 2021 after having served

Read More »

Three options for wireless power in the enterprise

Sensors such as these can be attached to pallets to track its location, says Srivastava. “People in Europe are very conscious about where their food is coming from and, to comply with regulations, companies need to have sensors on the pallets,” he says. “Or they might need to know that

Read More »

IBM unveils advanced quantum computer in Spain

IBM executives and officials from the Basque Government and regional councils in front of Europe’s first IBM Quantum System Two, located at the IBM-Euskadi Quantum Computational Center in San Sebastián, Spain. The Basque Government and IBM unveil the first IBM Quantum System Two in Europe at the IBM-Euskadi Quantum Computational

Read More »

Oil Ends Third Straight Weekly Loss

Oil limped to a third week of losses, weighed down by signs the market is tipping into the surplus analysts have been awaiting. West Texas Intermediate ended the day little changed near $57 a barrel, down 2.3% this week in its longest losing streak since March. News on oversupply kept piling up this week. The International Energy Agency raised its estimate for next year’s global overhang by about 18%. And a US storage broker reported a surge in bids for securing tank capacity at the country’s key crude hub in Cushing, Oklahoma, underscoring that traders are positioning themselves for the glut. Prices for flagship US oil grades have also weakened. Crude traders are also following the on-again, off-again tensions between Washington and Beijing. President Donald Trump on Friday said higher tariffs against China were not “sustainable” expressed optimism that an upcoming meeting with Xi could yield a lasting trade peace after last week threatening an additional 100% tariff on Beijing’s goods. The shift in sentiment eased some concerns that the ongoing tit-for-tat between the two biggest crude consumers could cripple energy consumption. Trump, meanwhile, said he would hold a second meeting with Russian counterpart Vladimir Putin “within two weeks or so” aimed at ending the war in Ukraine, a scenario that stands to send prices toward $50 a barrel, according to Citigroup Inc. The US president on Friday shrugged off concerns Putin may be manipulating him. “Crude is weighing Trump’s meeting with Putin against building evidence of oversupply,” said Joe DeLaura, global energy strategist at Rabobank. “With oil markets in contango from the second quarter of next year onward, the next path is down for crude unless demand surprises to the upside, which we view as unlikely.” Western nations are turning the screws on Russia’s energy sector in a bid

Read More »

Colombia Warms to Fossil Fuels

Colombia, the world’s only significant oil producer to join a bloc of nations vowing to quit fossil fuels, is poised for an about face. As President Gustavo Petro’s term winds down, most prominent candidates running to replace him are promising to put oil and gas back at the forefront of the nation’s energy policy. Even moderate and center-left candidates are calling for Colombia to start using hydraulic fracturing, or fracking, which Petro has pushed to make illegal. “If God gave us oil, coal and gas — we’ll use oil, coal and gas,” Claudia López, a former Bogotá mayor with a record of supporting progressive causes, said in an X post. Petro, Colombia’s first leftist president, made global headlines in 2023 by announcing the nation would sign the “fossil fuel non-proliferation treaty” to end oil, gas and coal production. It made the one-time Marxist guerrilla fighter a celebrity at international climate gatherings and brought heft to the non-proliferation movement, which until then was made up of mostly small island nations. The effort, however, backfired in Colombia. Petro’s refusal to allow new drilling contracts exacerbated a domestic gas shortage, forcing the nation to import fuel at higher prices. Now with Petro’s popularity waning amid rising violence and a ballooning fiscal deficit, presidential hopefuls are distancing themselves from his policies, including energy. During a recent event in Bogotá, five candidates held up “YES” signs when asked if they would authorize fracking in Colombia. Only one, Petro’s former environment minister Susana Muhamad, said no.  The shift in Colombia reflects a broader retreat from the fight against climate change globally. In the US, President Donald Trump is gutting Joe Biden’s climate policies, blocking renewable energy projects and withdrawing from the Paris climate accord. And the European Union is struggling to pass laws requiring further ambitious emissions cuts and walking back many of

Read More »

Phillips 66 progresses California refinery shuttering plan, Chevron El Segundo fire adds to state’s refining uncertainty

Capacity crash Phillips 66’s announcement of its progress on the refinery shuttering comes amid growing concerns by California’s in-state refiners that unfavorable market conditions alongside regulatory pressure stemming from aggressive state legislation aimed at refiners will prevent long-term viability and competitiveness of crude processing activities in the region. While state officials recently voted to delay enactment of legislation blamed for decisions by California refiners to end operations at their conventional processing sites—which, in addition to Phillips 66 in Los Angeles, includes Valero Energy Corp.’s planned closure of its 145,000-b/d Benicia refinery, just northeast of San Francisco, by midyear 2026—refiners have yet to indicate any signs of backing down from advancing existing shutdown plans. With the state’s loss of about 20% of its traditional crude processing capacity since 2020—and nearly another estimated 20% at threat with the closure of the Los Angeles and Benicia sites—California’s refining capacity may face an additional short-term blow to its consumer fuels market following a late-evening fire on Oct. 2 at Chevron 290,000-b/d refinery in El Segundo. Chevron El Segundo refinery fire “Chevron fire department personnel, including emergency responders from the City of El Segundo and Manhattan Beach [were] actively responding to an isolated fire inside the Chevron El Segundo [r]efinery [on Oct. 2]. All refinery personnel and contractors have been accounted for and there are no injuries,” Chevron said in a statement posted to its official website. “No evacuation orders for area residents [were] put in place by emergency response agencies monitoring the incident, and no exceedances have been detected by the facilities fence line monitoring system,” the operator said. In a separate early morning statement on Oct. 
3, the city of El Segundo said the fire “originated at a process unit at the southeast corner of the refinery.” Details of the impacted unit, however,

Read More »

OPEC+ to hike oil production by 137,000 b/d starting in November

The Organization of the Petroleum Exporting Countries, along with Russia and a few smaller producers, known as OPEC+, on Oct. 5 announced plans to increase oil production by 137,000 b/d starting in November, maintaining the same increase as in October in response to ongoing worries about a potential supply surplus. OPEC+ described its decision as a reaction to stable global economic outlooks and current healthy market fundamentals, as reflected in low oil inventories. The group also indicates that adjustments to production could be halted or reversed if circumstances warrant. By end-September 2025, OPEC+ had fully phased out the 2.2 million b/d voluntary production cuts introduced in late 2023, completing the process months earlier than initially signaled. They also confirmed their intention to fully compensate for any overproduced volume since January 2024. The next meeting will be hold on Nov. 2. “The market was expecting a somewhat larger increase from OPEC+ as shown in the structure last week. However, the modest 137,000 b/d bloats the already-oversupplied balance for the fourth quarter of 2025 and 2026,” said Janiv Shah, an analyst at Rystad Energy. Global crude oil markets are experiencing a significant bearish trend, with ICE Brent prices dropping from last week’s peak of $70/bbl to $65/bbl on Oct. 5, following OPEC’s announcement to increase production. Rystad Energy forecasts that under these circumstances, price dynamics will likely shift substantially downward. It is unlikely that ICE Brent will remain above $60-65/bbl in 2026 unless OPEC+ alters its approach or sanctions on Russia and Iran drastically curtail exports from those countries.

Read More »

Oversupply concerns tank oil prices

November 2025 WTI NYMEX futures prices retreated rapidly after having breached the Upper-Bollinger Band limit the prior week. They are now trading below the 8-, 13- and 20-day Moving Averages this week and managed to breach the Lower-Bollinger Band limit, a Buy signal. Small gains occurred Friday on this technical buying. Volume is around the recent average at 208,000. The Relative Strength Indicator (RSI), a momentum indicator, is now in oversold territory at 41, another potential Buy signal. Resistance is now pegged at $61.50 while near-term critical Support is $60.35 (Lower-Bollinger Band).   Looking ahead Traders will enter next week with the prospect of OPEC+ output increases baked into prices after this week’s declines. Hard data as to the actual increases achieved with these decisions will set the tone going forward but the current bear market for oil will be hard to shake as we’ve exited the driving season. The Tropics continue to show no real threat to the Gulf of Mexico as we move towards the end of the peak of the hurricane season on Oct. 15. A prolonged government shutdown will set a negative tone for the US economy which is always negative for energy demand. Natural gas, fundamental analysis NYMEX natural gas futures got a boost this week on the change from October to November, the first winter month on the energy calendar. Additionally, 90-degree temperatures remained for much of the southern tier of states. A lower-than-forecasted storage injection added some bullish sentiment midweek as well. The week’s High was $3.585 /MMbtu on Thursday while the week’s Low was $3.13 on Monday. Supply/demand data was not available this week due to the government shutdown.   Dutch TTF prices fell slightly to $10.92/MMbtu while Asia’s JKM was quoted at $11.05/MMbtu. The EIA’s Weekly Natural Gas Storage Report indicated an

Read More »

The Business: Mitigating and Eliminating Corrosion Under Insulation

@import url(‘https://fonts.googleapis.com/css2?family=Inter:[email protected]&display=swap’); a { color: #c19a06; } .ebm-page__main h1, .ebm-page__main h2, .ebm-page__main h3, .ebm-page__main h4, .ebm-page__main h5, .ebm-page__main h6 { font-family: Inter; } body { line-height: 150%; letter-spacing: 0.025em; font-family: Inter; } button, .ebm-button-wrapper { font-family: Inter; } .label-style { text-transform: uppercase; color: var(–color-grey); font-weight: 600; font-size: 0.75rem; } .caption-style { font-size: 0.75rem; opacity: .6; } #onetrust-pc-sdk [id*=btn-handler], #onetrust-pc-sdk [class*=btn-handler] { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-policy a, #onetrust-pc-sdk a, #ot-pc-content a { color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-pc-sdk .ot-active-menu { border-color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-accept-btn-handler, #onetrust-banner-sdk #onetrust-reject-all-handler, #onetrust-consent-sdk #onetrust-pc-btn-handler.cookie-setting-link { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-consent-sdk .onetrust-pc-btn-handler { color: #c19a06 !important; border-color: #c19a06 !important; background-color: undefined !important; } <!–> In this episode of the Oil & Gas Journal ReEnterprised podcast, Chris Smith, editor-in-chief, talks with Kristin Leonard, North American Marketing Director for the Energy Market at Sherwin-Williams. They discuss the causes and consequences of CUI, including its impact on asset integrity and operational safety. This episode offers insights into how evolving engineering solutions are addressing long-standing challenges in energy operations.  ]–> Podcast Guest: <!–> [–> <!–> Kristin LeonardNorth American Marketing DirectorSherwin-Williams –> <!–> Kristin Leonard is the North American Marketing Director for the Energy Market for Sherwin-Williams Protective & Marine Division. 
With extensive experience in the coatings industry, she has held roles at Bechtel Corporation EPC and ExxonMobil Downstream. Kristin holds a bachelor’s degree in Polymer Science and High-Performance Materials from The University of Southern Mississippi and has received several industry awards. She is also a past Chair for the AMPP Global Center Board of Directors.

This content is sponsored by:

Read More »

Roundup: Digital Realty Marks Major Milestones in AI, Quantum Computing, Data Center Development

Key features of the DRIL include:

• High-Density AI and HPC Testing. The DRIL supports AI and high-performance computing (HPC) workloads with high-density colocation, accommodating workloads up to 150 kW per cabinet.
• AI Infrastructure Optimization. The ePlus AI Experience Center lets businesses explore AI-specific power, cooling, and GPU resource requirements in an environment optimized for AI infrastructure.
• Hybrid Cloud Validation. With direct cloud connectivity, users can refine hybrid strategies and onboard through cross connects.
• AI Workload Orchestration. Customers can orchestrate AI workloads across Digital Realty’s Private AI Exchange (AIPx) for seamless integration and performance.
• Latency Testing Across Locations. Enterprises can test latency scenarios for seamless performance across multiple locations and cloud destinations.

The firm’s Northern Virginia campus is the primary DRIL location, but companies can also test latency scenarios between there and other remote locations. DRIL rollout to other global locations is already in progress, and London is scheduled to go live in early 2026.

Digital Realty, Redeployable Launch Pathway for Veteran Technical Careers

As new data centers are created, they need talented workers. To that end, Digital Realty has partnered with Redeployable, an AI-powered career platform for veterans, to expand access to technical careers in the United Kingdom and United States. The collaboration launched a Site Engineer Pathway, now live on the Redeployable platform. It helps veterans explore, prepare for, and transition into roles at Digital Realty. Nearly half of veterans leave their first civilian role within a year, often due to unclear expectations, poor skill translation, and limited support, according to Redeployable. The Site Engineer Pathway uses real-world relevance and replaces vague job descriptions with an experience-based view of technical careers.
Veterans can engage in scenario-based “job drops” simulating real facility and system challenges so they can assess their fit for the role before applying. They

Read More »

BlackRock’s $40B data center deal opens a new infrastructure battle for CIOs

Everest Group partner Yugal Joshi said, “CIOs are under significant pressure to clearly define their data center strategy beyond traditional one-off leases. Given most of the capacity is built and delivered by fewer players, CIOs need to prepare for a higher-price market with limited negotiation power.” The numbers bear this out. Global data center costs rose to $217.30 per kilowatt per month in the first quarter of 2025, with major markets seeing increases of 17-18% year-over-year, according to CBRE. Those prices are at levels last seen in 2011-2012, and analysts expect them to remain elevated. Gogia said, “The combination of AI demand, energy scarcity, and environmental regulation has permanently rewritten the economics of running workloads. Prices that once looked extraordinary have now become baseline.” Hyperscalers get first dibs The consolidation problem is compounded by the way capacity is being allocated. North America’s data center vacancy rate fell to 1.6% in the first half of 2025, with Northern Virginia posting just 0.76%, according to CBRE Research. More troubling for enterprises: 74.3% of capacity currently under construction is already preleased, primarily to cloud and AI providers. “The global compute market is no longer governed by open supply and demand,” Gogia said. “It is increasingly shaped by pre-emptive control. Hyperscalers and AI majors are reserving capacity years in advance, often before the first trench for power is dug. This has quietly created a two-tier world: one in which large players guarantee their future and everyone else competes for what remains.” That dynamic forces enterprises into longer planning cycles. “CIOs must forecast their infrastructure requirements with the same precision they apply to financial budgets and talent pipelines,” Gogia said. “The planning horizon must stretch to three or even five years.”
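To put the CBRE rate quoted above in concrete terms, here is a rough cost sketch. The 1,000 kW deployment size and flat-rate billing assumption are hypothetical illustration values, not figures from the report:

```python
# Rough colocation cost sketch using the CBRE figures quoted above.
# The 1,000 kW deployment and flat-rate assumption are hypothetical.
RATE_PER_KW_MONTH = 217.30   # USD, Q1 2025 major-market average (CBRE)
YOY_INCREASE = 0.175         # midpoint of the 17-18% year-over-year rise

def annual_cost(kw: float, rate: float = RATE_PER_KW_MONTH) -> float:
    """Annual colocation cost for a deployment of `kw` kilowatts."""
    return kw * rate * 12

today = annual_cost(1_000)                                        # ~$2.61M per year
next_year = annual_cost(1_000, RATE_PER_KW_MONTH * (1 + YOY_INCREASE))
```

At those rates, a single megawatt of capacity runs well past $2.5 million a year before the projected increase, which is why longer planning horizons and pre-negotiated capacity matter.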

Read More »

Nvidia, Infineon partner for AI data center power overhaul

The solution is to convert power right at the GPU on the server board and to upgrade the backbone to 800 volts. That should squeeze more reliability and efficiency out of the system while dealing with the heat, Infineon stated. Nvidia announced the 800-volt direct current (VDC) power architecture at Computex 2025 as a much-needed replacement for the 54-volt backbone currently in use, which is overwhelmed by the demands of AI processors and increasingly prone to failure. “This makes sense with the power needs of AI and how it is growing,” said Alvin Nguyen, senior analyst with Forrester Research. “This helps mitigate power losses seen from lower voltage and AC systems, reduces the need for materials like copper for wiring/bus bars, better reliability, and better serviceability.” Infineon says a shift to a centralized 800 VDC architecture allows for reduced power losses and higher efficiency and reliability. However, the new architecture requires new power conversion solutions and safety mechanisms to prevent potential hazards and costly server downtime during service and maintenance.
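The physics behind the efficiency claim is straightforward: for a fixed power draw, conduction loss scales with the square of the current, so raising the distribution voltage from 54 V to 800 V cuts resistive losses by a factor of roughly (800/54)², about 220. A quick sketch, where the 100 kW rack load and 1 mΩ path resistance are hypothetical illustration values, not Infineon or Nvidia figures:

```python
# Back-of-the-envelope illustration of why a higher busbar voltage cuts
# resistive (I^2 * R) distribution losses. Load and resistance values here
# are hypothetical, chosen only to make the comparison concrete.

def conduction_loss_watts(power_w: float, volts: float, resistance_ohm: float) -> float:
    """I^2 * R loss for delivering `power_w` at `volts` over `resistance_ohm`."""
    current = power_w / volts
    return current ** 2 * resistance_ohm

load_w = 100_000   # hypothetical 100 kW rack
r_path = 0.001     # hypothetical 1 milliohm busbar/cable path

loss_54v = conduction_loss_watts(load_w, 54, r_path)    # ~3.43 kW lost in distribution
loss_800v = conduction_loss_watts(load_w, 800, r_path)  # ~15.6 W lost
```

The same effect also explains the copper savings Nguyen mentions: lower current means thinner conductors for the same loss budget.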

Read More »

Meta details cutting-edge networking technologies for AI infrastructure

ESUN initiative

As part of its standardization efforts, Meta said it would be a key player in the new Ethernet for Scale-Up Networking (ESUN) initiative, which brings together AMD, Arista, Arm, Broadcom, Cisco, HPE Networking, Marvell, Microsoft, Nvidia, OpenAI, and Oracle to advance networking technology for the growing scale-up domain of AI systems. ESUN will focus solely on open, standards-based Ethernet switching and framing for scale-up networking, excluding host-side stacks, non-Ethernet protocols, application-layer solutions, and proprietary technologies. The group will focus on the development and interoperability of XPU network interfaces and Ethernet switch ASICs for scale-up networks, the OCP wrote in a blog. ESUN will actively engage with other organizations such as the Ultra Ethernet Consortium (UEC) and the long-standing IEEE 802.3 Ethernet group to align open standards, incorporate best practices, and accelerate innovation, the OCP stated.

Data center networking milestones

The launch of ESUN is just one of the AI networking developments Meta shared at the event. Meta engineers also announced three data center networking innovations aimed at making its infrastructure more flexible, scalable, and efficient:

• The evolution of Meta’s Disaggregated Scheduled Fabric (DSF) to support scale-out interconnect for large AI clusters that span entire data center buildings.
• A new Non-Scheduled Fabric (NSF) architecture based entirely on shallow-buffer, disaggregated Ethernet switches that will support Meta’s largest AI clusters, such as Prometheus.
• The addition of Minipack3N, based on Nvidia’s Ethernet Spectrum-4 ASIC, to Meta’s portfolio of 51 Tbps OCP switches that use OCP’s Switch Abstraction Interface and Meta’s Facebook Open Switching System (FBOSS) software stack.
DSF is Meta’s open networking fabric that completely separates switch hardware, NICs, endpoints, and other networking components from the underlying network, using OCP-SAI and FBOSS to achieve that, according to Meta. It supports Ethernet-based RDMA over Converged Ethernet (RoCE) to endpoints, accelerators, and NICs from multiple vendors, such as Nvidia,

Read More »

Arm joins Open Compute Project to build next-generation AI data center silicon

Keeping up with the demand comes down to performance, and more specifically, performance per watt. With power limited, OEMs have become much more involved in all aspects of the system design, rather than pulling silicon, servers, or racks off the shelf. “They’re getting much more specific about what that silicon looks like, which is a big departure from where the data center was ten or 15 years ago. The point here is that they look to create a more optimized system design to bring the acceleration closer to the compute and get much better performance per watt,” said Awad. The Open Compute Project is a global industry organization dedicated to designing and sharing open-source hardware configurations for data center technologies and infrastructure. It covers everything from silicon products to rack and tray design. It is hosting its 2025 OCP Global Summit this week in San Jose, Calif. Arm was also part of the Ethernet for Scale-Up Networking (ESUN) initiative announced this week at the Summit, which includes AMD, Arista, Broadcom, Cisco, HPE Networking, Marvell, Meta, Microsoft, and Nvidia. ESUN promises to advance Ethernet networking technology to handle scale-up connectivity across accelerated AI infrastructures. Arm’s goal in joining OCP is to encourage knowledge sharing and collaboration between companies and users, sharing ideas, specifications, and intellectual property. It is known for focusing on modular rather than monolithic designs, which is where chiplets come in. For example, customers might have multiple different companies building a 64-core CPU and then choose the I/O to pair it with, whether PCIe or NVLink. They then choose their own memory subsystem, deciding whether to go HBM, LPDDR, or DDR. It’s all mix and match, like Legos, Awad said.

Read More »

BlackRock-Led Consortium to Acquire Aligned Data Centers in $40 Billion AI Infrastructure Deal

Capital Strategy and Infrastructure Readiness The AIP consortium has outlined an initial $30 billion in equity, with potential to scale toward $100 billion including debt over time as part of a broader AI infrastructure buildout. The Aligned acquisition represents a cornerstone investment within that capital roadmap. Aligned’s “ready-to-scale” platform – encompassing land, permits, interconnects, and power roadmaps – is far more valuable today than a patchwork of single-site developments. The consortium framed the transaction as a direct response to the global AI buildout crunch, targeting critical land, energy, and equipment bottlenecks that continue to constrain hyperscale expansion. Platform Overview: Aligned’s Evolution and Strategic Fit Aligned Data Centers has rapidly emerged as a scale developer and operator purpose-built for high-density, quick-turn capacity demanded by hyperscalers and AI platforms. Beyond the U.S., Aligned extended its reach across the Americas through its acquisition of ODATA in Latin America, creating a Pan-American presence that now spans more than 50 campuses and over 5 GW of capacity. The company has repeatedly accessed both public and private capital markets, most recently securing more than $12 billion in new equity and debt financing to accelerate expansion. Aligned’s U.S.–LATAM footprint provides geographic diversification and proximity to fast-growing AI regions. The buyer consortium’s global relationships – spanning utilities, OEMs, and sovereign-fund partners – help address power, interconnect, and supply-chain constraints, all of which are critical to sustaining growth in the AI data-center ecosystem. Macquarie Asset Management built Aligned from a niche U.S. operator into a 5 GW-plus, multi-market platform, the kind of asset infrastructure investors covet as AI demand outpaces grid and supply-chain capacity. 
Its sale at this stage reflects a broader wave of industry consolidation among large-scale digital-infrastructure owners. Since its own acquisition by BlackRock in early 2024, GIP has strengthened its position as one of the world’s top owners

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction, and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
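The multi-model-as-judge idea at the end of that passage can be sketched in a few lines: each judge model scores a candidate answer, and a majority vote decides whether it passes. This is an illustrative sketch only, not any vendor’s implementation; the judge functions below are hypothetical stand-ins for calls to three different LLM APIs.

```python
# Minimal sketch of using several cheap models as judges with a majority vote.
# Each Judge is a stand-in for a call to a different LLM; in practice these
# would be API clients returning a pass/fail verdict on the answer.
from typing import Callable, List

Judge = Callable[[str, str], bool]  # (question, answer) -> pass/fail verdict

def majority_verdict(judges: List[Judge], question: str, answer: str) -> bool:
    """Return True when a strict majority of judges approve the answer."""
    votes = sum(judge(question, answer) for judge in judges)
    return votes * 2 > len(judges)

# Hypothetical stand-in judges with fixed heuristics, for illustration only.
lenient = lambda q, a: True
strict = lambda q, a: len(a) > 20
moderate = lambda q, a: "because" in a.lower()

ok = majority_verdict(
    [lenient, strict, moderate],
    "Why rotate logs?",
    "Because disks fill up over time and rotation caps growth.",
)
```

Using an odd number of independent judges avoids ties and dilutes any single model’s blind spots, which is the appeal as per-call costs fall.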

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »