
Nine Rules for SIMD Acceleration of Your Rust Code (Part 1)


Thanks to Ben Lichtman (B3NNY) at the Seattle Rust Meetup for pointing me in the right direction on SIMD.

SIMD (Single Instruction, Multiple Data) operations have been a feature of Intel/AMD and ARM CPUs since the early 2000s. These operations enable you to, for example, add an array of eight i32 to another array of eight i32 with just one CPU operation on a single core. Using SIMD operations greatly speeds up certain tasks. If you’re not using SIMD, you may not be fully using your CPU’s capabilities.

Is this “Yet Another Rust and SIMD” article? Yes and no. Yes, I did apply SIMD to a programming problem and then feel compelled to write an article about it. No, I hope that this article also goes into enough depth that it can guide you through your project. It explains the newly available SIMD capabilities and settings in Rust nightly. It includes a Rust SIMD cheatsheet. It shows how to make your SIMD code generic without leaving safe Rust. It gets you started with tools such as Godbolt and Criterion. Finally, it introduces new cargo commands that make the process easier.


The range-set-blaze crate uses its RangeSetBlaze::from_iter method to ingest potentially long sequences of integers. When the integers are “clumpy”, it can do this 30 times faster than Rust’s standard HashSet::from_iter. Can we do even better if we use Simd operations? Yes!

See this documentation for the definition of “clumpy”. Also, what happens if the integers are not clumpy? RangeSetBlaze is 2 to 3 times slower than HashSet.

On clumpy integers, RangeSetBlaze::from_slice — a new method based on SIMD operations — is 7 times faster than RangeSetBlaze::from_iter. That makes it more than 200 times faster than HashSet::from_iter. (When the integers are not clumpy, it is still 2 to 3 times slower than HashSet.)

Over the course of implementing this speed up, I learned nine rules that can help you accelerate your projects with SIMD operations.

The rules are:

  1. Use nightly Rust and core::simd, Rust’s experimental standard SIMD module.
  2. CCC: Check, Control, and Choose your computer’s SIMD capabilities.
  3. Learn core::simd, but selectively.
  4. Brainstorm candidate algorithms.
  5. Use Godbolt and AI to understand your code’s assembly, even if you don’t know assembly language.
  6. Generalize to all types and LANES with in-lined generics, (and when that doesn’t work) macros, and (when that doesn’t work) traits.

See Part 2 for these rules:

  7. Use Criterion benchmarking to pick an algorithm and to discover that LANES should (almost) always be 32 or 64.
  8. Integrate your best SIMD algorithm into your project with as_simd, special code for i128/u128, and additional in-context benchmarking.
  9. Extricate your best SIMD algorithm from your project (for now) with an optional cargo feature.

Aside: To avoid wishy-washiness, I call these “rules”, but they are, of course, just suggestions.

Rule 1: Use nightly Rust and core::simd, Rust’s experimental standard SIMD module.

Rust can access SIMD operations either via the stable core::arch module or via nightly’s core::simd module. Let’s compare them:

core::arch

  • Stable.

core::simd

  • Nightly.
  • Delightfully easy and portable.
  • Limits downstream users to nightly.

I decided to go with “easy”. If you decide to take the harder road, starting first with the easier path may still be worthwhile.
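For contrast, here is what a lane-wise addition looks like through the stable core::arch route. This is my own illustrative sketch, not from the article, using the x86_64 SSE2 intrinsics _mm_loadu_si128, _mm_add_epi32, and _mm_storeu_si128; it is unsafe and tied to one architecture, which is exactly the trade-off described above:

```rust
#[cfg(target_arch = "x86_64")]
fn add4(a: [i32; 4], b: [i32; 4]) -> [i32; 4] {
    use std::arch::x86_64::*;
    // SAFETY: sse2 is part of the x86_64 baseline, so these intrinsics are always usable here
    unsafe {
        let va = _mm_loadu_si128(a.as_ptr() as *const __m128i);
        let vb = _mm_loadu_si128(b.as_ptr() as *const __m128i);
        let mut out = [0i32; 4];
        _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, _mm_add_epi32(va, vb));
        out
    }
}

fn main() {
    #[cfg(target_arch = "x86_64")]
    println!("{:?}", add4([1, 2, 3, 4], [10, 20, 30, 40]));
}
```

With core::simd, the equivalent operation is just a safe, portable `a + b` on two Simd values.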


In either case, before we try to use SIMD operations in a larger project, let’s make sure we can get them working at all. Here are the steps:

First, create a project called simd_hello:

cargo new simd_hello
cd simd_hello

Edit src/main.rs to contain (Rust playground):

// Tell nightly Rust to enable 'portable_simd'
#![feature(portable_simd)]
use core::simd::prelude::*;

// constant Simd structs
const LANES: usize = 32;
const THIRTEENS: Simd<u8, LANES> = Simd::from_array([13; LANES]);
const TWENTYSIXS: Simd<u8, LANES> = Simd::from_array([26; LANES]);
const ZEES: Simd<u8, LANES> = Simd::from_array([b'Z'; LANES]);

fn main() {
    // create a Simd struct from a slice of LANES bytes
    let mut data = Simd::<u8, LANES>::from_slice(b"URYYBJBEYQVQBUBCRVGFNYYTBVATJRYY");

    data += THIRTEENS; // add 13 to each byte

    // compare each byte to 'Z', where the byte is greater than 'Z', subtract 26
    let mask = data.simd_gt(ZEES); // compare each byte to 'Z'
    data = mask.select(data - TWENTYSIXS, data);

    let output = String::from_utf8_lossy(data.as_array());
    assert_eq!(output, "HELLOWORLDIDOHOPEITSALLGOINGWELL");
    println!("{}", output);
}

Next — full SIMD capabilities require the nightly version of Rust. Assuming you have Rust installed, install nightly (rustup install nightly). Make sure you have the latest nightly version (rustup update nightly). Finally, set this project to use nightly (rustup override set nightly).

You can now run the program with cargo run. The program applies ROT13 decryption to 32 bytes of upper-case letters. With SIMD, the program can decrypt all 32 bytes simultaneously.
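To make the SIMD version easier to follow, here is the same ROT13 decryption written as plain scalar Rust (my sketch, stable Rust, no SIMD): it does byte by byte what the SIMD program does 32 bytes at a time.

```rust
fn rot13(input: &[u8]) -> String {
    input
        .iter()
        .map(|&b| {
            // assumes upper-case A-Z input, so b + 13 cannot overflow u8
            let shifted = b + 13;
            // wrap back after 'Z', just like the SIMD mask/select step
            let decoded = if shifted > b'Z' { shifted - 26 } else { shifted };
            decoded as char
        })
        .collect()
}

fn main() {
    let output = rot13(b"URYYBJBEYQVQBUBCRVGFNYYTBVATJRYY");
    assert_eq!(output, "HELLOWORLDIDOHOPEITSALLGOINGWELL");
    println!("{}", output);
}
```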

Let’s look at each section of the program to see how it works. It starts with:

#![feature(portable_simd)]
use core::simd::prelude::*;

Rust nightly offers its extra capabilities (or “features”) only on request. The #![feature(portable_simd)] statement requests that Rust nightly make available the new experimental core::simd module. The use statement then imports the module’s most important types and traits.

In the code’s next section, we define useful constants:

const LANES: usize = 32;
const THIRTEENS: Simd<u8, LANES> = Simd::from_array([13; LANES]);
const TWENTYSIXS: Simd<u8, LANES> = Simd::from_array([26; LANES]);
const ZEES: Simd<u8, LANES> = Simd::from_array([b'Z'; LANES]);

The Simd struct is a special kind of Rust array. (It is, for example, always memory aligned.) The constant LANES gives the length of the Simd array. The from_array constructor copies a regular Rust array to create a Simd. In this case, because we want const Simd values, the arrays we construct from must also be const.

The next two lines copy our encrypted text into data and then add 13 to each letter.

let mut data = Simd::<u8, LANES>::from_slice(b"URYYBJBEYQVQBUBCRVGFNYYTBVATJRYY");
data += THIRTEENS;

What if you make an error and your encrypted text isn’t exactly length LANES (32)? Sadly, the compiler won’t tell you. Instead, when you run the program, from_slice will panic. What if the encrypted text contains non-upper-case letters? In this example program, we’ll ignore that possibility.
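If you would rather get a recoverable error than a panic, one option (my addition, not something the example program does) is to convert the slice to a fixed-size array first and only hand it onward when the length is right:

```rust
use std::convert::TryInto;

fn main() {
    let input: &[u8] = b"TOO SHORT";
    // TryInto returns an Err for a wrong-length slice instead of panicking
    let chunk: Result<[u8; 32], _> = input.try_into();
    match chunk {
        Ok(array) => println!("got 32 bytes: {:?}", &array[..4]),
        Err(_) => println!("expected exactly 32 bytes, got {}", input.len()),
    }
}
```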

The += operator does element-wise addition between the Simd data and Simd THIRTEENS. It puts the result in data. Recall that debug builds of regular Rust addition check for overflows. Not so with SIMD. Rust defines SIMD arithmetic operators to always wrap. Values of type u8 wrap after 255.
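The wrapping behavior is the same one scalar Rust exposes explicitly through wrapping_add; this little stable-Rust sketch (my addition) shows what the SIMD += does in each u8 lane:

```rust
fn main() {
    // each SIMD lane adds like u8::wrapping_add: 250 + 10 wraps past 255 to 4
    let lanes: [u8; 4] = [250, 251, 252, 253];
    let summed: Vec<u8> = lanes.iter().map(|&x| x.wrapping_add(10)).collect();
    assert_eq!(summed, vec![4, 5, 6, 7]);
    println!("{:?}", summed);
}
```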

Coincidentally, ROT13 decryption also requires wrapping, but after ‘Z’ rather than after 255. Here is one approach to coding the needed ROT13 wrapping. It subtracts 26 from any value beyond ‘Z’.

let mask = data.simd_gt(ZEES);
data = mask.select(data - TWENTYSIXS, data);

This says to find the element-wise places beyond ‘Z’. Then, subtract 26 from all values. At the places of interest, use the subtracted values. At the other places, use the original values. Does subtracting from all values and then using only some seem wasteful? With SIMD, this takes no extra computer time and avoids jumps. This strategy is, thus, efficient and common.
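Lane by lane, mask.select computes the same thing as this scalar sketch (my addition): the subtracted value exists for every element, and the mask picks one of the two values per element, with no branching in the SIMD case:

```rust
fn main() {
    // values after adding 13; 91 and 92 are past b'Z' (90) and need the -26 branch
    let data: [u8; 4] = [91, 65, 92, 90];
    let selected: Vec<u8> = data
        .iter()
        .map(|&x| {
            let subtracted = x - 26; // computed for every element, like the SIMD version
            if x > b'Z' { subtracted } else { x } // the "mask" picks per element
        })
        .collect();
    assert_eq!(selected, vec![b'A', b'A', b'B', b'Z']);
    println!("{:?}", selected);
}
```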

The program ends like so:

let output = String::from_utf8_lossy(data.as_array());
assert_eq!(output, "HELLOWORLDIDOHOPEITSALLGOINGWELL");
println!("{}", output);

Notice the .as_array() method. It safely transmutes a Simd struct into a regular Rust array without copying.

Surprisingly to me, this program runs fine on computers without SIMD extensions. Rust nightly compiles the code to regular (non-SIMD) instructions. But we don’t just want to run “fine”, we want to run faster. That requires us to turn on our computer’s SIMD power.

Rule 2: CCC: Check, Control, and Choose your computer’s SIMD capabilities.

To make SIMD programs run faster on your machine, you must first discover which SIMD extensions your machine supports. If you have an Intel/AMD machine, you can use my simd-detect cargo command.

Run with:

rustup override set nightly
cargo install cargo-simd-detect --force
cargo simd-detect

On my machine, it outputs:

extension       width                   available       enabled
sse2            128-bit/16-bytes        true            true
avx2            256-bit/32-bytes        true            false
avx512f         512-bit/64-bytes        true            false

This says that my machine supports the sse2, avx2, and avx512f SIMD extensions. Of those, by default, Rust enables the ubiquitous twenty-year-old sse2 extension.

The SIMD extensions form a hierarchy with avx512f above avx2 above sse2. Enabling a higher-level extension also enables the lower-level extensions.

Most Intel/AMD computers also support the ten-year-old avx2 extension. You enable it by setting an environment variable:

# For Windows Command Prompt
set RUSTFLAGS=-C target-feature=+avx2

# For Unix-like shells (like Bash)
export RUSTFLAGS="-C target-feature=+avx2"

“Force install” and run simd-detect again and you should see that avx2 is enabled.

# Force install every time to see changes to 'enabled'
cargo install cargo-simd-detect --force
cargo simd-detect
extension         width                   available       enabled
sse2            128-bit/16-bytes        true            true
avx2            256-bit/32-bytes        true            true
avx512f         512-bit/64-bytes        true            false

Alternatively, you can turn on every SIMD extension that your machine supports:

# For Windows Command Prompt
set RUSTFLAGS=-C target-cpu=native

# For Unix-like shells (like Bash)
export RUSTFLAGS="-C target-cpu=native"

On my machine this enables avx512f, a newer SIMD extension supported by some Intel computers and a few AMD computers.

You can set SIMD extensions back to their default (sse2 on Intel/AMD) with:

# For Windows Command Prompt
set RUSTFLAGS=

# For Unix-like shells (like Bash)
unset RUSTFLAGS

You may wonder why target-cpu=native isn’t Rust’s default. The problem is that binaries created using avx2 or avx512f won’t run on computers missing those SIMD extensions. So, if you are compiling only for your own use, use target-cpu=native. If, however, you are compiling for others, choose your SIMD extensions thoughtfully and let people know which SIMD extension level you are assuming.

Happily, whatever level of SIMD extension you pick, Rust’s SIMD support is so flexible you can easily change your decision later. Let’s next learn details of programming with SIMD in Rust.

Rule 3: Learn core::simd, but selectively.

To build with Rust’s new core::simd module you should learn selected building blocks. Here is a cheatsheet with the structs, methods, etc., that I’ve found most useful. Each item includes a link to its documentation.

Structs

  • Simd – a special, aligned, fixed-length array of SimdElement. We refer to a position in the array and the element stored at that position as a “lane”. By default, we copy Simd structs rather than reference them.
  • Mask – a special Boolean array showing inclusion/exclusion on a per-lane basis.

SimdElements

  • Floating-Point Types: f32, f64
  • Integer Types: i8, u8, i16, u16, i32, u32, i64, u64, isize, usize
  • (but not i128, u128)

Simd constructors

  • Simd::from_array – creates a Simd struct by copying a fixed-length array.
  • Simd::from_slice – creates a Simd struct by copying the first LANES elements of a slice.
  • Simd::splat – replicates a single value across all lanes of a Simd struct.
  • slice::as_simd – without copying, safely transmutes a regular slice into an aligned slice of Simd (plus unaligned leftovers).

Simd conversion

  • Simd::as_array – without copying, safely transmutes a Simd struct into a regular array reference.

Simd methods and operators

  • simd[i] – extract a value from a lane of a Simd.
  • simd + simd – performs element-wise addition of two Simd structs. Also supported: -, *, /, % (remainder), bitwise-and, -or, -xor, -not, and shifts.
  • simd += simd – adds another Simd struct to the current one, in place. Other operators supported, too.
  • Simd::simd_gt – compares two Simd structs, returning a Mask indicating which elements of the first are greater than those of the second. Also supported: simd_lt, simd_le, simd_ge, simd_eq, simd_ne.
  • Simd::rotate_elements_left – rotates the elements of a Simd struct to the left by a specified amount. Also, rotate_elements_right.
  • simd_swizzle!(simd, indexes) – rearranges the elements of a Simd struct based on the specified const indexes.
  • simd == simd – checks for equality between two Simd structs, returning a regular bool result.
  • Simd::reduce_and – performs a bitwise AND reduction across all lanes of a Simd struct. Also supported: reduce_or, reduce_xor, reduce_max, reduce_min, reduce_sum (but no reduce_eq).

Mask methods and operators

  • Mask::select – selects elements from two Simd structs based on a mask.
  • Mask::all – tells if the mask is all true.
  • Mask::any – tells if the mask contains any true.

All about lanes

  • Simd::LANES – a constant indicating the number of elements (lanes) in a Simd struct.
  • SupportedLaneCount – tells the allowed values of LANES. Used by generics.
  • simd.lanes() – a const method that tells a Simd struct’s number of lanes.

Low-level alignment, offsets, etc.

When possible, use as_simd instead.

More, perhaps of interest

With these building blocks at hand, it’s time to build something.

Rule 4: Brainstorm candidate algorithms.

What do you want to speed up? You won’t know ahead of time which SIMD approach (if any) will work best. You should, therefore, create many algorithms that you can then analyze (Rule 5) and benchmark (Rule 7).

I wanted to speed up range-set-blaze, a crate for manipulating sets of “clumpy” integers. I hoped that creating is_consecutive, a function to detect blocks of consecutive integers, would be useful.

Background: Crate range-set-blaze works on “clumpy” integers. “Clumpy”, here, means that the number of ranges needed to represent the data is small compared to the number of input integers. For example, these 1002 input integers

100, 101, …, 489, 499, 501, 502, …, 998, 999, 999, 100, 0

Ultimately become three Rust ranges:

0..=0, 100..=499, 501..=999.

(Internally, the RangeSetBlaze struct represents a set of integers as a sorted list of disjoint ranges stored in a cache efficient BTreeMap.)

Although the input integers are allowed to be unsorted and redundant, we expect them to often be “nice”. RangeSetBlaze’s from_iter constructor already exploits this expectation by grouping up adjacent integers. For example, from_iter first turns the 1002 input integers into four ranges

100..=499, 501..=999, 100..=100, 0..=0.

with minimal, constant memory usage, independent of input size. It then sorts and merges these reduced ranges.
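As a sketch of that grouping step (a hypothetical helper, not range-set-blaze's actual implementation), the idea fits in a few lines of plain Rust:

```rust
// Hypothetical sketch: fold a stream of integers into inclusive ranges,
// extending the current range whenever the next value is exactly adjacent.
fn group_adjacent(input: impl IntoIterator<Item = u32>) -> Vec<(u32, u32)> {
    let mut ranges: Vec<(u32, u32)> = Vec::new();
    for x in input {
        match ranges.last_mut() {
            // extend the current range when x is one past its end
            // (checked_add also handles a range ending at u32::MAX)
            Some((_, end)) if end.checked_add(1) == Some(x) => *end = x,
            // otherwise start a new one-element range
            _ => ranges.push((x, x)),
        }
    }
    ranges
}

fn main() {
    let input = [100, 101, 102, 104, 105, 100, 0];
    assert_eq!(
        group_adjacent(input),
        vec![(100, 102), (104, 105), (100, 100), (0, 0)]
    );
}
```

Note that the memory used is proportional to the number of output ranges, not the number of input integers.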

I wondered if a new from_slice method could speed construction from array-like inputs by quickly finding (some) consecutive integers. For example, could it, with minimal, constant memory, turn the 1002 input integers into five Rust ranges:

100..=499, 501..=999, 999..=999, 100..=100, 0..=0.

If so, from_iter could then quickly finish the processing.

Let’s start by writing is_consecutive with regular Rust:

pub const LANES: usize = 16;
pub fn is_consecutive_regular(chunk: &[u32; LANES]) -> bool {
    for i in 1..LANES {
        if chunk[i - 1].checked_add(1) != Some(chunk[i]) {
            return false;
        }
    }
    true
}

The algorithm just loops through the array sequentially, checking that each value is one more than its predecessor. It also avoids overflow.
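A quick sanity check of this scalar version, runnable on stable Rust (the function from above is repeated so the snippet stands alone):

```rust
pub const LANES: usize = 16;

pub fn is_consecutive_regular(chunk: &[u32; LANES]) -> bool {
    for i in 1..LANES {
        if chunk[i - 1].checked_add(1) != Some(chunk[i]) {
            return false;
        }
    }
    true
}

fn main() {
    let consecutive: [u32; LANES] = core::array::from_fn(|i| 100 + i as u32);
    assert!(is_consecutive_regular(&consecutive));
    assert!(!is_consecutive_regular(&[99; LANES]));
    // checked_add also rejects a would-be wraparound at u32::MAX:
    let wraps: [u32; LANES] = core::array::from_fn(|i| u32::MAX.wrapping_add(i as u32));
    assert!(!is_consecutive_regular(&wraps));
}
```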

Looping over the items seemed so easy, I wasn’t sure if SIMD could do any better. Here was my first attempt:

Splat0

use std::simd::prelude::*;

const COMPARISON_VALUE_SPLAT0: Simd<u32, LANES> =
    Simd::from_array([15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]);

pub fn is_consecutive_splat0(chunk: Simd<u32, LANES>) -> bool {
    if chunk[0].overflowing_add(LANES as u32 - 1) != (chunk[LANES - 1], false) {
        return false;
    }
    let added = chunk + COMPARISON_VALUE_SPLAT0;
    Simd::splat(added[0]) == added
}

Here is an outline of its calculations:

Source: This and all following images by author.

It first (needlessly) checks that the first and last items are 15 apart. It then creates added by adding 15 to the 0th item, 14 to the next, etc. Finally, to see if all items in added are the same, it creates a new Simd based on added’s 0th item and then compares. Recall that splat creates a Simd struct from one value.

Splat1 & Splat2

When I mentioned the is_consecutive problem to Ben Lichtman, he independently came up with this, Splat1:

const COMPARISON_VALUE_SPLAT1: Simd<u32, LANES> =
    Simd::from_array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]);

pub fn is_consecutive_splat1(chunk: Simd<u32, LANES>) -> bool {
    let subtracted = chunk - COMPARISON_VALUE_SPLAT1;
    Simd::splat(chunk[0]) == subtracted
}

Splat1 subtracts the comparison value from chunk and checks if the result is the same as the first element of chunk, splatted.
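To make the data flow concrete, here is Splat1 emulated on plain arrays in stable Rust; each loop models one SIMD operation (the helper name is mine):

```rust
const LANES: usize = 16;

// Scalar model of Splat1: subtract [0, 1, 2, ...] lane-wise, then check
// that every lane of the result equals chunk[0] (the "splat" comparison).
fn is_consecutive_splat1_scalar(chunk: &[u32; LANES]) -> bool {
    let mut subtracted = [0u32; LANES];
    for i in 0..LANES {
        // chunk - COMPARISON_VALUE_SPLAT1, with SIMD's wrapping semantics
        subtracted[i] = chunk[i].wrapping_sub(i as u32);
    }
    // Simd::splat(chunk[0]) == subtracted
    subtracted.iter().all(|&v| v == chunk[0])
}

fn main() {
    let consecutive: [u32; LANES] = core::array::from_fn(|i| 100 + i as u32);
    assert!(is_consecutive_splat1_scalar(&consecutive));
    assert!(!is_consecutive_splat1_scalar(&[99; LANES]));
}
```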

He also came up with a variation called Splat2 that splats the first element of subtracted rather than chunk. That would seemingly avoid one memory access.

I’m sure you are wondering which of these is best, but before we discuss that let’s look at two more candidates.

Swizzle

Swizzle is like Splat2 but uses simd_swizzle! instead of splat. Macro simd_swizzle! creates a new Simd by rearranging the lanes of an old Simd according to an array of indexes.

pub fn is_consecutive_sizzle(chunk: Simd<u32, LANES>) -> bool {
    let subtracted = chunk - COMPARISON_VALUE_SPLAT1;
    simd_swizzle!(subtracted, [0; LANES]) == subtracted
}

Rotate

This one is different. I had high hopes for it.

const COMPARISON_VALUE_ROTATE: Simd<u32, LANES> =
    Simd::from_array([4294967281, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]);

pub fn is_consecutive_rotate(chunk: Simd<u32, LANES>) -> bool {
    let rotated = chunk.rotate_elements_right::<1>();
    chunk - rotated == COMPARISON_VALUE_ROTATE
}

The idea is to rotate all the elements one to the right. We then subtract the original chunk from rotated. If the input is consecutive, the result should be “-15” followed by all 1’s. (Using wrapped subtraction, -15 is 4294967281u32.)
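Emulated on plain arrays in stable Rust (helper name mine), the wrapped subtraction in lane 0 becomes explicit:

```rust
const LANES: usize = 16;

// Scalar model of Rotate.
fn is_consecutive_rotate_scalar(chunk: &[u32; LANES]) -> bool {
    // chunk.rotate_elements_right::<1>(): lane i holds the old lane i-1
    let mut rotated = [0u32; LANES];
    for i in 0..LANES {
        rotated[i] = chunk[(i + LANES - 1) % LANES];
    }
    // chunk - rotated, with SIMD's always-wrapping subtraction
    let mut diff = [0u32; LANES];
    for i in 0..LANES {
        diff[i] = chunk[i].wrapping_sub(rotated[i]);
    }
    // lane 0 wraps to -15 (4294967281u32); every other lane must be 1
    diff[0] == 1u32.wrapping_sub(LANES as u32) && diff[1..].iter().all(|&v| v == 1)
}

fn main() {
    let consecutive: [u32; LANES] = core::array::from_fn(|i| 100 + i as u32);
    assert!(is_consecutive_rotate_scalar(&consecutive));
    assert!(!is_consecutive_rotate_scalar(&[99; LANES]));
}
```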

Now that we have candidates, let’s start to evaluate them.

Rule 5: Use Godbolt and AI to understand your code’s assembly, even if you don’t know assembly language.

We’ll evaluate the candidates in two ways. First, in this rule, we’ll look at the assembly language generated from our code. Second, in Rule 7, we’ll benchmark the code’s speed.

Don’t worry if you don’t know assembly language, you can still get something out of looking at it.

The easiest way to see the generated assembly language is with the Compiler Explorer, AKA Godbolt. It works best on short bits of code that don’t use outside crates. It looks like this:

Referring to the numbers in the figure above, follow these steps to use Godbolt:

  1. Open godbolt.org with your web browser.
  2. Add a new source editor.
  3. Select Rust as your language.
  4. Paste in the code of interest. Make the functions of interest public (pub fn). Do not include a main or unneeded functions. The tool doesn’t support external crates.
  5. Add a new compiler.
  6. Set the compiler version to nightly.
  7. Set options (for now) to -C opt-level=3 -C target-feature=+avx512f.
  8. If there are errors, look at the output.
  9. If you want to share or save the state of the tool, click “Share”

From the image above, you can see that Splat2 and Sizzle are exactly the same, so we can remove Sizzle from consideration. If you open up a copy of my Godbolt session, you’ll also see that most of the functions compile to about the same number of assembly operations. The exceptions are Regular — which is much longer — and Splat0 — which includes the early check.

In the assembly, 512-bit registers start with ZMM. 256-bit registers start with YMM. 128-bit registers start with XMM. If you want to better understand the generated assembly, use AI tools to generate annotations. For example, here I ask Bing Chat about Splat2:

Try different compiler settings, including -C target-feature=+avx2 and then leaving target-feature completely off.

Fewer assembly operations don’t necessarily mean faster speed. Looking at the assembly does, however, give us a sanity check that the compiler is at least trying to use SIMD operations, inlining const references, etc. Also, as with Splat2 and Swizzle, it can sometimes let us know when two candidates are the same.

You may need disassembly features beyond what Godbolt offers, for example, the ability to work with code that uses external crates. B3NNY recommended the cargo tool cargo-show-asm to me. I tried it and found it reasonably easy to use.

The range-set-blaze crate must handle integer types beyond u32. Moreover, we must pick a number of LANES, but we have no reason to think that 16 LANES is always best. To address these needs, in the next rule we’ll generalize the code.

Rule 6: Generalize to all types and LANES with in-lined generics, (and when that doesn’t work) macros, and (when that doesn’t work) traits.

Let’s first generalize Splat1 with generics.

#[inline]
pub fn is_consecutive_splat1_gen<T, const N: usize>(
    chunk: Simd<T, N>,
    comparison_value: Simd<T, N>,
) -> bool
where
    T: SimdElement + PartialEq,
    Simd<T, N>: Sub<Simd<T, N>, Output = Simd<T, N>>,
    LaneCount<N>: SupportedLaneCount,
{
    let subtracted = chunk - comparison_value;
    Simd::splat(chunk[0]) == subtracted
}

First, note the #[inline] attribute. It’s important for efficiency and we’ll use it on pretty much every one of these small functions.

The function defined above, is_consecutive_splat1_gen, looks great except that it needs a second input, called comparison_value, that we have yet to define.

If you don’t need a generic const comparison_value, I envy you. You can skip to the next rule if you like. Likewise, if you are reading this in the future and creating a generic const comparison_value is as effortless as having your personal robot do your household chores, then I doubly envy you.

We can try to create a comparison_value_splat_gen that is generic and const. Sadly, neither From<usize> nor alternative T::One are const, so this doesn’t work:

// DOESN'T WORK BECAUSE From is not const
pub const fn comparison_value_splat_gen<T, const N: usize>() -> Simd<T, N>
where
    T: SimdElement + Default + From<usize> + AddAssign,
    LaneCount<N>: SupportedLaneCount,
{
    let mut arr: [T; N] = [T::from(0usize); N];
    let mut i_usize = 0;
    while i_usize < N {
        arr[i_usize] = T::from(i_usize);
        i_usize += 1;
    }
    Simd::from_array(arr)
}

Macros are the last refuge of scoundrels. So, let’s use macros:

#[macro_export]
macro_rules! define_is_consecutive_splat1 {
    ($function:ident, $type:ty) => {
        #[inline]
        pub fn $function<const N: usize>(chunk: Simd<$type, N>) -> bool
        where
            LaneCount<N>: SupportedLaneCount,
        {
            define_comparison_value_splat!(comparison_value_splat, $type);

            let subtracted = chunk - comparison_value_splat();
            Simd::splat(chunk[0]) == subtracted
        }
    };
}
#[macro_export]
macro_rules! define_comparison_value_splat {
    ($function:ident, $type:ty) => {
        pub const fn $function<const N: usize>() -> Simd<$type, N>
        where
            LaneCount<N>: SupportedLaneCount,
        {
            let mut arr: [$type; N] = [0; N];
            let mut i = 0;
            while i < N {
                arr[i] = i as $type;
                i += 1;
            }
            Simd::from_array(arr)
        }
    };
}

This lets us run on any particular element type and any number of LANES (Rust Playground):

define_is_consecutive_splat1!(is_consecutive_splat1_i32, i32);

let a: Simd<i32, 16> = black_box(Simd::from_array(array::from_fn(|i| 100 + i as i32)));
let ninety_nines: Simd<i32, 16> = black_box(Simd::from_array([99; 16]));
assert!(is_consecutive_splat1_i32(a));
assert!(!is_consecutive_splat1_i32(ninety_nines));

Sadly, this still isn’t enough for range-set-blaze. It needs to run on all element types (not just one) and (ideally) all LANES (not just one).

Happily, there’s a workaround that again depends on macros. It also exploits the fact that we only need to support a finite list of types, namely: i8, i16, i32, i64, isize, u8, u16, u32, u64, and usize. If you need to also (or instead) support f32 and f64, that’s fine.

If, on the other hand, you need to support i128 and u128, you may be out of luck. The core::simd module doesn’t support them. We’ll see in Rule 8 how range-set-blaze gets around that at a performance cost.

The workaround defines a new trait, here called IsConsecutive. We then use a macro (that calls a macro, that calls a macro) to implement the trait on the 10 types of interest.

pub trait IsConsecutive {
    fn is_consecutive<const N: usize>(chunk: Simd<Self, N>) -> bool
    where
        Self: SimdElement,
        Simd<Self, N>: Sub<Simd<Self, N>, Output = Simd<Self, N>>,
        LaneCount<N>: SupportedLaneCount;
}

macro_rules! impl_is_consecutive {
    ($type:ty) => {
        impl IsConsecutive for $type {
            #[inline] // very important
            fn is_consecutive<const N: usize>(chunk: Simd<Self, N>) -> bool
            where
                Self: SimdElement,
                Simd<Self, N>: Sub<Simd<Self, N>, Output = Simd<Self, N>>,
                LaneCount<N>: SupportedLaneCount,
            {
                define_is_consecutive_splat1!(is_consecutive_splat1, $type);
                is_consecutive_splat1(chunk)
            }
        }
    };
}

impl_is_consecutive!(i8);
impl_is_consecutive!(i16);
impl_is_consecutive!(i32);
impl_is_consecutive!(i64);
impl_is_consecutive!(isize);
impl_is_consecutive!(u8);
impl_is_consecutive!(u16);
impl_is_consecutive!(u32);
impl_is_consecutive!(u64);
impl_is_consecutive!(usize);

We can now call fully generic code (Rust Playground):

// Works on i32 and 16 lanes
let a: Simd<i32, 16> = black_box(Simd::from_array(array::from_fn(|i| 100 + i as i32)));
let ninety_nines: Simd<i32, 16> = black_box(Simd::from_array([99; 16]));

assert!(IsConsecutive::is_consecutive(a));
assert!(!IsConsecutive::is_consecutive(ninety_nines));

// Works on i8 and 64 lanes
let a: Simd<i8, 64> = black_box(Simd::from_array(array::from_fn(|i| 10 + i as i8)));
let ninety_nines: Simd<i8, 64> = black_box(Simd::from_array([99; 64]));

assert!(IsConsecutive::is_consecutive(a));
assert!(!IsConsecutive::is_consecutive(ninety_nines));

With this technique, we can create multiple candidate algorithms that are fully generic over type and LANES. Next, it is time to benchmark and see which algorithms are fastest.


Those are the first six rules for adding SIMD code to Rust. In Part 2, we look at rules 7 to 9. These rules will cover how to pick an algorithm and set LANES. Also, how to integrate SIMD operations into your existing code and (importantly) how to make it optional. Part 2 concludes with a discussion of when/if you should use SIMD and ideas for improving Rust’s SIMD experience. I hope to see you there.

Please follow Carl on Medium. I write on scientific programming in Rust and Python, machine learning, and statistics. I tend to write about one article per month.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Chronosphere unveils logging package with cost control features

According to a study by Chronosphere, enterprise log data is growing at 250% year-over-year, and Chronosphere Logs helps engineers and observability teams to resolve incidents faster while controlling costs. The usage and volume analysis and proactive recommendations can help reduce data before it’s stored, the company says. “Organizations are drowning

Read More »

Cisco CIO on the future of IT: AI, simplicity, and employee power

AI can democratize access to information to deliver a “white-glove experience” once reserved for senior executives, Previn said. That might include, for example, real-time information retrieval and intelligent process execution for every employee. “Usually, in a large company, you’ve got senior executives, and you’ve got early career hires, and it’s

Read More »

AMI MegaRAC authentication bypass flaw is being exploitated, CISA warns

The spoofing attack works by manipulating HTTP request headers sent to the Redfish interface. Attackers can add specific values to headers like “X-Server-Addr” to make their external requests appear as if they’re coming from inside the server itself. Since the system automatically trusts internal requests as authenticated, this spoofing technique

Read More »

Israeli Gas Flows to Egypt Return to Normal as Iran Truce Holds

Israeli natural gas flows to Egypt returned to normal levels after a truce with Iran allowed the Jewish state to reopen facilities shuttered by the 12-day conflict. Daily exports have climbed to 1 billion cubic feet per day, according to two people with direct knowledge of the situation. That’s up from 260 million cubic feet when Israel’s Leviathan gas field, the country’s biggest, restarted on Wednesday, they said, declining to be identified because they’re not authorized to speak to the media.  The increased flows have let Egyptian authorities resume supplies to some factories that had been halted because of the shortages. Israel temporarily closed two of its three gas fields – Chevron-operated Leviathan and Energean’s Karish – shortly after launching attacks on Iran on June 13. The facilities that provided the bulk of exports to Egypt and Jordan resumed operations last week after a US-brokered ceasefire with the Islamic Republic took hold. The ramped-up supplies are a relief for Cairo, which has swung from a net exporter to importer of natural gas in recent years. As Israel and Iran traded blows, Egypt enacted contingency plans that included seeking alternative fuel purchases, limiting gas to some industries and switching power stations to fuel oil and diesel to maintain electricity output. What do you think? We’d love to hear from you, join the conversation on the Rigzone Energy Network. The Rigzone Energy Network is a new social experience created for you and all energy professionals to Speak Up about our industry, share knowledge, connect with peers and industry insiders and engage in a professional community that will empower your career in energy.

Read More »

California Regulator Wants to Pause Newsom Refinery Profit Cap

California’s energy market regulator is backing off a plan to place a profit cap on oil refiners in the state.  Siva Gunda, vice chair of the California Energy Commission, said during a Friday briefing that the cap would “serve as a deterrent” to refiners boosting investments in the state. Gunda said the commission wants to increase gasoline supply in California after two refineries announced plans to close in the next year, accounting for about one-fifth of the state’s crude-processing capacity. The recommendation marks a reversal from years of regulatory scrutiny by Governor Gavin Newsom and the California Energy Commission that contributed to plans by Phillips 66 and Valero Energy Corp. to shut their refineries. The closings prompted Newsom to adjust course in April and urge the energy regulator to collaborate with fuel makers to ensure affordable and reliable supply. Gunda wrote in a Friday letter to Newsom that the commission should pause implementation of a profit margin cap and focus on fuel resupply strategies instead. It comes more than two years after Newsom and state lawmakers gave the energy commission authority to determine a profit margin on refiners and impose financial penalties for violations. The state will be looking to increase fuel imports to make up for the loss of refining capacity, Gunda said. In the short term, California gas prices could rise 15 to 30 cents a gallon because of the loss of production, he said. A spokesperson for the energy commission said the estimated price increases would be mitigated by the plan presented on Friday. Californians already pay the highest gasoline prices in the country. Wade Crowfoot, secretary of the California Natural Resources Agency, said residents want the state to transition away from oil and gas yet they need to prevent cost spikes. “We get it,” he said.

Read More »

State utility regulators urge FERC to slash ROE transmission incentive

Utility regulators from about 35 states are urging the Federal Energy Regulatory Commission to sharply limit a 0.5% return on equity incentive the agency gives to utilities that join regional transmission organizations. “The time has come for the Commission to eliminate its policy of granting the RTO Participation Adder in perpetuity, if not to eliminate this incentive altogether,” the Organization of PJM States, the Organization of MISO States, the New England States Committee on Electricity and the Southwest Power Pool Regional State Committee said in a Friday letter to FERC. The state regulators and others contends the RTO incentive adds millions to ratepayer costs to encourage behavior — being an RTO member — that they would likely do anyway. In 2021, FERC proposed limiting its ROE adder to three years. FERC Chairman Mark Christie supports the proposal as well as limiting other incentives aimed at encouraging utilities to build transmission lines. However, it appears he has been unable to convince a majority of FERC commissioners to reduce those incentives. Christie’s term ends today, although he plans to stay at the agency until at least FERC’s next open meeting on July 24. Limiting the ROE incentive could reduce utility income. Public Service Enterprise Group, for example, estimates that removing the incentive would cut annual net income and cash inflows by about $40 million for its Public Service Electric & Gas subsidiary, according to a Feb. 25 filing at the U.S. Securities and Exchange Commission. The utility earned about $1.5 billion in 2024. Ending the incentive would reduce American Electric Power’s pretax income by $35 million to $50 million a year, the utility company said in its 2023 annual report with the SEC. In April, WIRES, a transmission-focused trade group, the Edison Electric Institute, which represents investor-owned utilities, and GridWise Alliance, a

Read More »

Affordability a ‘formidable challenge’ as load shifts to tech, industrial customers: ICF

Dive Brief: Keeping electricity affordable for consumers is a “formidable challenge” amid projections of declining generation capacity reserves and persistent uncertainty around the scale and pace of future load growth, ICF International Vice President of Energy Markets Maria Scheller said Thursday.  Meanwhile, broad policy uncertainty and an increasingly shaky regulatory environment give utilities and capital markets pause about expensive new infrastructure investments that could become stranded assets, Scheller said in a webinar on ICF’s “Powering the Future: Addressing Surging U.S. Electricity Demand” report. Policy conversations around import tariffs, federal energy tax credits and permitting reform are unfolding as the balance of electricity demand shifts from residential and business consumers to technology and industrial customers, which tend to require around-the-clock power, Scheller added. Dive Insight: The coming shift in U.S. electricity consumption represents less of a new paradigm than a return to the industrial-driven demand the country saw from the 1950s into the 1980s, after which deindustrialization and consumer-centric trends like the widespread adoption of air conditioning, electric resistance heating and personal computing shifted the balance toward the residential segment, Scheller said. The shift is important because unlike residential loads, which show considerable seasonal and intraday variation, industrial loads are flatter, less weather-dependent and more sensitive to voltage fluctuations, Scheller said. By 2035, ICF expects nearly 40% of total U.S. load will have a “flat, power-quality-sensitive profile,” and that overall load will grow faster than peak load, she said. In 2030, ICF projects more than 3% annual power consumption growth, compared with less than 2% annual peak load growth, according to a webinar slide. 
That’s not to say residential demand won’t also grow in the next few years as consumers electrify home heating and buy more electric vehicles — only that data centers and other industrial demand will “dwarf” it, Scheller

Read More »

Trump attacks on NRC independence pose health, safety risks

Edwin Lyman is director of nuclear power safety at the Union of Concerned Scientists. A White House executive order issued last month targeting the independence of the Nuclear Regulatory Commission, the federal agency that oversees the safety and security of U.S. commercial nuclear facilities and materials, as well as the possibly illegal firing earlier this month of Commissioner Christopher Hanson by President Donald Trump, are raising serious concerns about the agency’s effectiveness as a regulator going forward. While I’ve often been a critic of the NRC for taking actions favoring the nuclear industry at the expense of public health and safety, preserving the NRC in its current form is the best hope for heading off a U.S. nuclear plant disaster like the 2011 Fukushima Daiichi reactor meltdowns in Japan. My long-standing beef with the NRC has primarily been with its political leadership, not with the rank-and-file staff of highly knowledgeable inspectors, analysts and researchers committed to helping ensure that nuclear power remains safe and secure. These professionals are well aware how quickly things can go south at a nuclear power plant without rigorous oversight. They know from experience what obscure corners to look in and what questions to ask. And they can tell — and are not afraid to push back — when they are getting sold snake oil by fly-by-night startups looking to make easy money by capitalizing on the current nuclear power craze. Technical rigor and expert judgment form the bedrock of this work. But when staff are compelled to sweep legitimate safety concerns under the rug in the interest of political expediency, many will leave rather than compromise their scientific integrity. So there is little wonder that a wave of experienced personnel is headed out the door in the wake of the executive order on NRC “reform,”

Read More »

North America Loses Rigs Week on Week

North America dropped six rigs week on week, according to Baker Hughes’ latest North America rotary rig count, which was released on June 27. Although the U.S. dropped seven rigs week on week, Canada added one rig during the same timeframe, taking the total North America rig count down to 687, comprising 547 rigs from the U.S. and 140 rigs from Canada, the count outlined. Of the total U.S. rig count of 547, 533 rigs are categorized as land rigs, 12 are categorized as offshore rigs, and two are categorized as inland water rigs. The total U.S. rig count is made up of 432 oil rigs, 109 gas rigs, and six miscellaneous rigs, according to Baker Hughes’ count, which revealed that the U.S. total comprises 496 horizontal rigs, 38 directional rigs, and 13 vertical rigs. Week on week, the U.S. land rig count reduced by five, its offshore rig count decreased by two, and its inland water rig count remained unchanged, the count highlighted. The country’s oil rig count dropped by six, its gas rig count dropped by two, and its miscellaneous rig count increased by one, week on week, the count showed. The U.S. horizontal rig count dropped by six, its directional rig count dropped by two, and its vertical rig count increased by one, week on week, the count revealed. A major state variances subcategory included in the rig count showed that, week on week, Wyoming dropped five rigs, and Oklahoma, Louisiana, and Colorado each dropped one rig. A major basin variances subcategory included in Baker Hughes’ rig count showed that, week on week, the Granite Wash basin dropped one rig and the Permian basin dropped one rig. Canada’s total rig count of 140 is made up of 94 oil rigs and 46 gas rigs, Baker Hughes pointed

Read More »

Datacenter industry calls for investment after EU issues water consumption warning

CISPE’s response to the European Commission’s report warns that the resulting regulatory uncertainty could hurt the region’s economy. “Imposing new, standalone water regulations could increase costs, create regulatory fragmentation, and deter investment. This risks shifting infrastructure outside the EU, undermining both sustainability and sovereignty goals,” CISPE said in its latest policy recommendation, Advancing water resilience through digital innovation and responsible stewardship. “Such regulatory uncertainty could also reduce Europe’s attractiveness for climate-neutral infrastructure investment at a time when other regions offer clear and stable frameworks for green data growth,” it added. CISPE’s recommendations are a mix of regulatory harmonization, increased investment, and technological improvement. Currently, water reuse regulation is directed towards agriculture. Updated regulation across the bloc would encourage more efficient use of water in industrial settings such as datacenters, the association said. At the same time, countries struggling with limited public sector budgets are not investing enough in water infrastructure. This could only be addressed by tapping new investment by encouraging formal public-private partnerships (PPPs), it suggested: “Such a framework would enable the development of sustainable financing models that harness private sector innovation and capital, while ensuring robust public oversight and accountability.” Nevertheless, better water management would also require real-time data gathered through networks of IoT sensors coupled to AI analytics and prediction systems. To that end, cloud datacenters are less a drain on water resources than part of the answer: “A cloud-based approach would allow water utilities and industrial users to centralize data collection, automate operational processes, and leverage machine learning algorithms for improved decision-making,” argued CISPE.

Read More »

HPE-Juniper deal clears DOJ hurdle, but settlement requires divestitures

In HPE’s press release following the court’s decision, the vendor wrote that “After close, HPE will facilitate limited access to Juniper’s advanced Mist AIOps technology.” In addition, the DOJ stated that the settlement requires HPE to divest its Instant On business and mandates that the merged firm license critical Juniper software to independent competitors. Specifically, HPE must divest its global Instant On campus and branch WLAN business, including all assets, intellectual property, R&D personnel, and customer relationships, to a DOJ-approved buyer within 180 days. Instant On is aimed primarily at the SMB arena and offers a cloud-based package of wired and wireless networking gear that’s designed for so-called out-of-the-box installation and minimal IT involvement, according to HPE. HPE and Juniper focused on the positive in reacting to the settlement. “Our agreement with the DOJ paves the way to close HPE’s acquisition of Juniper Networks and preserves the intended benefits of this deal for our customers and shareholders, while creating greater competition in the global networking market,” HPE CEO Antonio Neri said in a statement. “For the first time, customers will now have a modern network architecture alternative that can best support the demands of AI workloads. The combination of HPE Aruba Networking and Juniper Networks will provide customers with a comprehensive portfolio of secure, AI-native networking solutions, and accelerate HPE’s ability to grow in the AI data center, service provider and cloud segments.” “This marks an exciting step forward in delivering on a critical customer need – a complete portfolio of modern, secure networking solutions to connect their organizations and provide essential foundations for hybrid cloud and AI,” said Juniper Networks CEO Rami Rahim. “We look forward to closing this transaction and turning our shared vision into reality for enterprise, service provider and cloud customers.”

Read More »

Data center costs surge up to 18% as enterprises face two-year capacity drought

“AI workloads, especially training and archival, can absorb 10-20ms latency variance if offset by 30-40% cost savings and assured uptime,” said Gogia. “Des Moines and Richmond offer better interconnection diversity today than some saturated Tier-1 hubs.” Contract flexibility is also crucial. Rather than traditional long-term leases, enterprises are negotiating shorter agreements with renewal options and exploring revenue-sharing arrangements tied to business performance.

Maximizing what you have

With expansion becoming more costly, enterprises are getting serious about efficiency through aggressive server consolidation, sophisticated virtualization and AI-driven optimization tools that squeeze more performance from existing space. The companies performing best in this constrained market are focusing on optimization rather than expansion. Some embrace hybrid strategies blending existing on-premises infrastructure with strategic cloud partnerships, reducing dependence on traditional colocation while maintaining control over critical workloads.

The long wait

When might relief arrive? CBRE’s analysis shows primary markets had a record 6,350 MW under construction at year-end 2024, more than double 2023 levels. However, power capacity constraints are forcing aggressive pre-leasing and extending construction timelines to 2027 and beyond. The implications for enterprises are stark: with construction timelines extending years due to power constraints, companies are essentially locked into current infrastructure for at least the next few years. Those adapting their strategies now will be better positioned when capacity eventually returns.

Read More »

Cisco backs quantum networking startup Qunnect

In partnership with Deutsche Telekom’s T-Labs, Qunnect has set up quantum networking testbeds in New York City and Berlin. “Qunnect understands that quantum networking has to work in the real world, not just in pristine lab conditions,” Vijoy Pandey, general manager and senior vice president of Outshift by Cisco, stated in a blog about the investment. “Their room-temperature approach aligns with our quantum data center vision.” Cisco recently announced it is developing a quantum entanglement chip that could ultimately become part of the gear that will populate future quantum data centers. The chip operates at room temperature, uses minimal power, and functions using existing telecom frequencies, according to Pandey.

Read More »

HPE announces GreenLake Intelligence, goes all-in with agentic AI

Like a teammate who never sleeps

Agentic AI is coming to Aruba Central as well, with an autonomous supervisory module talking to multiple specialized models to, for example, determine the root cause of an issue and provide recommendations. David Hughes, SVP and chief product officer, HPE Aruba Networking, said, “It’s like having a teammate who can work while you’re asleep, work on problems, and when you arrive in the morning, have those proposed answers there, complete with chain of thought logic explaining how they got to their conclusions.” Several new services for FinOps and sustainability in GreenLake Cloud are also being integrated into GreenLake Intelligence, including a new workload and capacity optimizer, extended consumption analytics to help organizations control costs, and predictive sustainability forecasting and a managed service mode in the HPE Sustainability Insight Center. In addition, updates to the OpsRamp operations copilot, launched in 2024, will enable agentic automation including conversational product help, an agentic command center that enables AI/ML-based alerts, incident management, and root cause analysis across the infrastructure when it is released in the fourth quarter of 2025. It is now a validated observability solution for the Nvidia Enterprise AI Factory. OpsRamp will also be part of the new HPE CloudOps software suite, available in the fourth quarter, which will include HPE Morpheus Enterprise and HPE Zerto. HPE said the new suite will provide automation, orchestration, governance, data mobility, data protection, and cyber resilience for multivendor, multicloud, multi-workload infrastructures. Matt Kimball, principal analyst for datacenter, compute, and storage at Moor Insights & Strategy, sees HPE’s latest announcements aligning nicely with enterprise IT modernization efforts, using AI to optimize performance. “GreenLake Intelligence is really where all of this comes together.
I am a huge fan of Morpheus in delivering an agnostic orchestration plane, regardless of operating stack

Read More »

MEF goes beyond metro Ethernet, rebrands as Mplify with expanded scope on NaaS and AI

While MEF is only now rebranding, Vachon said that the scope of the organization had already changed by 2005. Instead of just looking at metro Ethernet, the organization at the time had expanded into carrier Ethernet requirements. The organization has also had a growing focus on solving the challenge of cross-provider automation, which is where the LSO framework fits in. LSO provides the foundation for an automation framework that allows providers to more efficiently deliver complex services across partner networks, essentially creating a standardized language for service integration.

NaaS leadership and industry blueprint

Building on the LSO automation framework, the organization has been working on efforts to help providers with network-as-a-service (NaaS) related guidance and specifications. The organization’s evolution toward NaaS reflects member-driven demands for modern service delivery models. Vachon noted that MEF member organizations were asking for help with NaaS, looking for direction on establishing common definitions and some standard work. The organization responded by developing comprehensive industry guidance. “In 2023 we launched the first blueprint, which is like an industry North Star document. It includes what we think about NaaS and the work we’re doing around it,” Vachon said. The NaaS blueprint encompasses the complete service delivery ecosystem, with APIs including last mile, cloud, data center and security services. (Read more about its vision for NaaS, including easy provisioning and integrated security across a federated network of providers.)

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »