I Tried Making my Own (Bad) LLM Benchmark to Cheat in Escape Rooms

Recently, DeepSeek announced their latest model, R1, and article after article came out praising its performance relative to cost, and how the release of such open-source models could genuinely change the course of LLMs forever. That is really exciting! And also, too big of a scope to write about… but when a model like DeepSeek comes out of nowhere with a steel chair, boasting similar performance levels to other models, what does performance really mean in this context?

If you follow AI releases, you’ve seen this dance before. Every new model drops with its graphs showing how it’s somehow simultaneously better than GPT-4 on math problems while being smaller and more efficient. But what exactly are these benchmarks measuring? How are they created? And more importantly, how can we cut through the hype to create our own benchmarks for specific use cases?

I wanted to learn more about LLM Benchmarking.

Part 1: What is a Benchmark? (in 3 seconds)

TL;DR — The SATs (multiple, actually) for LLMs.

Part 1.1: What is a Benchmark? (in more than 3 seconds)

Before we dive into the nitty-gritty of specific benchmarks, let’s take a moment to unpack what we even mean by “LLM Benchmark.” Because calling them the “SATs for AI” feels both right and also slightly oversimplified.

LLM benchmarks are, at their core, structured tests used to measure how well large language models perform on certain tasks. These tasks can be anything from identifying if a statement is true or false, to summarizing a legal document, to generating valid Python functions. Think of them as curated obstacle courses specially designed by AI researchers to test every relevant muscle these models might have. These frameworks typically provide a dataset of inputs with known correct outputs, allowing for consistent comparison between models.

Modern benchmarks employ various evaluation methodologies. Classification metrics like accuracy work for tasks with discrete correct answers, while overlap-based metrics (BLEU, ROUGE) evaluate free-form text generation. Some benchmarks use functional testing for code generation, or employ other LLMs as judges to evaluate response quality.

A typical benchmark usually comes packaged as:

  • A standardized dataset of questions, prompts, or tasks (with correct or reference answers).
  • An evaluation protocol specifying how to measure success, like accuracy, F1 score, BLEU/ROUGE for text generation, or pass/fail rates for coding tasks.
  • A leaderboard or some form of comparative scoreboard, often with big flashy graphs.
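
To make that anatomy concrete, here’s a deliberately tiny Python sketch of all three pieces. The prompts, models, and exact-match metric are made up purely for illustration:

dataset = [
    {"prompt": "What is 2 + 2?", "answer": "4"},
    {"prompt": "What is the capital of France?", "answer": "Paris"},
]

def exact_match_accuracy(model_fn, dataset) -> float:
    """Evaluation protocol: fraction of prompts answered exactly right."""
    correct = sum(model_fn(item["prompt"]).strip() == item["answer"] for item in dataset)
    return correct / len(dataset)

# "Leaderboard": run every model on the same data and metric, then sort by score.
models = {"always_four": lambda p: "4", "always_paris": lambda p: "Paris"}
leaderboard = sorted(
    ((name, exact_match_accuracy(fn, dataset)) for name, fn in models.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
print(leaderboard)  # e.g. [('always_four', 0.5), ('always_paris', 0.5)]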

Some really famous benchmarks include MMLU for testing multitask language understanding, TruthfulQA for assessing factual accuracy, and HumanEval for measuring coding capabilities. Results are often published on public leaderboards, which lets people make transparent comparisons between different models.

From the DeepSeek paper: DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning

What Makes a Good Benchmark?

  1. A Clear Task Definition: We want tasks that are unambiguous. The more straightforward and well-specified the challenge, the easier it is to trust the results.
  2. Data Integrity: The test set shouldn’t be floating around in the training data. Because if the model’s seen the exact same question 50 times before, the evaluation is about as useful as giving a math quiz to someone who already has the answer key.
  3. Quantifiable Metrics: You need a standard for scoring performance — like how many times the model’s code passes test cases or how close the generated summary is to a “ground-truth” summary.
  4. Task Diversity & Difficulty: If a benchmark is too easy, everyone just ACES it on day one, and we learn… well, nothing. If it’s too niche (like “We test only the model’s ability to count the digits of Pi for 20 minutes”), that’s also not so helpful.

Life Ain’t All about The Grades

Benchmarks capture only a slice of what LLMs can do. In the real world, your chatbot might need to juggle domain knowledge, keep track of conversation context, abide by your company’s policies, and produce fluent, non-offensive replies. No single standardized test out there fully covers that. As we’ll see in the upcoming case studies, the design and execution of a benchmark can heavily shape the picture you get of your model’s performance… and sometimes lead you astray if you’re not careful with how you measure success.

Now that we have a sense of what LLM benchmarks are designed to accomplish (and where they might fall short), let’s explore a couple of examples to see how people actually build and use them in practice — with mixed results!

Case Study #1: Leetcode as an LLM Benchmark

As a student in the tech space, the word “Leetcode” popping up during my search for cool benchmarks raised my blood pressure by a statistically significant amount. Unlike Leetcode, which sucks, the paper “Performance Study of LLM-Generated Code on Leetcode” was very interesting — it asks a deceptively simple question: can we use Leetcode to benchmark LLM code generation? Their findings reveal both the promise and pitfalls of this approach.

The Benchmark Design

The researchers built a three-stage validation system. Local tests catch basic errors, Leetcode’s judge verifies correctness, and a custom benchmarking setup measures performance. This setup revealed something critical: benchmarking code performance is harder than it looks.

When they compared local measurements to Leetcode’s metrics, they found only a 0.28 correlation. Leetcode’s measurements showed much higher variation (0.089 vs 0.035 locally). Even worse, Leetcode’s rankings proved unstable — identical solutions could drop from the 77th to 54th percentile just based on submission timing.

From “A Performance Study of LLM-Generated Code on Leetcode,” 28th International Conference on Evaluation and Assessment in Software Engineering (EASE 2024), Salerno, Italy (2024)

The Real Problems

Three major issues emerged that challenge Leetcode’s viability as a benchmark:

Data Contamination: Using public problems risks LLMs having seen the solutions during training. The researchers had to use only problems from 2023 to mitigate this.

Platform Instability: Leetcode’s metrics drift over time — memory measurements showed a -0.24 correlation with test date. This makes reproducible benchmarking nearly impossible.

Measurement Reliability: The weak correlation between local and platform measurements raises questions about what we’re actually testing.

What It Means for LLM Benchmarking

This study doesn’t just critique Leetcode — it highlights what we need in a code generation benchmark: reproducible measurements, reliable performance metrics, and guaranteed training-test separation. Until we have platforms built specifically for this purpose, we need to be extremely cautious about using competition platforms as benchmarks.

So! We know that not all benchmarks are viable benchmarks — what about a more mainstream one?

Case Study #2: SuperGLUE — Building a Better Language Understanding Benchmark

The SuperGLUE paper tackles a fascinating problem in AI benchmarking: what do you do when models get too good at your tests? When GLUE became insufficient (with models surpassing human performance), the researchers had to rethink how we measure language understanding.

The Benchmark Design

SuperGLUE’s core innovation is its task selection methodology. The researchers collected task proposals from the NLP community and filtered them through a rigorous process: each task needed clear evaluation metrics, public training data, and — most importantly — significant headroom between machine and human performance.

This resulted in eight tasks (I’ve simplified the table from the document here, it’s a little less readable but you should get the sense of what the questions are asking):

SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems, In 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada (2019)

What makes these tasks special is their diversity in format. Unlike GLUE’s focus on sentence classification, SuperGLUE includes coreference resolution, reading comprehension, and more complex reasoning tasks. Each task measures different aspects of language understanding while maintaining clear, quantifiable metrics.


Part 2: Let’s Build a Physical Reasoning Benchmark: To Cheat at Escape Rooms

After looking at some benchmarks like SuperGLUE and Leetcode, I had an idea: what if we tested LLMs on something completely different — physical reasoning… through escape room puzzles?

It’s a pretty valid idea — escape rooms come with real stakes and consequences for failure: screw up one too many puzzles, and your friends will think you’re pretty stupid and relegate you to spectator duty. Luckily for us, however, they (or the poor employees) don’t know that you can sneak a phone into an escape room — and you know just who to ask for the answers. Today, LLMs face off against the puzzles of a physical escape room.

Note: This is NOT a rigorous academic benchmark (please don’t cite this in papers, why would you even want to do that?), or even close to it, and it’s just supposed to be a fun way to test LLM benchmarking and evaluation. Please do not destroy my prompts, I am aware they are bad.

Why Physical Reasoning?

For real, though… most LLM benchmarks focus on linguistic tasks (like SuperGLUE) or code generation (like Leetcode). And for good reason — these are well-defined domains with clear evaluation metrics. But real-world problem solving often requires understanding physical principles and their interactions. The famous “Can GPT-4 do physics?” debates usually center around mathematical problem-solving, not practical physical reasoning.

Looking at existing benchmarks taught me a few key principles:

  1. Clear evaluation metrics are crucial (from SuperGLUE’s task-specific scores)
  2. Problems should have unambiguous solutions (from HumanEval’s test cases)
  3. The benchmark should test distinct capabilities (from MMLU’s subject categories)

Designing the Problems

I settled on escape room puzzles for two reasons. First, they naturally combine physical reasoning with clear goals. Second, they have unambiguous success conditions — either you solve it the intended way, or you don’t. Third, and most importantly, they let me include “red herrings” — irrelevant items that test whether the LLM can identify what matters physically. Fourth, I just really like doing escape rooms (did I mention that already?).

I am aware that this is more than two reasons, but if LLMs can’t count how many r’s there are in “strawberry,” I’m allowed to mess up once in a while too.

Here’s how I structured the five core problems:

Fluid Dynamics (FLUID_001) (Ping pong ball stuck in a tube)

  • Tests understanding of buoyancy and fluid displacement
  • Inspired by classic physics problems but in practical context
  • Includes intentionally irrelevant items (like squishy food models)

Light Properties (UV_001) (UV light on a push-number lock)

  • Tests understanding of UV fluorescence and material properties
  • Combines multiple physical principles (light, material science)
  • Requires understanding of environmental conditions

Mechanical Understanding (CIPHER_001) (A cipher ring)

  • Tests spatial reasoning and mechanical alignment
  • No red herrings — tests for correlating a dial to a cipher wheel
  • Requires understanding rotational symmetry

Force Application (VAC_001) (Can stuck in hole)

  • Tests understanding of vacuum forces and surface adhesion
  • Multiple possible solution approaches
  • Requires understanding force multiplication

Collaborative Physics (COLLAB_001) (Can two people shimmy a key?)

  • Tests understanding of physical constraints in multi-agent scenarios
  • Requires combining multiple physical principles
  • Tests understanding of tool creation and friction

Sounds really fancy… but it’s just some basic physical puzzles. You can access them on my GitHub.

The Technical Part

The benchmark implementation has three main components:

1. Problem Definition Layer

Problems are defined in a structured JSON format that enforces consistent evaluation:

{
    "problem_id": "FLUID_001",
    "setup": {
        "scenario": "A ping pong ball is at the bottom of a narrow tube...",
        "available_items": ["bottle of water", "squishy food models"...],
        "constraints": ["tube too narrow for manual retrieval"]
    },
    "physical_principles": ["buoyancy", "fluid displacement"],
    "red_herrings": ["squishy food models", "milk carton"],
    "solution": {
        "steps": ["pour water into tube", "allow ball to float"],
        "key_insights": ["water displaces air", "ping pong ball less dense"]
    }
}

This structure draws from SuperGLUE’s design — each component is clearly separated and machine-readable. The physical_principles field explicitly lists what’s being tested, while red_herrings helps in scoring the LLM’s ability to ignore irrelevant information.
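
As a rough sketch of how a harness could consume these definitions (assuming the problems are stored as a JSON list on disk; the file name and field checks here are mine, not necessarily the repo’s), loading and sanity-checking them might look like this:

import json

REQUIRED_FIELDS = {"problem_id", "setup", "physical_principles", "red_herrings", "solution"}

def load_problems(path: str = "problems.json") -> list[dict]:
    """Load problem definitions and verify each has the fields the scorer expects."""
    with open(path) as f:
        problems = json.load(f)
    for problem in problems:
        missing = REQUIRED_FIELDS - problem.keys()
        if missing:
            raise ValueError(f"{problem.get('problem_id', '?')} is missing fields: {missing}")
    return problems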

2. Evaluation Framework

The evaluation system uses Python’s asyncio for concurrent testing, with retry logic for a little bit more API stability:

# Assuming tenacity provides the retry decorator and aiohttp the async HTTP client
from typing import Dict
import aiohttp
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(3), wait=wait_exponential(min=1, max=10))
async def evaluate_response(self, criteria: JudgingCriteria) -> Dict:
    """Evaluate a model's response using GPT-4 as judge."""
    async with aiohttp.ClientSession() as session:
        ...  # evaluation logic elided

The scoring system looks at three components:

Physical Understanding Score (PUS) ∈ [0,2]

  • Measures understanding of relevant physical principles
  • Calculated as normalized sum of demonstrated principles

Solution Path Score (SPS) ∈ [0,2]

  • Evaluates completeness and correctness of solution steps
  • Considers practical feasibility of proposed solutions

Red Herring Handling (RHH) ∈ {0,1}

  • A binary score for avoiding irrelevant items
  • Tests ability to focus on physically relevant factors
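
If you want one number per problem, the three sub-scores can be collapsed into a weighted total. The weights below are made up for illustration; they are not some canonically correct choice:

def composite_score(pus: float, sps: float, rhh: int) -> float:
    """Collapse the three sub-scores into a single value in [0, 1].
    PUS and SPS are each normalized from [0, 2]; RHH is already binary."""
    assert 0 <= pus <= 2 and 0 <= sps <= 2 and rhh in (0, 1)
    return 0.4 * (pus / 2) + 0.4 * (sps / 2) + 0.2 * rhh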

And yes, there are so many other scoring methods, better and worse, that could be used! For example, RHH could count how many irrelevant items a solution uses, or measure how plausible each use is… the point is that picking these metrics is often pretty arbitrary, yet it matters a lot for making your benchmark credible, which mine is very much not.

Additionally, I did not want to rewrite any code after. Sue me.

3. Model Interface Layer

The benchmark supports multiple LLM backends through a common interface:

class ModelInterface:
    """Interface for different LLM APIs."""
    async def generate_response(self, prompt: str) -> str:
        raise NotImplementedError


class GPT4Interface(ModelInterface):
    async def generate_response(self, prompt: str) -> str:
        ...  # GPT-4 specific implementation


class ClaudeInterface(ModelInterface):
    async def generate_response(self, prompt: str) -> str:
        ...  # Claude specific implementation

Two models… I can’t really afford any more, please understand.
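
For a sense of what sits behind those stubs, here’s roughly what the GPT-4 backend could look like using the OpenAI Python SDK. The model name, client setup, and lack of error handling are simplifications for illustration, not the exact code I ran:

from openai import AsyncOpenAI  # assumes the openai>=1.x SDK

class GPT4Interface(ModelInterface):
    """One possible backend: OpenAI's chat completions API."""
    def __init__(self, api_key: str, model: str = "gpt-4"):
        self.client = AsyncOpenAI(api_key=api_key)
        self.model = model

    async def generate_response(self, prompt: str) -> str:
        response = await self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content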

Let’s Test It!

So after some refilling of API balances, I was ready to go. I ran the benchmark, and after totally not wasting credits on like, 5 different errors, the results were in. And they were… interesting! I used GPT-4 and Claude 3.5 Haiku (I am aware these are not similarly spec’d models) as our victims.

The Challenge: A can is stuck in a hole that almost exactly matches its volume. A combination is written on the bottom of the can that needs to be retrieved.

The inspiration
"VAC_001": {
        "problem_id": "VAC_001",
        "setup": {
            "available_items": [
                "ruler",
                "notebook",
                "suction cup",
                "UV light"
            ],
            "constraints": [
                "can fits too tightly for finger grip",
                "hole is same diameter as can",
                "can is clean and dry"
            ]
        },
        "physical_principles": [
            "vacuum suction",
            "surface adhesion",
            "force multiplication"
        ],
        "red_herrings": [
            "ruler",
            "notebook",
            "UV light"
        ],
        "solution": {
            "steps": [
                "inspect suction cup and can surface",
                "ensure can surface is clean",
                "firmly press suction cup onto can top",
                "create vacuum seal",
                "pull can out using suction cup"
            ],
            "key_insights": [
                "suction cup creates vacuum seal",
                "vacuum force allows extraction",
                "other items don't provide necessary force"
            ]
        }
}

GPT-4’s Solution

Physical Principles: The main physical principle involved in this solution is the concept of air pressure differential, which is the same principle that allows a suction cup to stick to a surface. Air pressure is higher on one side of the suction cup (outside) compared to the other (inside), and this difference in pressure causes the cup to stick to the surface.

Solution Steps:

  1. Attach the suction cup to the top of the can. The flat surface of the can should provide a good contact for the suction cup to adhere to.
  2. Once the suction cup is firmly attached, gently pull upwards. The air pressure differential should cause the suction cup (and thus the can) to be lifted out of the hole.
  3. Once the can is out of the hole, remove the suction cup from the top of the can.
  4. Use the UV light to illuminate the combination written on the bottom of the can. Some inks only become visible under UV light, which is why this step is necessary.
  5. Write down the combination on the notebook to ensure it’s not forgotten.

Required Items:

  • Suction cup: This is required to adhere to the can and pull it out of the hole, as the can fits too tightly for finger grip.
  • UV light: This is required to reveal the combination written on the bottom of the can, as it may be written in ink that only becomes visible under UV light.
  • Notebook: This is required to record the combination once it’s visible, ensuring it’s not forgotten.”

Not bad at all! GPT-4 manages to figure out the physical principle behind the solution (even if it is very wordy) and describes, step by step, how to solve the problem. It does end up using the UV light and notebook, but that’s more of a problem on my end for not penalizing the use of unnecessary items strongly enough in the prompt. Also, the uses for these items seem decently reasonable… so why not!

Haiku-3.5’s Solution

Physical Principles: The key physical principles involved in this scenario are the concept of volume and the use of suction to extract the can from the hole. Since the can fits almost exactly into the hole, it is likely that the can is stuck due to the tight fit and the lack of any air gaps or space between the can and the hole. This tight fit creates a vacuum-like effect, making it difficult to grip and pull the can out.

Solution Steps:

  1. Measure the diameter of the can using the ruler. This will help determine the size of the hole and confirm that the can fits tightly within it.
  2. Inspect the bottom of the can to locate the combination written on it. Use the UV light to illuminate the bottom of the can and make the combination visible, if it is not already clearly visible.
  3. Attach the suction cup to the top of the can. The suction cup will create a temporary seal on the can’s surface, allowing you to grip and pull it out of the hole.
  4. Gently pull the can upwards using the suction cup. The suction force should be strong enough to overcome the vacuum-like effect and extract the can from the hole.
  5. Once the can is out, record the combination written on the bottom of the can in the notebook.

Required Items:

  • Ruler: To measure the diameter of the can and confirm the tight fit in the hole.
  • UV light: To illuminate the bottom of the can and make the combination visible, if it is not already clearly visible.
  • Suction cup: To create a temporary seal on the can’s surface and provide the necessary grip to pull the can out of the hole.
  • Notebook: To record the combination written on the bottom of the can.

On the other hand, Haiku’s solution was… certainly a solution of all time.

First, it wants to measure the diameter of the can using the ruler??? To determine the size of the hole and confirm that the can fits tightly within it? Why would we need to do this? And do we need a ruler for that?

Second, it tells us to inspect the bottom of the can to locate the combination, when the entire problem is about not being able to pull the can out of the hole conventionally. This might just be an issue of order, but now I truly understand my friends’ feelings whenever I would tell them “just fix it man” to their numerous problems.

But it eventually does get the solution. So… not the worst.

Here’s a fancy radar graph of the results!

We see that both models are pretty similar in their capabilities, with GPT-4 being slightly better in physical understanding and solution path, and Haiku being slightly better in red herring handling. Overall though, both models kind of suck. Dang.

There are also only… 5 questions.

If you’d like to see the full breadth of questions, they’re on my GitHub.

LLM-as-a-Judge

By the way, the method I used to generate the evaluations, LLM-as-a-judge, has gained significant traction in the AI community, particularly after the work of Zheng et al. in their 2023 paper “Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena.” The technique has proven remarkably effective, achieving over 80% agreement with human evaluators in tasks ranging from code assessment to dialogue quality evaluation!

Here’s where my experiment gets kind of cool (arguably, maybe, subjectively) — I used this methodology and had GPT-4 judge other LLMs’ physical reasoning abilities. Yes, I’m using an AI to judge other AIs.

Why does this work? Well, judging a response is actually a simpler task than generating one. When GPT-4 generates a solution to a physical puzzle, it needs to:

  • Understand the physical principles involved
  • Plan a sequence of steps
  • Consider all constraints
  • Generate a coherent explanation

But when judging, it only needs to check if specific criteria are met in an existing solution. The evaluation prompt is very focused:

def _create_evaluation_prompt(self, criteria: JudgingCriteria) -> str:
    return f"""You are an expert judge evaluating an LLM's understanding of physical reasoning puzzles.
Evaluate based on three criteria:
1. Physical Understanding Score (0-2): Does the solution correctly apply relevant physical principles?
2. Solution Path Score (0-2): Are the steps complete and feasible?
3. Red Herring Handling (0-1): Does it avoid using irrelevant items?
Scenario: {criteria.scenario}
Physical Principles Required: {criteria.correct_principles}
Solution Given: {criteria.model_response}
"""

To validate this approach, I followed the validation framework suggested by Zheng et al., performing spot-checks of GPT-4’s evaluations against my own judgments. Surprisingly (or perhaps unsurprisingly, given the broader research on LLM evaluation), it was remarkably consistent in identifying both correct physical understanding and flawed reasoning.

Is this perfect? Absolutely not. There’s something philosophically weird about using one LLM to evaluate another. But in practice, it can work surprisingly well — just like how I moan and groan about the visual presentation of a dish on Masterchef, while setting my kitchen aflame trying to microwave a hot dog.

What I Learned

Building this benchmark taught me several things about benchmark design:

Clear Metrics Matter: Even for complex tasks like physical reasoning, you need unambiguous scoring criteria.

Red Herrings Are Powerful: Including irrelevant items reveals a lot about an LLM’s reasoning process.

Context Control is Hard: Ensuring LLMs don’t “hallucinate” additional physical context is challenging.

Is this a perfect benchmark? Not even close. Please don’t rub it in. Is it scientifically rigorous? Definitely not. But it’s been a fascinating exploration into an aspect of LLM capabilities, and sometimes the best lessons come from just trying things out and seeing what happens.

Now, if you’ll excuse me, I will be sneaking in a phone with an internet connection into my next escape room, for reasons that I am legally unmotivated to disclose.

[1] L. Zheng, W.-L. Chiang, Y. Sheng, S. Zhuang, Z. Wu, Y. Zhuang, Z. Lin, Z. Li, D. Li, E. P. Xing, H. Zhang, J. E. Gonzalez, I. Stoica, “Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena,” Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS 2023), Datasets and Benchmarks Track (2023)

[2] T. Coignion, C. Quinton, R. Rouvoy, “A Performance Study of LLM-Generated Code on Leetcode,” In 28th International Conference on Evaluation and Assessment in Software Engineering (EASE 2024), Salerno, Italy (2024)

[3] A. Wang, Y. Pruksachatkun, N. Nangia, A. Singh, J. Michael, F. Hill, O. Levy, S. R. Bowman, “SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems,” In 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada (2019)

[4] DeepSeek-AI, D. Guo, D. Yang, H. Zhang, J. Song, R. Zhang, R. Xu, Q. Zhu, S. Ma, P. Wang, X. Bi, X. Zhang, X. Yu, Y. Wu, Z.F. Wu, Z. Gou, Z. Shao, Z. Li, Z. Gao et al., “DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning,” arXiv preprint arXiv:2501.12948 (2025)

[5] Unless otherwise stated, all images are created by the author.
