
Learnings from a Machine Learning Engineer — Part 3: The Evaluation


In this third part of my series, I will explore the evaluation process, a critical piece that will lead to a cleaner data set and elevate your model performance. We will see the difference between evaluating a trained model (one not yet in production) and evaluating a deployed model (one making real-world predictions).

In Part 1, I discussed the process of labelling your image data that you use in your Image Classification project. I showed how to define “good” images and create sub-classes. In Part 2, I went over various data sets, beyond the usual train-validation-test sets, such as benchmark sets, plus how to handle synthetic data and duplicate images.

Evaluation of the trained model

As machine learning engineers, we look at accuracy, F1, log loss, and other metrics to decide if a model is ready to move to production. These are all important measures, but from my experience, these scores can be deceiving, especially as the number of classes grows.

Although it can be time-consuming, I find it very important to manually review the images that the model gets wrong, as well as the images that receive a low softmax “confidence” score. This means adding a step immediately after your training run completes to calculate scores for all images — training, validation, test, and the benchmark sets. You only need to bring up for manual review the ones that the model had problems with, which should only be a small percentage of the total number of images. See the Double-check process below.
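As a rough illustration, here is a minimal sketch of that post-training step, assuming a Keras-style model with a `predict` method; the data set layout, helper names, and the 0.95 threshold are placeholders for whatever your project uses.

```python
import numpy as np

# Assumed threshold; match it to whatever you use when presenting results.
CONFIDENCE_THRESHOLD = 0.95

def flag_for_review(model, datasets, class_names):
    """Score every image and return the ones that need a manual look."""
    review_queue = []
    # datasets: e.g. {"train": (images, labels), "val": (...), "benchmark_difficult": (...)}
    for set_name, (images, labels) in datasets.items():
        probs = model.predict(images, verbose=0)   # softmax scores, shape (n, num_classes)
        pred_idx = probs.argmax(axis=1)
        pred_conf = probs.max(axis=1)
        for i, (truth, pred, conf) in enumerate(zip(labels, pred_idx, pred_conf)):
            wrong = pred != truth
            low_conf = conf < CONFIDENCE_THRESHOLD
            if wrong or low_conf:
                review_queue.append({
                    "set": set_name,
                    "index": i,
                    "ground_truth": class_names[truth],
                    "predicted": class_names[pred],
                    "confidence": float(conf),
                })
    return review_queue
```

The returned list is exactly what gets fed into the Double-check interface described later in this article.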

What you do during the manual evaluation is put yourself in a “training mindset” to ensure that the labelling standards you set up in Part 1 are being followed. Ask yourself:

  • “Is this a good image?” Is the subject front and center, and can you clearly see all the features?
  • “Is this the correct label?” Don’t be surprised if you find wrong labels.

You can either remove the bad images or fix the labels if they are wrong. Otherwise you can keep them in the data set and force the model to do better next time. Other questions I ask are:

  • “Why did the model get this wrong?”
  • “Why did this image get a low score?”
  • “What is it about the image that caused confusion?”

Sometimes the answer has nothing to do with that specific image. Frequently, it has to do with the other images, either in the ground truth class or in the predicted class. It is worth the effort to Double-check all images in both sets if you see a consistently bad guess. Again, don’t be surprised if you find poor images or wrong labels.

Weighted evaluation

When doing the evaluation of the trained model (above), we apply a lot of subjective analysis — “Why did the model get this wrong?” and “Is this a good image?” From these, you may only get a gut feeling.

Frequently, I will decide to hold off moving a model forward to production based on that gut feel. But how can you justify to your manager that you want to hit the brakes? This is where a more objective analysis comes in: creating a weighted average of the softmax “confidence” scores.

In order to apply a weighted evaluation, we need to identify sets of classes that deserve adjustments to the score. Here is where I create a list of “commonly confused” classes.

Commonly confused classes

Certain animals at our zoo can easily be mistaken for one another. For example, African elephants and Asian elephants have different ear shapes. If your model gets these two mixed up, that is not as bad as guessing a giraffe! So perhaps you give partial credit here. You and your subject matter experts (SMEs) can come up with a list of these pairs and a weighted adjustment for each.


This weight can be factored into a modified cross-entropy loss function in the equation below. The back half of this equation will reduce the impact of being wrong for specific pairs of ground truth and prediction by using the “weight” function as a lookup. By default, the weighted adjustment would be 1 for all pairings, and the commonly confused classes would get something like 0.5.

In other words, it’s better to be unsure (have a lower confidence score) when you are wrong, compared to being super confident and wrong.

Modified cross-entropy loss function, image by author
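Since the equation itself is shown as an image, here is a minimal sketch of one plausible reading of it: a standard log loss where the penalty for a wrong prediction is scaled by a pair-lookup weight. The class names, the pair table, and the 0.5 value are illustrative, not the author’s exact implementation.

```python
import numpy as np

# Illustrative pair weights: default is 1.0; commonly confused pairs get partial credit.
CONFUSED_PAIR_WEIGHT = {
    ("african_elephant", "asian_elephant"): 0.5,
    ("asian_elephant", "african_elephant"): 0.5,
}

def pair_weight(true_label, predicted_label):
    """Lookup the weighted adjustment for a (ground truth, prediction) pair."""
    return CONFUSED_PAIR_WEIGHT.get((true_label, predicted_label), 1.0)

def weighted_log_loss(true_labels, predicted_labels, true_class_probs):
    """Cross-entropy where the penalty for a miss is scaled by the pair weight.

    true_class_probs: softmax probability the model assigned to the ground-truth class.
    """
    losses = []
    for truth, pred, p in zip(true_labels, predicted_labels, true_class_probs):
        w = pair_weight(truth, pred)      # 1.0 for most pairs, 0.5 for confused pairs
        p = np.clip(p, 1e-12, 1.0)        # avoid log(0)
        losses.append(-w * np.log(p))
    return float(np.mean(losses))
```

With the default weight of 1.0 this reduces to ordinary log loss, so the adjustment only softens the penalty for the pairs you and your SMEs explicitly listed.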

Once this weighted log loss is calculated, I can compare it to previous training runs to see if the new model is ready for production.

Confidence threshold report

Another valuable measure incorporates the confidence threshold (95 in my example): reporting on accuracy and false positive rates. Recall that when we apply the confidence threshold before presenting results, we reduce the number of false positives shown to the end user.

In this table, we look at the breakdown of “true positive above 95” for each data set. We get a sense that when a “good” picture comes through (like the ones from our train-validation-test set), it is very likely to surpass the threshold, thus the user is “happy” with the outcome. Conversely, the “false positive above 95” rate is extremely low for good pictures, thus only a small number of our users will be “sad” about the results.

Example Confidence Threshold Report, image by author
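A report like this can be derived directly from per-image prediction records. The sketch below assumes a list of records with set name, ground truth, prediction, and confidence (hypothetical field names, gathered for every image rather than only the flagged ones).

```python
from collections import defaultdict

def confidence_threshold_report(predictions, threshold=0.95):
    """predictions: dicts with 'set', 'ground_truth', 'predicted', 'confidence'."""
    report = defaultdict(lambda: {"total": 0, "tp_above": 0, "fp_above": 0})
    for p in predictions:
        row = report[p["set"]]
        row["total"] += 1
        if p["confidence"] >= threshold:
            if p["predicted"] == p["ground_truth"]:
                row["tp_above"] += 1   # confident and correct: the user is "happy"
            else:
                row["fp_above"] += 1   # confident and wrong: the user is "sad"
    # Convert counts to per-data-set rates
    return {
        name: {
            "true_positive_above": row["tp_above"] / row["total"],
            "false_positive_above": row["fp_above"] / row["total"],
        }
        for name, row in report.items()
    }
```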

We expect the train-validation-test set results to be exceptional since our data is curated. So, as long as people take “good” pictures, the model should do very well. But to get a sense of how it does in extreme situations, let’s take a look at our benchmarks.

The “difficult” benchmark has more modest true positive and false positive rates, which reflects the fact that the images are more challenging. These values are much easier to compare across training runs, which lets me set min/max targets. For example, if I target a minimum of 80% for true positives and a maximum of 5% for false positives on this benchmark, then I can feel confident moving the model to production.

The “out-of-scope” benchmark has no true positive rate because none of the images belong to any class the model can identify. Remember, we picked things like a bag of popcorn, etc., that are not zoo animals, so there cannot be any true positives. But we do get a false positive rate, which means the model gave a confident score to that bag of popcorn as some animal. If that rate exceeds our target maximum of, say, 10% for this benchmark, then we may not want to move the model to production.
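Those min/max targets can then be turned into a simple automated gate on top of the report above. The benchmark names and the 80% / 5% / 10% numbers below are the example values from this section, not fixed recommendations.

```python
def ready_for_production(report):
    """report: output of confidence_threshold_report, keyed by assumed benchmark names."""
    difficult = report["benchmark_difficult"]
    out_of_scope = report["benchmark_out_of_scope"]
    checks = [
        difficult["true_positive_above"] >= 0.80,     # minimum true positive rate
        difficult["false_positive_above"] <= 0.05,    # maximum false positive rate
        out_of_scope["false_positive_above"] <= 0.10, # popcorn should not look like an animal
    ]
    return all(checks)
```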


Right now, you may be thinking, “Well, what animal did it pick for the bag of popcorn?” Excellent question! Now you understand the importance of doing a manual review of the images that get bad results.

Evaluation of the deployed model

The evaluation that I described above applies to a model immediately after training. Now, you want to evaluate how your model is doing in the real world. The process is similar, but requires you to shift to a “production mindset” and ask yourself, “Did the model get this correct?”, “Should it have gotten this correct?”, and “Did we tell the user the right thing?”

So, imagine that you are logging in for the morning — after sipping on your cold brew coffee, of course — and are presented with 500 images that your zoo guests took yesterday of different animals. Your job is to determine how satisfied the guests were when using your model to identify the zoo animals.

Using the softmax “confidence” score for each image, we apply a threshold before presenting results. Above the threshold, we tell the guest what the model predicted. I’ll call this the “happy path”. Below the threshold is the “sad path”, where we ask them to try again.
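In app terms, this routing is nothing more than a threshold check. A minimal sketch, with placeholder message strings:

```python
def present_result(predicted_class, confidence, threshold=0.95):
    """Route the prediction to the 'happy path' or the 'sad path'."""
    if confidence >= threshold:
        return f"It looks like a {predicted_class}!"                   # happy path
    return "Sorry, we couldn't identify that. Please try again."       # sad path
```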

Your review interface will first show you all the “happy path” images one at a time. This is where you ask yourself, “Did we get this right?” Hopefully, yes!

But if not, this is where things get tricky. So now you have to ask, “Why not?” Here are some things that it could be:

  • “Bad” picture — Poor lighting, bad angle, zoomed out, etc. — refer to your labelling standards.
  • Out-of-scope — It’s a zoo animal, but unfortunately one that isn’t found in this zoo. Maybe it belongs to another zoo (your guest likes to travel and try out your app). Consider adding these to your data set.
  • Out-of-scope — It’s not a zoo animal. It could be an animal in your zoo, but not one typically contained there, like a neighborhood sparrow or mallard duck. This might be a candidate to add.
  • Out-of-scope — It’s something found in the zoo. A zoo usually has interesting trees and shrubs, so people might try to identify those. Another candidate to add.
  • Prankster — Completely out-of-scope. Because people like to play with technology, there’s the possibility you have a prankster who took a picture of a bag of popcorn, or a soft drink cup, or even a selfie. These are hard to prevent, but hopefully they get a low enough score (below the threshold) so the model does not identify them as a zoo animal. If you see enough of a pattern in these, consider creating a class with special handling on the front-end.

After reviewing the “happy path” images, you move on to the “sad path” images — the ones that got a low confidence score and the app gave a “sorry, try again” message. This time you ask yourself, “Should the model have given this image a higher score?” which would have put it on the “happy path”. If so, then you want to ensure these images are added to the training set so next time it will do better. But most of the time, the low score reflects many of the “bad” or out-of-scope situations mentioned above.

Perhaps your model performance is suffering and it has nothing to do with your model. Maybe it is the way your users are interacting with the app. Keep an eye out for non-technical problems and share your observations with the rest of your team. For example:

  • Are your users using the application in the ways you expected?
  • Are they not following the instructions?
  • Do the instructions need to be stated more clearly?
  • Is there anything you can do to improve the experience?

Collect statistics and new images

Both of the manual evaluations above open a gold mine of data. So, be sure to collect these statistics and feed them into a dashboard — your manager and your future self will thank you!


Keep track of these stats and generate reports that you and your team can reference; a small aggregation sketch follows this list:

  • How often is the model being called?
  • At what times of the day and on what days of the week is it used?
  • Are your system resources able to handle the peak load?
  • What classes are the most common?
  • After evaluation, what is the accuracy for each class?
  • What is the breakdown for confidence scores?
  • How many scores are above and below the confidence threshold?
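Here is a minimal sketch of how such a dashboard feed might be aggregated from a prediction log, assuming each entry records a timestamp (a `datetime`), the predicted class, and the confidence score; the field names are hypothetical.

```python
from collections import Counter

def dashboard_stats(log_entries, threshold=0.95):
    """log_entries: dicts with 'timestamp' (datetime), 'predicted', 'confidence'."""
    calls_per_hour = Counter(e["timestamp"].hour for e in log_entries)
    calls_per_weekday = Counter(e["timestamp"].strftime("%A") for e in log_entries)
    class_counts = Counter(e["predicted"] for e in log_entries)
    above = sum(1 for e in log_entries if e["confidence"] >= threshold)
    return {
        "total_calls": len(log_entries),
        "calls_per_hour": dict(calls_per_hour),
        "calls_per_weekday": dict(calls_per_weekday),
        "most_common_classes": class_counts.most_common(10),
        "above_threshold": above,
        "below_threshold": len(log_entries) - above,
    }
```

Per-class accuracy would come from joining this log with the results of your manual evaluation, once you have marked each prediction as correct or not.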

The single best thing you get from a deployed model is the additional real-world images! You can add these new images to improve coverage of your existing zoo animals. But more importantly, they give you insight into other classes to add. For example, let’s say people enjoy taking a picture of the large walrus statue at the gate. Some of these may make sense to incorporate into your data set to provide a better user experience.

Creating a new class, like the walrus statue, is not a huge effort, and it avoids false positive responses. It would be more embarrassing to identify a walrus statue as an elephant! As for the prankster and the bag of popcorn, you can configure your front-end to quietly handle these. You might even get creative and have fun with it, like, “Thank you for visiting the food court.”

Double-check process

It is a good idea to double-check your image set when you suspect there may be problems with your data. I’m not suggesting a top-to-bottom check, because that would be a monumental effort! Rather, check the specific classes that you suspect contain bad data that is degrading your model performance.

Immediately after my training run completes, I have a script that will use this new model to generate predictions for my entire data set. When this is complete, it will take the list of incorrect identifications, as well as the low scoring predictions, and automatically feed that list into the Double-check interface.

This interface will show, one at a time, the image in question, alongside an example image of the ground truth and an example image of what the model predicted. I can visually compare the three, side-by-side. The first thing I do is ensure the original image is a “good” picture, following my labelling standards. Then I check if the ground-truth label is indeed correct, or if there is something that made the model think it was the predicted label.

At this point I can:

  • Remove the original image if the image quality is poor.
  • Relabel the image if it belongs in a different class.

During this manual evaluation, you might notice dozens of the same wrong prediction. Ask yourself why the model made this mistake when the images seem perfectly fine. The answer may be some incorrect labels on images in the ground truth, or even in the predicted class!

Don’t hesitate to add those classes and sub-classes back into the Double-check interface and step through them all. You may have 100–200 pictures to review, but there is a good chance that one or two of the images will stand out as being the culprit.

Up next…

With a different mindset for a trained model versus a deployed model, we can now evaluate performance to decide which models are ready for production, and how well a production model is going to serve the public. This relies on a solid Double-check process and a critical eye on your data. And beyond the “gut feel” of your model, we can rely on the benchmark scores to support us.

In Part 4, we kick off the training run, but there are some subtle techniques to get the most out of the process, and even ways to leverage throw-away models to expand your library of image data.
