
Learnings from a Machine Learning Engineer — Part 3: The Evaluation


In this third part of my series, I will explore the evaluation process, which is a critical piece that will lead to a cleaner data set and elevate your model performance. We will see the difference between evaluating a trained model (one not yet in production) and evaluating a deployed model (one making real-world predictions).

In Part 1, I discussed the process of labelling your image data that you use in your Image Classification project. I showed how to define “good” images and create sub-classes. In Part 2, I went over various data sets, beyond the usual train-validation-test sets, such as benchmark sets, plus how to handle synthetic data and duplicate images.

Evaluation of the trained model

As machine learning engineers, we look at accuracy, F1, log loss, and other metrics to decide if a model is ready to move to production. These are all important measures, but from my experience, these scores can be deceiving, especially as the number of classes grows.

Although it can be time-consuming, I find it very important to manually review the images that the model gets wrong, as well as the images that the model gives a low softmax “confidence” score to. This means adding a step immediately after your training run completes to calculate scores for all images — training, validation, test, and the benchmark sets. You only need to bring up for manual review the ones that the model had problems with. This should only be a small percentage of the total number of images. See the Double-check process below.
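To make that step concrete, here is a minimal sketch of the post-training scoring pass, assuming a Keras-style model and a plain list of (image_path, true_label) pairs per data set. The helper load_image and the REVIEW_THRESHOLD value are illustrative placeholders, not part of any particular library.

```python
import numpy as np

REVIEW_THRESHOLD = 0.95  # flag anything the model is not confident about

def score_data_set(model, samples, class_names, load_image):
    """Return the samples that need manual review: wrong or low-confidence predictions."""
    needs_review = []
    for image_path, true_label in samples:
        # load_image is assumed to return a batched array of shape (1, H, W, C)
        probs = model.predict(load_image(image_path), verbose=0)[0]
        predicted = class_names[int(np.argmax(probs))]
        confidence = float(np.max(probs))
        if predicted != true_label or confidence < REVIEW_THRESHOLD:
            needs_review.append({
                "image": image_path,
                "true_label": true_label,
                "predicted": predicted,
                "confidence": confidence,
            })
    return needs_review
```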

What you do during the manual evaluation is put yourself in a “training mindset” to ensure that the labelling standards you set up in Part 1 are being followed. Ask yourself:

  • “Is this a good image?” Is the subject front and center, and can you clearly see all the features?
  • “Is this the correct label?” Don’t be surprised if you find wrong labels.

You can either remove the bad images or fix the labels if they are wrong. Otherwise you can keep them in the data set and force the model to do better next time. Other questions I ask are:

  • “Why did the model get this wrong?”
  • “Why did this image get a low score?”
  • “What is it about the image that caused confusion?”

Sometimes the answer has nothing to do with that specific image. Frequently, it has to do with the other images, either in the ground truth class or in the predicted class. It is worth the effort to Double-check all images in both sets if you see a consistently bad guess. Again, don’t be surprised if you find poor images or wrong labels.

Weighted evaluation

When doing the evaluation of the trained model (above), we apply a lot of subjective analysis — “Why did the model get this wrong?” and “Is this a good image?” From these, you may only get a gut feeling.

Frequently, I will decide to hold off moving a model forward to production based on that gut feel. But how can you justify to your manager that you want to hit the brakes? This is where a more objective analysis comes in: creating a weighted average of the softmax “confidence” scores.

In order to apply a weighted evaluation, we need to identify sets of classes that deserve adjustments to the score. Here is where I create a list of “commonly confused” classes.

Commonly confused classes

Certain animals at our zoo can easily be mistaken for one another. For example, African elephants and Asian elephants have different ear shapes. If your model gets these two mixed up, that is not as bad as guessing a giraffe! So perhaps you give partial credit here. You and your subject matter experts (SMEs) can come up with a list of these pairs and a weighted adjustment for each.
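One simple way to represent that list is a lookup keyed by the pair of classes; a sketch is below. The elephant pair and the 0.5 adjustment are just the example from above; your SMEs would supply the real pairs and weights.

```python
# Weighted adjustments for commonly confused class pairs (default is 1.0).
CONFUSED_PAIRS = {
    frozenset({"african_elephant", "asian_elephant"}): 0.5,
}

def weight(true_label: str, predicted_label: str) -> float:
    """Look up the weighted adjustment for this ground truth / prediction pairing."""
    return CONFUSED_PAIRS.get(frozenset({true_label, predicted_label}), 1.0)
```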


This weight can be factored into a modified cross-entropy loss function, shown in the equation below. The back half of this equation reduces the impact of being wrong for specific pairs of ground truth and prediction by using the “weight” function as a lookup. By default, the weighted adjustment would be 1 for all pairings, and the commonly confused classes would get something like 0.5.

In other words, it’s better to be unsure (have a lower confidence score) when you are wrong, compared to being super confident and wrong.

Modified cross-entropy loss function, image by author
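In notation, the general idea looks something like the following sketch, where p_i(y_i) is the softmax probability the model assigns to the ground-truth class of image i and ŷ_i is the predicted class:

$$
\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N} w\!\left(y_i, \hat{y}_i\right)\, \log p_i\!\left(y_i\right)
$$

The weight w(y_i, ŷ_i) defaults to 1 and drops to something like 0.5 for the commonly confused pairs, so a confident mistake between look-alike classes hurts the score less than a confident mistake between unrelated classes.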

Once this weighted log loss is calculated, I can compare it to previous training runs to see if the new model is ready for production.

Confidence threshold report

Another valuable measure that incorporates the confidence threshold (in my example, 95) is a report on accuracy and false positive rates. Recall that when we apply the confidence threshold before presenting results, we reduce the number of false positives shown to the end user.

In this table, we look at the breakdown of “true positive above 95” for each data set. We get a sense that when a “good” picture comes through (like the ones from our train-validation-test set) it is very likely to surpass the threshold, thus the user is “happy” with the outcome. Conversely, the “false positive above 95” is extremely low for good pictures, thus only a small number of our users will be “sad” about the results.

Example Confidence Threshold Report, image by author
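Under the hood, the numbers in this report can be computed per data set from the same per-image predictions gathered earlier; a minimal sketch follows, assuming each result is a dict with true_label, predicted, and confidence keys, and the threshold of 95 expressed as 0.95.

```python
THRESHOLD = 0.95  # the confidence threshold of 95 from the text

def threshold_report(results):
    """Compute the true/false positive rates above the threshold for one data set."""
    total = len(results)
    tp_above = sum(1 for r in results
                   if r["predicted"] == r["true_label"] and r["confidence"] >= THRESHOLD)
    fp_above = sum(1 for r in results
                   if r["predicted"] != r["true_label"] and r["confidence"] >= THRESHOLD)
    return {
        "true positive above 95": tp_above / total,
        "false positive above 95": fp_above / total,
    }
```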

We expect the train-validation-test set results to be exceptional since our data is curated. So, as long as people take “good” pictures, the model should do very well. But to get a sense of how it does in extreme situations, let’s take a look at our benchmarks.

The “difficult” benchmark has more modest true positive and false positive rates, which reflects the fact that the images are more challenging. These values are much easier to compare across training runs, so they let me set min/max targets. For example, if I target a minimum of 80% for true positives and a maximum of 5% for false positives on this benchmark, then I can feel confident moving this model to production.

The “out-of-scope” benchmark has no true positive rate because none of the images belong to any class the model can identify. Remember, we picked things like a bag of popcorn, etc., that are not zoo animals, so there cannot be any true positives. But we do get a false positive rate, which means the model gave a confident score to that bag of popcorn as some animal. And if that rate exceeds our target maximum of, say, 10% for this benchmark, then we may not want to move the model to production.
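Taken together, these benchmark targets can be turned into a simple go/no-go check; the sketch below uses the example numbers from this section, which you would tune for your own project.

```python
def ready_for_production(difficult_report, out_of_scope_report):
    """Gate a model on the 'difficult' and 'out-of-scope' benchmark targets."""
    return (
        difficult_report["true positive above 95"] >= 0.80       # minimum true positive rate
        and difficult_report["false positive above 95"] <= 0.05   # maximum false positive rate
        and out_of_scope_report["false positive above 95"] <= 0.10
    )
```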


Right now, you may be thinking, “Well, what animal did it pick for the bag of popcorn?” Excellent question! Now you understand the importance of doing a manual review of the images that get bad results.

Evaluation of the deployed model

The evaluation that I described above applies to a model immediately after training. Now you want to evaluate how your model is doing in the real world. The process is similar, but requires you to shift to a “production mindset”, asking yourself, “Did the model get this correct?”, “Should it have gotten this correct?”, and “Did we tell the user the right thing?”

So, imagine that you are logging in for the morning — after sipping on your cold brew coffee, of course — and are presented with 500 images that your zoo guests took yesterday of different animals. Your job is to determine how satisfied the guests were using your model to identify the zoo animals.

Using the softmax “confidence” score for each image, we apply a threshold before presenting results. Above the threshold, we tell the guest what the model predicted. I’ll call this the “happy path”. Below the threshold is the “sad path”, where we ask them to try again.
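As a small illustration, the routing might look like the sketch below, where probs is the softmax output for one guest photo and class_names is assumed to map class indices to animal names.

```python
import numpy as np

def respond(probs, class_names, threshold=0.95):
    """Happy path: report the prediction. Sad path: ask the guest to try again."""
    confidence = float(np.max(probs))
    if confidence >= threshold:
        return f"We think you found: {class_names[int(np.argmax(probs))]}"
    return "Sorry, we couldn't identify that one. Please try another photo."
```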

Your review interface will first show you all the “happy path” images one at a time. This is where you ask yourself, “Did we get this right?” Hopefully, yes!

But if not, this is where things get tricky. So now you have to ask, “Why not?” Here are some things that it could be:

  • “Bad” picture — Poor lighting, bad angle, zoomed out, etc. — refer to your labelling standards.
  • Out-of-scope — It’s a zoo animal, but unfortunately one that isn’t found in this zoo. Maybe it belongs to another zoo (your guest likes to travel and try out your app). Consider adding these to your data set.
  • Out-of-scope — It’s not a zoo animal. It could be an animal in your zoo, but not one typically contained there, like a neighborhood sparrow or mallard duck. This might be a candidate to add.
  • Out-of-scope — It’s something found in the zoo. A zoo usually has interesting trees and shrubs, so people might try to identify those. Another candidate to add.
  • Prankster — Completely out-of-scope. Because people like to play with technology, there’s the possibility you have a prankster that took a picture of a bag of popcorn, or a soft drink cup, or even a selfie. These are hard to prevent, but hopefully they get a low enough score (below the threshold) so the model does not identify them as a zoo animal. If you see enough of a pattern in these, consider creating a class with special handling on the front-end.

After reviewing the “happy path” images, you move on to the “sad path” images — the ones that got a low confidence score and where the app gave a “sorry, try again” message. This time you ask yourself, “Should the model have given this image a higher score?”, which would have put it in the “happy path”. If so, then you want to ensure these images are added to the training set so the model does better next time. But most of the time, the low score reflects one of the “bad” or out-of-scope situations mentioned above.

Perhaps your model performance is suffering and it has nothing to do with the model itself. Maybe it is the way your users are interacting with the app. Keep an eye out for non-technical problems and share your observations with the rest of your team. For example:

  • Are your users using the application in the ways you expected?
  • Are they not following the instructions?
  • Do the instructions need to be stated more clearly?
  • Is there anything you can do to improve the experience?

Collect statistics and new images

Both of the manual evaluations above open a gold mine of data. So, be sure to collect these statistics and feed them into a dashboard — your manager and your future self will thank you!


Keep track of these stats and generate reports that you and your team can reference (see the sketch after this list):

  • How often is the model being called?
  • What times of the day, what days of the week is it used?
  • Are your system resources able to handle the peak load?
  • What classes are the most common?
  • After evaluation, what is the accuracy for each class?
  • What is the breakdown for confidence scores?
  • How many scores are above and below the confidence threshold?
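Here is a minimal sketch of pulling these numbers out of a raw prediction log, assuming a pandas DataFrame with hypothetical columns: timestamp (a datetime), predicted_class, confidence, and, once manual evaluation is done, is_correct.

```python
import pandas as pd

def dashboard_stats(log: pd.DataFrame, threshold: float = 0.95) -> dict:
    """Aggregate a prediction log into the dashboard numbers listed above."""
    return {
        "calls per day": log.groupby(log["timestamp"].dt.date).size().to_dict(),
        "calls by hour of day": log.groupby(log["timestamp"].dt.hour).size().to_dict(),
        "most common classes": log["predicted_class"].value_counts().head(10).to_dict(),
        "accuracy by class": log.groupby("predicted_class")["is_correct"].mean().to_dict(),
        "share above threshold": float((log["confidence"] >= threshold).mean()),
    }
```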

The single best thing you get from a deployed model is the additional real-world images! You can add these new images to improve coverage of your existing zoo animals. But more importantly, they provide insight on other classes to add. For example, let’s say people enjoy taking a picture of the large walrus statue at the gate. Some of these may make sense to incorporate into your data set to provide a better user experience.

Creating a new class, like the walrus statue, is not a huge effort, and it avoids the false positive responses. It would be more embarrassing to identify a walrus statue as an elephant! As for the prankster and the bag of popcorn, you can configure your front-end to quietly handle these. You might even get creative and have fun with it like, “Thank you for visiting the food court.”

Double-check process

It is a good idea to double-check your image set when you suspect there may be problems with your data. I’m not suggesting a top-to-bottom check, because that would be a monumental effort! Rather, focus on specific classes that you suspect could contain bad data that is degrading your model performance.

Immediately after my training run completes, I have a script that will use this new model to generate predictions for my entire data set. When this is complete, it will take the list of incorrect identifications, as well as the low-scoring predictions, and automatically feed that list into the Double-check interface.

This interface will show, one at a time, the image in question, alongside an example image of the ground truth and an example image of what the model predicted. I can visually compare the three, side-by-side. The first thing I do is ensure the original image is a “good” picture, following my labelling standards. Then I check if the ground-truth label is indeed correct, or if there is something that made the model think it was the predicted label.
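Each record fed into the interface can be assembled from the flagged prediction plus one representative image per class; a sketch is below, where exemplars is a hypothetical dict mapping each class name to an example image path.

```python
def build_review_record(flagged, exemplars):
    """Pair a flagged image with example images of its ground-truth and predicted classes."""
    return {
        "image": flagged["image"],
        "ground_truth_example": exemplars[flagged["true_label"]],
        "predicted_example": exemplars[flagged["predicted"]],
        "confidence": flagged["confidence"],
    }
```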

At this point I can:

  • Remove the original image if the image quality is poor.
  • Relabel the image if it belongs in a different class.

During this manual evaluation, you might notice dozens of the same wrong prediction. Ask yourself why the model made this mistake when the images seem perfectly fine. The answer may be some incorrect labels on images in the ground truth class, or even in the predicted class!

Don’t hesitate to add those classes and sub-classes back into the Double-check interface and step through them all. You may have 100–200 pictures to review, but there is a good chance that one or two of the images will stand out as being the culprit.

Up next…

With a different mindset for a trained model versus a deployed model, we can now evaluate performance to decide which models are ready for production, and how well a production model is going to serve the public. This relies on a solid Double-check process and a critical eye on your data. And beyond the “gut feel” of your model, we can rely on the benchmark scores to support us.

In Part 4, we kick off the training run, but there are some subtle techniques to get the most out of the process, and even ways to leverage throw-away models to expand your library of image data.
