
Learnings from a Machine Learning Engineer — Part 3: The Evaluation


In this third part of my series, I will explore the evaluation process which is a critical piece that will lead to a cleaner data set and elevate your model performance. We will see the difference between evaluation of a trained model (one not yet in production), and evaluation of a deployed model (one making real-world predictions).

In Part 1, I discussed the process of labelling your image data that you use in your Image Classification project. I showed how to define “good” images and create sub-classes. In Part 2, I went over various data sets, beyond the usual train-validation-test sets, such as benchmark sets, plus how to handle synthetic data and duplicate images.

Evaluation of the trained model

As machine learning engineers we look at accuracy, F1, log loss, and other metrics to decide if a model is ready to move to production. These are all important measures, but from my experience, these scores can be deceiving especially as the number of classes grows.

Although it can be time-consuming, I find it very important to manually review the images that the model gets wrong, as well as the images that the model gives a low softmax “confidence” score. This means adding a step immediately after your training run completes to calculate scores for all images: training, validation, test, and the benchmark sets. You only need to bring up for manual review the ones the model had problems with, which should be only a small percentage of the total. See the Double-check process below.
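A minimal sketch of that post-training sweep, assuming you already have the model’s softmax outputs and ground-truth labels in arrays (the function name and the 95% threshold are illustrative, not the author’s code):

```python
import numpy as np

def flag_for_review(probs, labels, threshold=0.95):
    """Flag images the model got wrong, or scored below the
    confidence threshold, so only they go to manual review.

    probs  -- (N, num_classes) softmax outputs from the new model
    labels -- (N,) ground-truth class indices
    """
    preds = probs.argmax(axis=1)        # predicted class per image
    confidence = probs.max(axis=1)      # softmax "confidence" score
    wrong = preds != labels             # misidentifications
    low_score = confidence < threshold  # unsure predictions
    return np.flatnonzero(wrong | low_score)

# Toy example: 4 images, 3 classes
probs = np.array([[0.98, 0.01, 0.01],    # correct and confident
                  [0.60, 0.30, 0.10],    # correct, but low score
                  [0.10, 0.85, 0.05],    # wrong class
                  [0.99, 0.005, 0.005]])
labels = np.array([0, 0, 2, 0])
print(flag_for_review(probs, labels))  # → [1 2]
```

Only the returned indices go to manual review; everything else passed cleanly.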

What you do during the manual evaluation is put yourself in a “training mindset” to ensure that the labelling standards you set up in Part 1 are being followed. Ask yourself:

  • “Is this a good image?” Is the subject front and center, and can you clearly see all the features?
  • “Is this the correct label?” Don’t be surprised if you find wrong labels.

You can either remove the bad images or fix the labels if they are wrong. Otherwise you can keep them in the data set and force the model to do better next time. Other questions I ask are:

  • “Why did the model get this wrong?”
  • “Why did this image get a low score?”
  • “What is it about the image that caused confusion?”

Sometimes the answer has nothing to do with that specific image. Frequently, it has to do with the other images, either in the ground truth class or in the predicted class. It is worth the effort to Double-check all images in both sets if you see a consistently bad guess. Again, don’t be surprised if you find poor images or wrong labels.

Weighted evaluation

When doing the evaluation of the trained model (above), we apply a lot of subjective analysis — “Why did the model get this wrong?” and “Is this a good image?” From these, you may only get a gut feeling.

Frequently, I will decide to hold off moving a model to production based on that gut feel. But how can you justify to your manager that you want to hit the brakes? This is where a more objective analysis comes in: creating a weighted average of the softmax “confidence” scores.

In order to apply a weighted evaluation, we need to identify sets of classes that deserve adjustments to the score. Here is where I create a list of “commonly confused” classes.

Commonly confused classes

Certain animals at our zoo can easily be mistaken. For example, African elephants and Asian elephants have different ear shapes. If your model gets these two mixed up, that is not as bad as guessing a giraffe! So perhaps you give partial credit here. You and your subject matter experts (SMEs) can come up with a list of these pairs and a weighted adjustment for each.

Photo by Matt Bango on Unsplash
Photo by Mathew Krizmanich on Unsplash

This weight can be factored into a modified cross-entropy loss function in the equation below. The back half of this equation will reduce the impact of being wrong for specific pairs of ground truth and prediction by using the “weight” function as a lookup. By default, the weighted adjustment would be 1 for all pairings, and the commonly confused classes would get something like 0.5.
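The equation itself appears as an image in the original article; as a sketch consistent with the description above (not necessarily the author’s exact form), the weighted loss could be written as:

```latex
\mathcal{L}_{\text{weighted}}
  = -\frac{1}{N} \sum_{i=1}^{N}
      w\left(y_i, \hat{y}_i\right) \, \log p_i\left(y_i\right)
```

where \(y_i\) is the ground-truth class of image \(i\), \(\hat{y}_i\) is the predicted class, \(p_i(y_i)\) is the softmax probability assigned to the true class, and \(w\) is the lookup that returns 1 by default and a partial-credit value such as 0.5 for commonly confused pairs.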

In other words, it’s better to be unsure (have a lower confidence score) when you are wrong, compared to being super confident and wrong.

Modified cross-entropy loss function, image by author

Once this weighted log loss is calculated, I can compare to previous training runs to see if the new model is ready for production.
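A minimal implementation of this idea, assuming a hand-maintained lookup of confused pairs (the class names and the 0.5 adjustment are illustrative):

```python
import numpy as np

# Hypothetical lookup: (ground truth, prediction) -> weight.
# The default is 1.0; commonly confused pairs get partial credit.
CONFUSED_PAIRS = {
    ("african_elephant", "asian_elephant"): 0.5,
    ("asian_elephant", "african_elephant"): 0.5,
}

def weighted_log_loss(classes, probs, labels):
    """Cross-entropy where the penalty is scaled down when the
    model confuses two classes we know are easy to mix up."""
    losses = []
    for p, truth in zip(probs, labels):
        pred = classes[int(np.argmax(p))]
        w = CONFUSED_PAIRS.get((truth, pred), 1.0)
        p_true = p[classes.index(truth)]       # prob of the true class
        losses.append(w * -np.log(p_true))
    return float(np.mean(losses))

classes = ["african_elephant", "asian_elephant", "giraffe"]
probs = np.array([[0.2, 0.7, 0.1],   # African mistaken for Asian: half penalty
                  [0.2, 0.1, 0.7]])  # African mistaken for giraffe: full penalty
labels = ["african_elephant", "african_elephant"]
print(weighted_log_loss(classes, probs, labels))
```

Both guesses assign the same probability to the true class, but the elephant-for-elephant mix-up contributes only half as much to the loss as the giraffe guess.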

Confidence threshold report

Another valuable measure incorporates the confidence threshold (in my example, 95) to report on accuracy and false positive rates. Recall that when we apply the confidence threshold before presenting results, we help prevent false positives from being shown to the end user.

In this table, we look at the breakdown of “true positive above 95” for each data set. We get a sense that when a “good” picture comes through (like the ones from our train-validation-test set) it is very likely to surpass the threshold, thus the user is “happy” with the outcome. Conversely, the “false positive above 95” is extremely low for good pictures, thus only a small number of our users will be “sad” about the results.

Example Confidence Threshold Report, image by author
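A sketch of how each row of such a report could be computed (the function and field names are assumptions, not the author’s code):

```python
import numpy as np

def threshold_report(preds, labels, confidence, threshold=0.95):
    """Break a data set down into the report's three outcomes:
    true/false positives above the threshold ("happy"/"sad" users)
    and everything below it (the "try again" message)."""
    above = confidence >= threshold
    correct = preds == labels
    return {
        "true_pos_above": np.mean(above & correct),
        "false_pos_above": np.mean(above & ~correct),
        "below_threshold": np.mean(~above),
    }

# Toy example standing in for a real validation set
preds      = np.array([0, 1, 2, 2, 1])
labels     = np.array([0, 1, 2, 0, 1])
confidence = np.array([0.99, 0.97, 0.90, 0.96, 0.50])
print(threshold_report(preds, labels, confidence))
```

Running this per data set (train, validation, test, each benchmark) produces one row of the report per set.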

We expect the train-validation-test results to be exceptional since our data is curated. So, as long as people take “good” pictures, the model should do very well. But to get a sense of how it does in extreme situations, let’s take a look at our benchmarks.

The “difficult” benchmark has more modest true positive and false positive rates, reflecting the fact that the images are more challenging. These values are much easier to compare across training runs, which lets me set min/max targets. For example, if I target a minimum of 80% for true positives and a maximum of 5% for false positives on this benchmark, then I can feel confident moving the model to production.

The “out-of-scope” benchmark has no true positive rate because none of the images belong to any class the model can identify. Remember, we picked things like a bag of popcorn, etc., that are not zoo animals, so there cannot be any true positives. But we do get a false positive rate, which means the model gave a confident score to that bag of popcorn as some animal. And if we set a target maximum of 10% for this benchmark, then we may not want to move it to production.

Photo by Linus Mimietz on Unsplash

Right now, you may be thinking, “Well, what animal did it pick for the bag of popcorn?” Excellent question! Now you understand the importance of doing a manual review of the images that get bad results.

Evaluation of the deployed model

The evaluation I described above applies to a model immediately after training. Now, you want to evaluate how your model is doing in the real world. The process is similar, but requires you to shift to a “production mindset” and ask yourself: “Did the model get this correct?”, “Should it have gotten this correct?”, and “Did we tell the user the right thing?”

So, imagine that you are logging in for the morning — after sipping on your cold brew coffee, of course — and are presented with 500 images that your zoo guests took yesterday of different animals. Your job is to determine how satisfied the guests were using your model to identify the zoo animals.

Using the softmax “confidence” score for each image, we have a threshold before presenting results. Above the threshold, we tell the guest what the model predicted. I’ll call this the “happy path”. And below the threshold is the “sad path” where we ask them to try again.

Your review interface will first show you all the “happy path” images one at a time. This is where you ask yourself, “Did we get this right?” Hopefully, yes!

But if not, this is where things get tricky. So now you have to ask, “Why not?” Here are some things that it could be:

  • “Bad” picture — Poor lighting, bad angle, zoomed out, etc — refer to your labelling standards.
  • Out-of-scope — It’s a zoo animal, but unfortunately one that isn’t found in this zoo. Maybe it belongs to another zoo (your guest likes to travel and try out your app). Consider adding these to your data set.
  • Out-of-scope — It’s not a zoo animal. It could be an animal in your zoo, but not one typically contained there, like a neighborhood sparrow or mallard duck. This might be a candidate to add.
  • Out-of-scope — It’s something found in the zoo. A zoo usually has interesting trees and shrubs, so people might try to identify those. Another candidate to add.
  • Prankster — Completely out-of-scope. Because people like to play with technology, there is the possibility you have a prankster who took a picture of a bag of popcorn, or a soft drink cup, or even a selfie. These are hard to prevent, but hopefully they get a low enough score (below the threshold) that the model does not identify them as a zoo animal. If you see enough of a pattern in these, consider creating a class with special handling on the front-end.

After reviewing the “happy path” images, you move on to the “sad path” images — the ones that got a low confidence score and the app gave a “sorry, try again” message. This time, ask yourself, “Should the model have given this image a higher score?” which would have put it in the “happy path”. If so, then you want to ensure these images are added to the training set so the model does better next time. But most of the time, the low score reflects one of the “bad” or out-of-scope situations mentioned above.

Perhaps your model performance is suffering and it has nothing to do with the model itself. Maybe it is the way your users are interacting with the app. Keep an eye out for non-technical problems and share your observations with the rest of your team. For example:

  • Are your users using the application in the ways you expected?
  • Are they not following the instructions?
  • Do the instructions need to be stated more clearly?
  • Is there anything you can do to improve the experience?

Collect statistics and new images

Both of the manual evaluations above open a gold mine of data. So, be sure to collect these statistics and feed them into a dashboard — your manager and your future self will thank you!

Photo by Justin Morgan on Unsplash

Keep track of these stats and generate reports that you and your team can reference:

  • How often is the model being called?
  • At what times of day, and on which days of the week, is it used?
  • Are your system resources able to handle the peak load?
  • What classes are the most common?
  • After evaluation, what is the accuracy for each class?
  • What is the breakdown for confidence scores?
  • How many scores are above and below the confidence threshold?
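As a sketch, many of these stats can be computed from a simple prediction log (the log schema here is an assumption; in practice you would read from your serving logs or database):

```python
from collections import Counter

# Hypothetical prediction log: one record per model call. The "truth"
# field is filled in later, during the daily manual evaluation.
log = [
    {"hour": 10, "pred": "giraffe", "truth": "giraffe", "score": 0.98},
    {"hour": 10, "pred": "zebra",   "truth": "zebra",   "score": 0.97},
    {"hour": 14, "pred": "giraffe", "truth": "zebra",   "score": 0.96},
    {"hour": 14, "pred": "giraffe", "truth": "giraffe", "score": 0.70},
]

threshold = 0.95
stats = {
    "total_calls": len(log),
    "calls_by_hour": Counter(r["hour"] for r in log),
    "most_common_classes": Counter(r["pred"] for r in log).most_common(),
    "above_threshold": sum(r["score"] >= threshold for r in log),
    "below_threshold": sum(r["score"] < threshold for r in log),
}

# Per-class accuracy, once manual evaluation has supplied "truth"
per_class = {}
for r in log:
    hit, total = per_class.get(r["truth"], (0, 0))
    per_class[r["truth"]] = (hit + (r["pred"] == r["truth"]), total + 1)
stats["accuracy_by_class"] = {c: h / t for c, (h, t) in per_class.items()}
print(stats)
```

Feeding a summary like this into a dashboard each day gives you the trend lines your manager will ask about.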

The single best thing you get from a deployed model is the additional real-world images! You can add these new images to improve coverage of your existing zoo animals. But more importantly, they provide insight into other classes to add. For example, let’s say people enjoy taking a picture of the large walrus statue at the gate. Some of these may make sense to incorporate into your data set to provide a better user experience.

Creating a new class, like the walrus statue, is not a huge effort, and it avoids the false positive responses. It would be more embarrassing to identify a walrus statue as an elephant! As for the prankster and the bag of popcorn, you can configure your front-end to quietly handle these. You might even get creative and have fun with it like, “Thank you for visiting the food court.”

Double-check process

It is a good idea to double-check your image set when you suspect there may be problems with your data. I’m not suggesting a top-to-bottom check, because that would be a monumental effort! Rather, focus on specific classes that you suspect contain bad data that is degrading your model performance.

Immediately after my training run completes, I have a script that will use this new model to generate predictions for my entire data set. When this is complete, it will take the list of incorrect identifications, as well as the low scoring predictions, and automatically feed that list into the Double-check interface.

This interface will show, one at a time, the image in question, alongside an example image of the ground truth and an example image of what the model predicted. I can visually compare the three, side-by-side. The first thing I do is ensure the original image is a “good” picture, following my labelling standards. Then I check if the ground-truth label is indeed correct, or if there is something that made the model think it was the predicted label.
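A minimal sketch of assembling those side-by-side triples, assuming each flagged item records its image path, ground-truth class, and predicted class, and that you keep one known-good reference image per class (all names and paths are illustrative):

```python
# Known-good reference image per class, for side-by-side comparison
examples = {
    "african_elephant": "refs/african_elephant.jpg",
    "asian_elephant": "refs/asian_elephant.jpg",
}

def review_triples(flagged, examples):
    """Build the three images the Double-check interface shows:
    the image in question, an example of its ground-truth class,
    and an example of what the model predicted."""
    return [
        {
            "image": item["path"],
            "ground_truth_example": examples[item["truth"]],
            "predicted_example": examples[item["pred"]],
        }
        for item in flagged
    ]

flagged = [{"path": "guest/1234.jpg",
            "truth": "african_elephant",
            "pred": "asian_elephant"}]
print(review_triples(flagged, examples))
```

The review UI then steps through these triples one at a time.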

At this point I can:

  • Remove the original image if the image quality is poor.
  • Relabel the image if it belongs in a different class.

During this manual evaluation, you might notice dozens of the same wrong prediction. Ask yourself why the model made this mistake when the images seem perfectly fine. The answer may be some incorrect labels on images in the ground truth, or even in the predicted class!

Don’t hesitate to add those classes and sub-classes back into the Double-check interface and step through them all. You may have 100–200 pictures to review, but there is a good chance that one or two of the images will stand out as being the culprit.

Up next…

With a different mindset for a trained model versus a deployed model, we can now evaluate performance to decide which models are ready for production, and how well a production model will serve the public. This relies on a solid Double-check process and a critical eye on your data. And beyond the “gut feel” for your model, we can rely on benchmark scores to support us.

In Part 4, we kick off the training run. There are some subtle techniques to get the most out of the process, and even ways to leverage throw-away models to expand your library of image data.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

AWS stealthily raises GPU prices by 15 percent

Amazon Web Services (AWS) raised the prices of its GPU instances for machine learning by around 15 percent this weekend, without warning, reports The Register. The price increase applies in particular to EC2 Capacity Blocks for ML, where, for example, the cost of the p5e.48xlarge instance rose from $ 34.61

Read More »

Oil Prices Jump as Short Covering Builds

Oil moved higher as traders digested a mix of geopolitical risks that could add a premium to prices while continuing to assess US measures to exert control over Venezuela’s oil. West Texas Intermediate rose 3.2% to settle below $58 a barrel. Prices continued to climb after settlement, rising more than 1% and leaving the market poised to wipe out losses from earlier in the week. President Donald Trump threatened to hit Iran “hard” if the country’s government killed protesters amid an ongoing period of unrest. A disruption to Iranian supply would prove an unexpected hurdle in a market that’s currently anticipating a glut of oil. Adding to the bullish momentum, an annual period of commodity index rebalancing is expected to see cash flow back into crude over the next few days. Call skews for Brent have also strengthened as traders pile into the options market to hedge. And entering the day, trend-following commodity trading advisers were 91% short in WTI, according to data from Kpler’s Bridgeton Research group. That positioning can leave traders rushing to cover shorts in the event of a price spike. The confluence of bullish events arrived as traders were weighing the US’s efforts to control the Venezuelan oil industry. Energy Secretary Chris Wright said the US plans to control sales of Venezuelan oil and would initially offer stored crude, while the Energy Department said barrels already were being marketed. State-owned Petroleos de Venezuela SA said it’s in negotiations with Washington over selling crude through a framework similar to an arrangement with Chevron Corp., the only supermajor operating in the country. Meanwhile, President Donald Trump told the New York Times that US oversight of the country could last years and that “the oil will take a while.” “We are really talking about a trade-flow shift as the

Read More »

Survey Shows OPEC Held Supply Flat Last Month

OPEC’s crude production held steady in December as a slump in Venezuela’s output to the lowest in two years was offset by increases in Iraq and some other members, a Bloomberg survey showed.  The Organization of the Petroleum Exporting Countries pumped an average of just over 29 million barrels a day, little changed from the previous month, according to the survey. Venezuelan output declined by about 14% to 830,000 barrels a day as the US blocked and seized tankers as part of a strategy to pressure the country’s leadership. Supplies increased from Iraq and a few other nations as they pressed on with the last in a series of collective increases before a planned pause in the first quarter of this year. The alliance, led by Saudi Arabia, aims to keep output steady through the end of March while global oil markets confront a surplus. World markets have been buffeted this week after President Donald Trump’s administration captured Venezuelan leader Nicolás Maduro, and said it would assume control of the OPEC member’s oil exports indefinitely.  While Trump has said that US oil companies will invest billions of dollars to rebuild Venezuela’s crumbling energy infrastructure, the nation’s situation in the short term remains precarious. Last month, Caracas was forced to shutter wells at the oil-rich Orinoco Belt amid the American blockade.  The shock move is the latest in an array of geopolitical challenges confronting the broader OPEC+ coalition, ranging from forecasts of a record supply glut to unrest in Iran and Russia’s ongoing war against Ukraine, which is taking a toll on the oil exports of fellow alliance member Kazakhstan. Oil prices are trading near the lowest in five years at just over $60 a barrel in London, squeezing the finances of OPEC+ members. Amid the uncertain backdrop, eight key nations agreed again this month to freeze output levels during the first quarter,

Read More »

Utilities under pressure: 6 power sector trends to watch in 2026

Listen to the article 10 min This audio is auto-generated. Please let us know if you have feedback. 2026 will be a year of reckoning for the electric power industry.  Major policy changes in the One Big Beautiful Bill Act, which axed most subsidies for clean energy and electric vehicles, are forcing utilities, manufacturers, developers and others to pivot fast. The impacts of those changes will become more pronounced over the coming months. Market forces will also have their say. Demand for power has never been greater. But some of the most aggressive predictions driving resource planning may not come to pass, leading some to fear the possibility of another tech bubble. At the same time, each passing day brings more distributed energy resources onto the grid, increasing the opportunities — and expectations — for utilities to harness those resources into a more dynamic, flexible and resilient system. Here are some of the top trends Utility Dive will be tracking over the coming year. Large loads — where are they, and who controls their interconnection — dominate industry concerns Across the United States, but particularly in markets like Texas and the Mid-Atlantic, large loads — mainly data centers designed to run artificial intelligence programs — are seeking to connect to the grid, driving up electricity demand forecasts and ballooning interconnection queues. That’s led some states to introduce new large load tariffs to weed out speculative requests, with more states expected to follow suit.  The Department of Energy is now pushing federal regulators to take a more active role in regulating how those loads get connected to the grid, setting the stage for a power struggle between state and federal authorities. The DOE asked the Federal Energy Regulatory Commission to issue rules by April 30, a deadline many say will be hard to meet. A

Read More »

China’s Top Oil Firms Turn to Beijing for Guidance on VEN

Leading Chinese oil companies with interests in Venezuela have asked Beijing for guidance on how to protect their investments as Washington cranks up pressure on the Latin American country to increase its economic ties with the US. State-owned firms led by China National Petroleum Corp. raised concerns this week with government agencies and sought advice from officials, in an effort to align their responses with Beijing’s diplomatic strategy and to salvage existing claims to some of the world’s largest oil reserves, according to people familiar with the situation. They asked not to be identified as the discussions are private. The companies, closely monitoring developments even before the US seized President Nicolas Maduro at the weekend, are also conducting their own assessments of the situation on the ground, the people said. Top Beijing officials are separately reviewing events and trying to better understand corporate exposure, while planning for scenarios including a worst case where China’s investments would go to zero, they added.  While it is typical for government-backed firms to maintain close ties with officials in Beijing, the emergency consultations underscore the stakes for Chinese majors, caught off-guard by Washington’s raid and by the rapid escalation of efforts to establish a US sphere of influence in the Americas. Beyond the immediate impact of US actions, all are concerned about long-term prospects, the people said. Chinese companies have established a significant footprint across Latin America over the past decades, including under the Belt and Road Initiative. Venezuela, with few other friends, has been among the most important beneficiaries of this largesse — in part because of its vast oil wealth. China first extended financing for infrastructure and oil projects in 2007, under former President Hugo Chavez. Public data supports estimates that Beijing had lent upwards of $60 billion in oil-backed loans through state-run banks by 2015. 

Read More »

USA Crude Oil Stocks Drop Nearly 4MM Barrels WoW

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by 3.8 million barrels from the week ending December 26 to the week ending January 2, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report. This report was released on January 7 and included data for the week ending January 2. According to the report, crude oil stocks, not including the SPR, stood at 419.1 million barrels on January 2, 422.9 million barrels on December 26, 2025, and 414.6 million barrels on January 3, 2025. Crude oil in the SPR stood at 413.5 million barrels on January 2, 413.2 million barrels on December 26, and 393.8 million barrels on January 3, 2025, the report showed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.707 billion barrels on January 2, the report revealed. Total petroleum stocks were up 8.4 million barrels week on week and up 78.7 million barrels year on year, the report pointed out. “At 419.1 million barrels, U.S. crude oil inventories are about three percent below the five year average for this time of year,” the EIA said in its latest weekly petroleum status report. “Total motor gasoline inventories increased by 7.7 million barrels from last week and are about three percent above the five year average for this time of year. Finished gasoline inventories decreased, while blending components inventories increased last week,” it added. “Distillate fuel inventories increased by 5.6 million barrels last week and are about three percent below the five year average for this time of year. Propane/propylene inventories decreased 2.2 million barrels from last week and are about 29 percent above the five year

Read More »

Imperial Expects Up To $1.6B Capex for 2026

Imperial Oil Ltd said it expects CAD 2-2.2 billion ($1.6 billion) in capital and exploration expenditure for next year, compared to CAD 1.9-2.1 billion for this year. The Canadian oil sands-focused producer, majority-owned by Exxon Mobil Corp, earlier announced a cost-saving restructuring plan. “The company’s strategy remains focused on maximizing the value of its existing assets and progressing advantaged high-value growth opportunities while delivering industry-leading returns to shareholders”, Imperial said in a guidance statement. Imperial expects a gross production of 441,000-460,000 gross oil equivalent barrels per day (boed) in 2026. In the first nine months of 2025, Imperial averaged 436,000 boed gross, according to its third quarter report October 31. While that fell short of the upper end of its 2025 projection of 433,000-456,000 boed, the third quarter figure was 462,000 boed, the company’s highest quarterly output in over 30 years with Kearl recording its highest-ever quarterly gross production at 316,000 barrels per day (bpd). “Higher volumes reflect reliability improvements and continued growth at Kearl and Cold Lake, progressing towards targets of 300,000 and 165,000 barrels per day respectively”, Imperial said of its production forecast for 2026. “Turnarounds are planned at Cold Lake, Syncrude and at Kearl, where planned work at the K1 plant will extend the turnaround interval from two years to four years”. Next year Imperial “will progress secondary bitumen recovery projects at Kearl, high-value infill drilling and Mahihkan SA-SAGD at Cold Lake and mine progression at both Kearl and Syncrude”, the company said. Downstream, Imperial expects to process 395,000-405,000 bpd with a utilization rate of 91-93 percent. “The company is planning to complete turnarounds at Strathcona and Sarnia”, Imperial said. “At Strathcona, the work will focus on the crude unit, after achieving its longest-ever run length of 10 years. 
“Imperial continues to focus on further improving and maximizing

Read More »

JLL’s 2026 Global Data Center Outlook: Navigating the AI Supercycle, Power Scarcity and Structural Market Transformation

Sovereign AI and National Infrastructure Policy JLL frames artificial intelligence infrastructure as an emerging national strategic asset, with sovereign AI initiatives representing an estimated $8 billion in cumulative capital expenditure by 2030. While modest relative to hyperscale investment totals, this segment carries outsized strategic importance. Data localization mandates, evolving AI regulation, and national security considerations are increasingly driving governments to prioritize domestic compute capacity, often with pricing premiums reaching as high as 60%. Examples cited across Europe, the Middle East, North America, and Asia underscore a consistent pattern: digital sovereignty is no longer an abstract policy goal, but a concrete driver of data center siting, ownership structures, and financing models. In practice, sovereign AI initiatives are accelerating demand for locally controlled infrastructure, influencing where capital is deployed and how assets are underwritten. For developers and investors, this shift introduces a distinct set of considerations. Sovereign projects tend to favor jurisdictional alignment, long-term tenancy, and enhanced security requirements, while also benefiting from regulatory tailwinds and, in some cases, direct state involvement. As AI capabilities become more tightly linked to economic competitiveness and national resilience, policy-driven demand is likely to remain a durable (if specialized) component of global data center growth. Energy and Sustainability as the Central Constraint Energy availability emerges as the report’s dominant structural constraint. In many major markets, average grid interconnection timelines now extend beyond four years, effectively decoupling data center development schedules from traditional utility planning cycles. 
As a result, operators are increasingly pursuing alternative energy strategies to maintain project momentum, including: Behind-the-meter generation Expanded use of natural gas, particularly in the United States Private-wire renewable energy projects Battery energy storage systems (BESS) JLL points to declining battery costs, seen falling below $90 per kilowatt-hour in select deployments, as a meaningful enabler of grid flexibility, renewable firming, and

Read More »

SoftBank, DigitalBridge, and Stargate: The Next Phase of OpenAI’s Infrastructure Strategy

OpenAI framed Stargate as an AI infrastructure platform; a mechanism to secure long-duration, frontier-scale compute across both training and inference by coordinating capital, land, power, and supply chain with major partners. When OpenAI announced Stargate in January 2025, the headline commitment was explicit: an intention to invest up to $500 billion over four to five years to build new AI infrastructure in the U.S., with $100 billion targeted for near-term deployment. The strategic backdrop in 2025 was straightforward. OpenAI’s model roadmap—larger models, more agents, expanded multimodality, and rising enterprise workloads—was driving a compute curve increasingly difficult to satisfy through conventional cloud procurement alone. Stargate emerged as a form of “control plane” for: Capacity ownership and priority access, rather than simply renting GPUs. Power-first site selection, encompassing grid interconnects, generation, water access, and permitting. A broader partner ecosystem beyond Microsoft, while still maintaining a working relationship with Microsoft for cloud capacity where appropriate. 2025 Progress: From Launch to Portfolio Buildout January 2025: Stargate Launches as a National-Scale Initiative OpenAI publicly launched Project Stargate on Jan. 21, 2025, positioning it as a national-scale AI infrastructure initiative. At this early stage, the work was less about construction and more about establishing governance, aligning partners, and shaping a public narrative in which compute was framed as “industrial policy meets real estate meets energy,” rather than simply an exercise in buying more GPUs. July 2025: Oracle Partnership Anchors a 4.5-GW Capacity Step On July 22, 2025, OpenAI announced that Stargate had advanced through a partnership with Oracle to develop 4.5 gigawatts of additional U.S. data center capacity. The scale of the commitment marked a clear transition from conceptual ambition to site- and megawatt-level planning. 
A figure of this magnitude reshaped the narrative. At 4.5 GW, Stargate forced alignment across transformers, transmission upgrades, switchgear, long-lead cooling

Read More »

Lenovo unveils purpose-built AI inferencing servers

There is also the Lenovo ThinkSystem SR650i, which offers high-density GPU computing power for faster AI inference and is intended for easy installation in existing data centers to work with existing systems. Finally, there is the Lenovo ThinkEdge SE455i for smaller, edge locations such as retail outlets, telecom sites, and industrial facilities. Its compact design allows for low-latency AI inference close to where data is generated and is rugged enough to operate in temperatures ranging from -5°C to 55°C. All of the servers include Lenovo’s Neptune air- and liquid-cooling technology and are available through the TruScale pay-as-you-go pricing model. In addition to the new hardware, Lenovo introduced new AI Advisory Services with AI Factory Integration. This service gives access to professionals for identifying, deploying, and managing best-fit AI Inferencing servers. It also launched Premier Support Plus, a service that gives professional assistance in data center management, freeing up IT resources for more important projects.

Read More »

Samsung warns of memory shortages driving industry-wide price surge in 2026

SK Hynix reported during its October earnings call that its HBM, DRAM, and NAND capacity is “essentially sold out” for 2026, while Micron recently exited the consumer memory market entirely to focus on enterprise and AI customers.

Enterprise hardware costs surge

The supply constraints have translated directly into sharp price increases across enterprise hardware. Samsung raised prices for 32GB DDR5 modules to $239 from $149 in September, a 60% increase, while contract pricing for DDR5 has surged more than 100%, reaching $19.50 per unit compared to around $7 earlier in 2025. DRAM prices have already risen approximately 50% year to date and are expected to climb another 30% in Q4 2025, followed by an additional 20% in early 2026, according to Counterpoint Research. The firm projected that DDR5 64GB RDIMM modules, widely used in enterprise data centers, could cost twice as much by the end of 2026 as they did in early 2025. Gartner forecast DRAM prices to increase by 47% in 2026 due to significant undersupply in both traditional and legacy DRAM markets, Chauhan said.

Procurement leverage shifts to hyperscalers

The pricing pressures and supply constraints are reshaping the power dynamics in enterprise procurement. For enterprise procurement, supplier size no longer guarantees stability. “As supply becomes more contested in 2026, procurement leverage will hinge less on volume and more on strategic alignment,” Rawat said. Hyperscale cloud providers secure supply through long-term commitments, capacity reservations, and direct fab investments, obtaining lower costs and assured availability. Mid-market firms rely on shorter contracts and spot sourcing, competing for residual capacity after large buyers claim priority supply.
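Note that the sequential increases cited by Counterpoint compound rather than add. A quick illustrative sketch (prices normalized to an index of 100 in early 2025, using the article's figures) shows why a "50% + 30% + 20%" sequence lands near a doubling rather than at 100%:

```python
# Compound the DRAM price moves cited above (Counterpoint Research figures),
# starting from an illustrative index of 100 in early 2025.
index = 100.0
for label, pct in [("2025 year to date", 0.50), ("Q4 2025", 0.30), ("early 2026", 0.20)]:
    index *= 1 + pct  # each step applies to the already-raised price
    print(f"after {label}: index = {index:.0f}")

# 100 * 1.5 * 1.3 * 1.2 = 234, i.e. roughly 2.3x the early-2025 level —
# consistent with the projection that some modules could cost twice as much.
```

This is only arithmetic on the quoted percentages, not a price forecast of its own.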

Read More »

Eight Trends That Will Shape the Data Center Industry in 2026

For much of the past decade, the data center industry has been able to speak in broad strokes. Growth was strong. Demand was durable. Power was assumed to arrive eventually. And “the data center” could still be discussed as a single, increasingly important, but largely invisible, piece of digital infrastructure. That era is ending. As the industry heads into 2026, the dominant forces shaping data center development are no longer additive. They are interlocking and increasingly unforgiving. AI drives density. Density drives cooling. Cooling and density drive power. Power drives site selection, timelines, capital structure, and public response. And once those forces converge, they pull the industry into places it has not always had to operate comfortably: utility planning rooms, regulatory hearings, capital committee debates, and community negotiations. The throughline of this year’s forecast is clarity: Clarity about workload classes. Clarity about physics. Clarity about risk. And clarity about where the industry’s assumptions may no longer hold. One of the most important shifts entering 2026 is that it may increasingly no longer be accurate, or useful, to talk about “data centers” as a single category. What public discourse often lumps together now conceals two very different realities: AI factories built around sustained, power-dense GPU utilization, and general-purpose data centers supporting a far more elastic mix of cloud, enterprise, storage, and interconnection workloads. That distinction is no longer academic. It is shaping how projects are financed, how power is delivered, how facilities are cooled, and how communities respond. It’s also worth qualifying a line we’ve used before and still stand by in spirit: that every data center is becoming an AI data center. In 2026, we feel that statement is best understood more as a trajectory and less as a design brief. AI is now embedded across the data center stack: in

Read More »

Data Center Jobs: Engineering, Construction, Commissioning, Sales, Field Service and Facility Tech Jobs Available in Major Data Center Hotspots

Each month Data Center Frontier, in partnership with Pkaza, posts some of the hottest data center career opportunities in the market. Here’s a look at some of the latest data center jobs posted on the Data Center Frontier jobs board, powered by Pkaza Critical Facilities Recruiting. Looking for Data Center Candidates? Check out Pkaza’s Active Candidate / Featured Candidate Hotlist.

Data Center Facility Technician (All Shifts Available) - Impact, TX

This position is also available in: Ashburn, VA; Abilene, TX; Needham, MA; Lyndhurst, NJ; Philadelphia, PA; Atlantic City, NJ or New York, NY. Navy Nuke / Military Vets leaving service accepted! This opportunity is working with a leading mission-critical data center provider. This firm provides data center solutions custom-fit to the requirements of their clients’ mission-critical operational facilities. They ensure the reliability of mission-critical facilities for many of the world’s largest organizations, supporting enterprise clients, colo providers and hyperscale companies. This opportunity provides a career-growth minded role with exciting projects involving leading-edge technology and innovation, as well as competitive salaries and benefits.

Electrical Commissioning Engineer - Ashburn, VA

This traveling position is also available in: New York, NY; White Plains, NY; Richmond, VA; Montvale, NJ; Charlotte, NC; Atlanta, GA; Hampton, GA; New Albany, OH; Cedar Rapids, IA; Phoenix, AZ; Salt Lake City, UT; Dallas, TX or Chicago, IL. *** ALSO looking for LEAD EE and ME CxA Agents and CxA PMs *** Our client is an engineering design and commissioning company that has a national footprint and specializes in MEP critical facilities design. They provide design, commissioning, consulting and management expertise in the critical facilities space. They have a mindset to provide reliability, energy efficiency, sustainable design and LEED expertise when providing these consulting services for enterprise, colocation and hyperscale companies. This career-growth minded opportunity offers exciting projects with leading-edge technology and innovation as well as

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote, between them, $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has become a regular among non-tech companies showing off technology at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends. It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and uncover gaps in their security, biases and controls that prompt-based testing might miss.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »