Milliseconds to breach: How patch automation closes attackers’ fastest loophole

This article is part of VentureBeat’s special issue, “The cyber resilience playbook: Navigating the new era of threats.” Read more from this special issue here.

Procrastinating about patching has killed more networks and damaged more companies than any zero-day exploit or advanced cyberattack.

Complacency kills — and carries a high price. Running down-rev patches (old versions that are “down revision”) or not patching at all is how ransomware gets installed, data breaches occur and companies are fined for being out of compliance. It isn’t a matter of if a company will be breached but when — particularly if it doesn’t prioritize patch management.

Why so many security teams procrastinate – and pay a high price

Let’s be honest about how patching is perceived in many security teams and across IT organizations: It’s often delegated to the staff members stuck with the department’s most rote, mundane tasks. Why? No one wants to spend their time on work that is repetitive and at times manually intensive, yet requires complete focus to get right.

Most security and IT teams tell VentureBeat in confidence that patching is too time-consuming and takes away from more interesting projects. That’s consistent with an Ivanti study that found that the majority (71%) of IT and security professionals think patching is overly complex, cumbersome and time-consuming.

Remote work and decentralized workspaces make patching even more complicated, 57% of security professionals reported. Also consistent with what VentureBeat is hearing from security teams, Ivanti found that 62% of IT and security leaders admit that patch management takes a backseat to other tasks.

The truth is that device inventories and manual approaches to patch management haven’t kept pace for years. In the meantime, adversaries have been busy improving their tradecraft, creating weaponized large language models (LLMs) and attack apps.

Not patching? It’s like taking the lock off your front door

Crime waves are hitting affluent, gated communities as criminals use remote video cameras for 24/7 surveillance. Leaving a home unlocked without a security system is an open invitation for robbers.

Not patching endpoints is the same. And, let’s be honest: Any task that gets deprioritized and pushed down the action-item list will most likely never be completed. Adversaries are constantly improving their tradecraft by studying common vulnerabilities and exposures (CVEs) and compiling lists of companies that carry those vulnerabilities — making those companies even more susceptible targets.

Gartner often weighs in on patching in its research and considers it part of its vulnerability management coverage. Its recent study, Top 5 Elements of Effective Vulnerability Management, emphasizes that “many organizations still mismanage patching exceptions, resulting in missing or ineffective mitigations and increased risk.”

Mismanagement starts when teams deprioritize patching and consider manual processes “good enough” for increasingly complex, challenging and mundane tasks. This is made worse by siloed teams. Such mismanagement creates exploitable gaps. The old mantra of “scan, patch, rescan” doesn’t scale when adversaries use AI and generative AI to find endpoints to target at machine speed.

GigaOm’s Radar for Unified Endpoint Management (UEM) report further highlights how patching remains a significant challenge, with many vendors struggling to provide consistent application, device driver and firmware patching. The report urges organizations to consider how they can improve patch management as part of a broader effort to automate and scale vulnerability management.

Why traditional patch management fails in today’s threat landscape

Patch management in most organizations begins with scheduled monthly cycles that rely on static Common Vulnerability Scoring System (CVSS) severity scores to help prioritize vulnerabilities. Adversaries are moving faster and creating more complex threats than CVSS scores can keep up with.

As Karl Triebes, Ivanti’s CPO, explained: “Relying solely on severity ratings and a fixed monthly cycle exposes organizations to unaccounted risk. These ratings overlook unique business context, security gaps and evolving threats.” In today’s fast-moving environment, static scores cannot capture an organization’s nuanced risk profile.

Gartner’s framework underscores the need for “advanced prioritization techniques and automated workflows that integrate asset criticality and active threat data to direct limited resources toward vulnerabilities that truly matter.” The GigaOm report similarly notes that, while most UEM solutions support OS patching, fewer provide “patching for third-party applications, device drivers and firmware,” leaving gaps that adversaries exploit.

Risk-based and continuous patch management: A smarter approach

Chris Goettl, Ivanti’s VP of product management for endpoint security, explained to VentureBeat: “Risk-based patch prioritization goes beyond CVSS scores by considering active exploitation, threat intelligence and asset criticality.” Taking this more dynamic approach helps organizations anticipate and react to risks in real time, which is far more effective than relying on CVSS scores alone.
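To make the idea concrete, here is a minimal sketch of how such a composite score might combine those inputs. The weights, field names and example entries are illustrative assumptions, not any vendor’s actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss_score: float          # 0.0-10.0 static severity rating
    actively_exploited: bool   # e.g., listed in CISA's KEV catalog
    threat_intel_hits: int     # recent mentions across threat feeds
    asset_criticality: float   # 0.0-1.0 from the asset inventory

def risk_score(v: Vulnerability) -> float:
    """Blend static severity with dynamic threat and business context."""
    score = v.cvss_score / 10.0                 # normalize CVSS to 0-1
    if v.actively_exploited:
        score += 1.0                            # active exploitation dominates
    score += min(v.threat_intel_hits, 5) * 0.1  # capped threat-feed boost
    return score * (0.5 + v.asset_criticality)  # scale by business impact

def prioritize(vulns: list[Vulnerability]) -> list[Vulnerability]:
    """Order the patch queue by composite risk rather than raw CVSS."""
    return sorted(vulns, key=risk_score, reverse=True)

# Illustrative entries: an actively exploited CVSS 7.8 flaw on a critical
# asset outranks an unexploited CVSS 9.8 flaw on a low-value one.
queue = prioritize([
    Vulnerability("CVE-2024-99999", 9.8, False, 0, 0.2),  # placeholder ID
    Vulnerability("CVE-2021-4034", 7.8, True, 4, 0.9),    # PwnKit, in KEV
])
for v in queue:
    print(v.cve_id, round(risk_score(v), 2))
```

The point of the sketch is the shape of the calculation: exploitation evidence and asset value can reorder the queue that a raw CVSS sort would produce.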

Prioritization alone, however, isn’t enough.

Adversaries can weaponize vulnerabilities within hours, and gen AI has made them even more efficient than before. Ransomware attackers keep finding new ways to weaponize old vulnerabilities. Organizations on monthly or quarterly patching cycles can’t keep up with the pace of this new tradecraft.

Machine learning (ML)-based patch management systems have long been able to prioritize patches based on current threats and business risks. Regular maintenance ensures compliance with PCI DSS, HIPAA and GDPR, while AI automation bridges the gap between detection and response, reducing exposure.

Gartner warns that relying on manual processes creates “bottlenecks, delays zero-day response and results in lower-priority patches being applied while actively exploited vulnerabilities remain unaddressed.” Organizations must shift to continuous, automated patching to keep pace with adversaries.
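As a rough sketch of what “continuous” can mean in practice, the following loop checks an organization’s known CVEs against CISA’s Known Exploited Vulnerabilities (KEV) catalog on an hourly cadence. The feed URL and cveID field reflect the public catalog at the time of writing; get_installed_cves() and deploy_patch() are hypothetical hooks into an organization’s own inventory and deployment tooling:

```python
import json
import time
import urllib.request

# Public KEV feed location at the time of writing; treat as an assumption.
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def fetch_kev_cves() -> set[str]:
    """Return the set of CVE IDs CISA reports as actively exploited."""
    with urllib.request.urlopen(KEV_URL) as resp:
        feed = json.load(resp)
    return {v["cveID"] for v in feed["vulnerabilities"]}

def get_installed_cves() -> set[str]:
    """Placeholder: CVEs affecting software in the org's asset inventory."""
    return {"CVE-2021-4034", "CVE-2023-4863"}

def deploy_patch(cve: str) -> None:
    """Placeholder: hand off to the patch-deployment pipeline."""
    print(f"Queuing emergency patch job for {cve}")

def run_continuous(poll_seconds: int = 3600) -> None:
    """Re-check hourly instead of waiting for a monthly cycle."""
    while True:
        exposed = get_installed_cves() & fetch_kev_cves()
        for cve in sorted(exposed):
            deploy_patch(cve)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    run_continuous()
```

Commercial platforms wrap this pattern in testing, rollback and scheduling logic, but the core shift is the same: exploitation intelligence, not the calendar, triggers the patch job.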

Choosing the right patch management solution

Integrating gen AI and improving the long-standing ML algorithms at the core of automated patch management systems offers many advantages. All vendors that compete in the market have roadmaps incorporating these technologies.

The GigaOm Radar for Patch Management Solutions Report highlights the technical strengths and weaknesses of top patch management providers. It compares vendors including Atera, Automox, BMC Client Management Patch powered by Ivanti, Canonical, ConnectWise, Flexera, GFI, ITarian, Jamf, Kaseya, ManageEngine, N-able, NinjaOne, SecPod, SysWard, Syxsense and Tanium.

The GigaOm Radar plots vendor solutions across a series of concentric rings, with those set closer to the center judged to be of higher overall value. The chart characterizes each vendor on two axes — balancing “maturity” versus “innovation” and “feature play” versus “platform play” — while providing an arrow that projects each solution’s evolution over the coming 12 to 18 months.

Gartner advises security teams to “leverage risk-based prioritization and automated workflow tools to reduce time-to-patch,” and every vendor in this market reflects that in its roadmap. A strong patching strategy requires the following:

  • Strategic deployment and automation: Mapping critical assets and reducing manual errors through AI-driven automation.
  • Risk-based prioritization: Focusing on actively exploited threats.
  • Centralized management and continuous monitoring: Consolidating patching efforts and maintaining real-time security visibility.

By aligning patching strategies with these principles, organizations can reduce their teams’ workloads and build stronger cyber resilience.

Automating patch management: Measuring success in real time

All vendors that compete in this market have attained a baseline level of performance and functionality by streamlining patch validation, testing and deployment. By correlating patch data with real-world exploit activity, vendors are reducing customers’ mean time to remediation (MTTR).

Measuring success is critical. Gartner recommends tracking the following at a minimum; a minimal computation sketch follows the list:

  • Mean-time-to-patch (MTTP): The average time to remediate vulnerabilities.
  • Patch coverage percentage: The proportion of patched assets relative to vulnerable ones.
  • Exploit window reduction: The time from vulnerability disclosure to remediation.
  • Risk reduction impact: The number of actively exploited vulnerabilities patched before incidents occur.
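Here is one way the first three metrics above might be computed from simple remediation records. The record format is an assumption for illustration; real tools derive this from scan and deployment logs:

```python
from datetime import date
from statistics import mean

# Assumed record format: (asset_id, disclosure_date, patched_date_or_None)
records = [
    ("srv-01", date(2025, 1, 6), date(2025, 1, 9)),
    ("srv-02", date(2025, 1, 6), date(2025, 1, 20)),
    ("srv-03", date(2025, 1, 6), None),  # still vulnerable
]

patched = [(a, d, p) for a, d, p in records if p is not None]

# Mean-time-to-patch: average days from disclosure to remediation.
mttp_days = mean((p - d).days for _, d, p in patched)

# Patch coverage: share of vulnerable assets that have been remediated.
coverage_pct = 100 * len(patched) / len(records)

# Exploit window: longest observed gap from disclosure to remediation.
exploit_window_days = max((p - d).days for _, d, p in patched)

print(f"MTTP: {mttp_days:.1f} days")                  # 8.5 days
print(f"Coverage: {coverage_pct:.0f}%")               # 67%
print(f"Exploit window: {exploit_window_days} days")  # 14 days
```

Tracked over time, these numbers show whether automation is actually shrinking the gap between disclosure and remediation, which is the trend Gartner’s risk-reduction metric is meant to capture.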

Automate patch management — or fall behind

Patching isn’t an action item security teams should get to only after supposedly higher-priority tasks are completed. It must be core to keeping a business alive and resilient against threats.

Simply put, patching is at the heart of cyber resilience. Yet, too many organizations deprioritize it, leaving known vulnerabilities wide open for attackers increasingly using AI to strike faster than ever. Static CVSS scores have proven they can’t keep up, and fixed cycles have turned into more of a liability than an asset.

The message is simple: When it comes to patching, complacency is dangerous — it’s time to make it a priority.
