Cybersecurity’s global alarm system is breaking down


Every day, billions of people trust digital systems to run everything from communication to commerce to critical infrastructure. But the global early warning system that alerts security teams to dangerous software flaws is showing critical gaps in coverage—and most users have no idea their digital lives are likely becoming more vulnerable.

Over the past eighteen months, two pillars of global cybersecurity have flirted with apparent collapse. In February 2024, the US-backed National Vulnerability Database (NVD)—relied on globally for its free analysis of security threats—abruptly stopped publishing new entries, citing a cryptic “change in interagency support.” Then, in April of this year, the Common Vulnerabilities and Exposures (CVE) program, the fundamental numbering system for tracking software flaws, seemed at similar risk: A leaked letter warned of an imminent contract expiration.

Cybersecurity practitioners have since flooded Discord channels and LinkedIn feeds with emergency posts and memes of “NVD” and “CVE” engraved on tombstones. Unpatched vulnerabilities are the second most common way cyberattackers break in, and they have led to fatal hospital outages and critical infrastructure failures. In a social media post, Jen Easterly, a US cybersecurity expert, said: “Losing [CVE] would be like tearing out the card catalog from every library at once—leaving defenders to sort through chaos while attackers take full advantage.” If CVEs identify each vulnerability like a book in a card catalog, NVD entries provide the detailed review with context around severity, scope, and exploitability. 
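
To make that card-catalog analogy concrete, the minimal sketch below pulls a single record from the NVD’s public REST API (version 2.0) and prints both the bare CVE identifier and the analysis the NVD layers on top, such as a CVSS severity score and a processing status. The endpoint and JSON field names reflect the publicly documented NVD 2.0 schema, but treat them as assumptions to verify against current documentation.

```python
# A minimal sketch, not production code: fetch one record from the NVD's public
# REST API (v2.0) and show the difference between the bare CVE identifier and
# the NVD's added analysis. Endpoint and field names follow the documented
# NVD 2.0 JSON schema but should be verified against current NVD docs.
import json
import urllib.request

CVE_ID = "CVE-2021-44228"  # Log4Shell, used only as a well-known example
NVD_API = f"https://services.nvd.nist.gov/rest/json/cves/2.0?cveId={CVE_ID}"

with urllib.request.urlopen(NVD_API, timeout=30) as resp:
    record = json.loads(resp.read())["vulnerabilities"][0]["cve"]

# The CVE program supplies the identifier and description (the "card catalog").
print("ID:         ", record["id"])
print("Description:", record["descriptions"][0]["value"][:80], "...")

# The NVD adds the "review": severity metrics and an analysis status.
for metric in record.get("metrics", {}).get("cvssMetricV31", []):
    data = metric["cvssData"]
    print("CVSS v3.1:  ", data["baseScore"], data["baseSeverity"])
print("Status:     ", record.get("vulnStatus"))
```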

In the end, the Cybersecurity and Infrastructure Security Agency (CISA) extended funding for CVE another year, attributing the incident to a “contract administration issue.” But the NVD’s story has proved more complicated. Its parent organization, the National Institute of Standards and Technology (NIST), reportedly saw its budget cut roughly 12% in 2024, right around the time that CISA pulled its $3.7 million in annual funding for the NVD. Shortly after, as the backlog grew, CISA launched its own “Vulnrichment” program to help address the analysis gap, while promoting a more distributed approach that allows multiple authorized partners to publish enriched data. 

“CISA continuously assesses how to most effectively allocate limited resources to help organizations reduce the risk of newly disclosed vulnerabilities,” says Sandy Radesky, the agency’s associate director for vulnerability management. Rather than just filling the gap, she emphasizes that Vulnrichment was established to provide unique additional information, like recommended actions for specific stakeholders, and to “reduce dependency of the federal government’s role to be the sole provider of vulnerability enrichment.”

Meanwhile, NIST has scrambled to hire contractors to help clear the backlog. Although processing has returned to pre-crisis levels, a surge in newly disclosed vulnerabilities has outpaced these efforts. Currently, over 25,000 vulnerabilities await processing, nearly 10 times the previous high, set in 2017, according to data from software company Anchore. Before that, the NVD largely kept pace with CVE publications, maintaining a minimal backlog.

“Things have been disruptive, and we’ve been going through times of change across the board,” Matthew Scholl, then chief of the computer security division in NIST’s Information Technology Laboratory, said at an industry event in April. “Leadership has assured me and everyone that NVD is and will continue to be a mission priority for NIST, both in resourcing and capabilities.” Scholl left NIST in May after 20 years at the agency, and NIST declined to comment on the backlog. 

The situation has now prompted multiple government actions, with the Department of Commerce launching an audit of the NVD in May and House Democrats calling for a broader probe of both programs in June. But the damage to trust is already transforming geopolitics and supply chains as security teams prepare for a new era of cyber risk. “It’s left a bad taste, and people are realizing they can’t rely on this,” says Rose Gupta, who builds and runs enterprise vulnerability management programs. “Even if they get everything together tomorrow with a bigger budget, I don’t know that this won’t happen again. So I have to make sure I have other controls in place.”

As these public resources falter, organizations and governments are confronting a critical weakness in our digital infrastructure: Essential global cybersecurity services depend on a complex web of US agency interests and government funding that can be cut or redirected at any time.

Security haves and have-nots

What began as a trickle of software vulnerabilities in the early internet era has become an unstoppable avalanche, and the free databases that have tracked them for decades have struggled to keep up. In early July, the CVE database passed 300,000 catalogued vulnerabilities. Annual totals jump unpredictably, sometimes by 10% or much more. Even before its latest crisis, the NVD was notorious for delayed publication of new vulnerability analyses, often trailing private security software and vendor advisories by weeks or months.

Gupta has watched organizations increasingly adopt commercial vulnerability management (VM) software that includes its own threat intelligence services. “We’ve definitely become over-reliant on our VM tools,” she notes, describing security teams’ growing dependence on vendors like Qualys, Rapid7, and Tenable to supplement or replace unreliable public databases. These platforms combine their own research with various data sources to create proprietary risk scores that help teams prioritize fixes. But not all organizations can afford to fill the NVD’s gap with premium security tools. “Smaller companies and startups, already at a disadvantage, are going to be more at risk,” she explains. 

Komal Rawat, a security engineer in New Delhi whose mid-stage cloud startup has a limited budget, describes the impact in stark terms: “If NVD goes, there will be a crisis in the market. Other databases are not that popular, and to the extent they are adopted, they are not free. If you don’t have recent data, you’re exposed to attackers who do.”

The growing backlog means new devices could be more likely to have vulnerability blind spots—whether that’s a Ring doorbell at home or an office building’s “smart” access control system. The biggest risk may be “one-off” security flaws that fly under the radar. “There are thousands of vulnerabilities that will not affect the majority of enterprises,” says Gupta. “Those are the ones that we’re not getting analysis on, which would leave us at risk.”

NIST acknowledges it has limited visibility into which organizations are most affected by the backlog. “We don’t track which industries use which products and therefore cannot measure impact to specific industries,” a spokesperson says. Instead, the team prioritizes vulnerabilities that appear in CISA’s Known Exploited Vulnerabilities (KEV) catalog and those included in vendor advisories like Microsoft Patch Tuesday.
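
Security teams that want a similar fallback signal of their own can reproduce part of that logic with the KEV catalog, which CISA publishes as a free JSON feed. This is a minimal sketch, assuming the feed URL and field names in CISA’s published schema; the list of open findings is a hypothetical placeholder for output from an internal scanner.

```python
# A minimal sketch: flag which of an organization's open CVEs appear in CISA's
# Known Exploited Vulnerabilities (KEV) catalog. The feed URL and field names
# are assumptions based on CISA's published JSON schema; the findings list is
# a hypothetical placeholder for output from an internal scanner.
import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

open_findings = ["CVE-2021-44228", "CVE-2023-12345"]  # hypothetical examples

with urllib.request.urlopen(KEV_URL, timeout=30) as resp:
    kev = json.loads(resp.read())
kev_ids = {entry["cveID"] for entry in kev["vulnerabilities"]}

for cve in open_findings:
    label = "patch first (known exploited)" if cve in kev_ids else "normal queue"
    print(f"{cve}: {label}")
```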

The biggest vulnerability

Brian Martin has watched this system evolve—and deteriorate—from the inside. A former CVE board member and an original project leader behind the Open Source Vulnerability Database, he has built a combative reputation over the decades as a leading historian and practitioner. Martin says his current project, VulnDB (part of Flashpoint Security), outperforms the official databases he once helped oversee. “Our team processes more vulnerabilities, at a much faster turnaround, and we do it for a fraction of the cost,” he says, referring to the tens of millions in government contracts that support the current system. 

When we spoke in May, Martin said his database contains more than 112,000 vulnerabilities with no CVE identifiers—security flaws that exist in the wild but remain invisible to organizations that rely solely on public channels. “If you gave me the money to triple my team, that non-CVE number would be in the 500,000 range,” he said.

In the US, official vulnerability management duties are split among a web of contractors, agencies, and nonprofit centers like the Mitre Corporation. Critics like Martin say that creates potential for redundancy, confusion, and inefficiency, with layers of middle management and relatively few actual vulnerability experts. Others defend the value of this fragmentation. “These programs build on or complement each other to create a more comprehensive, supportive, and diverse community,” CISA said in a statement. “That increases the resilience and usefulness of the entire ecosystem.”

As American leadership wavers, other nations are stepping up. China now operates multiple vulnerability databases, some surprisingly robust but tainted by the possibility that they are subject to state control. In May, the European Union accelerated the launch of its own database, as well as a decentralized “Global CVE” architecture. Following social media and cloud services, vulnerability intelligence has become another front in the contest for technological independence. 

That leaves security professionals to navigate multiple, potentially conflicting sources of data. “It’s going to be a mess, but I would rather have too much information than none at all,” says Gupta, describing how her team monitors multiple databases despite the added complexity. 

Resetting software liability

As defenders adapt to the fragmenting landscape, the tech industry faces another reckoning: Why don’t software vendors carry more responsibility for protecting their customers from security issues? Major vendors routinely disclose—but don’t necessarily patch—thousands of new vulnerabilities each year. A single exposure could crash critical systems or increase the risks of fraud and data misuse. 

For decades, the industry has hidden behind legal shields. “Shrink-wrap licenses” once forced consumers to broadly waive their right to hold software vendors liable for defects. Today’s end-user license agreements (EULAs), often delivered in pop-up browser windows, have evolved into incomprehensibly long documents. Last November, a lab project called “EULAS of Despair” used the length of War and Peace (587,287 words) to measure these sprawling contracts. The worst offender? Twitter, at 15.83 novels’ worth of fine print.

“This is a legal fiction that we’ve created around this whole ecosystem, and it’s just not sustainable,” says Andrea Matwyshyn, a US special advisor and technology law professor at Penn State University, where she directs the Policy Innovation Lab of Tomorrow. “Some people point to the fact that software can contain a mix of products and services, creating more complex facts. But just like in engineering or financial litigation, even the most messy scenarios can be resolved with the assistance of experts.”

This liability shield is finally beginning to crack. In July 2024, a faulty security update in CrowdStrike’s popular endpoint detection software crashed millions of Windows computers worldwide and caused outages at everything from airlines to hospitals to 911 systems. The incident led to billions of dollars in estimated damages, and the city of Portland, Oregon, even declared a “state of emergency.” Now, affected companies like Delta Air Lines have hired high-priced attorneys to pursue major damages—a sign that the floodgates to litigation are starting to open.

Despite the soaring number of vulnerabilities, many fall into long-established categories, such as SQL injections that interfere with database queries and buffer overflows that let attackers execute code remotely. Matwyshyn advocates for a mandatory “software bill of materials,” or SBOM—an ingredients list that would let organizations understand what components and potential vulnerabilities exist throughout their software supply chains. One recent report found that 30% of data breaches stemmed from vulnerabilities at third-party software vendors or cloud service providers.

She adds: “When you can’t tell the difference between the companies that are cutting corners and a company that has really invested in doing right by their customers, that results in a market where everyone loses.”
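
For readers who have not seen one, the sketch below shows a minimal, CycloneDX-style SBOM fragment of the kind Matwyshyn describes, checked against a hypothetical list of known-bad package versions. The field names mirror the public CycloneDX JSON format; the advisory data is a placeholder standing in for a real vulnerability feed.

```python
# A minimal sketch of a CycloneDX-style SBOM fragment, checked against a
# hypothetical list of known-bad package versions. Field names mirror the
# public CycloneDX JSON format; the advisory data is a placeholder standing
# in for a real vulnerability feed.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "log4j-core", "version": "2.14.1",
         "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
        {"type": "library", "name": "requests", "version": "2.31.0",
         "purl": "pkg:pypi/requests@2.31.0"},
    ],
}

known_bad = {("log4j-core", "2.14.1"): "CVE-2021-44228"}  # hypothetical feed

for component in sbom["components"]:
    cve = known_bad.get((component["name"], component["version"]))
    if cve:
        print(f"{component['purl']} is affected by {cve}")
```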

CISA leadership shares this sentiment, with a spokesperson emphasizing its “secure-by-design principles,” such as “making essential security features available without additional cost, eliminating classes of vulnerabilities, and building products in a way that reduces the cybersecurity burden on customers.”

Avoiding a digital ‘dark age’

It will likely come as no surprise that practitioners are looking to AI to help fill the gap, while at the same time preparing for a coming swarm of cyberattacks by AI agents. Security researchers have used an OpenAI model to discover new “zero-day” vulnerabilities. And both the NVD and CVE teams are developing “AI-powered tools” to help streamline data collection, identification, and processing. NIST says that “up to 65% of our analysis time has been spent generating CPEs”—Common Platform Enumeration entries, the product identifiers that pinpoint which software a flaw affects. If AI can take over even part of this tedious process, it could dramatically speed up the analysis pipeline.
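
To see why that step eats so much analyst time, it helps to look at what a CPE actually is: a rigidly structured identifier that must be matched against messy, inconsistent vendor and product names. The sketch below builds CPE 2.3-style strings and runs a naive inventory match; the string format follows the public CPE specification, while real NVD matching uses configuration nodes and version ranges and is considerably more involved.

```python
# A minimal sketch of CPE 2.3 identifiers and a naive inventory match. The
# string format follows the public CPE specification; real NVD matching uses
# configuration nodes and version ranges and is far more involved.
def make_cpe(vendor: str, product: str, version: str) -> str:
    """Build a CPE 2.3 formatted string for an application ('a' part)."""
    return f"cpe:2.3:a:{vendor}:{product}:{version}:*:*:*:*:*:*:*"

inventory = [
    make_cpe("apache", "log4j", "2.14.1"),
    make_cpe("openssl", "openssl", "3.0.7"),
]

# The affected-product code an analyst might attach to a vulnerability record.
affected = make_cpe("apache", "log4j", "2.14.1")

for cpe in inventory:
    if cpe == affected:
        print("Affected product found in inventory:", cpe)
```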

But Martin cautions against optimism around AI, noting that the technology remains unproven and often riddled with inaccuracies—which, in security, can be fatal. “Rather than AI or ML [machine learning], there are ways to strategically automate bits of the processing of that vulnerability data while ensuring 99.5% accuracy,” he says. 

AI also fails to address more fundamental challenges in governance. The CVE Foundation, launched in April 2025 by breakaway board members, proposes a globally funded nonprofit model similar to that of the internet’s addressing system, which transitioned from US government control to international governance. Other security leaders are pushing to revitalize open-source alternatives like Google’s OSV project or NVD++ (maintained by VulnCheck), which are accessible to the public but currently have limited resources.
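
For teams experimenting with those open alternatives, OSV exposes a free query API. The sketch below is illustrative only, assuming the endpoint and request shape described in OSV’s public documentation (package name, ecosystem, and version); it prints any matching advisories along with their CVE aliases.

```python
# A minimal sketch: ask Google's OSV database which known vulnerabilities
# affect a specific open-source package version. The endpoint and request
# shape are assumptions based on OSV's public API documentation.
import json
import urllib.request

query = {"package": {"name": "jinja2", "ecosystem": "PyPI"}, "version": "2.4.1"}
request = urllib.request.Request(
    "https://api.osv.dev/v1/query",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request, timeout=30) as resp:
    vulns = json.loads(resp.read()).get("vulns", [])

for vuln in vulns:
    # OSV records carry their own IDs plus any CVE aliases.
    print(vuln["id"], "aliases:", vuln.get("aliases", []))
```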

As these various reform efforts gain momentum, the world is waking up to the fact that vulnerability intelligence—like disease surveillance or aviation safety—requires sustained cooperation and public investment. Without it, a patchwork of paid databases will be all that remains, threatening to leave all but the richest organizations and nations permanently exposed.

Matthew King is a technology and environmental journalist based in New York. He previously worked for cybersecurity firm Tenable.

