
Why US federal health agencies are abandoning mRNA vaccines


This time five years ago, we were in the throes of the covid-19 pandemic. By August 2020, we’d seen school closures, national lockdowns, and widespread panic. That year, the coronavirus was responsible for around 3 million deaths, according to the World Health Organization.

Then came the vaccines. The first mRNA vaccines for covid were authorized for use in December 2020. By the end of the following month, over 100 million doses had been administered. Billions more have been administered since then. The vaccines worked well and are thought to have saved millions of lives.

The US government played an important role in the introduction of these vaccines, providing $18 billion to support their development as part of Operation Warp Speed.

But now, that government is turning its back on the technology. Funding is being withdrawn. Partnerships are being canceled. Leaders of US health agencies are casting doubt on the vaccines’ effectiveness and safety. And this week, the director of the National Institutes of Health implied that the reversal was due to a lack of public trust in the technology.

Plenty of claims are being thrown about. Let’s consider the evidence.

mRNA is a molecule found in cells that ferries instructions from DNA to the cell’s protein-making machinery. The vaccines work in a similar way, except the instructions they carry are for proteins found on the surface of the coronavirus. This helps train our immune systems to tackle the virus itself.

Research into mRNA vaccines has been underway for decades. But things really kicked into gear when the virus behind covid-19 triggered a pandemic in 2020. A huge international effort—along with plenty of funding—fast-tracked research and development.

The genetic code for the SARS-CoV-2 virus was sequenced in January 2020. The first vaccines were being administered by the end of that year. That’s wildly fast by pharma standards—drugs typically spend around a decade in development.

And they seemed to work really well. Early trials in tens of thousands of volunteers suggested that Pfizer and BioNTech’s vaccine conferred “95% protection against covid-19.” No vaccine is perfect, but for a disease that was responsible for millions of deaths, the figures were impressive.

Still, there were naysayers, including Robert F. Kennedy Jr., the notorious antivaccine activist who currently leads the US’s health agencies. He has called covid vaccines “unsafe and ineffective.” In 2021, he petitioned the US Food and Drug Administration to revoke the authorization for covid vaccines. That same year, Instagram removed his account from the platform after he repeatedly shared “debunked claims about the coronavirus or vaccines.”

So perhaps we shouldn’t have been surprised when the US Department of Health and Human Services, which RFK Jr. now heads, announced “the beginning of a coordinated wind-down” of mRNA vaccine development earlier this month. HHS is canceling almost $500 million worth of funding for the technology. “The data show these vaccines fail to protect effectively against upper respiratory infections like covid and flu,” Kennedy said in a statement.

Well, as we’ve seen, the mRNA covid vaccines were hugely effective during the pandemic. And researchers are working on other mRNA vaccines for infections including flu. Our current flu vaccines aren’t ideal—they are produced slowly, in a process that requires hens’ eggs, based on predictions about which flu strains are likely to be prominent in the winter. And they’re not all that protective.

mRNA vaccines, on the other hand, can be made quickly and cheaply, potentially after we already know which flu strains we need to protect against. And scientists are making progress with universal flu vaccines—drugs that could protect against multiple flu strains at once.

Kennedy’s other claim is that the vaccines aren’t safe. There have certainly been reports of adverse events. Usually these are mild and short-lived—most people will be familiar with the fatigue and flu-like symptoms that can follow a covid jab. But some are more serious: a small number of people have developed neurological and cardiovascular conditions.

These problems are rare, according to an evaluation of adverse outcomes in almost 100 million people who received covid vaccines. Most studies of mRNA vaccines haven’t reported an increase in the risk of Guillain-Barré syndrome, a condition that affects the nerves and has been linked to some covid vaccines.

Covid vaccines can increase the risk of myocarditis and pericarditis in young men. But the picture isn’t straightforward. Vaccinated individuals appear to have double the risk of myocarditis compared with unvaccinated people. But the overall risk is still low. And it’s still not as high as the risk of myocarditis following a covid infection.
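To make the relative-versus-absolute distinction concrete, here is a minimal sketch in Python. The baseline rate used is an assumed, illustrative number, not a figure from the evaluation cited above:

```python
# Toy illustration of relative vs. absolute risk.
# The baseline rate below is an assumed, illustrative figure --
# it is NOT taken from the 100-million-person evaluation cited above.

baseline_per_100k = 2.0  # assumed background myocarditis cases per 100,000

# "Double the risk" after vaccination would mean roughly:
vaccinated_per_100k = 2 * baseline_per_100k  # 4 per 100,000

# The absolute excess attributable to vaccination:
excess_per_100k = vaccinated_per_100k - baseline_per_100k  # 2 per 100,000

print(f"Excess cases: {excess_per_100k:g} per 100,000 vaccinated")
print(f"As a fraction of those vaccinated: {excess_per_100k / 100_000:.4%}")
```

The point of the toy numbers: doubling a rare risk still leaves a rare risk, which is why the comparison with the higher post-infection risk is the more informative one.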

And then there are the claims that mRNA vaccines don’t have the support of the public. That’s what Jay Bhattacharya, director of the NIH, wrote in an opinion piece published in the Washington Post on Wednesday.

“No matter how elegant the science, a platform that lacks credibility among the people it seeks to protect cannot fulfill its public health mission,” Bhattacharya wrote. He blamed the Biden administration, which he wrote “did not manage public trust in the coronavirus vaccines.”

It’s an interesting take from someone who played a pretty significant role in undermining public trust in covid policies, including vaccine mandates. In 2020, Bhattacharya coauthored the Great Barrington Declaration—an open letter making the case against lockdowns. He became a vocal critic of US health agencies, including the NIH, and their handling of the outbreak. Unlike Kennedy, Bhattacharya hasn’t called the vaccines unsafe or ineffective. But he has called vaccine mandates “unethical.”

Curiously, the US government doesn’t seem to be turning away from all vaccine research. Just work on mRNA vaccines. Some of the funding originally earmarked for covid vaccines will be redirected to two senior NIH staffers who are exploring an older vaccine technology that uses inactivated viruses—a move that researchers are describing as “troubling” and “appalling,” according to reporting by Science.

Not all mRNA research is being abandoned, either. Bhattacharya has expressed his support for research into the use of mRNA-based treatments for cancer. Such “vaccine therapeutics” were being explored before covid came along. (Notably, Bhattacharya isn’t referring to them as “vaccines.”)

It is difficult to predict how this will all shake out for mRNA vaccines. We mustn’t forget that this technology helped save millions of lives and shows huge promise for the development of cheap, effective, and potentially universal vaccines. Let’s hope that the recent upsets won’t prevent it from achieving its potential.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
