
Nvidia Blackwell accelerates computer-aided engineering software by 50X


Nvidia announced that its Blackwell hardware will accelerate digital twin software from the major computer-aided engineering (CAE) firms by as much as 50 times.

The vendors include Ansys, Altair, Cadence, Siemens and Synopsys. With the accelerated software, along with Nvidia CUDA-X libraries and performance-optimizing blueprints, industries such as automotive, aerospace, energy, manufacturing and life sciences can significantly reduce product development time, cut costs and increase design accuracy while maintaining energy efficiency.

“CUDA-accelerated physical simulation on Nvidia Blackwell has enhanced real-time digital twins and is reimagining the entire engineering process,” said Jensen Huang, founder and CEO of Nvidia, in a statement. “The day is coming when virtually all products will be created and brought to life as a digital twin long before it is realized physically.”

The company unveiled the news during Huang’s keynote at the GTC 2025 event. As noted in my recent Q&A with Ansys CTO Prith Banerjee, the gap between simulation and reality is closing, not just in video games but also in engineering simulations, which use similar technology.

Ecosystem support for Nvidia Blackwell

Software providers can now help their customers develop digital twins with real-time interactivity, accelerated by Nvidia Blackwell technologies.

The growing ecosystem integrating Blackwell into its software includes Altair, Ansys, BeyondMath, Cadence, COMSOL, ENGYS, Flexcompute, Hexagon, Luminary Cloud, M-Star, NAVASTO, an Autodesk company, Neural Concept, nTop, Rescale, Siemens, Simscale, Synopsys and Volcano Platforms.

Cadence is using Nvidia Grace Blackwell-accelerated systems to help solve one of computational fluid dynamics’ biggest challenges: simulating an entire aircraft during takeoff and landing. Using its Fidelity CFD solver, Cadence successfully ran multibillion-cell simulations on a single Nvidia GB200 NVL72 server in under 24 hours, a job that would previously have required a CPU cluster with hundreds of thousands of cores and several days to complete.

This breakthrough will help the aerospace industry move toward designing safer, more efficient aircraft while reducing the amount of expensive wind-tunnel testing required, speeding time to market.
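As a rough illustration of the scale involved, the figures above support a back-of-envelope throughput comparison. The exact cell count and CPU runtime below are illustrative assumptions; the article states only "multibillion" cells, "under 24 hours" on one GB200 NVL72 server and "several days" on a CPU cluster.

```python
# Back-of-envelope comparison of the CFD runtimes described above.
# Assumed for illustration: a 2-billion-cell mesh and a 3-day CPU run.

cells = 2e9                 # assumed mesh size (cells)
gpu_hours = 24              # single GB200 NVL72 server, per the article
cpu_hours = 3 * 24          # assumed "several days" on a CPU cluster

gpu_throughput = cells / gpu_hours   # cells resolved per wall-clock hour
cpu_throughput = cells / cpu_hours

speedup = cpu_hours / gpu_hours
print(f"GPU throughput: {gpu_throughput:.2e} cells/hour")
print(f"Wall-clock speedup vs. CPU cluster: {speedup:.1f}x")
```

Even with these conservative assumptions, compressing a multi-day cluster job into a single overnight run is what makes iterative, simulation-driven aircraft design practical.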

Anirudh Devgan, president and CEO of Cadence, said in a statement, “Nvidia Blackwell’s acceleration of the Cadence.AI portfolio delivers increased productivity and quality of results for intelligent system design — reducing engineering tasks that took hours to minutes and unlocking simulations not possible before. Our collaboration with Nvidia drives innovation across semiconductors, data centers, physical AI and sciences.”

Sassine Ghazi, president and CEO of Synopsys, said, “At GTC, we’re unveiling the latest performance results observed across our leading portfolio when optimizing Synopsys solutions for NVIDIA Blackwell to accelerate computationally intensive chip design workflows. Synopsys technology is mission-critical to the productivity and capabilities of engineering teams, from silicon to systems. By harnessing the power of Nvidia accelerated computing, we can help customers unlock new levels of performance and deliver their innovations even faster.”

Ajei Gopal, president and CEO of Ansys, said in a statement: “The close collaboration between Ansys and NVIDIA is accelerating innovation at an unprecedented pace. By harnessing the computational performance of NVIDIA Blackwell GPUs, we at Ansys are empowering engineers at Volvo Cars to tackle the most complex computational fluid dynamics challenges with exceptional speed and accuracy — enabling more optimization studies and delivering more performant vehicles.”

James Scapa, founder and CEO of Altair, said in a statement, “The Nvidia Blackwell platform’s computing power, combined with Altair’s cutting-edge simulation tools, gives users transformative capabilities. This combination makes GPU-based simulations up to 1.6 times faster compared with the previous generation, helping engineers rapidly solve design challenges and giving industries the power to create safer, more sustainable products through real-time digital twins and physics-informed AI.”

Roland Busch, president and CEO of Siemens, said in a statement: “The combination of Nvidia’s groundbreaking Blackwell architecture with Siemens’ physics-based digital twins will enable engineers to drastically reduce development times and costs through using photo-realistic, interactive digital twins. This collaboration will allow us to help customers like BMW innovate faster, optimize processes, and achieve remarkable levels of efficiency in design and manufacturing.”

Rescale CAE Hub with Nvidia Blackwell

Rescale’s newly launched CAE Hub streamlines customers’ access to Nvidia technologies and CUDA-accelerated software developed by leading independent software vendors. The Rescale CAE Hub provides flexible, high-performance computing and AI technologies in the cloud, powered by Nvidia GPUs and Nvidia DGX Cloud.

Boom Supersonic, the company building the world’s fastest airliner, will use the Nvidia Omniverse Blueprint for real-time digital twins and Blackwell-accelerated CFD solvers on Rescale CAE Hub to design and optimize its new supersonic passenger jet.

Boom’s product development cycle is almost entirely simulation-driven; the company will use the Rescale platform, accelerated by Blackwell GPUs, to test different flight conditions and refine requirements in a continuous loop with simulation.

The adoption of the Rescale CAE Hub powered by Blackwell GPUs expands Boom Supersonic’s collaboration with Nvidia. Through the Nvidia PhysicsNeMo framework and the Rescale AI Physics platform, Boom Supersonic can unlock 4x more design explorations for its supersonic airliner, speeding iteration to improve performance and time to market.

The Nvidia Omniverse Blueprint for real-time digital twins, now generally available, is also part of the Rescale CAE Hub. The blueprint brings together Nvidia CUDA-X libraries, Nvidia PhysicsNeMo AI and the Nvidia Omniverse platform — and is also adding the first Nvidia NIM microservice for external aerodynamics, the study of how air moves around objects.
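To make "external aerodynamics" concrete: such studies quantify dimensionless measures like the drag coefficient, C_d = F / (½ρv²A). The formula is standard textbook aerodynamics, not part of the Nvidia NIM microservice API, and the input values below are illustrative assumptions.

```python
# Generic drag-coefficient calculation, the kind of quantity an
# external-aerodynamics study produces. Textbook formula; values
# are illustrative, not from the article.

def drag_coefficient(force_n, air_density, velocity_ms, frontal_area_m2):
    """Dimensionless drag coefficient from a simulated drag force."""
    dynamic_pressure = 0.5 * air_density * velocity_ms ** 2
    return force_n / (dynamic_pressure * frontal_area_m2)

# Illustrative sedan-like case: 300 N of drag at highway speed.
cd = drag_coefficient(force_n=300.0, air_density=1.225,
                      velocity_ms=30.0, frontal_area_m2=2.2)
print(f"C_d = {cd:.2f}")
```

In practice, a CFD solver or an AI surrogate supplies the drag force for each candidate geometry, and designers iterate to drive C_d down.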

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Cisco, Nvidia team to deliver secure AI factory infrastructure

Hypershield uses AI to dynamically refine security policies based on application identity and behavior. It automates policy creation, optimization, and enforcement across workloads. In addition, Hypershield promises to let organizations autonomously segment their networks when threats are a problem, gain exploit protection without having to patch or revamp firewalls, and

Read More »

Domestic oil and gas needed to drive supply chain through energy transition

The UK must prioritise domestic oil and gas production over imports of LNG to maintain revenues for its energy supply chain as it navigates the energy transition. According to Offshore Energies UK’s (OEUK’s) 2025 supply chain report, 60% of surveyed respondents said they are now active and gaining business revenues from offshore wind, hydrogen and CCS. However, while the supply chain is diversifying, the long timelines of many renewables projects means that these sectors only represent a small proportion of overall revenues at present. Instead, nine companies out of every 10 see more attractive opportunities to grow their business overseas due to uncertainty and a less positive business environment at home. Many supply chain companies continue to rely on revenue from oil and gas operations to fund essential investments in broader energy transition opportunities. © Supplied by OEUKOEUK supply chain and operations director Katy Heidenreich. OEUK supply chain and people director Katy Heidenreich said: “Every business in the UK irrespective of which sector they’re active in needs certainty, stability from all governments. The UK has a brilliant opportunity to lean into the technologies, products and services needed to get the world to net zero, but it is absolutely essential that it is done in a pragmatic way that safeguards jobs and energy security.” She added that while it is “good to export our expertise” this “should never come at a cost to work we need to get done in the UK”. She added: “Around 60% of companies surveyed for the report are diversifying into offshore wind, hydrogen and carbon capture and storage but business revenues from renewables and CCS still represent a relatively low proportion as they make up between zero and a fifth of their turnover.” OEUK said that the UK must prioritise domestic production over imports to safeguard

Read More »

Energy Chief Praises Research Hub That Trump Once Sought to Ax

Energy Secretary Chris Wright praised a division of the Energy Department charged with funding research projects deemed too risky to get private-sector investment, amid questions about its future in the second Trump administration. The Advanced Research Projects Agency-Energy, or ARPA-E, has doled out some $4.2 billion to more than 1,700 energy projects since 2009. Trump proposed eliminating it during his first stint as president, and Republicans at that time criticized it as unnecessary. More recently, the Heritage Foundation’s Project 2025 called for scrapping the agency, which has a budget of $460 million. ARPA-E also appeared on a list of programs being scrutinized by the White House Office of Management and Budget as President Donald Trump and billionaire Elon Musk seek to shrink the US government.  But Wright, who gave the keynote address Monday at ARPA-E’s annual summit in National Harbor, Maryland, cast the agency as necessary to help power an AI race that will be critical for the future of national defense and medical research.  “The only way we can get there is if we grow our energy system faster and faster, and that’s why you are all here,” Wright, a former oil and gas executive, told attendees. “There is a huge, life-changing opportunity for innovation there.” He lauded the potential of energy storage and the increasing efficiency of US solar manufacturing. Wright also promoted small modular reactors and said nuclear fusion could achieve commercialization in the near future. He later toured some of the summit’s project exhibits, including those of Westinghouse Electric Co., which received $6.6 million from the agency to develop a nuclear microreactor, and the company Deep Isolation Inc., which got $3.8 million from ARPA-E to create technology to isolate nuclear waste in deep boreholes underground.  To contact the author of this story:Ari Natter in Washington at [email protected] WHAT DO YOU THINK? 
Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to

Read More »

Vår Energi begins tow-out of Jotun FPSO to North Sea Balder field

‘New-era’ in North Sea “Balder X…including the sanctioned Balder Phase V project, marks the start of a new era in the North Sea, extending the lifetime of the first production license PL001 on the Norwegian Continental Shelf to 2045 and beyond,” Walker continued.  With the Jotun FPSO installed as an area host, Vår Energi is taking steps to add new production through infill drilling, exploration, and tie-back developments with short time to market, the company said. “We have continued to grow our resource base through successful exploration in the area and are stepping up the pace, moving several tie-back projects forward at speed to capitalize on the Jotun FPSO. This will sustain production longer term and includes Balder Phase V, planned to come on stream later this year, and Balder Phase VI expected to be sanctioned in 2025, together adding a further 45-50 MMboe gross,” said Torger Rød, chief operating officer (OGJ Online, Oct. 25, 2024).  Several early phase projects in in the Greater Balder are progressing toward a final investment decision, including Ringhorne North, Balder future phases, and the King discovery, targeting gross contingent resources of more than 70 MMboe, the company said.  Vår Energi is operator (90%) of Balder field. Kistos Energy Norway AS holds the remaining 10%.

Read More »

US EPA seeks to reverse dozens of environmental, climate rules

The agency also aims to remove many regulations issued during the Obama and Biden administrations related to the Waters of America Clean Air; standards on emissions from new, modified, and reconstructed stationary sources; the 2024 Risk Management Plan rule to increase safety at refineries and chemical plants; National Air Quality Standards for Particulate Matter (PM2.5); and the Waters of the United States Act.  The federal government would remove “trillions of dollars in regulatory costs” on the industry if the changes go through, Zeldin said in a statement. American Petroleum Institute (API) President Mike Sommers said it and other associations advocated for many of the changes.  “Voters sent a clear message in support of affordable, reliable and secure American energy, and the Trump administration is answering the call,” he said. It takes years to change environmental regulations, with most proposed rules requiring two rounds of public comments and environmental studies.  Environmental opposition Environmental groups said they will sue if the administration reverses the endangerment finding.  “Should the EPA undo settled law and irrefutable facts, we expect to see this administration in court,” said Earthjustice president Abigail Dillen said in a statement. “It’s impossible to think that the EPA could develop a contradictory finding that would stand up in court, added David Doniger, a climate expert at the Natural Resources Defense Council, especially “in the face of overwhelming science.” EPA already eliminated its diversity, equity and inclusion programs and nixed most EPA employees focused on environmental justice, Zeldin said. At Trump’s request, Zeldin seeks to eliminate about 65% of EPA staff.

Read More »

TotalEnergies secures green hydrogen for Leuna refinery

TotalEnergies SE and RWE AG, a German renewables energy developer, have signed a long-term agreement for supply of green hydrogen to TotalEnergies’ 227,000-b/d Leuna refinery in central Germany’s state of Saxony-Anhalt. Under the agreement, RWE will supply 30,000 tonnes/year (tpy) of green hydrogen produced from its 300-Mw electrolyzer in Lingen, Germany, to TotalEnergies’ Leuna refinery for a 15-year period beginning in 2030 through yearend 2044, the companies said in separate mid-March releases. The green hydrogen supply—which will result in a 300,000-tpy reduction in the Leuna platform’s emissions of carbon dioxide (CO2) over the duration of the contract—will be delivered directly to the refinery’s gates via a 600-km pipeline, according to the parties. In addition to marking the largest quantity of carbon-neutral hydrogen ever to be contracted from a German electrolyzer, the long-term offtake supply agreement designates TotalEnergies as an anchor customer for the Lingen electrolyzer plant on which RWE took final investment decision to build only 6 months ago for targeted commissioning in 2027, RWE said. The green-hydrogen supply relationship between RWE and TotalEnergies specifically will be enabled by the German hydrogen core network, which will connect hydrogen production sites—such as Lingen in Lower Saxony—with large centers of industrial hydrogen consumption like Leuna, according to RWE. More than 9,000 km long in its entirety, the German hydrogen pipeline network is scheduled to enter phased operations between 2025 and 2032 via a combination of repurposing existing gas pipelines and building sections of new pipelines, RWE said. 
The newly inked green-hydrogen supply agreement with RWE comes as part of TotalEnergies’ broader plan to decarbonize all hydrogen used in its European refineries by 2030 in line with the operator’s ongoing long-term transformational strategy of gradually pivoting operations away from its traditional oil and gas history in alignment with its aim to achieve carbon

Read More »

Market Focus: How Trump energy policies are reshaping the domestic and global energy landscape

@import url(‘/fonts/fira_sans.css’); a { color: #134e85; } .ebm-page__main h1, .ebm-page__main h2, .ebm-page__main h3, .ebm-page__main h4, .ebm-page__main h5, .ebm-page__main h6 { font-family: “Fira Sans”, Arial, sans-serif; } body { letter-spacing: 0.025em; font-family: “Fira Sans”, Arial, sans-serif; } button, .ebm-button-wrapper { font-family: “Fira Sans”, Arial, sans-serif; } .label-style { text-transform: uppercase; color: var(–color-grey); font-weight: 600; font-size: 0.75rem; } .caption-style { font-size: 0.75rem; opacity: .6; } #onetrust-pc-sdk [id*=btn-handler], #onetrust-pc-sdk [class*=btn-handler] { background-color: #212529 !important; border-color: #212529 !important; } #onetrust-policy a, #onetrust-pc-sdk a, #ot-pc-content a { color: #212529 !important; } #onetrust-consent-sdk #onetrust-pc-sdk .ot-active-menu { border-color: #212529 !important; } #onetrust-consent-sdk #onetrust-accept-btn-handler, #onetrust-banner-sdk #onetrust-reject-all-handler, #onetrust-consent-sdk #onetrust-pc-btn-handler.cookie-setting-link { background-color: #212529 !important; border-color: #212529 !important; } #onetrust-consent-sdk .onetrust-pc-btn-handler { color: #212529 !important; border-color: #212529 !important; background-color: undefined !important; } In this Market Focus episode of the Oil & Gas Journal ReEnterprised podcast, Conglin Xu, Managing Editor-Economics, discusses how energy policies of the new Trump administration are reshaping the domestic and global energy landscape. Since his inauguration, President Trump has has wasted no time in rolling out orders and policies, and the energy sector is already feeling the effects. In this episode, Xu dives into key developments and what they mean for the market, starting with the administration’s stated priority to boost domestic energy production through deregulation and accelerated permitting. 

Read More »

Schneider Electric Adds Data Center and Microgrid Testing Labs to Andover, MA Global R&D Center

Schneider Electric, a global leader in energy management and automation, has established its Global Innovation Hubs as key centers for technological advancement, collaboration, and sustainable development. These hub facilities serve as ecosystems where cutting-edge solutions in energy efficiency, industrial automation, and digital transformation are designed, tested, and deployed to address the world’s most pressing energy and sustainability challenges. Energy Management and Industrial Automation Focus Strategically located around the world, Schneider Electric’s Global Innovation Hubs are positioned to drive regional and global innovation in energy management and industrial automation. The hubs focus on developing smart, connected, and sustainable solutions across various sectors, including data centers, smart buildings, industrial automation, and renewable energy. Key aspects of the Schneider Global Innovation Hubs include: Collaboration and Co-Innovation: Partnering with startups, industry leaders, and research institutions to accelerate innovation. Fostering an open ecosystem where ideas can be rapidly developed and tested. Digital Transformation and Automation: Leveraging IoT, AI, and cloud technologies to enhance energy efficiency. Implementing digital twin technology for real-time monitoring and predictive maintenance. Sustainability and Energy Efficiency: Developing solutions that contribute to decarbonization and net-zero emissions. Creating energy-efficient systems for buildings, industries, and critical infrastructure. Customer-focused Innovation: Offering live demonstrations, simulation environments, and test labs for customers. Customizing solutions to meet specific industry challenges and regulatory requirements. 
Schneider’s Andover R&D Lab Highlights While there are 11 hubs worldwide to give the global customer base more convenient locations where they can evaluate Schneider product, the new lab facilities have also been added to one of the company’s five global R&D locations. The selected location is co-located with Schneider’s US research labs in Andover, Massachusetts. With the addition of these two new labs there are now 41 labs located in Andover. Over the last year, Schneider Electric has invested approximately $2.4 billion in R&D. The

Read More »

Executive Roundtable: Probing Data Center Power Infrastructure and Energy Resilience in 2025

Ryan Baumann, Rehlko: Industry leaders are taking bold steps to secure long-term energy availability by embracing innovative backup power solutions, forming strategic partnerships, and exploring alternative energy sources. To overcome the challenges ahead, collaboration is key—operators, utilities, OEMs, and technology providers must come together, share insights, and create customized solutions that keep energy both reliable and sustainable as the landscape evolves. One of the most significant strategies is the growing use of alternative energy sources like hydrogen, natural gas, and even nuclear to ensure a steady supply of power. These options provide a more flexible, reliable backup to grid power, especially in markets with fluctuating energy demands or limited infrastructure. Emergency generator systems, when equipped with proper emissions treatment, can also support the grid through peak shaving or interruptible rate programs with utilities. Hydrogen fuel cells, in particular, are becoming a game-changer for backup power. Offering zero-emission, scalable, and efficient solutions, hydrogen is helping data centers move toward their carbon-neutral goals while addressing energy reliability. When integrated into a microgrid, hydrogen fuel cells create a cohesive energy network that can isolate from the main grid during power outages, ensuring continuous energy security for critical infrastructure like data centers. Additionally, natural gas Central Utility Plants (CUPs) are emerging as a key bridging power source, helping large data centers in grid-constrained regions maintain operations until permanent utility power is available. Smart energy solutions, including customized paralleling systems, allow emergency assets to be grid-intertied, enabling utilities and communities to share power burdens during peak periods. 
By embracing these innovative solutions and fostering collaboration, the industry not only ensures reliable power for today’s data centers but also paves the way for a more sustainable and resilient energy future. Next:  Cooling Imperatives for Managing High-Density AI Workloads 

Read More »

From Billions to Trillions: Data Centers’ New Scale of Investment

With Apple’s announcement to spend $500 billion over the next four years briefly overshadowing the $500 billion joint venture announcement of the Stargate project with the federal government, you can almost be forgiven for losing track of the billions of dollars in data center and tech spending announced by other industry players. Apple’s Four-Year, $500 Billion Spend Resonates with Tech The company’s data center infrastructure will see some collateral improvement to support future AI efforts, as a percentage of the funding will be dedicated to enhancing their existing data center infrastructure, though as yet there has been no public discussion of new data center facilities. Apple has committed to spending over $500 billion in the U.S. during the next four years.  This investment aims to bolster various sectors, including AI infrastructure, data centers, and research and development (R&D) in silicon engineering. The initiative also encompasses expanding facilities and teams across multiple states, such as Texas, California, Arizona, Nevada, Iowa, Oregon, North Carolina, and Washington. The spend will be a combination of investments in new infrastructure components along with the expansion of existing facilities. What has been publicly discussed includes the following: New AI Server Manufacturing Facility in Houston, Texas A significant portion of this investment is allocated to constructing a 250,000-square-foot manufacturing facility in Houston, Texas. Scheduled to open in 2026, this facility will produce servers designed to power Apple Intelligence, the company’s AI system. These servers, previously manufactured abroad, will now be assembled domestically, enhancing energy efficiency and security for Apple’s data centers. The project is expected to create thousands of jobs in the region. Expansion of Data Center Capacity Apple plans to increase its data center capacity in several states, including North Carolina, Iowa, Oregon, Arizona, and Nevada. 
This expansion aims to support the growing demands of AI

Read More »

Why Geothermal Energy Could Be a Behind-the-Meter Game Changer for Data Center Power Demand

By colocating data centers with geothermal plants, operators could tap into a clean, baseload power source that aligns with their sustainability goals. Operators could reduce transmission losses and enhance energy efficiency. Meanwhile, the paper points out that one of the most promising aspects of geothermal energy is its scalability. The Rhodium Group estimates that the U.S. has the technical potential to generate up to 5,000 GW of geothermal power—far exceeding the current and projected needs of the data center industry. With the right investments and policy support, Rhodium contends that geothermal could become a cornerstone of the industry’s energy strategy. The researchers project that 55-64% of the anticipated growth in hyperscale data center capacity could be met with behind-the-meter geothermal power, representing 15-17 GW of new capacity. In 13 of the 15 largest data center markets, geothermal could meet 100% of projected demand growth using advanced cooling technologies. Even in less favorable markets, geothermal could still meet at least 15% of power needs. Challenges and Opportunities for Geothermal-Driven Data Center Siting Strategies The Rhodium Group report explores two potential siting strategies for data centers: one that follows historical patterns of clustering near population centers and fiber-optic networks, and another that prioritizes proximity to high-quality geothermal resources. In the latter scenario, geothermal energy could easily meet all projected data center load growth by the early 2030s. Geothermal heat pumps also offer an additional benefit by providing efficient cooling for data centers, further reducing their overall electric load. This dual application of geothermal energy—for both power generation and cooling—could significantly enhance the sustainability and resilience of data center operations. However, despite its potential, geothermal energy faces several challenges that must be addressed to achieve widespread adoption. 
High drilling costs and technical risks associated with enhanced geothermal systems (EGS) development have historically deterred investment. (The report

Read More »

Cerebras Unveils Six Data Centers to Meet Accelerating Demand for AI Inference at Scale

6 Key Adjacent Data Center Industry Developments in Light of Cerebras's New AI Acceleration Data Center Expansion

Cerebras Systems' announcement of six new U.S. data center sites dedicated to AI acceleration has sent ripples across the data center and AI industries. As the demand for AI compute capacity continues to surge, this move underscores the growing importance of specialized infrastructure to support next-generation workloads. Here are six important adjacent and competitive developments in the data center industry that are shaping the landscape in light of Cerebras's expansion.

1. Hyperscalers Doubling Down on AI-Optimized Data Centers
Major cloud providers like Google, AWS, and Microsoft Azure are rapidly expanding their AI-optimized data center footprints. These hyperscalers are investing heavily in GPU- and TPU-rich facilities to support generative AI, large language models (LLMs), and machine learning workloads. Cerebras's move highlights the competitive pressure on hyperscalers to deliver low-latency, high-performance AI infrastructure.

2. Specialized AI Hardware Ecosystems Gaining Traction
Cerebras's Wafer-Scale Engine (WSE) technology is part of a broader trend toward specialized AI hardware. Competitors like NVIDIA (with its Grace Hopper Superchips and DGX systems) and AMD (with its Instinct MI300 series) are also pushing the envelope in AI acceleration. This arms race is driving demand for data centers designed to accommodate these unique architectures, including advanced cooling and power delivery systems.

3. Liquid Cooling Adoption Accelerates
The power density of AI workloads is forcing data center operators to rethink cooling strategies. Cerebras's systems, known for their high compute density, will likely require liquid cooling solutions. This aligns with industry-wide adoption of liquid cooling technologies by companies like Equinix, Digital Realty, and EdgeConneX to support AI and HPC workloads efficiently.

4. Regional Data Center Expansion for AI Workloads
Cerebras's choice to establish six new U.S. sites reflects a growing trend toward regional data center expansion to meet AI

Read More »

Jio teams with AMD, Cisco and Nokia to build AI-enabled telecom platform

Its accessible and flexible design will allow telecom providers across the globe to add AI capabilities to various parts of their infrastructure, including radio access networks (RAN), security systems, data centers, and network routing. This will make networks more manageable and responsive to issues, resulting in a smoother user experience. Jio will be the first to try out the new platform, in hopes of setting an example for other telecom providers around the world. According to Jio, the goal is to create a model that others can easily follow and put into action going forward. Oommen expects the first innovation from this alliance to come in this calendar year. For Cisco, the alliance is a continuation of a strategy put in place several years ago to be more interoperable and open. Historically, Cisco has been accused of being closed and proprietary, and of keeping customers locked into its products. While I think the competitive chatter was far greater than the reality, there was some truth to it. This is something Jeetu Patel, who took on the role of chief product officer last year, has changed. "Over the past three to four years, we have made a huge amount of progress in this area. We have partnerships with Microsoft in security; our collaboration products work with Microsoft Teams and Zoom; our XDR takes in telemetry from competitors; and we have partnerships with multiple AI companies," Patel told me. The willingness to be open is something all vendors should embrace: openness creates better competition, drives innovation and brings costs down. The democratization of AI creates a better world, much as making the Internet ubiquitous did. The Open Telecom AI Platform allows more companies to participate in AI and share in the upside, creating a "rising tide" that will fundamentally transform

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn't the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft's capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith's claim that the company will invest $80 billion in the fiscal year ending June 30, 2025. Both figures, though, are far higher than Microsoft's 2020 capital expenditure of "just" $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of the AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet it has become a regular among non-tech companies showing off technology at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren't enough skilled farm laborers to do the work that its customers need. It's been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere's autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can't find labor to fill open positions, he said. "They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize in their AI strategy this year.

1. Agents: the next generation of automation
AI agents are no longer theoretical. In 2025, they're indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as the frontier large language models themselves improved. "Let me put it this way," said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for enterprises and recently reviewed the 48 agents it built last year. "Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better." Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they're also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we'll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams' advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, "OpenAI's Approach to External Red Teaming for AI Models and Systems," reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model, because in-house testing techniques may have missed them. In the second paper, "Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning," OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends
It's encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI and the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI's paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see whether knowledgeable external teams can defeat models' security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn't find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »