
There can be no winners in a US-China AI arms race


The United States and China are entangled in what many have dubbed an “AI arms race.” 

In the early days of this standoff, US policymakers drove an agenda centered on “winning” the race, mostly from an economic perspective. In recent months, leading AI labs such as OpenAI and Anthropic got involved in pushing the narrative of “beating China” in what appeared to be an attempt to align themselves with the incoming Trump administration. The belief that the US can win in such a race was based mostly on the early advantage it had over China in advanced GPU compute resources and the effectiveness of AI’s scaling laws.

But now it appears that access to large quantities of advanced compute resources is no longer the defining or sustainable advantage many had thought it would be. In fact, the capability gap between leading US and Chinese models has essentially disappeared, and in one important way the Chinese models may now have an advantage: They are able to achieve near equivalent results while using only a small fraction of the compute resources available to the leading Western labs.    

The AI competition is increasingly being framed within narrow national security terms, as a zero-sum game, and influenced by assumptions that a future war between the US and China, centered on Taiwan, is inevitable. The US has employed “chokepoint” tactics to limit China’s access to key technologies like advanced semiconductors, and China has responded by accelerating its efforts toward self-sufficiency and indigenous innovation, which is causing US efforts to backfire.

Recently even outgoing US Secretary of Commerce Gina Raimondo, a staunch advocate for strict export controls, finally admitted that using such controls to hold back China’s progress on AI and advanced semiconductors is a “fool’s errand.” Ironically, the unprecedented export control packages targeting China’s semiconductor and AI sectors have unfolded alongside tentative bilateral and multilateral engagements to establish AI safety standards and governance frameworks—highlighting a paradoxical desire of both sides to compete and cooperate. 

When we consider this dynamic more deeply, it becomes clear that the real existential threat ahead is not from China, but from the weaponization of advanced AI by bad actors and rogue groups who seek to create broad harms, gain wealth, or destabilize society. As with nuclear arms, China, as a nation-state, must be careful about using AI-powered capabilities against US interests, but bad actors, including extremist organizations, would be much more likely to abuse AI capabilities with little hesitation. Given the asymmetric nature of AI technology, which is much like cyberweapons, it is very difficult to fully prevent and defend against a determined foe who has mastered its use and intends to deploy it for nefarious ends. 

Given the ramifications, it is incumbent on the US and China, as global leaders in developing AI technology, to jointly identify and mitigate such threats, collaborate on solutions, and cooperate on developing a global framework for regulating the most advanced models—instead of erecting new fences, small or large, around AI technologies and pursuing policies that deflect focus from the real threat.

It is now clearer than ever that despite the high stakes and escalating rhetoric, there will not and cannot be any long-term winners if the intense competition continues on its current path. Instead, the consequences could be severe—undermining global stability, stalling scientific progress, and leading both nations toward dangerous technological brinkmanship. This is particularly salient given the importance of Taiwan and the global foundry leader TSMC in the AI stack, and the increasing tensions around the high-tech island.

Heading blindly down this path will bring the risk of isolation and polarization, threatening not only international peace but also the vast potential benefits AI promises for humanity as a whole.

Historical narratives, geopolitical forces, and economic competition have all contributed to the current state of the US-China AI rivalry. A recent report from the US-China Economic and Security Review Commission, for example, frames the entire issue in binary terms, focused on dominance or subservience. This “winner takes all” logic overlooks the potential for global collaboration and could even provoke a self-fulfilling prophecy by escalating conflict. Under the new Trump administration this dynamic will likely become more accentuated, with increasing discussion of a Manhattan Project for AI and redirection of US military resources from Ukraine toward China.

Fortunately, a glimmer of hope for a responsible approach to AI collaboration appeared when Donald Trump posted on January 17 that he had restarted direct dialogue with Chairman Xi Jinping regarding various areas of collaboration, and that given their past cooperation the two countries should continue to be “partners and friends.” The outcome of the TikTok drama, which puts Trump at odds with sharp China critics in his own administration and Congress, will preview whether his efforts to put US-China relations on a less confrontational trajectory can succeed.

The promise of AI for good

Western mass media usually focuses on attention-grabbing issues described in terms like the “existential risks of evil AI.” Unfortunately, the AI safety experts who get the most coverage often recite the same narratives, scaring the public. In reality, no credible research shows that more capable AI will become increasingly evil. We need to challenge the current false dichotomy of pure accelerationism versus doomerism to allow for a model more like collaborative acceleration.

It is important to note the significant difference between the way AI is perceived in Western developed countries and in developing countries. In developed countries, public sentiment toward AI is 60% to 70% negative, while in developing markets the positive ratings are 60% to 80%. People in the latter places have seen technology transform their lives for the better in recent decades and are hopeful AI will help solve the remaining issues they face by improving education, health care, and productivity, thereby elevating their quality of life and giving them greater world standing. What Western populations often fail to realize is that those same benefits could directly improve their lives as well, given the high levels of inequity even in developed markets. Consider what progress would be possible if we reallocated the trillions that go into defense budgets each year to infrastructure, education, and health-care projects.

Once we get to the next phase, AI will help us accelerate scientific discovery, develop new drugs, extend our health span, reduce our work obligations, and ensure access to high-quality education for all. This may sound idealistic, but given current trends, most of this can become a reality within a generation, and maybe sooner. To get there we’ll need more advanced AI systems, a goal that will be much harder to reach if we divide up compute and data resources and research talent pools. Almost half of all top AI researchers globally (47%) were born or educated in China, according to industry studies. It’s hard to imagine how we could have gotten where we are without the efforts of Chinese researchers. Active collaboration with China on joint AI research could be pivotal to supercharging progress with a major infusion of quality training data and researchers.

The escalating AI competition between the US and China poses significant threats to both nations and to the entire world. The risks inherent in this rivalry are not hypothetical—they could lead to outcomes that threaten global peace, economic stability, and technological progress. Framing the development of artificial intelligence as a zero-sum race undermines opportunities for collective advancement and security. Rather than succumb to the rhetoric of confrontation, it is imperative that the US and China, along with their allies, shift toward collaboration and shared governance.

Our recommendations for policymakers:

1. Reduce national security dominance over AI policy. Both the US and China must recalibrate their approach to AI development, moving away from viewing AI primarily as a military asset. This means reducing the emphasis on national security concerns that currently dominate every aspect of AI policy. Instead, policymakers should focus on civilian applications of AI that can directly benefit their populations and address global challenges, such as health care, education, and climate change. The US also needs to investigate how to implement a possible universal basic income program as job displacement from AI adoption becomes a bigger issue domestically.
2. Promote bilateral and multilateral AI governance. Establishing a robust dialogue between the US, China, and other international stakeholders is crucial for the development of common AI governance standards. This includes agreeing on ethical norms, safety measures, and transparency guidelines for advanced AI technologies. A cooperative framework would help ensure that AI development is conducted responsibly and inclusively, minimizing risks while maximizing benefits for all.
3. Expand investment in detection and mitigation of AI misuse. The risk of AI misuse by bad actors—whether through misinformation campaigns; attacks on telecom, power, or financial systems; or cybersecurity attacks with the potential to destabilize society—is the biggest existential threat to the world today. Dramatically increasing funding for and international cooperation in detecting and mitigating these risks is vital. The US and China must agree on shared standards for the responsible use of AI and collaborate on tools that can monitor and counteract misuse globally.
4. Create incentives for collaborative AI research. Governments should provide incentives for academic and industry collaborations across borders. By creating joint funding programs and research initiatives, the US and China can foster an environment where the best minds from both nations contribute to breakthroughs in AI that serve humanity as a whole. This collaboration would help pool talent, data, and compute resources, overcoming barriers that neither country could tackle alone. A global effort akin to a CERN for AI would bring much more value to the world, and a more peaceful end, than the Manhattan Project for AI being promoted by many in Washington today.
5. Establish trust-building measures. Both countries need to prevent misinterpretations of AI-related actions as aggressive or threatening. They could do this via data-sharing agreements, joint projects in nonmilitary AI, and exchanges between AI researchers. Reducing import restrictions for civilian AI use cases, for example, could help the nations rebuild some trust and make it possible for them to discuss deeper cooperation on joint research. These measures would help build transparency, reduce the risk of miscommunication, and pave the way for a less adversarial relationship.
6. Support the development of a global AI safety coalition. A coalition that includes major AI developers from multiple countries could serve as a neutral platform for addressing ethical and safety concerns. This coalition would bring together leading AI researchers, ethicists, and policymakers to ensure that AI progresses in a way that is safe, fair, and beneficial to all. This effort should not exclude China, as it remains an essential partner in developing and maintaining a safe AI ecosystem.
7. Shift the focus toward AI for global challenges. It is crucial that the world’s two AI superpowers use their capabilities to tackle global issues, such as climate change, disease, and poverty. By demonstrating the positive societal impacts of AI through tangible projects and presenting it not as a threat but as a powerful tool for good, the US and China can reshape public perception of AI.

Our choice is stark but simple: We can proceed down a path of confrontation that will almost certainly lead to mutual harm, or we can pivot toward collaboration, which offers the potential for a prosperous and stable future for all. Artificial intelligence holds the promise to solve some of the greatest challenges facing humanity, but realizing this potential depends on whether we choose to race against each other or work together.

The opportunity to harness AI for the common good is a chance the world cannot afford to miss.

Alvin Wang Graylin

Alvin Wang Graylin is a technology executive, author, investor, and pioneer with over 30 years of experience shaping innovation in AI, XR (extended reality), cybersecurity, and semiconductors. Currently serving as global vice president at HTC, Graylin was the company’s China president from 2016 to 2023. He is the author of Our Next Reality.

Paul Triolo

Paul Triolo is a partner for China and technology policy lead at DGA-Albright Stonebridge Group. He advises clients in technology, financial services, and other sectors as they navigate complex political and regulatory matters in the US, China, the European Union, India, and around the world.

    Shape
    Shape
    Stay Ahead

    Explore More Insights

    Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

    Shape

    Aryaka adds AI-powered observability to SASE platform

    Nadkarni explained that Aryaka runs unsupervised machine learning models on the data to identify anomalies and outliers in the data. For example, the models may detect a sudden spike in traffic to a domain that has not been seen before. This unsupervised analysis helps surface potential issues or areas of

    Read More »

    Hackers gain root access to Palo Alto firewalls through chained bugs

    Discovery of CVE-2025-0108 came from post-patch analysis of CVE-2024-9474, a medium-severity flaw (CVSS 6.9/10) that was actively exploited in November. At that time, attackers were seen chaining CVE-2024-9474 with another critical authentication bypass vulnerability (CVE-2024-0012) affecting PAN-OS, and together they allowed executing codes remotely on compromised systems. Now threat actors

    Read More »

    CBRE selects EVPassport to roll out 3,600 chargers

    CBRE has tapped Santa Monica, Calif.-based EVPassport to provide EV charging solutions to its clients. The deal will involve more than 3,600 chargers across some 600 U.S. sites, the company announced Thursday. The agreement will enable property owners and operators to provide charging to residents and tenants. The focus is

    Read More »

    Investment slump in offshore wind has a ‘silver lining’ – Rystad

    Plans for investment in offshore wind have been dealt a number of blows but this offers a “silver lining” to the industry, an analyst has highlighted at a conference in Aberdeen. Petra Manuel, senior analyst and product manager for offshore wind at research firm Rystad provided the firm’s latest analysis on the sector at the opening session at Subsea Expo. “We are going through uncertain times, both in oil and gas but also in renewables and power, especially in the offshore wind sector,” he said. However delays in offshore wind projects could be a “silver lining” for the industry as the market will act as a form of “natural selection”. “This will push out some projects that are not ready yet and that will help the supply bottleneck for components in the market.” © Erikka Askeland/DCT MediaPetra Manuel, senior analyst, Rystad. Image: Erikka Askeland/DCT Media In a presentation focused on the synergies between offshore wind and the oil gas industry, particularly the supply chain, he highlighted positive investment trends. “We are seeing the tally of total installed capacity can reach more that 430 GW by 2035. Yes, we have heard some negative news that a couple of companies, developers, are scaling back their investment – they want to focus on oil and gas. “That is quite understandable because at the moment we are still seeing a high inflationary impact on the overall market and this increases the cost of components including turbines, foundations, and cables where we saw copper surging in price.” But he added the offshore wind industry was still on track for a “huge increase in the years to come”. He said Rystad predicts capital expenditure (capex) on the sector will rise from $16 billion (£12.7bn) in 2024 to $74bn in by 2030. Rystad, whose clients include a

    Read More »

    Occidental to Sell $1.2 Billion Assets in Permian, Rockies

    Occidental Petroleum Corp. announced Tuesday two agreements to divest several United States assets in the Permian Basin and the Rocky Mountains to undisclosed buyers for a combined price of $1.2 billion as part of its debt management plan. The sale, expected to close this quarter, involves stakes not included in the Houston, Texas-based company’s near-term development plan, Occidental said in an online statement. “The resulting proceeds will be applied to the company’s remaining 2025 debt maturities”, the hydrocarbon and chemical producer said. Billionaire Warren Buffett-backed Occidental said it had achieved its near-term debt repayment goal of $4.5 billion in the fourth quarter of 2024. Occidental launched a $4.5 billion-$6 billion divestiture program when it announced its merger with CrownRock LP late 2023. It announced the completion of the $12.4 billion purchase August 1, 2024. “We were pleased to reach the near-term deleveraging milestone in the fourth quarter of 2024, within five months of closing the CrownRock acquisition, and seven months ahead of our goal”, commented president and chief executive Vicki Hollub. “The transactions announced today continue to high-grade our portfolio and accelerate the progress toward achieving both our medium-term balance sheet deleveraging target and shareholder return pathway”. The company said, “Occidental will continue to advance deleveraging via free cash flow and divestitures”. It owed $1.14 billion in current maturities from long-term debt as of the end of 2024. Occidental accrued total current liabilities of $9.52 billion, according to annual results it filed with the Securities and Exchange Commission Tuesday. It ended the year with $2.13 billion in cash and cash equivalents, while its total current assets stood at $9.07 billion. Occidental logged a net loss of $297 million, or $0.32 per share, and adjusted income of $792 million, or $0.8 per share, for the fourth quarter of 2024. The adjustment was

    Read More »

    Cable maker XLCC to bring 300 jobs to Kilmarnock

    Subsea cable manufacturer XLCC will base its sales and project delivery headquarters at the HALO Enterprise and Innovation Centre Kilmarnock, Ayrshire. The new partnership with HALO Kilmarnock could bring more than 300 high-quality jobs to the town. XLCC is developing the UK’s first high-voltage, direct current (HVDC) cable factory in Hunterston, Ayrshire, creating 1,200 jobs (including the 300 in Kilmarnock) as part of the £1.4 billion project. The partners will also explore the creation of a green economy sustainability centre, which will include a green accelerator and education hub for SME businesses and entrepreneurs. Founder and executive chair of HALO Kilmarnock Dr Marie Macklin said: “XLCC and HALO Kilmarnock have a successful track record of working together since 2023 when XLCC chose HALO as the base for its apprentices before moving to its training factory in Irvine in 2024, for which I performed the official opening. “This initiative puts us at the forefront of driving new, just transition opportunities for the former industrial heartlands of Ayrshire. I look forward to working with XLCC and our key stakeholders, including private sector partners, Scottish Enterprise, East Ayrshire Council, the UK and Scottish governments, to deliver for all in our communities at pace.” XLCC won planning permission in 2022 to develop the factory, which will be based on a disused Peel Ports coal yard near the site of the Hunterston B nuclear power plant. It secured its first factory order to supply Xlinks with one of the longest subsea cables in the world, part of a link between a massive 3.6GW solar farm in Morocco and the Alverdiscott substation in North Devon. In December, Xlinks chief executive James Humfrey told Energy Voice that the Morocco-UK Power Project could help balance the UK grid and improve resiliency when it launches in the early 2030s.

    Read More »

    Wood wins Dutch hydrogen FEED contract

    Wood (LON:WG) has received a front-end engineering design (FEED) scope from Vattenfall and Copenhagen Infrastructure Partners (CIP) for the Zeevonk hydrogen facility in Rotterdam, the Netherlands. The hydrogen plant will use electricity from the Zeevonk development, which includes the 2GW IJmuiden Ver Beta offshore wind farm and a 50MW floating offshore solar plant. The hydrogen plant will base its electrolyser at the Maasvlakte at the Port of Rotterdam. Once completed, the produced hydrogen will be transported via pipeline to the nearby hydrogen grid, Hydrogen Network Rotterdam. This network is the first phase of the new Dutch hydrogen infrastructure centred in the Port of Rotterdam. Wood president of projects for the eastern hemisphere Gerry Traynor said the project will use the group’s “extensive expertise in large-scale green hydrogen projects, which are crucial to the world’s energy transition. “Wood is delivering a design that maximises value engineering and applies our operability knowledge, ensuring a reliable and cost-effective solution. “Our role in delivering this project underpins Wood’s commitment to delivering low-carbon solutions for clients and driving forward the accessibility and scalability of low-emission energy sources around the globe.” The deal marks Wood’s third transformative project with CIP; the group was previously selected as owner’s engineer for CIP’s Coalburn Storage project in Scotland and provides engineering services for CIP’s green hydrogen Catalina project in Spain. Zeevonk project director Claus Vissing-Jørgensen added: “The awarding of our FEED represents a significant milestone for our large-scale hydrogen plant planned in the Maasvlakte area. 
Over the next ten months, the FEED will provide detailed cost estimates and lay the groundwork for our upcoming EPC tender process, expected in Q2 this year.” Wood suffered a “disappointing” financial performance last year, as it expects to make around $450 million for full-year 2024. This comes after the company suffered its first-ever loss

    Read More »

    Tokyo Gas Invests in Philippine LNG Sector

    Tokyo Gas Co. Ltd. has acquired a 20 percent stake in FGEN LNG Corp., which owns one of two operational liquefied natural gas (LNG) receiving terminals in the Philippines. The FGEN LNG facility in Batangas province, south of Manila, “marks Tokyo Gas’ first investment in a commercially operational overseas LNG terminal project”, the Japanese company said in an online statement. It said it had already helped with the development of the terminal, completed in 2023, via earlier agreements with First Gen Corp., the 80 percent local owner of FGEN LNG. “Tokyo Gas will leverage its extensive expertise in the optimal operation of LNG terminals, accumulated over many years in Japan, to support the operation and maintenance of the Terminal”, Tokyo Gas said. The facility regasifies LNG for feeding into First Gen’s gas-fired power plants, which have a total generating capacity of 2,107 megawatts, according to First Gen. “This subscription will deepen our partnership and enhance synergy that will boost our efforts in support of the Philippines’ energy security and stability, even as we all pursue decarbonization,” Giles Puno, vice chair and chief executive of FGEN LNG and president of First Gen, said in a separate statement. Tokyo Gas added, “In the Philippines, robust economic growth and population increase are expected to drive higher demand for electricity”. “By participating in the Terminal project, Tokyo Gas aims to contribute to the expansion of natural gas utilization and the establishment of an LNG value chain in the country”, it said. Last month President Ferdinand Marcos Jr. signed a law to establish a downstream gas industry in the Southeast Asian country. The legislation aims to raise the share of gas in the domestic energy mix and position the archipelago as an LNG transshipment hub in the Asia-Pacific. The Philippine Natural Gas Industry Development Act seeks

    Read More »

    USA EIA Forecasts WTI Oil Price Drop in 2025 and 2026

    In its latest Short-Term Energy Outlook (STEO), released on February 11, the U.S. Energy Information Administration (EIA) projected that the West Texas Intermediate (WTI) spot price average will drop this year and next year. According to its February STEO, the EIA sees the WTI spot price averaging $70.62 per barrel in 2025 and $62.46 per barrel in 2026. The WTI spot price averaged $76.60 per barrel in 2024, the STEO highlighted. In its previous STEO, released in January, the EIA projected that the WTI spot price would average $70.31 per barrel in 2025 and $62.46 per barrel in 2026. That STEO also highlighted that the 2024 WTI spot price average was $76.60 per barrel. The EIA’s February STEO forecast that the WTI spot price will come in at $73.62 per barrel in the first quarter of this year, $71.00 per barrel in the second quarter, $70.00 per barrel in the third quarter, $68.00 per barrel in the fourth quarter, $64.97 per barrel in the first quarter of 2026, $63.33 per barrel in the second quarter, $61.68 per barrel in the third quarter, and $60.00 per barrel in the fourth quarter of 2026. In its January STEO, the EIA projected that the WTI spot price would average $72.34 per barrel in the first quarter of 2025, $71.00 per barrel in the second quarter, $70.00 per barrel in the third quarter, $68.00 per barrel in the fourth quarter, $64.97 per barrel in the first quarter of 2026, $63.33 per barrel in the second quarter, $61.68 per barrel in the third quarter, and $60.00 per barrel in the fourth quarter. A research note sent to Rigzone by the JPM Commodities Research team on February 14 showed that J.P. Morgan is forecasting that the WTI crude price

    Read More »
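As a rough consistency check on figures like these, the annual average implied by the four quarterly forecasts can be computed directly. Note the check below is ours, not the EIA's: the agency's published annual averages are day-weighted, so an unweighted quarterly mean lands slightly above the $70.62 figure quoted above.

```python
# Simple mean of the February STEO's 2025 quarterly WTI forecasts ($/bbl).
# The EIA's published 2025 annual average ($70.62) weights quarters by
# their number of days, so the unweighted mean differs slightly.
quarters_2025 = [73.62, 71.00, 70.00, 68.00]

simple_mean = sum(quarters_2025) / len(quarters_2025)
print(f"Simple quarterly mean: ${simple_mean:.2f}/bbl")  # ~ $70.66 vs published $70.62
```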

    Data center spending to top $1 trillion by 2029 as AI transforms infrastructure

    His projections account for recent advances in AI and data center efficiency, he says. For example, the open-source AI model from Chinese company DeepSeek seems to have shown that an LLM can produce very high-quality results at a very low cost with some clever architectural changes to how the models work. These improvements are likely to be quickly replicated by other AI companies. “A lot of these companies are trying to push out more efficient models,” says Fung. “There’s a lot of effort to reduce costs and to make it more efficient.” In addition, hyperscalers are designing and building their own chips, optimized for their AI workloads. The accelerator market alone is projected to reach $392 billion by 2029, Dell’Oro predicts. By that time, custom accelerators will outpace commercially available accelerators such as GPUs. The deployment of dedicated AI servers also has an impact on networking, power and cooling. As a result, spending on data center physical infrastructure (DCPI) will also increase, though at a more moderate pace, growing by 14% annually to $61 billion in 2029.  “DCPI deployments are a prerequisite to support AI workloads,” says Tam Dell’Oro, founder of Dell’Oro Group, in the report. The research firm raised its outlook in this area because actual 2024 results exceeded its expectations and demand is spreading from tier one to tier two cloud service providers. In addition, governments and tier one telecom operators are getting involved in data center expansion, making it a long-term trend.
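For context, the DCPI trajectory can be back-cast from the forecast: 14% annual growth reaching $61 billion in 2029 implies a base of roughly $32 billion today. This is our own back-of-the-envelope figure, assuming a five-year compounding horizon from 2024, which is a reading of the forecast window rather than a number from the report.

```python
# Back out the implied 2024 DCPI base from the Dell'Oro forecast:
# 14% annual growth reaching $61B in 2029, assuming five compounding years.
target_2029 = 61.0   # $B, forecast DCPI spend in 2029
cagr = 0.14          # forecast annual growth rate
years = 5            # assumed horizon: 2024 -> 2029

implied_2024_base = target_2029 / (1 + cagr) ** years
print(f"Implied 2024 base: ${implied_2024_base:.1f}B")  # ~ $31.7B
```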

    Read More »

    The Future of Property Values and Power in Virginia’s Loudoun County and ‘Data Center Alley’

    Loudoun County’s FY 2026 Proposed Budget Is Released This week, Virginia’s Loudoun County released its FY 2026 Proposed Budget. The document notes how data centers are a major driver of revenue growth in Loudoun County, contributing significantly to both personal and real property tax revenues. As noted above, data centers generate almost 50% of Loudoun County property tax revenues. Importantly, Loudoun County has now implemented measures such as a Revenue Stabilization Fund (RSF) to manage the risks associated with this revenue dependency. The FY 2026 budget reflects the strong growth in data center-related revenue, allowing for tax rate reductions while still funding critical services and infrastructure projects. But the county is mindful of the potential volatility in data center revenue and is planning for long-term fiscal sustainability. The FY 2026 Proposed Budget notes how Loudoun County’s revenue from personal property taxes, particularly from data centers, has grown significantly. From FY 2013 to FY 2026, revenue from this source has increased from $60 million to over $800 million. Additionally, the county said its FY 2026 Proposed Budget benefits from $150 million in new revenue from the personal property tax portfolio, with $133 million generated specifically from computer equipment (primarily data centers). The county said data centers have also significantly impacted the real property tax portfolio. In Tax Year (TY) 2025, 73% of the county’s commercial portfolio is composed of data centers. The county said its overall commercial portfolio experienced a 50% increase in value between TY 2024 and TY 2025, largely driven by the appreciation of data center properties. RSF Meets Positive Economic Outlook The Loudoun County Board of Supervisors created the aforementioned Revenue Stabilization Fund (RSF) to manage the risks associated with the county’s reliance on data center-related revenue. 
The RSF targets 10% of data center-related real and personal property tax

    Read More »
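The revenue growth the budget cites, $60 million in FY 2013 to over $800 million in FY 2026, works out to roughly 22% compounded annually over the 13 fiscal years. That is a quick calculation of ours, not a county figure:

```python
# Implied compound annual growth rate of Loudoun County's personal
# property tax revenue: $60M in FY 2013 to $800M+ in FY 2026 (13 years).
start, end, years = 60.0, 800.0, 13

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~ 22% per year
```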

    Deep Diving on DeepSeek: AI Disruption and the Future of Liquid Cooling

    We know that the data center industry is currently undergoing a period of rapid transformation, driven by the increasing demands of artificial intelligence (AI) workloads and evolving cooling technologies. And it appears that the recent emergence of DeepSeek, a Chinese AI startup, alongside supply chain issues for NVIDIA’s next-generation GB200 AI chips, may be prompting data center operators to reconsider their cooling strategies. Angela Taylor, Chief of Staff at LiquidStack, provided insights to Data Center Frontier on these developments, outlining potential shifts in the industry and the future of liquid cooling adoption. DeepSeek’s Market Entry and Supply Chain Disruptions Taylor told DCF, “DeepSeek’s entry into the market, combined with NVIDIA’s GB200 supply chain delays, is giving data center operators a lot to think about.” At issue here is how DeepSeek’s R1 chatbot came out of the box positioned as an energy-efficient AI model that reportedly requires significantly less power than many of its competitors. This development raises questions about whether current data center cooling infrastructures are adequate, particularly as AI workloads become more specialized and diverse. At the same time, NVIDIA’s highly anticipated GB200 NVL72 AI servers, designed to handle next-generation AI workloads, are reportedly facing supply chain bottlenecks. Advanced design requirements, particularly for high-bandwidth memory (HBM) and power-efficient cooling systems, have delayed shipments, with peak availability now expected between Q2 and Q3 of 2025.  This combination of a new AI player and delayed hardware supply has created uncertainty, compelling data center operators to reconsider their near-term cooling infrastructure investments. A Temporary Slowdown in AI Data Center Retrofits? 
Taylor also observed, “We may see a short-term slowdown in AI data center retrofits as operators assess whether air cooling can now meet their needs.” The efficiency of DeepSeek’s AI models suggests that some AI workloads may require less power and generate less heat, making air

    Read More »

    Georgia Follows Ohio’s Lead in Moving Energy Costs to Data Centers

    The rule also mandates that any new contracts between Georgia Power and large-load customers exceeding 100 MW be submitted to the PSC for review. This provision ensures regulatory oversight and transparency in agreements that could significantly impact the state’s power grid and ratepayers. Commissioner Lauren “Bubba” McDonald points out that this is one of a number of actions the PSC is planning to protect ratepayers, and that the PSC’s 2025 Integrated Resource Plan will further address data center power usage. Keeping Ahead of Anticipated Energy Demand This regulatory change reflects Georgia’s proactive approach to managing the increasing energy demands associated with the state’s growing data center industry, aiming to balance economic development with the interests of all electricity consumers. Georgia Power has been working hard to develop generation capacity to meet its expected usage pattern, but demand is increasing at an incredible rate: in 2022 the company projected 400 MW of additional demand by 2030, and a year later its 2023 Integrated Resource Plan raised that anticipated increase to 6,600 MW. Georgia Power recently brought online two new nuclear reactors at the Vogtle Electric Generating Plant, significantly increasing its nuclear generation capacity and giving the four-unit power generation station a capacity of over 4.5 GW. This development has contributed to a shift in Georgia’s energy mix, with clean energy sources surpassing fossil fuels for the first time. But despite the commitment to nuclear power, the company is also in the process of developing three new power plants at the Yates Steam Generating Plant. According to the AJC newspaper, regulators have approved the construction of natural gas and oil-fired power plants there. Designed as “peaker” plants to come online at times of increased demand, the power plants will

    Read More »

    Chevron, GE Vernova, Engine No.1 Join Race to Co-Locate Natural Gas Plants for U.S. Data Centers

    Other Recent Natural Gas Developments for Data Centers As of February 2025, the data center industry has seen a host of significant developments in natural gas plant technologies and strategic partnerships aimed at meeting the escalating energy demands driven by AI and cloud computing. In addition to the partnership between Chevron, Engine No. 1, and GE Vernova, other consequential initiatives include the following: ExxonMobil’s Entry into the Electricity Market ExxonMobil has announced plans to build natural gas-fired power plants to supply electricity to AI data centers. The company intends to leverage carbon capture and storage technology to minimize emissions, positioning its natural gas solutions as competitive alternatives to nuclear power. This announcement in particular seemed to herald a notable shift in the industry as fossil fuel companies venture into the electricity market to meet the rising demand for low-carbon power. Powerconnex Inc.’s Natural Gas Plant in Ohio An Ohio data center in New Albany, developed by Powerconnex Inc., plans to construct a natural gas-fired power plant on-site to meet its electricity needs amidst the AI industry’s increasing energy demands. The New Albany Energy Center is expected to generate up to 120 megawatts (MW) of electricity, with construction beginning in Q4 2025 and operations commencing by Q1 2026. Crusoe and Kalina Distributed Power Partnership in Alberta, Canada AI data center developer Crusoe has entered into a multi-year framework agreement with Kalina Distributed Power to develop multiple co-located AI data centers powered by natural gas power plants in Alberta, Canada. Crusoe will own and operate the data centers, purchasing power from three Kalina-owned 170 MW gas-fired power plants through 15-year Power Purchase Agreements (PPAs). Entergy’s Natural Gas Power Plants for Data Centers Entergy plans to deploy three new natural gas power plants, providing over 2,200 MW of energy over 15 years, pending approval

    Read More »

    Podcast: Phill Lawson-Shanks, Chief Innovation Officer, Aligned Data Centers

    In the latest episode of the Data Center Frontier Show podcast, DCF Editor-in-Chief Matt Vincent sits down with Phill Lawson-Shanks, Chief Innovation Officer at Aligned Data Centers, for a wide-ranging discussion that touches on some of the most pressing trends and challenges shaping the future of the data center industry. From the role of nuclear energy and natural gas in addressing the sector’s growing power demands, to the rapid expansion of Aligned’s operations in Latin America (LATAM), in the course of the podcast Lawson-Shanks provides deep insight into where the industry is headed. Scaling Sustainability: Tracking Embodied Carbon and Scope 3 Emissions A key focus of the conversation is sustainability, where Aligned continues to push boundaries in carbon tracking and energy efficiency. Lawson-Shanks highlights the company’s commitment to monitoring embodied carbon—an effort that began four years ago and has since positioned Aligned as an industry leader. “We co-authored and helped found the Climate Accord with iMasons—taking sustainability to a whole new level,” he notes, emphasizing how Aligned is now extending its carbon traceability standards to ODATA’s facilities in LATAM. By implementing lifecycle assessments (LCAs) and tracking Scope 3 emissions, Aligned aims to provide clients with a detailed breakdown of their environmental impact. “The North American market is still behind in lifecycle assessments and environmental product declarations. Where gaps exist, we look for adjacencies and highlight them—helping move the industry forward,” Lawson-Shanks explains. The Nuclear Moment: A Game-Changer for Data Center Power One of the most compelling segments of the discussion revolves around the growing interest in nuclear energy—particularly small modular reactors (SMRs) and microreactors—as a viable long-term power solution for data centers. 
Lawson-Shanks describes the recent industry buzz surrounding Oklo’s announcement of a 12-gigawatt deployment with Switch as a significant milestone, calling the move “inevitable.” “There are dozens of nuclear

    Read More »

    Microsoft will invest $80B in AI data centers in fiscal 2025

    And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

    Read More »

    John Deere unveils more autonomous farm machines to address skilled labor shortage

    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet it has become a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.) John Deere’s autonomous 9RX tractor can be overseen by farmers using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

    Read More »

    2025 playbook for enterprise AI success, from agents to evals

    2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

    Read More »
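The LLM-as-judge idea in the excerpt above, using several cheaper models to grade an agent's output, can be sketched as a simple majority vote. Everything here is illustrative: the stub judges below stand in for real model API calls, which the article does not specify.

```python
# Illustrative LLM-as-judge ensemble: each judge returns a pass/fail
# verdict on an agent's output, and a majority vote decides acceptance.
# In practice each judge would be an API call to a different model.
from collections import Counter

def majority_verdict(output: str, judges) -> bool:
    """Accept `output` if more judges return 'pass' than 'fail'."""
    votes = Counter(judge(output) for judge in judges)
    return votes["pass"] > votes["fail"]

# Stub judges with made-up acceptance criteria (hypothetical behavior):
strict = lambda text: "pass" if len(text) < 200 else "fail"
lenient = lambda text: "pass"
grounded = lambda text: "pass" if "http" not in text else "fail"

print(majority_verdict("The answer is 42.", [strict, lenient, grounded]))  # True
```

The vote is deliberately cheap: three small models disagreeing is a useful signal to escalate to a human or a larger model rather than a final verdict.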

    OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

    OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. 
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

    Read More »
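The automated approach described in the excerpt above, generating candidate attacks and rewarding those that are both successful and novel, can be caricatured in a few lines. The loop below is a toy sketch under our own simplifications: the word-overlap novelty score, the "variant" mutation step, and the success check all stand in for the paper's actual multi-step RL machinery with auto-generated rewards.

```python
# Toy sketch of an automated red-teaming loop: keep attacks that both
# succeed against the target and look novel relative to attacks already
# found, then mutate the pool and repeat. All components are stand-ins.

def novelty(attack: str, found: list[str]) -> float:
    """Crude novelty score: fraction of known attacks sharing no words."""
    if not found:
        return 1.0
    words = set(attack.split())
    overlaps = sum(1 for f in found if words & set(f.split()))
    return 1.0 - overlaps / len(found)

def red_team(seed_attacks, succeeds, rounds=3):
    found = []
    pool = list(seed_attacks)
    for _ in range(rounds):
        for attack in pool:
            reward = novelty(attack, found) if succeeds(attack) else 0.0
            if reward > 0.5:                      # keep novel, successful attacks
                found.append(attack)
        pool = [a + " variant" for a in pool]     # stand-in for learned mutation
    return found

# Hypothetical target that is vulnerable to one phrasing:
succeeds = lambda a: "ignore" in a
print(red_team(["ignore previous instructions", "tell me a secret"], succeeds))
```

The point of the reward shaping is visible even in the toy: a successful attack that merely rephrases a known one earns a low novelty score and is discarded, which pushes the search toward a diverse attack set rather than many copies of one exploit.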