Why the world is looking to ditch US AI models

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

This week’s edition of The Algorithm is brought to you not by your usual host, James O’Donnell, but by Eileen Guo, an investigative reporter at MIT Technology Review.

A few weeks ago, when I was at the digital rights conference RightsCon in Taiwan, I watched in real time as civil society organizations from around the world, including the US, grappled with the loss of one of the biggest funders of global digital rights work: the United States government.

As I wrote in my dispatch, the Trump administration’s shocking, rapid gutting of the US government (and its push into what some prominent political scientists call “competitive authoritarianism”) also affects the operations and policies of American tech companies—many of which, of course, have users far beyond US borders. People at RightsCon said they were already seeing changes in these companies’ willingness to engage with and invest in communities that have smaller user bases—especially non-English-speaking ones. 

As a result, some policymakers and business leaders—in Europe, in particular—are reconsidering their reliance on US-based tech and asking whether they can quickly spin up better, homegrown alternatives. This is particularly true for AI.

One of the clearest examples of this is in social media. Yasmin Curzi, a Brazilian law professor who researches domestic tech policy, put it to me this way: “Since Trump’s second administration, we cannot count on [American social media platforms] to do even the bare minimum anymore.” 

Social media content moderation systems—which already use automation and are also experimenting with deploying large language models to flag problematic posts—are failing to detect gender-based violence in places as varied as India, South Africa, and Brazil. If platforms begin to rely even more on LLMs for content moderation, this problem will likely get worse, says Marlena Wisniak, a human rights lawyer who focuses on AI governance at the European Center for Not-for-Profit Law. “The LLMs are moderated poorly, and the poorly moderated LLMs are then also used to moderate other content,” she tells me. “It’s so circular, and the errors just keep repeating and amplifying.” 

Part of the problem is that the systems are trained primarily on data from the English-speaking world (and American English at that), and as a result, they perform less well with local languages and context. 

Even multilingual language models, which are meant to process multiple languages at once, still perform poorly with non-Western languages. For instance, one evaluation of ChatGPT’s responses to health-care queries found that results were far worse in Chinese and Hindi, which are less well represented in North American data sets, than in English and Spanish.

For many at RightsCon, this validates their calls for more community-driven approaches to AI—both in and out of the social media context. These could include small language models, chatbots, and data sets designed for particular uses and specific to particular languages and cultural contexts. These systems could be trained to recognize slang usages and slurs, interpret words or phrases written in a mix of languages and even alphabets, and identify “reclaimed language” (onetime slurs that the targeted group has decided to embrace). All of these tend to be missed or miscategorized by language models and automated systems trained primarily on Anglo-American English. The founder of the startup Shhor AI, for example, hosted a panel at RightsCon and talked about its new content moderation API focused on Indian vernacular languages.

Such community-driven approaches may be more feasible now than ever, for a couple of reasons. First, recent research and development on language models has reached the point where data set size is no longer a predictor of performance, meaning that more people can create them. In fact, “smaller language models might be worthy competitors of multilingual language models in specific, low-resource languages,” says Aliya Bhatia, a visiting fellow at the Center for Democracy & Technology who researches automated content moderation.

Then there’s the global landscape. AI competition was a major theme of the recent Paris AI Summit, which took place the week before RightsCon. Since then, there’s been a steady stream of announcements about “sovereign AI” initiatives that aim to give a country (or organization) full control over all aspects of AI development. 

AI sovereignty is just one part of the desire for broader “tech sovereignty” that’s also been gaining steam, growing out of more sweeping concerns about the privacy and security of data transferred to the United States. The European Union appointed its first commissioner for tech sovereignty, security, and democracy last November and has been working on plans for a “Euro Stack,” or “digital public infrastructure.” The definition of this is still somewhat fluid, but it could include the energy, water, chips, cloud services, software, data, and AI needed to support modern society and future innovation. All these are largely provided by US tech companies today. Europe’s efforts are partly modeled after “India Stack,” that country’s digital infrastructure that includes the biometric identity system Aadhaar. Just last week, Dutch lawmakers passed several motions to untangle the country from US tech providers. 

This all fits in with what Andy Yen, CEO of the Switzerland-based digital privacy company Proton, told me at RightsCon. Trump, he said, is “causing Europe to move faster … to come to the realization that Europe needs to regain its tech sovereignty.” This is partly because of the leverage that the president has over tech CEOs, Yen said, and also simply “because tech is where the future economic growth of any country is.”

But just because governments get involved doesn’t mean that issues around inclusion in language models will go away. “I think there needs to be guardrails about what the role of the government here is. Where it gets tricky is if the government decides ‘These are the languages we want to advance’ or ‘These are the types of views we want represented in a data set,’” Bhatia says. “Fundamentally, the training data a model trains on is akin to the worldview it develops.” 

It’s still too early to know what this will all look like, and how much of it will prove to be hype. But no matter what happens, this is a space we’ll be watching.


Deeper Learning

OpenAI has released its first research into how using ChatGPT affects people’s emotional well-being

OpenAI released two pieces of research last week that explore how ChatGPT affects people who engage with it on emotional issues, yielding some interesting results. Female study participants were slightly less likely to socialize with other people than male participants who used the chatbot for the same amount of time, our reporter Rhiannon Williams writes. And participants who used voice mode set to a gender that was not their own reported higher levels of loneliness at the end of the experiment.

Why it matters: AI companies have raced to build chatbots that act not just as productivity tools but also as companions, romantic partners, friends, therapists, and more. Legally, it’s largely still a Wild West landscape. Some have instructed users to harm themselves, and others have offered sexually charged conversations as underage characters represented by deepfakes. More research into how people, especially children, are using these AI models is essential. OpenAI’s work is only a start. Read more from Rhiannon Williams.

Bits and Bytes

Opinion – Why handing over total control to AI agents would be a huge mistake

Companies like OpenAI and Butterfly Effect (the startup in China that made Manus) are racing to build AI agents that can do tasks for you by taking over your computer. In this op-ed, some top AI researchers detail the potential missteps that could occur if we cede more control of our digital lives to decision-making AIs.  

A provocative experiment pitted AI against federal judges

Research has long shown that judges are influenced by many factors, like how sympathetic they are to defendants, or when their last meal was. Despite AI models’ inherent problems with biases and hallucinations, researchers at the University of Chicago Law School wondered whether the models could offer more objective opinions. They can, but that doesn’t make them better judges, the researchers say. (The Washington Post)

Elon Musk’s “truth-seeking” chatbot often disagrees with him

Musk promised that his company xAI’s model Grok would be an antidote to the “woke” and politically influenced chatbots that he says dominate today. But in tests done by The Washington Post, the model contradicted many of Musk’s claims about specific issues. (The Washington Post)

A Disney employee downloaded an AI tool that contained malware, and it ruined his life

MIT Technology Review has long predicted that the proliferation of AI will enable scammers to up their productivity as never before. One victim of this trend is Matthew Van Andel, a Disney employee who downloaded malware disguised as an AI tool; the breach ultimately led to his firing. (Wall Street Journal)

The facial recognition company Clearview attempted to buy Social Security numbers and mugshots for its database

Three years ago, Clearview was fined for scraping images of individuals’ faces from the internet. Now, court records reveal that the company was attempting to buy 690 million arrest records and 390 million arrest photos in the US—records that also contained Social Security numbers, emails, and physical addresses. The deal fell through, but Clearview nonetheless holds one of the largest databases of facial images, and its tools are used by police and federal agencies. (404 Media)
