Three reasons Meta will struggle with community fact-checking

Earlier this month, Mark Zuckerberg announced that Meta will cut back on its content moderation efforts and eliminate fact-checking in the US in favor of the more “democratic” approach that X (formerly Twitter) calls Community Notes, rolling back protections that he claimed had been developed only in response to media and government pressure.

The move is raising alarm bells, and rightly so. Meta has left a trail of moderation controversies in its wake, from overmoderating images of breastfeeding women to undermoderating hate speech in Myanmar, contributing to the genocide of Rohingya Muslims. Meanwhile, ending professional fact-checking creates the potential for misinformation and hate to spread unchecked.

Enlisting volunteers is how moderation started on the Internet, long before social media giants realized that centralized efforts were necessary. And volunteer moderation can be successful, allowing for the development of bespoke regulations aligned with the needs of particular communities. But without significant commitment and oversight from Meta, such a system cannot contend with how much content is shared across the company’s platforms, and how fast. In fact, the jury is still out on how well it works at X, which is used by 21% of Americans (Meta’s platforms are significantly more popular—Facebook alone is used by 70% of Americans, according to Pew).

Community Notes, which started in 2021 as Birdwatch, is a community-driven moderation system on X that allows users who sign up for the program to add context to posts. Having regular users provide public fact-checking is relatively new, and so far results are mixed. For example, researchers have found that participants are more likely to challenge content they disagree with politically and that flagging content as false does not reduce engagement, but they have also found that the notes are typically accurate and can help reduce the spread of misleading posts.

I’m a community moderator who researches community moderation. Here’s what I’ve learned about the limitations of relying on volunteers for moderation—and what Meta needs to do to succeed: 

1. The system will miss falsehoods and could amplify hateful content

There is a real risk under this style of moderation that only posts about things that a lot of people know about will get flagged in a timely manner—or at all. Consider how a post with a picture of a death cap mushroom and the caption “Tasty” might be handled under Community Notes–style moderation. If an expert in mycology doesn’t see the post, or sees it only after it’s been widely shared, it may not get flagged as “Poisonous, do not eat”—at least not until it’s too late. Topic areas that are more esoteric will be undermoderated. This could have serious impacts on both individuals (who may eat a poisonous mushroom) and society (if a falsehood spreads widely). 

Crucially, X’s Community Notes aren’t visible to readers when they are first added. A note becomes visible to the wider user base only when enough contributors agree that it is accurate by voting for it. And not all votes count. If a note is rated only by people who tend to agree with each other, it won’t show up. X does not make a note visible until there’s agreement from people who have disagreed on previous ratings. This is an attempt to reduce bias, but it’s not foolproof. It still relies on people’s opinions about a note and not on actual facts. Often what’s needed is expertise.
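To make that visibility rule concrete, here is a minimal Python sketch of the bridging idea. It is an illustration under stated assumptions, not X’s production system (the real Community Notes algorithm is open source and uses matrix factorization over the full rating history); the cluster assignment, rater counts, and thresholds here are hypothetical.

```python
from collections import defaultdict

# Toy "bridging" check: a note is surfaced only when raters who usually
# disagree with one another (here, members of different historical-voting
# clusters) independently find it helpful. This is a hypothetical
# simplification of Community Notes' matrix-factorization approach.

def surface_note(ratings, rater_cluster, min_per_cluster=5, min_helpful_frac=0.7):
    """ratings: list of (rater_id, rated_helpful) pairs for one note.
    rater_cluster: maps rater_id -> a cluster label derived from past votes."""
    helpful = defaultdict(int)
    total = defaultdict(int)
    for rater, rated_helpful in ratings:
        cluster = rater_cluster[rater]
        total[cluster] += 1
        helpful[cluster] += rated_helpful
    # Agreement inside a single like-minded cluster is not enough: every
    # cluster that rated the note must clear the helpfulness bar.
    return len(total) >= 2 and all(
        total[c] >= min_per_cluster and helpful[c] / total[c] >= min_helpful_frac
        for c in total
    )
```

Note what the sketch never checks: whether the note is factually correct. Raters’ opinions, however well distributed across clusters, still stand in for expertise.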

I moderate a community on Reddit called r/AskHistorians. It’s a public history site with over 2 million members and is very strictly moderated. We see people get facts wrong all the time. Sometimes these are straightforward errors. But sometimes there is hateful content that takes experts to recognize. One time a question containing a Holocaust-denial dog whistle escaped review for hours and ended up amassing hundreds of upvotes before it was caught by an expert on our team. Hundreds of people—probably with very different voting patterns and very different opinions on a lot of topics—not only missed the problematic nature of the content but chose to promote it through upvotes. This happens with answers to questions, too. People who aren’t experts in history will upvote outdated, truthy-sounding answers that aren’t actually correct. Conversely, they will downvote good answers if they reflect viewpoints that are tough to swallow. 

r/AskHistorians works because most of its moderators are expert historians. If Meta wants its Community Notes–style program to work, it should make sure that the people with the knowledge to make assessments see the posts and that expertise is accounted for in voting, especially when there’s a misalignment between common understanding and expert knowledge.
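One way to operationalize that recommendation is to weight ratings by verified topic expertise, so that a mycologist’s vote on the mushroom post counts for more than a layperson’s. The sketch below is purely illustrative; the expertise-verification step, the weight, and the function name are assumptions, not a feature of X’s or Meta’s systems.

```python
# Hypothetical expertise-weighted rating: raters verified as experts in the
# post's topic area (the verification itself is assumed, not implemented
# here) get extra weight toward a note's visibility score.

def weighted_note_score(votes, expert_weight=5.0):
    """votes: list of (rated_helpful, is_topic_expert) pairs for one note.
    Returns a score in [-1, 1]; a platform might surface the note above
    some threshold, e.g. 0.5."""
    score = 0.0
    weight_sum = 0.0
    for rated_helpful, is_expert in votes:
        w = expert_weight if is_expert else 1.0
        weight_sum += w
        score += w * (1.0 if rated_helpful else -1.0)
    return score / weight_sum if weight_sum else 0.0
```

The design tension is plain: weighting experts more heavily helps when common understanding and expert knowledge diverge, but the platform then has to decide who counts as an expert, and in which topics.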

2. It won’t work without well-supported volunteers  

Meta’s paid content moderators review the worst of the worst—including gore, sexual abuse and exploitation, and violence. As a result, many have suffered severe trauma, leading to lawsuits and unionization efforts. When Meta cuts resources from its centralized moderation efforts, it will be increasingly up to unpaid volunteers to keep the platform safe. 

Community moderators don’t have an easy job. On top of exposure to horrific content, as identifiable members of their communities, they are also often subject to harassment and abuse—something we experience daily on r/AskHistorians. However, community moderators moderate only what they can handle. For example, while I routinely manage hate speech and violent language, as a moderator of a text-based community I am rarely exposed to violent imagery. Community moderators also work as a team. If I do get exposed to something I find upsetting or if someone is being abusive, my colleagues take over and provide emotional support. I also care deeply about the community I moderate. Care for community, supportive colleagues, and self-selection all help keep volunteer moderators’ morale high(ish). 

It’s unclear how Meta’s new moderation system will be structured. If volunteers choose what content they flag, will that replicate X’s problem, where partisanship affects which posts are flagged and how? It’s also unclear what kind of support the platform will provide. If volunteers are exposed to content they find upsetting, will Meta—the company that is currently being sued for damaging the mental health of its paid content moderators—provide social and psychological aid? To be successful, the company will need to ensure that volunteers have access to such resources and are able to choose the type of content they moderate (while also ensuring that this self-selection doesn’t unduly influence the notes).    

3. It can’t work without protections and guardrails 

Online communities can thrive when they are run by people who deeply care about them. However, volunteers can’t do it all on their own. Moderation isn’t just about making decisions on what’s “true” or “false.” It’s also about identifying and responding to other kinds of harmful content. Zuckerberg’s decision is coupled with other changes to Meta’s community standards that weaken rules around hateful content in particular. Community moderation is part of a broader ecosystem, and it becomes significantly harder to do it when that ecosystem gets poisoned by toxic content.

I started moderating r/AskHistorians in 2020 as part of a research project to learn more about the behind-the-scenes experiences of volunteer moderators. While Reddit had started addressing some of the most extreme hate on its platform by occasionally banning entire communities, many communities promoting misogyny, racism, and all other forms of bigotry were permitted to thrive and grow. As a result, my early field notes are filled with examples of extreme hate speech, as well as harassment and abuse directed at moderators. It was hard to keep up with. 

But halfway through 2020, something happened. After a milquetoast statement about racism from CEO Steve Huffman, moderators on the site shut down their communities in protest. And to its credit, the platform listened. Reddit updated its community standards to explicitly prohibit hate speech and began to enforce the policy more actively. While hate is still an issue on Reddit, I see far less now than I did in 2020 and 2021. Community moderation needs robust support because volunteers can’t do it all on their own. It’s only one tool in the box. 

If Meta wants to ensure that its users are safe from scams, exploitation, and manipulation in addition to hate, it cannot rely solely on community fact-checking. But keeping the user base safe isn’t what this decision aims to do. It’s a political move to curry favor with the new administration. Meta could create the perfect community fact-checking program, but because this decision is coupled with weakening its wider moderation practices, things are going to get worse for its users rather than better. 

Sarah Gilbert is research director for the Citizens and Technology Lab at Cornell University.
