
Meta retreats from fact checking content: what it means for businesses




Facebook creator and Meta CEO Mark “Zuck” Zuckerberg shook the world again today when he announced sweeping changes to the way his company moderates and handles user-generated posts and content in the U.S.

Citing the “recent elections” as a “cultural tipping point,” Zuck explained in a roughly five-minute video posted to his Facebook and Instagram accounts this morning (Tuesday, January 7) that Meta would stop using independent third-party fact checkers and fact-checking organizations to help moderate and append notes to user posts shared across the company’s suite of social networking and messaging apps, including Facebook, Instagram, WhatsApp, Threads, and more.

Instead, Zuck said that Meta would rely on a “Community Notes” style approach, crowdsourcing information from the users across Meta’s apps to give context and veracity to posts, similar to (and Zuck acknowledged this in his video) the rival social network X (formerly Twitter).

Zuck cast the changes as a return to Facebook’s “roots” around free expression and a reduction of over-broad “censorship.” See the full transcript of his remarks at the bottom of this article.

Why this announcement and policy change matters to businesses

With more than 3 billion users across its services and products worldwide, Meta remains the largest social network to date. In addition, as of 2022, more than 200 million businesses worldwide used the company’s apps and services — most of them small — and 10 million were active paying advertisers on the platform, according to one executive.

Meta’s new chief global affairs officer Joel Kaplan, a former deputy chief of staff to Republican President George W. Bush, also published a note to Meta’s corporate website describing some of the changes in greater detail. Kaplan’s recent appointment to the role was widely viewed as a signal to lawmakers and the wider world of Meta’s willingness to work with the GOP-led Congress and White House following the 2024 election.

Already, some business executives such as Shopify’s CEO Tobi Lutke have seemingly embraced the announcement. As Lutke wrote on X today: “Huge and important change.”

Founders Fund chief marketing officer and tech influencer Mike Solana also hailed the move, writing in a post on X: “There’s already been a dramatic decrease in censorship across the meta platforms. but a public statement of this kind plainly speaking truth (the “fact checkers” were biased, and the policy was immoral) is really and finally the end of a golden age for the worst people alive.”

However, others are less optimistic and receptive to the changes, viewing them less as a matter of free expression and more as a bid to curry favor with the incoming administration of President-elect Donald J. Trump, now entering his second, non-consecutive term, and with the GOP-led Congress, much as other business executives and firms have seemingly moved to do.

“More free expression on social media is a good thing,” wrote the non-profit Freedom of the Press Foundation on the social network BlueSky (disclosure: my wife is a board member of the non-profit). “But based on Meta’s track record, it seems more likely that this is about sucking up to Donald Trump than it is about free speech.”

George Washington University political communication professor Dave Karpf seemed to agree, writing on BlueSky: “Two salient facts about Facebook replacing its fact-checking program with community notes: (1) community notes are cheaper. (2) the incoming political regime dislikes fact-checking. So community notes are less trouble. The rest is just framing. Zuck’s sole principle is to do what’s best for Zuck.”

And Kate Starbird, professor at the University of Washington and co-founder of the UW Center for an Informed Public, wrote on BlueSky that: “Meta is dropping its support for fact-checking, which, in addition to degrading users’ ability to verify content, will essentially defund all of the little companies that worked to identify false content online. But our FB feeds are basically just AI slop at this point, so?”

When will the changes take place?

Both Zuck and Kaplan stated in their respective video and text posts that the changes to Meta’s content moderation policies and practices would be coming to the U.S. in “the next couple of months.”

Meta will discontinue its independent fact-checking program in the United States, launched in 2016, in favor of a Community Notes model inspired by X (formerly Twitter). This system will rely on users to write and rate notes, requiring agreement across diverse perspectives to ensure balance and prevent bias.
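The “agreement across diverse perspectives” requirement can be made concrete with a toy sketch. The function below is purely illustrative: the grouping of raters, the thresholds, and the function names are my own assumptions, not Meta’s design (X’s production Community Notes system uses a more sophisticated matrix-factorization model). It captures only the core idea that a note surfaces when raters from more than one perspective group independently find it helpful.

```python
# Toy sketch of a Community-Notes-style "cross-perspective agreement" rule.
# Grouping, thresholds, and names are illustrative assumptions only.
from collections import defaultdict

def note_is_shown(ratings, min_per_group=2, min_helpful_share=0.6):
    """ratings: list of (rater_group, is_helpful) tuples.

    A note surfaces only when raters from at least two distinct
    perspective groups each contribute enough ratings, and every
    such group independently finds the note mostly helpful.
    """
    by_group = defaultdict(list)
    for group, helpful in ratings:
        by_group[group].append(helpful)

    # Keep only groups with enough ratings to count.
    qualified = {g: votes for g, votes in by_group.items()
                 if len(votes) >= min_per_group}
    if len(qualified) < 2:
        return False  # not enough cross-perspective participation
    # Every qualified group must independently rate the note helpful.
    return all(sum(votes) / len(votes) >= min_helpful_share
               for votes in qualified.values())
```

Under this rule, a note rated helpful only by one cluster of like-minded users never surfaces, which is the property the model is meant to guarantee.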

According to its website, Meta had been working with a variety of organizations “certified through the non-partisan International Fact-Checking Network (IFCN) or European Fact-Checking Standards Network (EFCSN) to identify, review and take action” on content deemed “misinformation.”

However, as Zuck opined in his video post, “after Trump first got elected in 2016 the legacy media wrote non-stop about how misinformation was a threat to democracy. We tried, in good faith, to address those concerns without becoming the arbiters of truth, but the fact checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the U.S.”

Zuck also added that: “There’s been widespread debate about potential harms from online content. Governments and legacy media have pushed to censor more and more. A lot of this is clearly political.”

According to Kaplan, the shift aims to reduce the perceived censorship that arose from the previous fact-checking program, which often applied intrusive labels to legitimate political speech.

Loosening restrictions on political and sensitive topics

Meta is revising its content policies to allow more discourse on politically sensitive topics like immigration and gender identity. Kaplan pointed out that it is inconsistent for such topics to be debated in public forums like Congress or on television but restricted on Meta’s platforms.

Automated systems, which have previously been used to enforce policies across a wide range of issues, will now focus primarily on tackling illegal and severe violations, such as terrorism and child exploitation.

For less critical issues, the platform will rely more on user reports and human reviewers. Meta will also reduce content demotions for material flagged as potentially problematic unless there is strong evidence of a violation.

However, the reduction of automated systems would seem to fly in the face of Meta’s promotion of AI as a valuable tool in its own business offerings — why should anyone else trust Meta’s AI models such as the Llama family if Meta itself isn’t content to use them to moderate content?

A reduction in content takedowns coming?

As Zuck put it, a big problem with Facebook’s automated systems is overly broad censorship.

He stated in his video address, “we built a lot of complex systems to moderate content, but the problem with complex systems is they make mistakes, even if they accidentally censor just 1% of posts, that’s millions of people, and we’ve reached a point where it’s just too many mistakes and too much censorship.”

Meta acknowledges that mistakes in content moderation have been a persistent issue. Kaplan noted that while less than 1% of daily content is removed, an estimated 10-20% of these actions may be errors. To address this, Meta plans to:

• Publish transparency reports detailing moderation mistakes and progress.

• Require multiple reviewers to confirm decisions before content is removed.

• Use advanced AI systems, including large language models, for second opinions on enforcement actions.
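The review flow described in those bullets can be sketched as a simple decision function. Everything here is a hypothetical illustration: the thresholds, the unanimity rule, and the names are my assumptions, not Meta’s published implementation. The sketch shows the stated tradeoff, where automated removal happens only at high confidence and everything else waits on multiple human reviewers.

```python
# Hypothetical sketch of the enforcement flow described above:
# automated removal only at high classifier confidence, otherwise
# requiring several independent human reviewers to agree.
# Thresholds and names are assumptions, not Meta's actual system.

def removal_decision(classifier_confidence, reviewer_votes,
                     auto_threshold=0.95, required_reviewers=2):
    """Return "remove", "keep", or "escalate".

    classifier_confidence: model's probability the post violates policy.
    reviewer_votes: booleans from independent human reviewers
                    (True = vote to remove).
    """
    if classifier_confidence >= auto_threshold:
        return "remove"    # severe/unambiguous: act automatically
    if len(reviewer_votes) < required_reviewers:
        return "escalate"  # not enough reviewers yet; gather more
    if sum(reviewer_votes) == len(reviewer_votes):
        return "remove"    # unanimous human agreement
    return "keep"          # any disagreement defaults to keeping
```

Raising `auto_threshold` is exactly the knob Zuck describes in the transcript below: fewer automated takedowns at the cost of catching less violating content.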

Additionally, the company is relocating its trust and safety teams from California to other U.S. locations, including Texas, to address perceptions of bias — a move some have already poked fun at on various social channels: are people in Texas really less biased than those in California?

The return of political content…and ‘fake news’?

Since 2021, Meta has limited the visibility of civic and political content on its platforms in response to user feedback.

However, the company now plans to reintroduce this content in a more personalized manner.

Users who wish to see more political content will have greater control over their feeds, with Meta using explicit signals like likes and implicit behaviors such as post views to determine preferences.
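One way to picture how explicit and implicit signals might combine is a weighted score, with deliberate actions counting more than passive ones. The weights, threshold, and function names below are invented for the sketch; Meta has not published how it blends these signals.

```python
# Illustrative-only blend of explicit signals (likes) and implicit
# signals (post views) into a political-content preference score.
# Weights and threshold are invented assumptions for this sketch.

def political_affinity(likes, views, like_weight=3.0, view_weight=1.0):
    """Crude affinity score: explicit actions count more than views."""
    return like_weight * likes + view_weight * views

def show_more_political(likes, views, threshold=10.0):
    """Would this user's feed get more civic content under the sketch?"""
    return political_affinity(likes, views) >= threshold
```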

However, reinstating political content could once again risk enabling the spread of politically charged misinformation from U.S. adversaries, as seen in the run-up to the 2016 election, when numerous Facebook pages spewed disinformation and conspiracy theories that favored Republicans and disfavored Democratic candidates and policies.

One admitted “fake news” creator told NPR that while they had tried to create content for both liberal and conservative audiences, the latter were more interested in, and more gullible about, sharing and re-sharing fake content that aligned with their views.

Such “fake news” was so widespread, it was even joked about on social media itself and in The Onion.

My analysis on what it means for businesses and brand pages

I’ve never owned a business, but I have managed several Facebook and Instagram accounts on behalf of large corporate and smaller startup/non-profit organizations, so I know firsthand about the work that goes into maintaining them, posting, and growing their audiences/followings.

I think that while Meta’s stated commitment to restoring more freedom of expression to its products is laudable, the jury is out on how it will actually impact the desire for businesses to speak to their fans and customers using said products.

At best, it will be a double-edged sword: less strict content moderation policies will give brands and businesses the chance to post more controversial, experimental, and daring content — and those that take advantage of this may see their messages reach wider audiences, i.e., “go viral.”

On the flip side, brands and businesses may now struggle to get their posts seen and engaged with in the face of other pages posting even more controversial, politically pointed content.

In addition, the changes could make it easier for users to criticize brands or implicate them in conspiracies, and it may be harder for the brands to force takedowns of such unflattering content about them — even when untrue.

What’s next?

The rollout of Community Notes and policy adjustments is expected to begin in the coming months in the U.S. Meta plans to improve and refine these systems throughout the year.

These initiatives, Kaplan said, aim to balance the need for safety and accuracy with the company’s core value of enabling free expression.

Kaplan said Meta is focused on creating a platform where individuals can freely express themselves. He also acknowledged the challenges of managing content at scale, describing the process as “messy” but essential to Meta’s mission.

For users, these changes promise fewer intrusive interventions and a greater opportunity to shape the conversation on Meta’s platforms.

Whether the new approach will succeed in reducing frustration and fostering open dialogue remains to be seen.

Full transcript of Zuckerberg’s remarks

Hey, everyone. I want to talk about something important today, because it’s time to get back to our roots around free expression on Facebook and Instagram. I started building social media to give people a voice. I gave a speech at Georgetown five years ago about the importance of protecting free expression, and I still believe this today, but a lot has happened over the last several years.

There’s been widespread debate about potential harms from online content. Governments and legacy media have pushed to censor more and more. A lot of this is clearly political, but there’s also a lot of legitimately bad stuff out there: drugs, terrorism, child exploitation. These are things that we take very seriously, and I want to make sure that we handle responsibly. So we built a lot of complex systems to moderate content, but the problem with complex systems is they make mistakes. Even if they accidentally censor just 1% of posts, that’s millions of people, and we’ve reached a point where it’s just too many mistakes and too much censorship.

The recent elections also feel like a cultural tipping point towards, once again, prioritizing speech. So we’re going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms. More specifically, here’s what we’re going to do.

First, we’re going to get rid of fact-checkers and replace them with community notes similar to X, starting in the US. After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried, in good faith, to address those concerns without becoming the arbiters of truth, but the fact-checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the US. So over the next couple of months, we’re going to phase in a more comprehensive community notes system.

Second, we’re going to simplify our content policies and get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse. What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas, and it’s gone too far. So I want to make sure that people can share their beliefs and experiences on our platforms.

Third, we’re changing how we enforce our policies to reduce the mistakes that account for the vast majority of censorship on our platforms. We used to have filters that scanned for any policy violation. Now we’re going to focus those filters on tackling illegal and high-severity violations, and for lower-severity violations, we’re going to rely on someone reporting an issue before we take action. The problem is that the filters make mistakes, and they take down a lot of content that they shouldn’t. So by dialing them back, we’re going to dramatically reduce the amount of censorship on our platforms. We’re also going to tune our content filters to require much higher confidence before taking down content. The reality is that this is a tradeoff. It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.

Fourth, we’re bringing back civic content. For a while, the community asked to see less politics because it was making people stressed, so we stopped recommending these posts. But it feels like we’re in a new era now, and we’re starting to get feedback that people want to see this content again. So we’re going to start phasing this back into Facebook, Instagram, and Threads, while working to keep the communities friendly and positive.

Fifth, we’re going to move our trust and safety and content moderation teams out of California, and our US-based content review is going to be based in Texas. As we work to promote free expression, I think that will help us build trust to do this work in places where there is less concern about the bias of our teams.

Finally, we’re going to work with President Trump to push back on governments around the world that are going after American companies and pushing to censor more. The US has the strongest constitutional protections for free expression in the world. Europe has an ever-increasing number of laws institutionalizing censorship and making it difficult to build anything innovative there. Latin American countries have secret courts that can order companies to quietly take things down. China has censored our apps from even working in the country. The only way that we can push back on this global trend is with the support of the US government, and that’s why it’s been so difficult over the past four years. When even the US government has pushed for censorship by going after us and other American companies, it has emboldened other governments to go even further. But now we have the opportunity to restore free expression, and I am excited to take it.

It’ll take time to get this right, and these are complex systems. They’re never going to be perfect. There’s also a lot of illegal stuff that we still need to work very hard to remove. But the bottom line is that after years of having our content moderation work focused primarily on removing content, it is time to focus on reducing mistakes, simplifying our systems, and getting back to our roots about giving people voice.

I’m looking forward to this next chapter. Stay good out there and more to come soon.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

IBM expands professional services for Cisco firewalls

With the expanded Cisco partnership, IBM TLS can now support the lifecycle of these Cisco firewalls, whether physical, cloud or virtual, by planning, designing, purchasing, installing, de-installing, and supporting them, helping clients to optimize their core or AI infrastructure, according to Atul Dhall, vice president of product management and global

Read More »

Nvidia looks to power AI factory networks

Nvidia also introduced BlueField 4, a next-generation processor that acts as the operating system for AI factories. It delivers 800Gbit/sec of throughput, double the throughput of its predecessor BlueField 3, and six times more compute than BlueField 3. BlueField 4 combines Arm-based CPUs with the ConnectX-9 SuperNIC to accelerate storage,

Read More »

Noble Quarterly Revenue Falls

Noble Corp on Monday reported $798 million in revenue for the third quarter, down from $849 million for the prior three-month period as lower rig utilization offset lower contract drilling services costs. “Utilization of the 35 marketed rigs was 65 percent in the third quarter of 2025 compared to 73

Read More »

Google Cloud targets enterprise AI builders with upgraded Vertex AI Training

Enterprises can quickly set up managed Slurm environments with automated resiliency and cost optimization through the Dynamic Workload Scheduler. The platform also includes hyperparameter tuning, data optimization, and built-in recipes with frameworks like NVIDIA NeMo to streamline model development. Enterprises weigh AI training gains Building and scaling generative AI models

Read More »

Energy Department Announces Loan for Indiana Coal-Powered Fertilizer Facility

WASHINGTON—U.S. Secretary of Energy Chris Wright today announced the Department of Energy’s (DOE) Loan Programs Office (LPO) closed a loan to support independent, American-made, and coal-powered fertilizer production. The $1.5 billion loan to Wabash Valley Resources, LLC, will help finance a coal and ammonia fertilizer facility in West Terre Haute, Indiana. The project will restart and repurpose a coal gasification plant idled since 2016 to produce 500,000 metric tons of anhydrous ammonia per year by using coal from a nearby Southern Indiana mine and petcoke as feedstock. “For too long, America has been dependent on foreign sources of fertilizer,” said U.S. Energy Secretary Chris Wright. “Under President Trump’s leadership, we are changing that by putting America first, relying on American coal, American workers, and American innovation to power our farms and feed our families.” By investing in a coal community, the Wabash project will bring the gasification plant back online to produce ammonia fertilizer – a vital resource for farmers across the Corn Belt, which currently relies on imports from Canada, the Caribbean, the Middle East, and Russia. The project will strengthen domestic supply chains, lower costs for farmers and consumers, and strengthen national food security by producing cost-competitive ammonia for the Eastern Corn Belt while creating hundreds of American jobs. The loan, which was carefully evaluated under the new LPO guidance directed by Secretary Wright, delivers on the Trump administration’s promise to responsibly steward taxpayer dollars and unleash American energy dominance. The Wabash financial close is the second closed loan under the Energy Dominance Financing (EDF) Program created by the Working Families Tax Cut, also known as the One Big Beautiful Bill Act. 
Today’s announcement highlights DOE’s commitment to achieving President Trump’s national security and energy dominance goals by securing domestic fertilizer supply for farmers in the Corn Belt

Read More »

Mozambique Unrest Flares as $25B LNG Work Set to Resume

An Islamist insurgency that froze TotalEnergies SE’s $24.5-billion gas project in Mozambique four years ago is intensifying, just as the French oil major prepares to restart development. Militants affiliated with Islamic State have in recent months carried out raids across the northeastern Cabo Delgado province that hosts the Total project and another led by Exxon Mobil Corp. The number of attacks against civilians in the region has almost doubled in 2025 to the highest in years, according to the United Nations. TotalEnergies’ announcement last weekend that it’s restarting the project — seen as pivotal to transforming one of the world’s poorest nations — lifted Mozambique’s eurobonds 2.9% on Monday. At the same time, there are growing fears among local communities that the security situation near the project site at Afungi is deteriorating — with more than 90,000 people fleeing attacks since the last week of September. “Right now, people are living in fear,” Andrew Bogrand, a senior policy adviser at Oxfam America, said after a visit to the region that included a resettlement village adjacent to the project. “Folks in Quitunda, police officers, contractors — they don’t see how this project can work if people are concerned about security.” The militants have in recent weeks made incursions into both Palma — neighboring the LNG site — and nearby Mocimboa da Praia, where they filmed themselves preaching in a local mosque. That was symbolic: The port town is where the extremist insurgency began in 2017 as a ragtag army of local youth in one of the poorest parts of Mozambique. They later occupied the town for about a year. Total has estimated charges of $4.5 billion during the halt to construction, adding to the original $20 billion project cost — revisions that require government approval before work resumes. The company paid

Read More »

Indian Oil, Vitol to Launch Trading JV in 2026

Indian Oil Corp. plans to start a joint venture with commodities trader Vitol Inc. in Singapore to trade oil and fuel products early next year, a person familiar with the development said. The agreement will include a clause requiring Vitol’s exit after five to seven years, said the person, who asked not to be named as talks aren’t public. The New Delhi-based refiner had held talks with several other companies, including BP Plc and TotalEnergies SE, but finally decided to move ahead with Vitol, the person said. Global trading giants have lost some hold over India’s market as the country turned to cheaper Russian barrels for its spot purchases. With India expected to lead global oil demand growth, they are now trying to regain ground. Teaming up with Indian Oil could give Vitol a stronger presence in the country as it cuts back on Russian oil due to sanctions from the West. Indian Oil and Vitol didn’t respond to messages seeking comment.  Indian Oil will capitalize on the strength of the global trader which has access to real-time market intelligence, in addition to its wider reach and established risk management systems. That may help the Indian company source crude at lower prices.  The refiner, which meets nearly 90% of its oil demand via imports, is likely to see a surge in consumption as its crude processing capacity is expected to expand by 346,000 barrels a day to 1.76 million barrels next year. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Read More »

FERC rejects Tri-State’s large load tariff over retail jurisdiction issues

Dive Brief: The Federal Energy Regulatory Commission on Monday rejected a large load tariff proposed by Tri-State Generation and Transmission Association, saying it intruded on retail rate regulation, which falls under state jurisdiction and is outside of FERC’s authority. “We find that Tri-State has not provided a sufficient basis for the Commission to find that its proposal does not regulate the terms and conditions of a [High Impact Load] Customer’s retail service in ways that are beyond the Commission’s authority,” FERC said. With FERC beginning to consider the U.S. Department of Energy’s request that the agency develop rules for interconnecting large loads to the transmission system, the Tri-State decision provides insights into the commission’s thinking about its jurisdiction over the issue, according to Steven Shparber, a member at the law firm of Mintz, Levin, Cohn, Ferris, Glovsky and Popeo. “FERC’s decision relied on longstanding U.S. Supreme Court precedent noting that FERC may not regulate retail sales, which are exclusively within the states’ jurisdiction,” he said in an email. Dive Insight: Tri-State’s Aug. 28 proposal came as utilities like AEP Ohio and Dominion Energy Virginia have adopted tariffs that set out terms and conditions for interconnecting data centers and other large loads. In its proposal, Tri-State — a wholesale cooperative based in Westminster, Colorado, with 40 utility members in Colorado, Nebraska, New Mexico and Wyoming — said there is growing interest in building data centers in the Mountain West region. It has received 10 requests from member utilities to add loads ranging from 45 MW to 650 MW that could each grow to 300 MW to 1,000 MW, according to Tri-State, which has a 2,500 MW peak load. Tri-State said its proposed High Impact Load, or HIL, tariff and High Impact Load Agreement would protect its utility members from a range of risks

Read More »

In an era of rising rates, policies to strengthen power system flexibility can lower costs

Rising ratepayer burdens from costs to meet new large loads from hyperscalers and other load growth can be limited by utilities that use newly available flexibility to manage demand peaks, power system stakeholders told Utility Dive. Due to growing demand for power from data centers and industrial customers, total U.S. generation by the electric power sector will grow by 2.3% in 2025, according to September data from the U.S. Energy Information Administration. And the average U.S. residential electricity rate rose over 5% from July 2024 to July 2025, EIA’s data browser showed. Investing in an expanded power system to meet new large load demand and limiting rate impacts while paying for the investments may seem contradictory, but it can be done, stakeholders said. Large loads increase rates if they induce utilities to make expensive grid infrastructure upgrades to meet higher demand peaks, said University of California, Berkeley, Haas School of Business Professor Severin Borenstein.  But with policy incentives to “restrain” peak demand, large loads can avoid the costs that drive higher rates, he added. State policymakers and stakeholders can design and enact those incentives and other policies to strengthen power system flexibility, affordability and reliability, analysts said. “Rate increases are capturing politicians’ attention because costs, and especially energy costs, are emerging consumer issues,” said former Arkansas Public Service Commission Chair Ted Thomas, founder of Energize Strategies. “That makes resisting rate increases important enough for politicians to expend political capital on now.”  Though the amount of load growth is still uncertain, investments to meet it already threaten affordability. But data shows the right regulation can limit its impacts and potentially stabilize customer costs. The rising prices Investor-owned utilities, which meet 57% of U.S. 
electricity use, will invest of about $1.1 trillion from 2025 to 2029 in infrastructure, up from $765 billion

Read More »

API ‘Strengthens Offshore Safety Standards’

In a statement sent to Rigzone recently, the American Petroleum Institute (API) said it had “strengthen[ed]… offshore safety standards with new updates”. The API announced the release of the third edition of API Standard 2RD, Dynamic Risers for Floating Production Systems, in that statement, adding that this standard “addresses the design and integrity management requirements for risers that carry oil and gas between the seafloor and floating production systems, such as spars, semi-submersibles and tension leg platforms”. The industry body noted in the statement that the standard applies to all risers from these systems, including steel catenary risers and top-tensioned risers.  “Dynamic risers are critical for offshore energy production,” the API highlighted in the statement. “Their safe performance supports the reliable flow of energy while minimizing risks to the environment and personnel,” it added. “This updated edition incorporates advances in riser integrity management and addresses evolving offshore conditions, enhancing clarity, consistency, and safety,” it continued. The API pointed out in its statement that the third edition of API Standard 2RD unifies design methods from earlier editions, “integrating multiple methods for measuring loading in a way that should reduce confusion among end users and regulators and improve clarity around permitting”. “It also introduces strengthened robustness requirements, requiring risers to be tested against extreme environmental conditions beyond what is anticipated in normal operations,” it said. “These provisions are designed to ensure that even in severe events, risers are more likely to require repair rather than suffer catastrophic failure,” the API noted. 
The industry body also highlighted in its statement that the new edition “provides clear guidance for reassessing existing risers in coordination with API RP 2RIM, Riser Integrity Management, 1st edition, supporting ongoing integrity management and extending service life”. “By expanding on previous editions and coordinating with API RP 2RIM, the third edition of API

Read More »

Cisco, Nvidia strengthen AI ties with new data center switch, reference architectures

The new box extends Cisco Nexus 9000 Series portfolio of high-density 800G aggregation switches for the data center fabric, Cisco stated. The Nexus 9000 data center switches are a core component of the vendor’s enterprise AI offerings. They support congestion-management and flow-control algorithms and deliver the right latency and telemetry to meet the design requirements of AI/ML fabrics, Cisco stated. With the Cisco N9100 Series, Cisco now supports Nvidia Cloud Partner (NCP)-compliant reference architecture. “This development is particularly significant for neocloud and sovereign cloud customers building data centers with capacities ranging from thousands to potentially hundreds of thousands of GPUs, as it allows them to diversify their supply chains effectively,” wrote Will Eatherton, senior vice president of Cisco networking engineering, in a blog post about the news. An add-on license lets customers extend the NCP reference architecture to define how customers can mix and mingle Nvidia Spectrum-X adaptive routing capability with Cisco Nexus 9300 Series switches and Nvidia Spectrum-X Ethernet SuperNICs. “The combination of low latency and congestion-aware, per-packet load balancing on Cisco 9300 switches, along with out-of-order packet handling and end-to-end congestion management on Nvidia SuperNICs, significantly enhances network performance. These improvements are essential for AI networks, optimizing critical metrics such as job completion time,” Eatherton wrote. In addition to neoclouds and sovereign buildouts, enterprise customers are a target, according to Futuriom’s Raynovich.

Read More »

IT shortcuts curb AI returns

Organizations must ensure the infrastructure is AI ready. Infrastructure is another area where Cisco found a major difference. Pacesetters are designing their networks for future demands. Seventy-one percent say their networks can scale instantly for new AI projects. Roughly three-quarters of pacesetters are investing in new data center capacity over the next year. Currently, about two-thirds say their infrastructure can accommodate AI workloads. Most pacesetters (93%) also have data systems that are fully prepared for AI, compared with 34% of other companies. About 76% have fully centralized their in-house data, while only 19% of other companies have done the same. Eighty-four percent report strong governance readiness, while 95% have mature processes to measure the impact of AI. If ever there was a technological shift that requires the right infrastructure, it’s AI. AI generates a significant amount of data and needs large amounts of processing power and low-latency, high-capacity networks. Historically, businesses could operate with networks built on the premise of “best effort,” but that’s no longer the case. From the data center to campus to branch offices, in most companies, the network will require a refresh. Scaling AI also requires the right processes. When it comes to being disciplined, 62% of pacesetters have an established process for generating, piloting, and scaling AI use cases. Only 13% of other organizations (non-pacesetters) have reached this level of maturity. Most pacesetters say their AI models achieve at least 75% accuracy. Almost half also expect a 50% to 100% return on investment (ROI) within a year, far above the average. Cisco notes that over the past six months, pressure has been building for companies to show tangible ROI. Executives and IT leaders are pushing for results, and so are competitors. By contrast, most other companies are in early stages of readiness. Although 83% plan to

Read More »

Qualcomm goes all-in on inferencing with purpose-built cards and racks

From a strategy perspective, there is a longer-term enterprise play here, noted Moor’s Kimball; Humain is Qualcomm’s first customer, and a cloud service provider (CSP) or hyperscaler will likely be customer number two. However, at some point, these rack-scale systems will find their way into the enterprise. “If I were the AI200 product marketing lead, I would be thinking about how I demonstrate this as a viable platform for those enterprise workloads that will be getting ‘agentified’ over the next several years,” said Kimball. It seems a natural step, as Qualcomm saw success with its AI100 accelerator, a strong inference chip, he noted. Right now, Nvidia and AMD dominate the training market, with CUDA and ROCm enjoying a “stickiness” with customers. “If I am a semiconductor giant like Qualcomm that is so good at understanding the performance-power balance, this inference market makes perfect sense to really lean in on,” said Kimball. He also pointed to the company’s plans to re-enter the data center CPU space with its Oryon CPU, which is featured in Snapdragon and loosely based on technology it acquired with its $1.4 billion Nuvia acquisition. Ultimately, Qualcomm’s move demonstrates how wide open the inference market is, said Kimball. The company, he noted, has been very good at choosing target markets and has seen success when entering those markets. “That the company would decide to go more ‘in’ on the inference market makes sense,” said Kimball. He added that, from an ROI perspective, inferencing will “dwarf” training in terms of volume and dollars.

Read More »

AI data center building boom risks fueling future debt bust, bank warns

However, that’s only one part of the problem. Meeting the power demands of AI data centers will require the energy sector to make large investments. Then there’s data center demand for microprocessors, rare earth elements, and other valuable metals such as copper, which could, in a bust, make data centers the most expensively assembled unwanted assets in history. “Financial stability consequences of an AI-related asset price fall could arise through multiple channels. If forecasted debt-financed AI infrastructure growth materializes, the potential financial stability consequences of such an event are likely to grow,” warned the BoE blog post. “For companies who depend on the continued demand for massive computational capacity to train and run inference on AI models, an algorithmic breakthrough or other event which challenges that paradigm could cause a significant re-evaluation of asset prices,” it continued. According to Matt Hasan, CEO of AI consultancy aiRESULTS, the underlying problem is the speed with which AI has emerged. “What we’re witnessing isn’t just an incremental expansion, it’s a rush to construct power-hungry, mega-scale data centers,” he told Network World. The dot-com reversal might be the wrong comparison; it dented the NASDAQ and hurt tech investment, but the damage to organizations investing in e-commerce was relatively limited. AI, by contrast, might have wider effects for large enterprises because so many have pinned their business prospects on its potential. “Your reliance on these large providers means you are indirectly exposed to the stability of their debt. If a correction occurs, the fallout can impact the services you rely on,” said Hasan.

Read More »

Intel sees supply shortage, will prioritize data center technology

“Capacity constraints, especially on Intel 10 and Intel 7 [Intel’s semiconductor manufacturing process], limited our ability to fully meet demand in Q3 for both data center and client products,” said Zinsner, adding that Intel isn’t about to add capacity to Intel 10 and 7 when it has moved beyond those nodes. “Given the current tight capacity environment, which we expect to persist into 2026, we are working closely with customers to maximize our available output, including adjusting pricing and mix to shift demand towards products where we have supply and they have demand,” said Zinsner. For that reason, Zinsner projects that the fourth quarter will be roughly flat versus the third quarter in terms of revenue. “We expect Intel products up modestly sequentially but below customer demand as we continue to navigate supply environment,” said Zinsner. “We expect CCG to be down modestly and PC AI to be up strongly sequentially as we prioritize wafer capacity for server shipments over entry-level client parts.”

Read More »

How to set up an AI data center in 90 days

“Personally, I think that a brownfield is a very creative way to deal with what I think is the biggest problem that we’ve got right now, which is time and speed to market,” he said. “On a brownfield, I can go into a building that’s already got power coming into the building. Sometimes they’ve already got chiller plants, like what we’ve got with the building I’m in right now.” Patmos certainly made the most of the liquid facilities in the old printing press building. The facility is built to handle anywhere from 50 to over 140 kilowatts per cabinet, a leap far beyond the 1–2 kW densities typical of legacy data centers. The chips used in the servers are Nvidia’s Grace Blackwell processors, which run extraordinarily hot. To manage this heat load, Patmos employs a multi-loop liquid cooling system. The design separates water sources into distinct, closed loops, each serving a specific function and ensuring that municipal water never directly contacts sensitive IT equipment. “We have five different, completely separated water loops in this building,” said Morgan. “The cooling tower uses city water for evaporation, but that water never mixes with the closed loops serving the data hall. Everything is designed to maximize efficiency and protect the hardware.” The building taps into Kansas City’s district chilled water supply, which is sourced from a nearby utility plant. This provides the primary cooling resource for the facility. Inside the data center, a dedicated loop circulates a specialized glycol-based fluid, filtered to extremely low micron levels and formulated to be electronically safe. Heat exchangers transfer heat from the data hall fluid to the district chilled water, keeping the two fluids separate and preventing corrosion or contamination. Liquid-to-chip and rear-door heat exchangers are used for immediate heat removal.
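The separated loops described above move heat across exchangers rather than mixing fluids, and the flow each loop needs follows from the steady-state energy balance Q = ṁ·cp·ΔT. A back-of-the-envelope sketch (the fluid properties and temperatures here are illustrative assumptions, not Patmos figures):

```python
def coolant_flow_lps(heat_kw: float, delta_t_c: float,
                     cp_kj_per_kg_c: float = 3.6,
                     density_kg_per_l: float = 1.05) -> float:
    """Volumetric coolant flow (L/s) needed to carry `heat_kw` of server heat
    with a `delta_t_c` temperature rise across the loop.

    Defaults roughly approximate a glycol-water mix; all values are illustrative.
    """
    mass_flow_kg_s = heat_kw / (cp_kj_per_kg_c * delta_t_c)  # from Q = m·cp·ΔT
    return mass_flow_kg_s / density_kg_per_l

# A 140 kW cabinet with a 10 °C rise needs on the order of 3.7 L/s of coolant,
# which is the kind of duty the liquid-to-chip loop must sustain continuously.
flow = coolant_flow_lps(140, 10)
```

The same balance applies on the other side of each heat exchanger, which is how the data hall loop and the district chilled water loop can stay physically separate while moving the same heat.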

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet the non-tech company has become a regular at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of agent-development firm Red Dragon, which recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
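The multi-model pattern Witteveen alludes to can be as simple as having several cheaper models answer the same prompt and letting a vote (or a judge model) pick the winner. A minimal, model-agnostic sketch with stubbed responses (the model names and answers here are hypothetical, and real use would replace the dict with actual API calls):

```python
from collections import Counter

def majority_answer(answers: dict[str, str]) -> str:
    """Return the answer most models agree on.

    Ties resolve in favor of the answer seen first, since Counter
    preserves insertion order for equal counts.
    """
    counts = Counter(answers.values())
    return counts.most_common(1)[0][0]

# Stubbed responses from three hypothetical models to the same prompt.
answers = {
    "model-a": "Paris",
    "model-b": "Paris",
    "model-c": "Lyon",
}
assert majority_answer(answers) == "Paris"
```

An LLM-as-judge variant swaps the vote for one more model call that is shown all candidate answers and asked to select the best one; the voting version above trades that judgment quality for zero extra inference cost.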

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »