Meta retreats from fact-checking content: what it means for businesses

Facebook creator and Meta CEO Mark “Zuck” Zuckerberg shook the world again today when he announced sweeping changes to the way his company moderates and handles user-generated posts and content in the U.S.

Citing the “recent elections” as a “cultural tipping point,” Zuck explained in a roughly five-minute video posted to his Facebook and Instagram accounts this morning (Tuesday, January 7) that Meta would cease using independent third-party fact-checkers and fact-checking organizations to help moderate and append notes to user posts shared across the company’s suite of social networking and messaging apps, including Facebook, Instagram, WhatsApp, Threads, and more.

Instead, Zuck said that Meta would rely on a “Community Notes”-style approach, crowdsourcing information from users across Meta’s apps to add context to posts and assess their veracity, similar to (as Zuck acknowledged in his video) the rival social network X (formerly Twitter).

Zuck cast the changes as a return to Facebook’s “roots” around free expression and a reduction of over-broad “censorship.” See the full transcript of his remarks at the bottom of this article.

Why this announcement and policy change matters to businesses

With more than 3 billion users across its services and products worldwide, Meta remains the largest social network to date. In addition, as of 2022, more than 200 million businesses worldwide used the company’s apps and services — most of them small — and 10 million were active paying advertisers on the platform, according to one executive.

Meta’s new chief global affairs officer Joel Kaplan, a former deputy chief of staff to Republican President George W. Bush, recently took on the role in what many viewed as a signal to lawmakers and the wider world of Meta’s willingness to work with the GOP-led Congress and White House following the 2024 election. He also published a note to Meta’s corporate website describing some of the changes in greater detail.

Already, some business executives, such as Shopify CEO Tobi Lütke, have seemingly embraced the announcement. As Lütke wrote on X today: “Huge and important change.”

Founders Fund chief marketing officer and tech influencer Mike Solana also hailed the move, writing in a post on X: “There’s already been a dramatic decrease in censorship across the meta platforms. but a public statement of this kind plainly speaking truth (the “fact checkers” were biased, and the policy was immoral) is really and finally the end of a golden age for the worst people alive.”

However, others are less receptive to the changes, viewing them less as a matter of free expression and more as an effort to curry favor with the incoming administration of President-elect Donald J. Trump, who won a second, non-consecutive term, and the GOP-led Congress, as other business executives and firms have seemingly moved to do.

“More free expression on social media is a good thing,” wrote the non-profit Freedom of the Press Foundation on the social network BlueSky (disclosure: my wife is a board member of the non-profit). “But based on Meta’s track record, it seems more likely that this is about sucking up to Donald Trump than it is about free speech.”

George Washington University political communication professor Dave Karpf seemed to agree, writing on BlueSky: “Two salient facts about Facebook replacing its fact-checking program with community notes: (1) community notes are cheaper. (2) the incoming political regime dislikes fact-checking. So community notes are less trouble. The rest is just framing. Zuck’s sole principle is to do what’s best for Zuck.”

And Kate Starbird, professor at the University of Washington and co-founder of the UW Center for an Informed Public, wrote on BlueSky that: “Meta is dropping its support for fact-checking, which, in addition to degrading users’ ability to verify content, will essentially defund all of the little companies that worked to identify false content online. But our FB feeds are basically just AI slop at this point, so?”

When will the changes take place?

Both Zuck and Kaplan stated in their respective video and text posts that the changes to Meta’s content moderation policies and practices would be coming to the U.S. in “the next couple of months.”

Meta will discontinue its independent fact-checking program in the United States, launched in 2016, in favor of a Community Notes model inspired by X (formerly Twitter). This system will rely on users to write and rate notes, requiring agreement across diverse perspectives to ensure balance and prevent bias.
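
Meta has not published how its version will work, but the core mechanic of a Community Notes-style system can be sketched in a few lines. The toy Python below is a minimal illustration, not Meta’s (or X’s) actual algorithm; X’s production system uses a more sophisticated matrix-factorization model, and every name, cluster, and threshold here is a hypothetical assumption. It publishes a note only when raters from at least two different viewpoint clusters independently rate it helpful:

```python
from collections import defaultdict

# Hypothetical rating records: (note_id, rater_viewpoint_cluster, rated_helpful)
RATINGS = [
    ("note-1", "cluster-a", True),
    ("note-1", "cluster-a", True),
    ("note-1", "cluster-b", True),
    ("note-1", "cluster-b", False),
    ("note-2", "cluster-a", True),
    ("note-2", "cluster-b", False),
]

def notes_to_publish(ratings, min_agreement=0.5, min_raters_per_cluster=1):
    """Publish a note only if raters in EVERY viewpoint cluster rate it
    helpful at or above min_agreement -- i.e., cross-perspective consensus."""
    tally = defaultdict(lambda: defaultdict(list))
    for note_id, cluster, helpful in ratings:
        tally[note_id][cluster].append(helpful)

    published = []
    for note_id, clusters in tally.items():
        consensus = all(
            len(votes) >= min_raters_per_cluster
            and sum(votes) / len(votes) >= min_agreement
            for votes in clusters.values()
        )
        # Require at least two distinct perspectives before trusting the note.
        if consensus and len(clusters) >= 2:
            published.append(note_id)
    return published

print(notes_to_publish(RATINGS))  # ['note-1']
```

The design point is that a note disliked by one side of a divide never surfaces, which is what “requiring agreement across diverse perspectives” means in practice.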

According to its website, Meta had been working with a variety of organizations “certified through the non-partisan International Fact-Checking Network (IFCN) or European Fact-Checking Standards Network (EFCSN) to identify, review and take action” on content deemed “misinformation.”

However, as Zuck opined in his video post, “after Trump first got elected in 2016 the legacy media wrote non-stop about how misinformation was a threat to democracy. We tried, in good faith, to address those concerns without becoming the arbiters of truth, but the fact checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the U.S.”

Zuck also added that: “There’s been widespread debate about potential harms from online content. Governments and legacy media have pushed to censor more and more. A lot of this is clearly political.”

According to Kaplan, the shift aims to reduce the perceived censorship that arose from the previous fact-checking program, which often applied intrusive labels to legitimate political speech.

Loosening restrictions on political and sensitive topics

Meta is revising its content policies to allow more discourse on politically sensitive topics like immigration and gender identity. Kaplan pointed out that it is inconsistent for such topics to be debated in public forums like Congress or on television but restricted on Meta’s platforms.

Automated systems, which have previously been used to enforce policies across a wide range of issues, will now focus primarily on tackling illegal and severe violations, such as terrorism and child exploitation.

For less critical issues, the platform will rely more on user reports and human reviewers. Meta will also reduce content demotions for material flagged as potentially problematic unless there is strong evidence of a violation.
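
Neither Zuck nor Kaplan described the mechanics, but the routing policy they outline, automated action only for high-severity categories and report-driven human review for everything else, might look roughly like the toy Python sketch below. The categories, thresholds, and names are illustrative assumptions, not Meta’s code:

```python
from dataclasses import dataclass

# Hypothetical set of categories still subject to automated enforcement.
HIGH_SEVERITY = {"terrorism", "child_exploitation"}

@dataclass
class Flag:
    post_id: str
    category: str        # policy area the classifier flagged
    confidence: float    # classifier score in [0, 1]
    user_reported: bool  # did a user report this post?

def route(flag: Flag, high_conf: float = 0.97) -> str:
    """Route a flagged post: automated removal only for high-severity,
    high-confidence hits; lower-severity content waits for a user report
    and human review; otherwise take no action (no demotion without
    strong evidence of a violation)."""
    if flag.category in HIGH_SEVERITY and flag.confidence >= high_conf:
        return "auto_remove"
    if flag.user_reported:
        return "human_review"
    return "no_action"

print(route(Flag("p1", "terrorism", 0.99, False)))       # auto_remove
print(route(Flag("p2", "misinformation", 0.80, True)))   # human_review
print(route(Flag("p3", "misinformation", 0.80, False)))  # no_action
```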

However, the pullback from automated systems would seem to fly in the face of Meta’s promotion of AI as a valuable tool in its own business offerings: why should anyone else trust Meta’s AI models, such as the Llama family, if Meta itself isn’t willing to rely on them to moderate content?

A reduction in content takedowns coming?

As Zuck put it, a big problem with Facebook’s automated systems is overly broad censorship.

He stated in his video address: “We built a lot of complex systems to moderate content, but the problem with complex systems is they make mistakes. Even if they accidentally censor just 1% of posts, that’s millions of people, and we’ve reached a point where it’s just too many mistakes and too much censorship.”

Meta acknowledges that mistakes in content moderation have been a persistent issue. Kaplan noted that while less than 1% of daily content is removed, an estimated 10-20% of these actions may be errors. To address this, Meta plans to:

• Publish transparency reports detailing moderation mistakes and progress.

• Require multiple reviewers to confirm decisions before content is removed.

• Use advanced AI systems, including large language models (LLMs), for second opinions on enforcement actions, as sketched below.
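
As a rough illustration of how the last two bullets could combine, the hypothetical Python sketch below removes content only when multiple human reviewers independently confirm a violation and an automated second opinion agrees. The LLM call is stubbed out, and every name and threshold is an assumption for illustration, not Meta’s implementation:

```python
from typing import Callable, List

def llm_second_opinion(post_text: str) -> bool:
    """Stand-in for asking an LLM whether the post violates policy.
    In a real system this would be a model API call; here it's a stub."""
    return "forbidden" in post_text.lower()

def should_remove(
    post_text: str,
    reviewer_votes: List[bool],
    min_confirmations: int = 2,
    second_opinion: Callable[[str], bool] = llm_second_opinion,
) -> bool:
    """Remove only if enough human reviewers independently confirm the
    violation AND the automated second opinion agrees."""
    confirmations = sum(reviewer_votes)
    return confirmations >= min_confirmations and second_opinion(post_text)

print(should_remove("this post contains forbidden content", [True, True]))   # True
print(should_remove("this post contains forbidden content", [True, False]))  # False
print(should_remove("an innocuous post", [True, True]))                      # False
```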

Additionally, the company is relocating its trust and safety teams from California to other U.S. locations, including Texas, to address perceptions of bias, a move some have already poked fun at on various social channels: are people in Texas really less biased than those in California?

The return of political content…and ‘fake news’?

Since 2021, Meta has limited the visibility of civic and political content on its platforms in response to user feedback.

However, the company now plans to reintroduce this content in a more personalized manner.

Users who wish to see more political content will have greater control over their feeds, with Meta using explicit signals like likes and implicit behaviors such as post views to determine preferences.
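
As a toy illustration of blending explicit and implicit signals, the sketch below opts a user into civic content when either kind of signal is strong. All field names and thresholds are invented for illustration; Meta has not described its actual scoring:

```python
from dataclasses import dataclass

@dataclass
class UserSignals:
    political_likes: int   # explicit: likes on civic/political posts
    political_views: int   # implicit: civic/political posts viewed
    total_views: int       # all posts viewed

def wants_political_content(s: UserSignals,
                            like_threshold: int = 5,
                            view_share_threshold: float = 0.2) -> bool:
    """Toy heuristic: treat a user as opted in to civic content if they
    explicitly like such posts often, or implicitly dwell on them a lot."""
    if s.political_likes >= like_threshold:
        return True
    if s.total_views and s.political_views / s.total_views >= view_share_threshold:
        return True
    return False

print(wants_political_content(UserSignals(8, 10, 200)))  # True (explicit signal)
print(wants_political_content(UserSignals(1, 60, 200)))  # True (implicit signal)
print(wants_political_content(UserSignals(0, 5, 200)))   # False
```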

However, this reinstatement of political content could risk once again enabling the spread of politically charged misinformation from U.S. adversaries, as seen in the run-up to the 2016 election, when numerous Facebook pages spewed disinformation and conspiracy theories that favored Republicans and disfavored Democratic candidates and policies.

One admitted “fake news” creator told NPR that while they had tried to create content for both liberal and conservative audiences, conservative readers were more interested in, and more credulous about, sharing and re-sharing fake content that aligned with their views.

Such “fake news” was so widespread that it was even joked about on social media itself and in The Onion.

My analysis on what it means for businesses and brand pages

I’ve never owned a business, but I have managed several Facebook and Instagram accounts on behalf of large corporate and smaller startup/non-profit organizations, so I know firsthand about the work that goes into maintaining them, posting, and growing their audiences/followings.

I think that while Meta’s stated commitment to restoring more freedom of expression to its products is laudable, the jury is still out on how the changes will actually affect businesses’ appetite for speaking to their fans and customers on those platforms.

At best, it will be a double-edged sword: less strict content moderation policies will give brands and businesses the chance to post more controversial, experimental, and daring content — and those that take advantage of this may see their messages reach wider audiences, i.e., “go viral.”

On the flip side, brands and businesses may now struggle to get their posts seen and engaged with amid other pages posting even more controversial, politically pointed content.

In addition, the changes could make it easier for users to criticize brands or implicate them in conspiracies, and it may be harder for brands to secure takedowns of such unflattering content, even when it is untrue.

What’s next?

The rollout of Community Notes and policy adjustments is expected to begin in the coming months in the U.S. Meta plans to improve and refine these systems throughout the year.

These initiatives, Kaplan said, aim to balance the need for safety and accuracy with the company’s core value of enabling free expression.

Kaplan said Meta is focused on creating a platform where individuals can freely express themselves. He also acknowledged the challenges of managing content at scale, describing the process as “messy” but essential to Meta’s mission.

For users, these changes promise fewer intrusive interventions and a greater opportunity to shape the conversation on Meta’s platforms.

Whether the new approach will succeed in reducing frustration and fostering open dialogue remains to be seen.

Full transcript of Zuckerberg’s video remarks

Hey, everyone. I want to talk about something important today, because it’s time to get back to our roots around free expression on Facebook and Instagram. I started building social media to give people a voice. I gave a speech at Georgetown five years ago about the importance of protecting free expression, and I still believe this today, but a lot has happened over the last several years.

There’s been widespread debate about potential harms from online content. Governments and legacy media have pushed to censor more and more. A lot of this is clearly political, but there’s also a lot of legitimately bad stuff out there: drugs, terrorism, child exploitation. These are things that we take very seriously, and I want to make sure that we handle responsibly. So we built a lot of complex systems to moderate content, but the problem with complex systems is they make mistakes. Even if they accidentally censor just 1% of posts, that’s millions of people, and we’ve reached a point where it’s just too many mistakes and too much censorship.

The recent elections also feel like a cultural tipping point towards, once again, prioritizing speech. So we’re going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms. More specifically, here’s what we’re going to do.

First, we’re going to get rid of fact-checkers and replace them with community notes similar to X, starting in the US. After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried, in good faith, to address those concerns without becoming the arbiters of truth, but the fact-checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the US. So over the next couple of months, we’re going to phase in a more comprehensive community notes system.

Second, we’re going to simplify our content policies and get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse. What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas, and it’s gone too far. So I want to make sure that people can share their beliefs and experiences on our platforms.

Third, we’re changing how we enforce our policies to reduce the mistakes that account for the vast majority of censorship on our platforms. We used to have filters that scanned for any policy violation. Now we’re going to focus those filters on tackling illegal and high-severity violations, and for lower-severity violations, we’re going to rely on someone reporting an issue before we take action. The problem is that the filters make mistakes, and they take down a lot of content that they shouldn’t. So by dialing them back, we’re going to dramatically reduce the amount of censorship on our platforms. We’re also going to tune our content filters to require much higher confidence before taking down content. The reality is that this is a tradeoff. It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.

Fourth, we’re bringing back civic content. For a while, the community asked to see less politics because it was making people stressed, so we stopped recommending these posts. But it feels like we’re in a new era now, and we’re starting to get feedback that people want to see this content again. So we’re going to start phasing this back into Facebook, Instagram, and Threads, while working to keep the communities friendly and positive.

Fifth, we’re going to move our trust and safety and content moderation teams out of California, and our US-based content review is going to be based in Texas. As we work to promote free expression, I think that will help us build trust to do this work in places where there is less concern about the bias of our teams.

Finally, we’re going to work with President Trump to push back on governments around the world that are going after American companies and pushing to censor more. The US has the strongest constitutional protections for free expression in the world. Europe has an ever-increasing number of laws institutionalizing censorship and making it difficult to build anything innovative there. Latin American countries have secret courts that can order companies to quietly take things down. China has censored our apps from even working in the country. The only way that we can push back on this global trend is with the support of the US government, and that’s why it’s been so difficult over the past four years. When even the US government has pushed for censorship by going after us and other American companies, it has emboldened other governments to go even further. But now we have the opportunity to restore free expression, and I am excited to take it.

It’ll take time to get this right, and these are complex systems. They’re never going to be perfect. There’s also a lot of illegal stuff that we still need to work very hard to remove. But the bottom line is that after years of having our content moderation work focused primarily on removing content, it is time to focus on reducing mistakes, simplifying our systems, and getting back to our roots about giving people voice.

I’m looking forward to this next chapter. Stay good out there and more to come soon.
