Meta retreats from fact-checking content: what it means for businesses

Facebook creator and Meta CEO Mark “Zuck” Zuckerberg shook the world again today when he announced sweeping changes to the way his company moderates and handles user-generated posts and content in the U.S.

Citing the “recent elections” as a “cultural tipping point,” Zuck explained in a roughly five-minute video posted to his Facebook and Instagram accounts this morning (Tuesday, January 7) that Meta would stop using independent third-party fact-checkers and fact-checking organizations to help moderate and append notes to user posts shared across the company’s suite of social networking and messaging apps, including Facebook, Instagram, WhatsApp, Threads, and more.

Instead, Zuck said Meta would rely on a “Community Notes”-style approach, crowdsourcing context and accuracy assessments from users across Meta’s apps, similar to (as Zuck acknowledged in his video) the rival social network X (formerly Twitter).

Zuck cast the changes as a return to Facebook’s “roots” around free expression and a reduction of over-broad “censorship.” See the full transcript of his remarks at the bottom of this article.

Why this announcement and policy change matters to businesses

With more than 3 billion users across its services and products worldwide, Meta remains the largest social network in the world. In addition, as of 2022, more than 200 million businesses worldwide used the company’s apps and services — most of them small — and 10 million were active paying advertisers on the platform, according to one executive.

Meta’s new chief global affairs officer Joel Kaplan, a former deputy chief of staff to Republican President George W. Bush, also published a note on Meta’s corporate website describing some of the changes in greater detail. Kaplan recently took on the role in what many viewed as a signal to lawmakers and the wider world that Meta is willing to work with the GOP-led Congress and White House following the 2024 election.

Already, some business executives such as Shopify’s CEO Tobi Lutke have seemingly embraced the announcement. As Lutke wrote on X today: “Huge and important change.”

Founders Fund chief marketing officer and tech influencer Mike Solana also hailed the move, writing in a post on X: “There’s already been a dramatic decrease in censorship across the meta platforms. but a public statement of this kind plainly speaking truth (the “fact checkers” were biased, and the policy was immoral) is really and finally the end of a golden age for the worst people alive.”

However, others are less receptive to the changes, viewing them less as a matter of free expression and more as an effort to curry favor with the incoming Republican administration of President-elect Donald J. Trump, now headed to his second, non-consecutive term, and the GOP-led Congress, as other business executives and firms have seemingly moved to do.

“More free expression on social media is a good thing,” wrote the non-profit Freedom of the Press Foundation on the social network BlueSky (disclosure: my wife is a board member of the non-profit). “But based on Meta’s track record, it seems more likely that this is about sucking up to Donald Trump than it is about free speech.”

George Washington University political communication professor Dave Karpf seemed to agree, writing on BlueSky: “Two salient facts about Facebook replacing its fact-checking program with community notes: (1) community notes are cheaper. (2) the incoming political regime dislikes fact-checking. So community notes are less trouble. The rest is just framing. Zuck’s sole principle is to do what’s best for Zuck.”

And Kate Starbird, professor at the University of Washington and co-founder of the UW Center for an Informed Public, wrote on BlueSky that: “Meta is dropping its support for fact-checking, which, in addition to degrading users’ ability to verify content, will essentially defund all of the little companies that worked to identify false content online. But our FB feeds are basically just AI slop at this point, so?”

When will the changes take place?

Both Zuck and Kaplan stated in their respective video and text posts that the changes to Meta’s content moderation policies and practices would be coming to the U.S. in “the next couple of months.”

Meta will discontinue its independent fact-checking program in the United States, launched in 2016, in favor of a Community Notes model inspired by X (formerly Twitter). This system will rely on users to write and rate notes, requiring agreement across diverse perspectives to ensure balance and prevent bias.
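
For readers curious what “agreement across diverse perspectives” can mean in practice, below is a minimal, hypothetical Python sketch of such a rule. The rater clusters, thresholds, and data shapes are illustrative assumptions only; X’s production Community Notes system uses a more elaborate matrix-factorization model, and Meta has not published the details of its own version.

```python
from collections import defaultdict

# Hypothetical, simplified illustration of a Community Notes-style rule:
# a note is published only when raters from *different* viewpoint clusters
# agree it is helpful. Cluster labels, thresholds, and data shapes are
# invented for illustration.

def note_is_publishable(ratings, min_per_cluster=3, min_helpful_ratio=0.7):
    """ratings: list of (rater_cluster, is_helpful) tuples."""
    by_cluster = defaultdict(list)
    for cluster, is_helpful in ratings:
        by_cluster[cluster].append(is_helpful)

    # Only count clusters with enough raters, and require at least two
    # distinct clusters so that one viewpoint cannot decide alone.
    qualified = {c: votes for c, votes in by_cluster.items()
                 if len(votes) >= min_per_cluster}
    if len(qualified) < 2:
        return False

    # Each qualified cluster must independently find the note helpful.
    return all(sum(votes) / len(votes) >= min_helpful_ratio
               for votes in qualified.values())

# A note rated helpful by raters in two different clusters is publishable.
ratings = [("A", True)] * 4 + [("B", True)] * 3 + [("B", False)]
print(note_is_publishable(ratings))  # True
```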

According to its website, Meta had been working with a variety of organizations “certified through the non-partisan International Fact-Checking Network (IFCN) or European Fact-Checking Standards Network (EFCSN) to identify, review and take action” on content deemed “misinformation.”

However, as Zuck opined in his video post, “after Trump first got elected in 2016 the legacy media wrote non-stop about how misinformation was a threat to democracy. We tried, in good faith, to address those concerns without becoming the arbiters of truth, but the fact checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the U.S.”

Zuck also added that: “There’s been widespread debate about potential harms from online content. Governments and legacy media have pushed to censor more and more. A lot of this is clearly political.”

According to Kaplan, the shift aims to reduce the perceived censorship that arose from the previous fact-checking program, which often applied intrusive labels to legitimate political speech.

Loosening restrictions on political and sensitive topics

Meta is revising its content policies to allow more discourse on politically sensitive topics like immigration and gender identity. Kaplan pointed out that it is inconsistent for such topics to be debated in public forums like Congress or on television but restricted on Meta’s platforms.

Automated systems, which have previously been used to enforce policies across a wide range of issues, will now focus primarily on tackling illegal and severe violations, such as terrorism and child exploitation.

For less critical issues, the platform will rely more on user reports and human reviewers. Meta will also reduce content demotions for material flagged as potentially problematic unless there is strong evidence of a violation.
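
As a rough illustration of how such a tiered pipeline might be wired together, here is a short, hypothetical Python sketch. The severity categories, confidence thresholds, and action names are invented for illustration and are not Meta’s actual policy engine.

```python
from dataclasses import dataclass

# Hypothetical sketch of the tiered enforcement described above: automated
# filters act only on high-severity categories, lower-severity content waits
# for a user report plus human review, and demotion requires strong evidence.

HIGH_SEVERITY = {"terrorism", "child_exploitation"}

@dataclass
class Post:
    post_id: str
    predicted_label: str   # output of an upstream classifier
    confidence: float      # classifier confidence in [0, 1]
    user_reported: bool = False

def enforce(post: Post) -> str:
    if post.predicted_label in HIGH_SEVERITY and post.confidence >= 0.95:
        return "remove_automatically"
    if post.user_reported:
        return "queue_for_human_review"
    if post.confidence >= 0.90:
        # "Strong evidence" of a violation: demote rather than remove.
        return "demote"
    return "no_action"

print(enforce(Post("p1", "terrorism", 0.98)))                 # remove_automatically
print(enforce(Post("p2", "spam", 0.60, user_reported=True)))  # queue_for_human_review
print(enforce(Post("p3", "borderline_claim", 0.91)))          # demote
```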

However, the reduction of automated systems would seem to fly in the face of Meta’s promotion of AI as a valuable tool in its own business offerings — why should anyone else trust Meta’s AI models such as the Llama family if Meta itself isn’t content to use them to moderate content?

A reduction in content takedowns coming?

As Zuck put it, a big problem with Facebook’s automated systems is overly broad censorship.

He stated in his video address, “we built a lot of complex systems to moderate content, but the problem with complex systems is they make mistakes, even if they accidentally censor just 1% of posts, that’s millions of people, and we’ve reached a point where it’s just too many mistakes and too much censorship.”

Meta acknowledges that mistakes in content moderation have been a persistent issue. Kaplan noted that while less than 1% of daily content is removed, an estimated 10-20% of these actions may be errors. To address this, Meta plans to:

• Publish transparency reports detailing moderation mistakes and progress.

• Require multiple reviewers to confirm decisions before content is removed.

• Use advanced AI systems, including large language models, to provide second opinions on enforcement actions (see the sketch below).
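
A simplified sketch of how the second and third safeguards might combine is shown here; the reviewer threshold, the ordering of checks, and the callable interface are assumptions for illustration, not a published Meta workflow.

```python
from typing import Callable, List

# Hypothetical sketch of combining the "multiple reviewers" and "LLM second
# opinion" safeguards listed above. All names and thresholds are assumed.

def should_remove(post_text: str,
                  reviewer_votes: List[bool],
                  llm_second_opinion: Callable[[str], bool],
                  min_reviewers: int = 2) -> bool:
    # Removal requires enough human reviewers, all agreeing the post violates policy...
    if len(reviewer_votes) < min_reviewers or not all(reviewer_votes):
        return False
    # ...and an AI-based second opinion confirming the decision.
    return llm_second_opinion(post_text)

# Dummy second opinion used only for demonstration.
always_confirms = lambda text: True
print(should_remove("example post", [True, True], always_confirms))   # True
print(should_remove("example post", [True, False], always_confirms))  # False
```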

Additionally, the company is relocating its trust and safety teams from California to other U.S. locations, including Texas, to address perceptions of bias, a move some have already poked fun at on various social channels: are people in Texas really any less biased than those in California?

The return of political content…and ‘fake news’?

Since 2021, Meta has limited the visibility of civic and political content on its platforms in response to user feedback.

However, the company now plans to reintroduce this content in a more personalized manner.

Users who wish to see more political content will have greater control over their feeds, with Meta using explicit signals like likes and implicit behaviors such as post views to determine preferences.
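
As a rough illustration of how explicit and implicit signals could be blended into a single preference, here is a hypothetical scoring sketch; the signal names, weights, and opt-in threshold are invented for illustration and do not reflect Meta’s actual ranking system.

```python
# Hypothetical sketch of blending explicit signals (likes) and implicit
# signals (views) into a per-user appetite score for civic content.

def civic_content_score(likes_on_civic: int, views_of_civic: int,
                        total_likes: int, total_views: int) -> float:
    explicit = likes_on_civic / total_likes if total_likes else 0.0
    implicit = views_of_civic / total_views if total_views else 0.0
    # Weight deliberate actions (likes) more heavily than passive views.
    return 0.7 * explicit + 0.3 * implicit

def show_more_civic_content(score: float, threshold: float = 0.15) -> bool:
    return score >= threshold

score = civic_content_score(likes_on_civic=12, views_of_civic=300,
                            total_likes=40, total_views=2500)
print(round(score, 3), show_more_civic_content(score))  # 0.246 True
```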

However, reinstating political content risks once again opening the door to politically charged misinformation from U.S. adversaries, as seen in the run-up to the 2016 election, when numerous Facebook pages spewed disinformation and conspiracy theories that favored Republicans and disfavored Democratic candidates and policies.

One admitted “fake news” creator told NPR that while they had tried to create content for both liberal and conservative audiences, the latter proved far more interested in, and gullible about, sharing and re-sharing fake content that aligned with their views.

Such “fake news” was so widespread, it was even joked about on social media itself and in The Onion.

My analysis on what it means for businesses and brand pages

I’ve never owned a business, but I have managed several Facebook and Instagram accounts on behalf of large corporate and smaller startup/non-profit organizations, so I know firsthand about the work that goes into maintaining them, posting, and growing their audiences/followings.

I think that while Meta’s stated commitment to restoring more freedom of expression to its products is laudable, the jury is still out on how it will actually affect businesses’ willingness to speak to their fans and customers on those products.

At best, it will be a double-edged sword: less strict content moderation policies will give brands and businesses the chance to post more controversial, experimental, and daring content — and those that take advantage of this may see their messages reach wider audiences, i.e., “go viral.”

On the flip side, brands and businesses may now struggle to get their posts seen and reacted to amid other pages posting even more controversial, politically pointed content.

In addition, the changes could make it easier for users to criticize brands or implicate them in conspiracies, and it may be harder for the brands to force takedowns of such unflattering content about them — even when untrue.

What’s next?

The rollout of Community Notes and policy adjustments is expected to begin in the coming months in the U.S. Meta plans to improve and refine these systems throughout the year.

These initiatives, Kaplan said, aim to balance the need for safety and accuracy with the company’s core value of enabling free expression.

Kaplan said Meta is focused on creating a platform where individuals can freely express themselves. He also acknowledged the challenges of managing content at scale, describing the process as “messy” but essential to Meta’s mission.

For users, these changes promise fewer intrusive interventions and a greater opportunity to shape the conversation on Meta’s platforms.

Whether the new approach will succeed in reducing frustration and fostering open dialogue remains to be seen.

Full transcript of Mark Zuckerberg’s video remarks

Hey, everyone. I want to talk about something important today, because it’s time to get back to our roots around free expression on Facebook and Instagram. I started building social media to give people a voice. I gave a speech at Georgetown five years ago about the importance of protecting free expression, and I still believe this today, but a lot has happened over the last several years.

There’s been widespread debate about potential harms from online content. Governments and legacy media have pushed to censor more and more. A lot of this is clearly political, but there’s also a lot of legitimately bad stuff out there: drugs, terrorism, child exploitation. These are things that we take very seriously, and I want to make sure that we handle responsibly. So we built a lot of complex systems to moderate content, but the problem with complex systems is they make mistakes. Even if they accidentally censor just 1% of posts, that’s millions of people, and we’ve reached a point where it’s just too many mistakes and too much censorship.

The recent elections also feel like a cultural tipping point towards, once again, prioritizing speech. So we’re going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms. More specifically, here’s what we’re going to do.

First, we’re going to get rid of fact-checkers and replace them with community notes similar to X, starting in the US. After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried, in good faith, to address those concerns without becoming the arbiters of truth, but the fact-checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the US. So over the next couple of months, we’re going to phase in a more comprehensive community notes system.

Second, we’re going to simplify our content policies and get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse. What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas, and it’s gone too far. So I want to make sure that people can share their beliefs and experiences on our platforms.

Third, we’re changing how we enforce our policies to reduce the mistakes that account for the vast majority of censorship on our platforms. We used to have filters that scanned for any policy violation. Now we’re going to focus those filters on tackling illegal and high-severity violations, and for lower-severity violations, we’re going to rely on someone reporting an issue before we take action. The problem is that the filters make mistakes, and they take down a lot of content that they shouldn’t. So by dialing them back, we’re going to dramatically reduce the amount of censorship on our platforms. We’re also going to tune our content filters to require much higher confidence before taking down content. The reality is that this is a tradeoff. It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.

Fourth, we’re bringing back civic content. For a while, the community asked to see less politics because it was making people stressed, so we stopped recommending these posts. But it feels like we’re in a new era now, and we’re starting to get feedback that people want to see this content again. So we’re going to start phasing this back into Facebook, Instagram, and Threads, while working to keep the communities friendly and positive.

Fifth, we’re going to move our trust and safety and content moderation teams out of California, and our US-based content review is going to be based in Texas. As we work to promote free expression, I think that will help us build trust to do this work in places where there is less concern about the bias of our teams.

Finally, we’re going to work with President Trump to push back on governments around the world that are going after American companies and pushing to censor more. The US has the strongest constitutional protections for free expression in the world. Europe has an ever-increasing number of laws institutionalizing censorship and making it difficult to build anything innovative there. Latin American countries have secret courts that can order companies to quietly take things down. China has censored our apps from even working in the country. The only way that we can push back on this global trend is with the support of the US government, and that’s why it’s been so difficult over the past four years. When even the US government has pushed for censorship by going after us and other American companies, it has emboldened other governments to go even further. But now we have the opportunity to restore free expression, and I am excited to take it.

It’ll take time to get this right, and these are complex systems. They’re never going to be perfect. There’s also a lot of illegal stuff that we still need to work very hard to remove. But the bottom line is that after years of having our content moderation work focused primarily on removing content, it is time to focus on reducing mistakes, simplifying our systems, and getting back to our roots about giving people voice.

I’m looking forward to this next chapter. Stay good out there and more to come soon.
