
Can crowdsourced fact-checking curb misinformation on social media?


Provided by Mohamed bin Zayed University of Artificial Intelligence

In a 2019 speech at Georgetown University, Mark Zuckerberg famously declared that he didn’t want Facebook to be an “arbiter of truth.” And yet, in the years since, his company, Meta, has used several methods to moderate content and identify misleading posts across its social media apps, which include Facebook, Instagram, and Threads. These methods have included automatic filters that identify illegal and malicious content, and third-party factcheckers who manually research the validity of claims made in certain posts.

Zuckerberg explained that while Meta has put a lot of effort into building “complex systems to moderate content,” these systems have made many mistakes over the years, with the result being “too much censorship.” The company therefore announced that it would be ending its third-party factchecker program in the US, replacing it with a system called Community Notes, which relies on users to flag false or misleading content and provide context about it.

While Community Notes has the potential to be extremely effective, the difficult job of content moderation benefits from a mix of different approaches. As a professor of natural language processing at MBZUAI, I’ve spent most of my career researching disinformation, propaganda, and fake news online. So, one of the first questions I asked myself was: will replacing human factcheckers with crowdsourced Community Notes have negative impacts on users?

Wisdom of crowds

Community Notes got its start on Twitter as Birdwatch. It’s a crowdsourced feature where users who participate in the program can add context and clarification to tweets they deem false or misleading. The notes are hidden until community evaluation reaches a consensus—meaning, people who hold different perspectives and political views agree that a post is misleading. An algorithm determines when the threshold for consensus is reached, and then the note becomes publicly visible beneath the tweet in question, providing additional context to help users make informed judgments about its content.
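
To make the mechanism concrete, here is a deliberately simplified sketch of that consensus idea in Python. It is an illustration only: the real Community Notes algorithm is more sophisticated (it scores notes from users’ full rating histories), and the group labels, thresholds, and function names here are hypothetical.

```python
# Illustrative sketch only: a simplified stand-in for the consensus logic
# described above. A note is shown only if raters from different viewpoint
# groups independently find it helpful, so agreement must bridge perspectives.
from collections import defaultdict

def note_reaches_consensus(ratings, min_per_group=5, min_helpful_share=0.7):
    """ratings: list of (viewpoint_group, rated_helpful) tuples for one note."""
    by_group = defaultdict(list)
    for group, helpful in ratings:
        by_group[group].append(helpful)

    if len(by_group) < 2:  # need agreement across more than one perspective
        return False
    for votes in by_group.values():
        if len(votes) < min_per_group:  # not enough raters in this group
            return False
        if sum(votes) / len(votes) < min_helpful_share:  # group doesn't agree
            return False
    return True

# Example: raters from two self-identified viewpoint groups mostly agree.
ratings = [("A", True)] * 6 + [("A", False)] + [("B", True)] * 5 + [("B", False)]
print(note_reaches_consensus(ratings))  # True: the note would become visible
```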

Community Notes seems to work rather well. A team of researchers from the University of Illinois Urbana-Champaign and the University of Rochester found that X’s Community Notes program can reduce the spread of misinformation, leading to post retractions by authors. Facebook is largely adopting the same approach that is used on X today.

Having studied and written about content moderation for years, I’m glad to see another major social media company implementing crowdsourcing for content moderation. If it works for Meta, it could be a true game-changer for the more than 3 billion people who use the company’s products every day.

That said, content moderation is a complex problem. There is no one silver bullet that will work in all situations. The challenge can only be addressed by employing a variety of tools that include human factcheckers, crowdsourcing, and algorithmic filtering. Each of these is best suited to different kinds of content, and can and must work in concert.

Spam and LLM safety

There are precedents for addressing similar problems. Decades ago, spam email was a much bigger problem than it is today. In large part, we’ve defeated spam through crowdsourcing. Email providers introduced reporting features, where users can flag suspicious emails. The more widely distributed a particular spam message is, the more likely it will be caught, as it’s reported by more people.
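
As a rough illustration of how crowdsourced reporting scales with how widely a spam message is distributed, here is a hedged Python sketch. The fingerprinting scheme, threshold, and function names are assumptions made for the example, not any email provider’s actual system.

```python
# Hedged illustration: the more copies of a message that users report, the
# sooner it crosses a blocking threshold for everyone who receives it later.
import hashlib

REPORT_THRESHOLD = 25   # hypothetical number of reports before blocking
report_counts = {}      # message fingerprint -> number of user reports

def fingerprint(message: str) -> str:
    """Normalize and hash a message so near-identical spam copies collide."""
    normalized = " ".join(message.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def report_spam(message: str) -> None:
    key = fingerprint(message)
    report_counts[key] = report_counts.get(key, 0) + 1

def is_spam(message: str) -> bool:
    return report_counts.get(fingerprint(message), 0) >= REPORT_THRESHOLD

# A widely distributed spam message accumulates reports from many users...
for _ in range(30):
    report_spam("You WON a prize!!! Click here")
print(is_spam("you won a prize!!!  click here"))  # True: flagged for later recipients
```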

Another useful comparison is how large language models (LLMs) approach harmful content. For the most dangerous queries—related to weapons or violence, for example—many LLMs simply refuse to answer. Other times, these systems may add a disclaimer to their outputs, such as when they are asked to provide medical, legal, or financial advice. This tiered approach is one that my colleagues and I at MBZUAI explored in a recent study, in which we propose a hierarchy of ways LLMs can respond to different kinds of potentially harmful queries. Similarly, social media platforms can benefit from different approaches to content moderation.
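
The tiered idea can be sketched in a few lines of Python. This is a minimal illustration of the general principle, assuming a toy keyword-based classifier; it is not the taxonomy or implementation from our study, and all category names and phrases are placeholders.

```python
# Minimal sketch of a tiered response policy: refuse the most dangerous
# queries, answer sensitive-but-legitimate ones with a disclaimer, and
# answer everything else normally. Keywords and wording are hypothetical.
DISCLAIMER = ("Note: this is general information, not professional "
              "medical, legal, or financial advice.")

def classify_query(query: str) -> str:
    q = query.lower()
    if any(term in q for term in ("build a weapon", "make explosives")):
        return "refuse"
    if any(term in q for term in ("diagnosis", "lawsuit", "invest")):
        return "disclaim"
    return "answer"

def respond(query: str, generate) -> str:
    tier = classify_query(query)
    if tier == "refuse":
        return "I can't help with that request."
    answer = generate(query)          # call the underlying model here
    if tier == "disclaim":
        return f"{DISCLAIMER}\n\n{answer}"
    return answer

# Usage with a stand-in for a real model call:
print(respond("Should I invest my savings in one stock?", lambda q: "Diversification..."))
```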

Automatic filters can be used to identify the most dangerous information, preventing users from seeing and sharing it. These automated systems are fast, but they can only be used for certain kinds of content because they aren’t capable of the nuance required for most content moderation.

Crowdsourced approaches like Community Notes can flag potentially harmful content by relying on the knowledge of users. They are slower than automated systems but faster than professional factcheckers.

Professional factcheckers take the most time to do their work, but the analyses they provide are deeper compared to Community Notes, which are limited to 500 characters. Factcheckers typically work as a team and benefit from shared knowledge. They are often trained to analyze the logical structure of arguments, identifying rhetorical techniques frequently employed in mis- and disinformation campaigns. But the work of professional factcheckers can’t scale in the same way Community Notes can. That’s why these three methods are most effective when they are used together.
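
One way to picture these three layers working in concert is as a triage pipeline: fast automated filters first, crowdsourced notes next, and professional factcheckers reserved for the highest-stakes cases. The sketch below is a hypothetical illustration; its thresholds, component names, and escalation rule are assumptions, not any platform’s actual workflow.

```python
# Hedged sketch of combining the three moderation layers described above.
def moderate(post, auto_filter, request_community_note, escalate_to_factcheckers):
    # Layer 1: fast automated filtering for clearly prohibited content.
    if auto_filter(post) == "block":
        return "removed"

    # Layer 2: crowdsourcing adds context at scale, more slowly than filters.
    note = request_community_note(post)
    if note is not None:
        post["context_note"] = note

    # Layer 3: deep, slow analysis reserved for high-reach or contested posts.
    if post.get("reach", 0) > 1_000_000 or (note and note.get("contested")):
        escalate_to_factcheckers(post)

    return "published"

# Example with stub components standing in for real systems:
result = moderate(
    {"id": 1, "reach": 2_000_000},
    auto_filter=lambda p: "allow",
    request_community_note=lambda p: {"text": "Missing context...", "contested": False},
    escalate_to_factcheckers=lambda p: print("queued for factcheck:", p["id"]),
)
print(result)  # "published", with a context note attached and a factcheck queued
```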

Indeed, Community Notes have been found to amplify the work done by factcheckers so it reaches more users. Another study found that Community Notes and factchecking complement each other, as they focus on different types of accounts, with Community Notes tending to analyze posts from large accounts that have high “social influence.” When Community Notes and factcheckers do converge on the same posts, however, their assessments are similar. A separate study found that crowdsourced content moderation itself benefits from the findings of professional factcheckers.

A path forward

At its heart, content moderation is extremely difficult because it is about how we determine truth—and there is much we don’t know. Even scientific consensus, built over years by entire disciplines, can change over time.

That said, platforms shouldn’t retreat from the difficult task of moderating content altogether—or become overly dependent on any single solution. They must continuously experiment, learn from their failures, and refine their strategies. As it’s been said, the difference between people who succeed and people who fail is that successful people have failed more times than others have even tried.

This content was produced by the Mohamed bin Zayed University of Artificial Intelligence. It was not written by MIT Technology Review’s editorial staff.
