
Why security stacks need to think like an attacker, and score every user in real time



More than 40% of corporate fraud is now AI-driven, designed to mimic real users, bypass traditional defenses and scale at speeds that overwhelm even the best-equipped SOCs.

In 2024, nearly 90% of enterprises were targeted, and half of them lost $10 million or more.

Attackers build entire emulation frameworks, synthetic identities and behavioral spoofing tools that let bots imitate human behavior, pulling off account takeovers at scale while slipping past legacy firewalls, EDR tools and siloed fraud detection systems.

Attackers weaponize AI to create bots that evade, mimic, and scale

Attackers aren’t wasting any time capitalizing on AI to weaponize bots in new ways. Last year, malicious bots comprised 24% of all internet traffic, with 49% classified as ‘advanced bots’ designed to mimic human behavior and execute complex interactions, including account takeovers (ATO).

Over 60% of ATO attempts in 2024 were initiated by bots capable of breaching a victim’s credentials in real time using emulation frameworks that mimic human behavior. Attackers’ tradecraft now combines weaponized AI and behavioral attack techniques into a single bot strategy.

That’s proving to be a lethal combination for many enterprises already battling malicious bots whose intrusion attempts often aren’t captured by existing apps and tools in security operations centers (SOCs).

Malicious bot attacks force SOC teams into firefighting mode with little or no warning, depending on how dated their security tech stack is.

“Once amassed by a threat actor, they can be weaponized,” Ken Dunham, director of the threat research unit at Qualys recently said. “Bots have incredible resources and capabilities to perform anonymous, distributed, asynchronous attacks against targets of choice, such as brute force credential attacks, distributed denial of service attacks, vulnerability scans, attempted exploitation and more.”

From fan frenzy to fraud surface: bots corner the market for Taylor Swift tickets  

Bots are the virtual arm of an attacker, able to scale to millions of attempts per second against a targeted enterprise and, increasingly, high-profile events, including concerts by well-known entertainers such as Taylor Swift.

DataDome observes that the worldwide popularity of Taylor Swift’s concerts creates the ROI attackers are looking for to build ticket bots that automate, at scale, what scalpers do. Ticket bots, as DataDome calls them, scoop up massive quantities of tickets to the world’s most popular events, then resell them at significant markups.

The bots flooded Ticketmaster, driving much of a surge of 3.5 billion requests that hit the ticket site and caused it to crash repeatedly. Thousands of fans were unable to access the presale, and ultimately the general ticket sale had to be canceled.

Swarms of weaponized bots locked tens of thousands of Swifties out of her Eras concert tour. VentureBeat has learned of comparable attacks on the online stores and global presence of the world’s leading brands. Dealing with bot attacks at that scale, powered by weaponized AI, is beyond the scope of an e-commerce tech stack to handle – those stacks aren’t built to deal with that level of security threat.

“It’s not just about blocking bots—it’s about restoring fairness,” Benjamin Fabre, CEO of DataDome, told VentureBeat in a recent interview. The company helped See Tickets deflect similar scalping attacks in milliseconds, distinguishing fans from fraud using multi-modal AI and real-time session analysis.

Bot attacks weaponized with AI often start by targeting login and session flows, bypassing endpoints in an attempt not to be detected by standard web application firewalls (WAF) and endpoint detection and response (EDR) tools. Such sophisticated attacks must be tracked and contained in a business’s core security infrastructure, managed from its SOC.

Why SOC teams are now on the front line

Weaponized bots are now a key part of any attacker’s arsenal, capable of scaling beyond what fraud teams alone can contain during an attack. Bots have proven lethal, taking down enterprises’ e-commerce operations or, in the case of Ticketmaster, a best-selling concert tour worth billions in revenue.  

As a result, more enterprises are bolstering the tech stacks supporting their SOCs with online fraud detection (OFD) platforms. Gartner’s Dan Ayoub recently wrote in the firm’s research note Emerging Tech Impact Radar: Online Fraud Detection that “organizations are increasingly waking up to the understanding that ‘fraud is a security problem’ as is becoming evident in adoption of some of the emerging technologies being leveraged today”.

Gartner’s research and VentureBeat’s interviews with CISOs confirm that today’s malicious bot attacks are too fast, stealthy and capable of reconfiguring themselves on the fly for siloed fraud tools to handle. Weaponized bots have long been able to exploit gaps between WAFs, EDR tools and fraud scoring engines, while also evading static rules that are so prevalent in legacy fraud detection systems.

All these factors and more are why CISOs are bringing fraud telemetry into the SOC.

Journey-Time Orchestration is the next wave of online fraud detection (OFD)

AI-enabled bots are constantly learning how to bypass long-standing fraud detection platforms that rely on sporadic or single point-in-time checks, including login validations, transaction scoring and challenge-response tests. These were effective before the widespread weaponization of bots and botnets, but AI-literate adversaries now know how to exploit context switching and, as many deepfake attacks have proven, excel at behavioral mimicry.

Gartner’s research points to Journey-Time Orchestration (JTO) as the defining architecture for the next wave of OFD platforms, one that will help SOCs better contain the onslaught of AI-driven bot attacks. Core to JTO is embedding fraud defenses throughout each monitored digital session and scoring risk continuously from login to checkout to post-transaction behavior.

Journey-Time Orchestration continuously scores risk across the entire user session—from login to post-transaction—to detect AI-driven bots. It replaces single-point fraud checks with real-time, session-wide monitoring to counter behavioral mimicry and context-switching attacks. Source: Gartner, Innovation Insight: IAM Journey-Time Orchestration, Feb. 2025
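The continuous scoring Gartner describes can be sketched as a running risk score that every session event feeds into. The stage weights, decay factor and block threshold below are illustrative assumptions, not values from Gartner or any vendor:

```python
from dataclasses import dataclass, field

# Per-stage weights are illustrative assumptions, not vendor values.
STAGE_WEIGHTS = {"login": 0.5, "browse": 0.2, "checkout": 0.8, "post_transaction": 0.6}

@dataclass
class SessionRisk:
    """Accumulates a running risk score across an entire user journey."""
    score: float = 0.0
    events: list = field(default_factory=list)

    def observe(self, stage: str, anomaly: float) -> float:
        """Blend a per-stage anomaly signal (0.0-1.0) into the session score."""
        weight = STAGE_WEIGHTS.get(stage, 0.3)
        # Exponential blend: older evidence decays, new evidence dominates.
        self.score = 0.7 * self.score + weight * anomaly
        self.events.append((stage, anomaly, round(self.score, 3)))
        return self.score

    def verdict(self, block_at: float = 0.5) -> str:
        return "block" if self.score >= block_at else "allow"

session = SessionRisk()
session.observe("login", 0.1)        # clean login
session.observe("browse", 0.2)       # normal browsing
session.observe("checkout", 0.9)     # bot-like checkout burst
print(session.verdict())             # prior stages alone would not have blocked
```

The point of the sketch: a session that looked clean at login can still be blocked mid-journey, which is exactly what single point-in-time checks cannot do.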

Who’s establishing an early lead in Journey-Time Orchestration defense

DataDome, Ivanti and Telesign are three companies whose approaches show how shifting security from static checkpoints to continuous, real-time assessment pays off. Each also shows why the future of SOCs must be predicated on real-time data to succeed. All three platforms have progressed to scoring every user interaction down to the API call, delivering greater contextual insight across every behavior, on every device, within each session.

What sets these three companies apart is how they’ve taken on the challenges of hardening fraud prevention, automating core security functions while continually improving user experiences. Each combines these strengths on real-time platforms that are also AI-driven and continually learn – two core requirements to keep up with weaponized AI arsenals that include botnets.

DataDome: thinking like an attacker in real time

DataDome, a category leader in real-time bot defense, has extensive expertise in AI-intensive behavioral modeling; its platform runs more than 85,000 machine learning models simultaneously across 30+ global points of presence (PoPs). That global reach allows it to inspect more than 5 trillion data points daily. Every web, mobile and API request the platform sees is scored in real time (typically within 2 milliseconds) using multi-modal AI that correlates device fingerprinting, IP entropy, browser header consistency and behavioral biometrics.
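One heavily simplified way to picture that multi-modal correlation is a noisy-OR combination of independent per-signal bot probabilities: a request is treated as human only if every signal independently looks human. The signal names and values below are hypothetical illustrations, not DataDome’s actual model:

```python
def score_request(signals: dict) -> float:
    """Combine independent per-signal bot probabilities into one score.

    Noisy-OR: returns the probability that at least one signal
    indicates automation, assuming the signals are independent.
    """
    p_human = 1.0
    for name, p_bot in signals.items():
        p_human *= (1.0 - p_bot)
    return 1.0 - p_human

# Hypothetical per-signal probabilities for a single request.
request_signals = {
    "device_fingerprint_mismatch": 0.10,
    "ip_entropy": 0.05,
    "header_inconsistency": 0.60,  # e.g., Chrome UA on a non-Chrome TLS stack
    "behavioral_biometrics": 0.20,
}
print(round(score_request(request_signals), 3))
```

Note how one strong signal (the header inconsistency) dominates the combined score even when the other signals look mostly human, which is why correlating modalities beats thresholding any single one.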

“Our philosophy is to think like an attacker,” Fabre told VentureBeat. “That means analyzing every request anew—without assuming trust—and continuously retraining our detection models to adapt to zero-day tactics”​.

Unlike legacy systems, which lean on static heuristics or CAPTCHAs, DataDome’s approach minimizes friction for verified, legitimate users. Its false-positive rate is under 0.01%, meaning fewer than 1 in 10,000 human visitors see a challenge screen. Even when challenged, the platform invisibly continues behavior analysis to verify the user’s legitimacy.

“Bots aren’t just solving CAPTCHAs now—they’re solving them faster than humans,” Fabre added. “That’s why we moved away from static challenges entirely. AI is the only way to beat AI-driven fraud at scale”​.

Case in point: See Tickets used DataDome to defend against the same bot-driven scalping wave that crashed Ticketmaster during the Taylor Swift Eras Tour. DataDome distinguished bots from fans in milliseconds and prevented bulk buyouts, preserving ticket equity during peak load. In luxury retail, brands like Hermès deploy DataDome to protect high-demand drops (e.g., Birkin bags) from automated hoarding.

Ivanti extends zero trust and exposure management into the SOC

Ivanti is redefining exposure management by integrating real-time fraud signals directly into SOC workflows through its Ivanti Neurons for Zero Trust Access and Ivanti Neurons for Patch Management platforms. “Zero trust doesn’t stop at logins,” Mike Riemer, Ivanti field CISO, told VentureBeat during a recent interview. “We’ve extended it to session behaviors: credential resets, payment submissions, and profile edits are all potential exploit paths.”

Ivanti Neurons continuously evaluates device posture and identity behavior, flagging anomalous activity and enforcing least-privilege access mid-session. “2025 will mark a turning point,” added Daren Goeson, SVP of product management at Ivanti. “Now defenders can use GenAI to correlate behavior across sessions and predict threats faster than any human team ever could.”
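A minimal sketch of what mid-session posture evaluation looks like in principle, using hypothetical session attributes and thresholds (this is not Ivanti Neurons code, and the field names are assumptions for illustration):

```python
import time

def evaluate_posture(session: dict) -> str:
    """Re-evaluate device posture and identity behavior mid-session,
    enforcing least privilege rather than trusting the initial login."""
    if session["patch_age_days"] > 30:
        return "restrict"    # stale device: drop to least-privilege access
    if session["geo_changed"] and session["sim_swapped"]:
        return "terminate"   # likely account takeover in progress
    if time.time() - session["last_mfa"] > 3600 and session["action"] == "payment":
        return "step_up"     # sensitive action: require fresh MFA
    return "allow"

print(evaluate_posture({
    "patch_age_days": 5, "geo_changed": True, "sim_swapped": True,
    "last_mfa": time.time(), "action": "browse",
}))  # prints "terminate"
```

The design point is that the checks fire on every session event, not just at login, so a SIM swap or geo change detected mid-session can revoke access that was legitimately granted minutes earlier.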

As attack surfaces expand, Ivanti’s platform helps SOC teams detect SIM swaps, mitigate lateral movement and automate dynamic microsegmentation. “What we currently call ‘patch management’ should more aptly be named exposure management: how long is your organization willing to be exposed to a specific vulnerability?” Chris Goettl, VP of product management for endpoint security at Ivanti, told VentureBeat. “Risk-based algorithms help teams identify high-risk threats amid the noise of numerous updates.”

“Organizations should transition from reactive vulnerability management to a proactive exposure management approach,” added Goeson. “By adopting a continuous approach, they can effectively protect their digital infrastructure from modern cyber risks.”

Telesign’s AI-driven identity intelligence pushes fraud detection to session scale

Telesign is redefining digital trust by bringing identity intelligence at session scale to the front lines of fraud detection. By analyzing more than 2,200 digital identity signals, ranging from phone number metadata to device hygiene and IP reputation, Telesign’s APIs deliver real-time risk scores that catch bots and synthetic identities before damage is done.

“AI is the best defense against AI-enabled fraud attacks,” said Telesign CEO Christophe Van de Weyer in a recent interview with VentureBeat. “At Telesign, we are committed to leveraging AI and ML technologies to combat digital fraud, ensuring a more secure and trustworthy digital environment for all.”

Rather than relying on static checkpoints at login or checkout, Telesign’s dynamic risk scoring continuously evaluates behavior throughout the session. “Machine learning has the power to constantly learn how fraudsters behave,” Van de Weyer told VentureBeat. “It can study typical user behaviors to create baselines and build risk models.”
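The baseline-building Van de Weyer describes can be sketched with a simple statistical model; a production system would learn far richer features than the single inter-action timing used here, so treat this purely as an illustration of the baseline-and-deviation idea:

```python
from statistics import mean, stdev

class BehaviorBaseline:
    """Learn a user's typical behavior, then score new activity by deviation."""

    def __init__(self, history: list[float]):
        # e.g., historical seconds between a user's page views or keystrokes
        self.mu = mean(history)
        self.sigma = stdev(history) or 1e-9  # guard against a constant history

    def risk(self, observed: float) -> float:
        """Z-score of a new observation against the learned baseline."""
        return abs(observed - self.mu) / self.sigma

# A human's inter-action timing varies; a bot replays near-constant intervals.
baseline = BehaviorBaseline([1.8, 2.4, 3.1, 2.0, 2.7])  # seconds between actions
print(baseline.risk(0.05) > 3)  # bot-speed action lands far outside the baseline
```

A z-score above some threshold (commonly around 3) flags the action as anomalous; the fraudster behavior models the quote describes work on the same deviation-from-baseline principle, just over thousands of signals at once.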

Telesign’s Verify API underscores its omnichannel strategy, enabling identity verification across SMS, email, WhatsApp, and more, all through a single API. “Verifying customers is so important because many kinds of fraud can often be stopped at the ‘front door,’” Van de Weyer noted in a recent VentureBeat interview.

As generative AI accelerates attacker sophistication, Van de Weyer issued a clear call to action: “The emergence of AI has brought the importance of trust in the digital world to the forefront. Businesses that prioritize trust will emerge as leaders in the digital economy.” With AI as its backbone, Telesign looks to turn trust into a competitive advantage.

Why fraud prevention’s future belongs in the SOC

For fraud protection to scale, it must be integrated into the broader security infrastructure stack and owned by the SOC teams who use it to avert potential attacks. Online fraud detection platforms and apps are proving just as critical as APIs, Identity and Access Management (IAM), EDRs, SIEMs and XDRs. VentureBeat is seeing more security teams in SOCs take greater ownership of validating how consumer transactions are modeled, scored and challenged.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

TotalEnergies farms out 40% participating interest in certain licenses offshore Nigeria to Chevron

@import url(‘https://fonts.googleapis.com/css2?family=Inter:[email protected]&display=swap’); a { color: var(–color-primary-main); } .ebm-page__main h1, .ebm-page__main h2, .ebm-page__main h3, .ebm-page__main h4, .ebm-page__main h5, .ebm-page__main h6 { font-family: Inter; } body { line-height: 150%; letter-spacing: 0.025em; font-family: Inter; } button, .ebm-button-wrapper { font-family: Inter; } .label-style { text-transform: uppercase; color: var(–color-grey); font-weight: 600; font-size: 0.75rem; } .caption-style

Read More »

AI-driven network management gains enterprise trust

The way the full process works is that the raw data feed comes in, and machine learning is used to identify an anomaly that could be a possible incident. That’s where the generative AI agents step up. In addition to the history of similar issues, the agents also look for

Read More »

Chinese cyberspies target VMware vSphere for long-term persistence

Designed to work in virtualized environments The CISA, NSA, and Canadian Cyber Center analysts note that some of the BRICKSTORM samples are virtualization-aware and they create a virtual socket (VSOCK) interface that enables inter-VM communication and data exfiltration. The malware also checks the environment upon execution to ensure it’s running

Read More »

YPF lets contract for Vaca Muerta drilling in Argentina

@import url(‘https://fonts.googleapis.com/css2?family=Inter:[email protected]&display=swap’); a { color: var(–color-primary-main); } .ebm-page__main h1, .ebm-page__main h2, .ebm-page__main h3, .ebm-page__main h4, .ebm-page__main h5, .ebm-page__main h6 { font-family: Inter; } body { line-height: 150%; letter-spacing: 0.025em; font-family: Inter; } button, .ebm-button-wrapper { font-family: Inter; } .label-style { text-transform: uppercase; color: var(–color-grey); font-weight: 600; font-size: 0.75rem; } .caption-style { font-size: 0.75rem; opacity: .6; } #onetrust-pc-sdk [id*=btn-handler], #onetrust-pc-sdk [class*=btn-handler] { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-policy a, #onetrust-pc-sdk a, #ot-pc-content a { color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-pc-sdk .ot-active-menu { border-color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-accept-btn-handler, #onetrust-banner-sdk #onetrust-reject-all-handler, #onetrust-consent-sdk #onetrust-pc-btn-handler.cookie-setting-link { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-consent-sdk .onetrust-pc-btn-handler { color: #c19a06 !important; border-color: #c19a06 !important; } YPF SA has let a 5-year contract to Archer Ltd. for drilling services in the Vaca Muerta area of Argentina. Under the terms of the agreement, Archer will provide and operate seven drilling rigs, equipped with integrated Managed Pressure Drilling (MPD) systems. Two of these rigs will be leased internationally, bringing additional drilling capacity to Argentina. In addition to the firm 5-year term, the contract includes a 2-year extension option. The contract has a total estimated value of $600 million. The Vaca Muerta formation is the main source rock and one of the largest areal stratigraphic units in Neuquén basin, onshore Argentina. 
The formation trends to calcareous sandstones on the western and middle sections of the basin and towards limestones to the east in a shelf. Depositional depths are less than 300 m and extend about 90,000 sq km, of which 30,000 sq km are prospective for unconventional exploitation. On Aug. 6, 2025, YPF acquired interest in two unconventional oil and gas blocks in Vaca Muerta.

Read More »

OPEC+ keeps output increase on hold, approves new quota system

OPEC and its allies (OPEC+) agreed on Sunday, Nov. 30, to keep oil production policy unchanged into early 2026 while approving a new capacity-based quota system that will reshape how the group allocates output from 2027 onward. Meeting virtually, the eight participating OPEC+ members—Algeria, Iraq, Kazakhstan, Kuwait, Oman, Russia, Saudi Arabia, and the UAE—reaffirmed their Nov. 2 decision to maintain current production levels through first-quarter 2026. They will continue meeting monthly to track adherence and discuss any need for additional action, with the next gathering set for Jan. 4, 2026. Since April 2025, the OPEC+ group has introduced about 2.9 million b/d into the market, while continuing to restrict around 3.24 million b/d of supply, which accounts for roughly 3% of global demand. The meeting took place amid renewed US efforts to negotiate a peace agreement between Russia and Ukraine. A successful deal could potentially increase global oil supply if sanctions on Russia are lifted. In parallel, the broader OPEC+ ministerial meeting confirmed group-wide 2026 quotas previously agreed earlier this year, signaling that no fresh changes in baseline targets are planned before the end of next year unless market conditions deteriorate sharply. New Maximum Sustainable Capacity audits Beyond near-term policy, the most consequential move from the Nov. 30 meetings was approval of a new quota framework based on audited Maximum Sustainable Production Capacity (MSC), which will be used to set production baselines starting in 2027. Under the mechanism, OPEC+ will commission third-party audits of most its members’ sustainable production capacity between January and September 2026. A US consultancy, DeGolyer and MacNaughton, will assess most producers, while separate arrangements will be used for Russia and Venezuela and domestic figures for Iran because of sanctions and data-sharing constraints. MSC is defined as the level of output a country can sustain for a

Read More »

EIA: US crude inventories up 600,000 bbl

US crude oil inventories for the week ended Nov. 28, excluding the Strategic Petroleum Reserve, increased by 600,000 bbl from the previous week, according to data from the US Energy Information Administration. At 427.5 million bbl, US crude oil inventories are about 3% below the 5-year average for this time of year, the EIA report indicated. EIA said total motor gasoline inventories increased by 4.5 million bbl from last week and are about 2% below the 5-year average for this time of year. Finished gasoline inventories and blending components inventories both increased last week. Distillate fuel inventories increased by 2.1 million bbl last week and are about 7% below the 5-year average for this time of year. Propane-propylene inventories decreased by 700,000 bbl from last week and are about 15% above the 5-year average for this time of year, EIA said. US crude oil refinery inputs averaged 16.9 million b/d for the week ended Nov. 28, which was 433,000 b/d more than the previous week’s average. Refineries operated at 94.1% of capacity. Gasoline production increased, averaging 9.8 million b/d. Distillate fuel production increased by 53,000 b/d, averaging 5.1 million b/d. US crude oil imports averaged 6.0 million b/d, down 456,000 b/d from the previous week. Over the last 4 weeks, crude oil imports averaged about 5.9 million b/d, 14.4% less than the same 4-week period last year. Total motor gasoline imports averaged 772,000 b/d. Distillate fuel imports averaged 190,000 b/d.

Read More »

CNOOC starts oil production at Weizhou development project

@import url(‘https://fonts.googleapis.com/css2?family=Inter:[email protected]&display=swap’); a { color: var(–color-primary-main); } .ebm-page__main h1, .ebm-page__main h2, .ebm-page__main h3, .ebm-page__main h4, .ebm-page__main h5, .ebm-page__main h6 { font-family: Inter; } body { line-height: 150%; letter-spacing: 0.025em; font-family: Inter; } button, .ebm-button-wrapper { font-family: Inter; } .label-style { text-transform: uppercase; color: var(–color-grey); font-weight: 600; font-size: 0.75rem; } .caption-style { font-size: 0.75rem; opacity: .6; } #onetrust-pc-sdk [id*=btn-handler], #onetrust-pc-sdk [class*=btn-handler] { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-policy a, #onetrust-pc-sdk a, #ot-pc-content a { color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-pc-sdk .ot-active-menu { border-color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-accept-btn-handler, #onetrust-banner-sdk #onetrust-reject-all-handler, #onetrust-consent-sdk #onetrust-pc-btn-handler.cookie-setting-link { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-consent-sdk .onetrust-pc-btn-handler { color: #c19a06 !important; border-color: #c19a06 !important; } CNOOC Ltd. started production at the Weizhou 11-4 Oilfield Adjustment and Satellite Fields Development Project in Beibu Gulf basin of the South China Sea.  The project is in an average water depth of about 43 m and adopted a coordinated development plan with three offshore processing centers and one onshore terminal. Main production comes from a newly built unmanned wellhead platform and a central processing platform which are connected to an existing platform through a trestle bridge. Thirty-five development wells are planned to be commissioned, including 28 production wells and 7 water injection wells. The project is expected to achieve a plateau production of about 16,900 boe/d in 2026. The oil property is light crude. 
CNOOC is operator of the project with 100% interest.

Read More »

Eni, bp start up jointly owned Angolan gas processing project

Azule Energy Holdings Ltd.—a 50:50 joint venture of Eni SPA and bp PLC—has commissioned the onshore natural gas treatment plant to handle production from the first phase of the New Gas Consortium (NGC), Angola’s first dedicated non-associated gas development project. Located at Soyo in Angola’s northern province of Zaire and operated by Azule Energy, the NGC gas plant is equipped with capacities to process 400 MMcfd of gas and 20,000 b/d of condensate from two wellhead platforms in NGC’s Quiluma and Moboquerio shallow-water offshore gas fields, Azule Energy and Eni said in separate statements. Started for construction in October 2023, the new plant began processing its first gas volumes 24 months later during November 2025, putting the project 6 months ahead of its originally planned schedule, Azule Energy said. Additional potential gas feedstock for the onshore gas plant could also come from NGC Phase 1’s Blocks 2, 3, and 15/14 areas, according to Azule Energy’s website. The companies said formal commissioning of the new plant—which connects directly to Angola LNG Ltd.’s Soyo 1.1-bcfd gas plant that produces up to 5.2 million tonnes/year (tpy) of LNG—marks a major step in Angola’s plan to expand non-associated gas production and supply feedstock to Angola LNG. Gas volumes processed at NGC’s plant are delivered to Angola LNG’s plant for export and domestic consumption, with Angola LNG marketing the gas as LNG with condensates sold directly by NGC owners, which include Azule Energy (37.4%), TotalEnergies SE subsidiary TotalEnergies EP Angola Development Gaz (11.8%), Chevron Corp. subsidiary Cabinda Gulf Oil Co. Ltd. (31%), and Sociedade Nacional de Combustíveis de Angola EP (Sonangol) subsidiary Sonangol Pesquisa e Produção SA (19.8%). With the associated gas project now online, “Angola [has taken] a decisive step toward establishing itself as a strategic force in the global natural gas market, [and] Azule Energy

Read More »

PetroNor to withdraw from license offshore Gambia

PetroNor E&P ASA, an Africa-focused independent oil and gas exploration and production company, will withdraw from the exploration license for Block A4 offshore Gambia. The company is relinquishing its rights to the license following the conclusion of discussions with the Gambian government regarding options after the initial exploration phase of the license expired in November. As part of its third-quarter 2025 report, PetroNor said it had entered the discussions to extend the license without a drilling commitment and the outcome “may result in relinquishment of the block.”  PetroNor, which had agreed to terms settling arbitration related to the 1,376-sq km A4 license in 2020, thanked the Ministry of Petroleum and Energy, the Petroleum Commission, and its partner, state-owned Gambia National Petroleum Corp. (GNPC), “for their support and close collaboration over the past years of the license term.” The A4 license lies “within the same proven play trend as Senegal and Sangomar field, a play which is expected to extend southward into The Gambia,” PetroNor notes on its website. PetroNor held a 90% interest in the license. The Gambian government held the remaining 10%. 

Read More »

US approves Nvidia H200 exports to China, raising questions about enterprise GPU supply

Shifting demand scenarios What remains unclear is how much demand Chinese firms will actually generate, given Beijing’s recent efforts to steer its tech companies away from US chips. Charlie Dai, VP and principal analyst at Forrester, said renewed H200 access is likely to have only a modest impact on global supply, as China is prioritizing domestic AI chips and the H200 remains below Nvidia’s latest Blackwell-class systems in performance and appeal. “While some allocation pressure may emerge, most enterprise customers outside China will see minimal disruption in pricing or lead times over the next few quarters,” Dai added. Neil Shah, VP for research and partner at Counterpoint Research, agreed that demand may not surge, citing structural shifts in China’s AI ecosystem. “The Chinese ecosystem is catching up fast, from semi to stack, with models optimized on the silicon and software,” Shah said. Chinese enterprises might think twice before adopting a US AI server stack, he said. Others caution that even selective demand from China could tighten global allocation at a time when supply of high-end accelerators remains stretched, and data center deployments continue to rise.

Read More »

What does Arm need to do to gain enterprise acceptance?

But in 2017, AMD released the Zen architecture, which was equal if not superior to the Intel architecture. Zen made AMD competitive, and it fueled an explosive rebirth for a company that was near death a few years prior. AMD now has about 30% market share, while Intel suffers from a loss of technology as well as corporate leadership. Now, customers have a choice of Intel or AMD, and they don’t have to worry about porting their applications to a new platform like they would have to do if they switched to Arm. Analysts weigh in on Arm Tim Crawford sees no demand for Arm in the data center. Crawford is president of AVOA, a CIO consultancy. In his role, he talks to IT professionals all the time, but he’s not hearing much interest in Arm. “I don’t see Arm really making a dent, ever, into the general-purpose processor space,” Crawford said. “I think the opportunity for Arm is special applications and special silicon. If you look at the major cloud providers, their custom silicon is specifically built to do training or optimized to do inference. Arm is kind of in the same situation in the sense that it has to be optimized.” “The problem [for Arm] is that there’s not necessarily a need to fulfill at this point in time,” said Rob Enderle, principal analyst with The Enderle Group. “Obviously, there’s always room for other solutions, but Arm is still going to face the challenge of software compatibility.” And therein lies what may be Arm’s greatest challenge: software compatibility. Software doesn’t care (usually) if it’s on Intel or AMD, because both use the x86 architecture, with some differences in extensions. But Arm is a whole new platform, and that requires porting and testing. Enterprises generally don’t like disruption —

Read More »

Intel decides to keep networking business after all

That doesn’t explain why Intel made the decision to pursue spin-off in the first place. In July, NEX chief Sachin Katti issued a memo that outlined plans to establish key elements of the Networking and Communications business as a stand-alone company. It looked like a done deal, experts said. Jim Hines, research director for enabling technologies and semiconductors at IDC, declined to speculate on whether Intel could get a decent offer but noted NEX is losing ground. IDC estimates Intel’s market share in overall semiconductors at 6.8% in Q3 2025, which is down from 7.4% for the full year 2024 and 9.2% for the full year 2023. Intel’s course reversal “is a positive for Intel in the long term, and recent improvements in its financial situation may have contributed to the decision to keep NEX in house,” he said. When Tan took over as CEO earlier this year, prioritized strengthening the balance sheet and bringing a greater focus on execution. Divest NEX was aligned with these priorities, but since then, Intel has secured investments from the US Government, Nvidia and SoftBank that have reduced the need to raise cash through other means, Hines notes. “The NEX business will prove to be a strategic asset for Intel as it looks to protect and expand its position in the AI datacenter market. Success in this market now requires processor suppliers to offer a full-stack solution, not just silicon. Scale-up and scale-out networking solutions are a key piece of the package, and Intel will be able to leverage its NEX technologies and software, including silicon photonics, to develop differentiated product offerings in this space,” Hines said.

Read More »

At the Crossroads of AI and the Edge: Inside 1623 Farnam’s Rising Role as a Midwest Interconnection Powerhouse

That was the thread that carried through our recent conversation for the DCF Show podcast, where Severn walked through the role Farnam now plays in AI-driven networking, multi-cloud connectivity, and the resurgence of regional interconnection as a core part of U.S. digital infrastructure. Aggregation, Not Proximity: The Practical Edge Severn is clear-eyed about what makes the edge work and what doesn’t. The idea that real content delivery could aggregate at the base of cell towers, he noted, has never been realistic. The traffic simply isn’t there. Content goes where the network already concentrates, and the network concentrates where carriers, broadband providers, cloud onramps, and CDNs have amassed critical mass. In Farnam’s case, that density has grown steadily since the building changed hands in 2018. At the time an “underappreciated asset,” the facility has since become a meeting point for more than 40 broadband providers and over 60 carriers, with major content operators and hyperscale platforms routing traffic directly through its MMRs. That aggregation effect feeds on itself; as more carrier and content traffic converges, more participants anchor themselves to the hub, increasing its gravitational pull. Geography only reinforces that position. Located on the 41st parallel, the building sits at the historical shortest-distance path for early transcontinental fiber routes. It also lies at the crossroads of major east–west and north–south paths that have made Omaha a natural meeting point for backhaul routes and hyperscale expansions across the Midwest. AI and the New Interconnection Economy Perhaps the clearest sign of Farnam’s changing role is the sheer volume of fiber entering the building. More than 5,000 new strands are being brought into the property, with another 5,000 strands being added internally within the Meet-Me Rooms in 2025 alone. These are not incremental upgrades—they are hyperscale-grade expansions driven by the demands of AI traffic,

Read More »

Schneider Electric’s $2.3 Billion in AI Power and Cooling Deals Sends Message to Data Center Sector

When Schneider Electric emerged from its 2025 North American Innovation Summit in Las Vegas last week with nearly $2.3 billion in fresh U.S. data center commitments, it didn't just notch a big sales win. It arguably put a stake in the ground about who controls the AI power-and-cooling stack over the rest of this decade. Within a single news cycle, Schneider announced:

Together, the deals total about $2.27 billion in U.S. data center infrastructure, a number Schneider confirmed on background with multiple outlets and which Reuters highlighted as a bellwether for AI-driven demand. For the AI data center ecosystem, these contracts function like early-stage fuel supply deals for the power and cooling systems that underpin the "AI factory."

Supply Capacity Agreements: Locking in the AI Supply Chain

Significantly, both deals are structured as supply capacity agreements (SCAs), not traditional one-off equipment purchase orders. Under the SCA model, Schneider is committing dedicated manufacturing lines and inventory to these customers, guaranteeing output of power and cooling systems over a multi-year horizon. In return, Switch and Digital Realty are providing Schneider with forecastable volume and visibility at the scale of gigawatt-class campus build-outs. A Schneider spokesperson told Reuters that the two contracts are phased across 2025 and 2026, underscoring that this arrangement is about pipeline, as opposed to a one-time backlog spike.

That structure does three important things for the market:

Signals confidence that AI demand is durable. You don't ring-fence billions of dollars of factory output for two customers unless you're highly confident the AI load curve runs beyond the current GPU cycle.

Pre-allocates power and cooling the way the industry pre-allocated GPUs. Hyperscalers and neoclouds have already spent two years locking up Nvidia and AMD capacity. These SCAs suggest power trains and thermal systems are joining chips on the list of constrained strategic resources.

Read More »

The Data Center Power Squeeze: Mapping the Real Limits of AI-Scale Growth

As we all know, the data center industry is at a crossroads. As artificial intelligence reshapes an already insatiable digital landscape, the demand for computing power is surging at a pace that outstrips the growth of the U.S. electric grid. An estimated 1,000 new data centers [1] are needed to serve as engines of the AI economy, processing, storing, and analyzing the vast datasets that run everything from generative models to autonomous systems. But this transformation comes with a steep price and a new defining criterion for real estate: power. Our appetite for electricity is now the single greatest constraint on our expansion, threatening to stall the very innovation we enable.

In 2024, U.S. data centers consumed roughly 4% of the nation's total electricity, a figure that is projected to triple by 2030, reaching 12% or more. [2] For AI-driven hyperscale facilities, the numbers are even more staggering. With the largest planned data centers requiring gigawatts of power, enough to supply entire cities, the cumulative demand from all data centers is expected to reach 134 gigawatts by 2030, nearly three times the current load. [3]

This presents a systemic challenge. The U.S. power grid, built for a different era, is struggling to keep pace. Utilities are reporting record interconnection requests, with some regions seeing demand projections that exceed their total system capacity by fivefold. [4] In Virginia and Texas, the epicenters of data center expansion, grid operators are warning of tight supply-demand balances and the risk of blackouts during peak periods. [5]

The problem is not just the sheer volume of power needed, but the speed at which it must be delivered. Data center operators are racing to secure power for projects that could be online in as little as 18 months, but grid upgrades and new generation can take years, if not decades. The result
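The quoted figures hang together arithmetically; a quick back-of-the-envelope check, using only the numbers reported above (the "implied current load" is a derived estimate, not a sourced figure):

```python
# Sanity-check the growth figures quoted in the article.
projected_2030_gw = 134        # cumulative US data center demand by 2030
growth_factor = 3              # "nearly three times the current load"
implied_current_gw = projected_2030_gw / growth_factor
print(f"Implied current load: ~{implied_current_gw:.0f} GW")  # → ~45 GW

share_2024_pct = 4                        # share of US electricity in 2024
projected_share_pct = share_2024_pct * 3  # "projected to triple by 2030"
print(f"Projected 2030 share: {projected_share_pct}% or more")  # → 12% or more
```

The tripled electricity share lines up with the "12% or more" projection, and the implied current load of roughly 45 GW is consistent with 134 GW being "nearly three times" today's demand.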

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn't the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).

In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote $200 billion between them to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft's capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith's claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft's 2020 capital expenditure of "just" $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads.

Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet it has been a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and it is back at CES 2025 with more autonomous tractors and other vehicles.

This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren't enough skilled farm laborers to do the work that its customers need. It's been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that.

He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere's autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can't find labor to fill open positions, he said. "They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they're indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs.

At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. "Let me put it this way," said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and that recently reviewed the 48 agents it built last year. "Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better." Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail.

Models are getting better and hallucinating less, and they're also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we'll cover below), companies can use three or more models to
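The "LLM as a judge" pattern with three or more models can be sketched as simple majority voting over per-model verdicts. A minimal sketch follows; the judge functions are stubs standing in for real model API calls (hypothetical, for illustration only), since no specific provider or evaluation rubric is named in the article:

```python
from collections import Counter

def majority_verdict(verdicts):
    """Return the verdict most judges agreed on (ties go to the first seen)."""
    return Counter(verdicts).most_common(1)[0][0]

# Three stubbed "judge models" scoring an agent's answer as pass/fail.
# In practice each would be a call to a different (ideally cheap) LLM.
def strict_judge(answer):
    return "fail" if "http" in answer else "pass"   # e.g. flags unverified URLs

def length_judge(answer):
    return "pass" if len(answer) < 500 else "fail"  # flags rambling output

def lenient_judge(answer):
    return "pass"

answer = "Paris is the capital of France."
verdicts = [j(answer) for j in (strict_judge, length_judge, lenient_judge)]
print(majority_verdict(verdicts))  # → pass
```

Using an odd number of judges avoids most ties, and as per-call costs fall, adding a fourth or fifth cheap model mainly buys robustness against any single judge's bias.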

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams' advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more.

The first paper, "OpenAI's Approach to External Red Teaming for AI Models and Systems," reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, "Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning," OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It's encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks.

Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI's paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models' security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn't find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »