From alerts to autonomy: How leading SOCs use AI copilots to fight signal overload and staffing shortfalls

Thanks to the rapid advances in AI-powered security copilots, security operations centers (SOCs) are seeing false positive rates drop by up to 70% while saving over 40 hours a week of manual triage.

The latest generation of copilots has moved far beyond chat interfaces. These agentic AI systems are capable of real-time remediation, automated policy enforcement and integrated triage across cloud, endpoint and network domains. Purpose-built to integrate within SIEM, SOAR and XDR pipelines, they’re making solid contributions to improving SOC accuracy, efficiency and speed of response.

Microsoft launched six new Security Copilot agents today—including ones for phishing triage, insider risk, conditional access, vulnerability remediation, and threat intelligence—alongside five partner-built agents, as detailed in Vasu Jakkal’s blog post.

Quantifiable gains in SOC performance are growing. Mean-time-to-restore is improving by 20% or more, and threat detection times have dropped by at least 30% in SOCs deploying these technologies. KPMG reports a 43% boost in triage accuracy among junior analysts when copilots are used.

SOC analysts, speaking to VentureBeat on condition of anonymity, describe how frustrating their jobs become when they have to interpret alerts from multiple systems and manually triage every intrusion alert.

Swivel chair integration is alive and well in many SOCs today, and while it saves on software costs, it burns out the best analysts and leaders. Burnout should not be dismissed as an isolated issue that only happens in SOCs that have analysts doing back-to-back shifts because they’re short-handed. It’s far more pervasive than security leaders realize.  

More than 70% of SOC analysts say they’re burned out, with 66% reporting that half their work is repetitive enough to be automated. Additionally, nearly two-thirds plan to switch roles by 2025, making the need to capitalize on AI’s rapid gains in SOC automation unavoidable.

AI security copilots are gaining traction as more organizations confront the challenges of keeping their SOCs efficient and staffed well enough to contain threats. The latest AI security copilots don’t just accelerate response; they’re proving indispensable in training and retaining staff, eliminating rote, routine work while opening new opportunities for SOC analysts to learn and earn more.

“I do get asked a lot well does that mean you know what SOC analysts are gonna be out of business? No. You know what it means? It means that you can take tier one analysts and turn them into tier three, you can take the eight hours of mundane work and turn it into 10 minutes,” George Kurtz, founder and CEO of CrowdStrike, said at the company’s Fal.Con event last year.

“The way forward is not to eliminate the human element, but to empower humans with AI assistants,” says Ivanti CIO Robert Grazioli, emphasizing how AI copilots reduce repetitive tasks and free analysts to focus on complex threats. Grazioli added, “Analyst burnout is driven by repetitive tasks and a continuous flood of low-fidelity alerts. AI copilots cut through this noise, letting experts tackle the toughest issues.” Ivanti’s research finds that organizations embracing AI triage can reduce false positives by up to 70%.

Vineet Arora, CTO of WinWire, agrees, telling VentureBeat that “the ideal approach is typically to use AI as a force multiplier for human analysts rather than a replacement. For example, AI can handle initial alert triage and routine responses to security issues, allowing analysts to focus their expertise on sophisticated threats and strategic work. The human team should maintain oversight of AI systems while leveraging them to reduce mundane workload.”

Ivanti’s 2025 State of Cybersecurity Report found that, despite 89% of boards calling security a priority, gaps remain in organizations’ ability to defend against high-risk threats. About half of the security executives interviewed, 54%, say generative AI (gen AI) security is their top budget priority for this year.

The goal: turn massive amounts of real-time, raw telemetry into insights

By their nature, SOCs are continually flooded with data, composed mainly of endpoint logs, firewall event logs, identity change notices and, for many, new behavioral analytics reports.

AI security copilots are proving effective in separating the signals that matter from noise. Controlling the signal-to-noise ratio increases a SOC team’s accuracy, insights and speed of response.

Instead of drowning in alerts, SOC teams are responding to prioritized, high-fidelity incidents that can be triaged automatically.

CrowdStrike’s Charlotte AI processes over 1 trillion high-fidelity signals daily from the Falcon platform and is trained on millions of real-world analyst decisions. It autonomously triages endpoint detections with over 98% agreement with human experts, saving teams an average of 40+ hours of manual work per week.

Microsoft Security Copilot customers are reporting that they’re saving up to 40% of their security analysts’ time on foundational tasks including investigation and response, threat hunting and threat intelligence assessments. On more mundane tasks such as preparing reports or troubleshooting minor issues, Security Copilot delivered efficiency gains of 60% or more.

In the following diagram, Gartner defines how Microsoft Copilot for Security manages user prompts, built-in and third-party security plugins, in addition to large language model (LLM) processing within a responsible AI framework.

High-level workflow of Microsoft Copilot for Security, highlighting encryption, grounding, plugin support, and responsible AI considerations. Source: Gartner, Microsoft Copilot for Security Adoption Considerations, October 2023

Like CrowdStrike, nearly every AI security copilot provider emphasizes using AI to augment and strengthen the SOC team’s skills rather than replacing people with copilots.

Nir Zuk, founder and CTO of Palo Alto Networks, told VentureBeat recently that “our AI-powered platforms don’t aim to remove analysts from the loop; they unify the SOC workflow so analysts can do their jobs more strategically.” Similarly, Jeetu Patel, Cisco’s EVP and GM of security and collaboration, said, “AI’s real value is how it narrows the talent gap in cybersecurity—not by automating analysts out of the picture, but by making them exponentially more effective.”

Charting the rapid rise of AI security copilots

AI security copilots are rapidly reshaping how mid-sized enterprises detect, investigate and neutralize threats. VentureBeat tracks this expanding ecosystem, where each solution advances automated triage, cloud-native coverage and predictive threat intelligence.

Below is a snapshot of today’s top copilots, highlighting their differentiators, telemetry focus and real-world gains. VentureBeat’s Security Copilot Guide (Google Sheet) provides a complete matrix with 16 vendors’ AI security copilots.

Source: VentureBeat Analysis

CrowdStrike’s Charlotte AI, SentinelOne’s Purple AI and Trellix WISE are already triaging, isolating and remediating threats without human intervention. Google and Microsoft are embedding risk scoring, auto-mitigation and cross-cloud attack surface mapping into their copilots.

 Google’s recent acquisition of Wiz will significantly impact AI security copilot adoption as part of a broader CNAPP strategy in many organizations.

Platforms such as Observo Orion illustrate what’s next: agentic copilots unifying DevOps, observability, and security data to deliver proactive, automated defenses. Rather than just detecting threats, they orchestrate complex workflows, including code rollbacks or node isolation, bridging security, development and operations in the process.

The endgame isn’t just about smart, prompt-driven personal programming assistants; it’s about integrating AI-driven decision-making across SOC workflows.

AI security copilots’ leading use cases today   

The better a given use case integrates into SOC analysts’ workflows, the greater its potential to scale and deliver strong value. Core to the scalability of an AI security copilot’s architecture is its ability to ingest data from heterogeneous telemetry sources and surface decisions early in the process, keeping them in context.

Here’s where adoption is scaling the fastest:

Accelerating triage: Tier-1 analysts using copilots, including Microsoft Security Copilot and Charlotte AI, can reduce triage to minutes instead of many hours. This is possible due to pre-trained models that flag known tactics, techniques and procedures (TTPs), cross-reference threat intel and summarize findings with confidence scores.
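To make the triage step concrete, here is a minimal sketch in Python, assuming hypothetical alert fields, technique weights and an indicator set; it is not how Charlotte AI or Security Copilot is implemented, only the general pattern of flagging known TTPs, cross-referencing threat intel and emitting a confidence-scored summary.

```python
# Minimal triage sketch (hypothetical field names and data): score an alert by
# matching it against known MITRE ATT&CK technique IDs and a threat-intel set,
# then emit a summary with a confidence score and a verdict.
KNOWN_TTPS = {"T1059": 0.6, "T1071": 0.5, "T1566": 0.7}   # technique -> base weight
INTEL_IOCS = {"185.220.101.7", "badcdn.example.net"}       # illustrative indicators

def triage(alert: dict) -> dict:
    score = KNOWN_TTPS.get(alert.get("technique", ""), 0.2)
    if alert.get("remote_host") in INTEL_IOCS:
        score = min(1.0, score + 0.3)                      # corroborated by threat intel
    verdict = "escalate" if score >= 0.7 else "monitor"
    return {
        "id": alert["id"],
        "confidence": round(score, 2),
        "verdict": verdict,
        "summary": f"{alert.get('technique', 'unknown TTP')} on {alert.get('host')} "
                   f"({verdict}, confidence {score:.2f})",
    }

print(triage({"id": "A-102", "technique": "T1566", "host": "wks-14",
              "remote_host": "badcdn.example.net"}))
```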

Alert de-duplication and noise suppression: Observo Orion and Trellix WISE use contextual filtering to correlate multi-source telemetry, eliminating low-priority noise. This reduces alert fatigue by as much as 70%, freeing teams to focus on high-fidelity signals. Sophos XDR AI Assistant achieves similar results for mid-sized SOCs with smaller teams.
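A simplified illustration of contextual de-duplication follows, with made-up host, rule and severity fields: alerts from different sensors that describe the same entity within a short time bucket collapse into a single, highest-severity incident.

```python
# Illustrative de-duplication sketch (field names assumed): collapse alerts from
# different sensors that describe the same host/rule within a time window, keeping
# the highest-severity copy so analysts see one incident instead of many.
def dedupe(alerts: list[dict], window_sec: int = 300) -> list[dict]:
    buckets: dict[tuple, dict] = {}
    for a in sorted(alerts, key=lambda x: x["ts"]):
        key = (a["host"], a["rule"], a["ts"] // window_sec)   # same host/rule/time bucket
        if key not in buckets or a["severity"] > buckets[key]["severity"]:
            buckets[key] = a
    return list(buckets.values())

raw = [
    {"ts": 100, "host": "srv-9", "rule": "port_scan", "severity": 3, "source": "ids"},
    {"ts": 160, "host": "srv-9", "rule": "port_scan", "severity": 5, "source": "edr"},
    {"ts": 900, "host": "srv-9", "rule": "port_scan", "severity": 2, "source": "fw"},
]
print(dedupe(raw))   # two incidents instead of three raw alerts
```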

Policy enforcement and firewall tuning: Cisco AI Assistant and Palo Alto’s Cortex copilots dynamically suggest and auto-implement policy changes based on telemetry thresholds and anomaly detection. This is critical for SOCs with complex, distributed firewall topologies and zero-trust mandates.
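The sketch below shows the general idea in hedged form, using assumed flow counts and thresholds rather than any vendor’s API: when traffic to a destination far exceeds its baseline, the copilot proposes a block rule and only auto-applies it in extreme cases, leaving the rest for human review.

```python
# Hedged sketch of threshold-driven policy tuning (thresholds and rule format are
# assumptions): when outbound traffic to a destination crosses an anomaly threshold,
# propose a deny rule for review, or mark it for auto-apply only in extreme cases.
def suggest_rules(flow_counts: dict[str, int], baseline: int = 500) -> list[dict]:
    proposals = []
    for dest, count in flow_counts.items():
        if count > 3 * baseline:          # crude anomaly check: 3x normal volume
            proposals.append({
                "action": "deny",
                "direction": "outbound",
                "destination": dest,
                "reason": f"{count} flows vs. baseline {baseline}",
                "auto_apply": count > 10 * baseline,   # only extreme cases skip review
            })
    return proposals

print(suggest_rules({"203.0.113.50": 6200, "198.51.100.12": 480}))
```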

Cross-domain correlation: Security Copilot (Microsoft) and SentinelOne Purple AI integrate identity telemetry, SIEM logs and endpoint data to detect lateral movement, privilege escalation, or suspicious multi-hop activity. Analysts receive contextual playbooks that reduce root cause analysis by over 40%.
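As a rough illustration, the following sketch joins hypothetical identity and endpoint events: a privilege grant followed quickly by logons on several new hosts is flagged as possible escalation plus lateral movement. Field names and thresholds are assumptions, not any product’s schema.

```python
# Simplified cross-domain correlation sketch (schemas are hypothetical): flag a user
# whose identity events show a privilege grant shortly before endpoint logons appear
# on multiple hosts, a rough proxy for privilege escalation plus lateral movement.
def correlate(identity_events: list[dict], endpoint_events: list[dict],
              window_sec: int = 3600) -> list[str]:
    findings = []
    grants = {e["user"]: e["ts"] for e in identity_events if e["type"] == "priv_grant"}
    for user, grant_ts in grants.items():
        hosts = {e["host"] for e in endpoint_events
                 if e["user"] == user and 0 <= e["ts"] - grant_ts <= window_sec}
        if len(hosts) >= 3:
            findings.append(f"{user}: privilege grant followed by logons to {sorted(hosts)}")
    return findings

ident = [{"type": "priv_grant", "user": "jdoe", "ts": 1000}]
edr = [{"user": "jdoe", "host": h, "ts": 1000 + i * 60}
       for i, h in enumerate(["fin-01", "fin-02", "dc-01"])]
print(correlate(ident, edr))
```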

Exposure validation and breach simulation: Cymulate AI Copilot emulates red-team logic and tests exposure against new CVEs, enabling SOCs to validate controls proactively. This replaces manual validation steps with automated posture testing integrated into SOAR workflows.
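Here is a simplified posture-testing sketch with invented asset, CVE and control data: it checks which assets run software affected by new CVEs and whether a compensating control already covers them, the same gap-finding pattern automated validation aims at.

```python
# Rough exposure-validation sketch (asset and control data are made up): report
# assets running software affected by new CVEs that have no compensating control.
NEW_CVES = {"CVE-2024-0001": "openssh", "CVE-2024-0002": "log4j"}   # illustrative entries

def validate_exposure(assets: list[dict], controls: dict[str, set]) -> list[dict]:
    gaps = []
    for asset in assets:
        for cve, package in NEW_CVES.items():
            if package in asset["software"] and cve not in controls.get(asset["id"], set()):
                gaps.append({"asset": asset["id"], "cve": cve, "status": "unmitigated"})
    return gaps

assets = [{"id": "web-01", "software": {"openssh", "nginx"}},
          {"id": "app-02", "software": {"log4j"}}]
controls = {"app-02": {"CVE-2024-0002"}}   # e.g. a compensating rule already covers this one
print(validate_exposure(assets, controls))
```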

Natural language SIEM interaction: Exabeam Copilot and Splunk AI Assistant allow analysts to convert natural language queries into executable SIEM commands. This democratizes investigation capabilities, especially for less technical staff, and reduces dependency on deep query language knowledge.
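A conceptual sketch of that translation step appears below; the schema hint, prompt format and stubbed model call are assumptions, not Exabeam’s or Splunk’s actual interfaces. The point is that the copilot wraps the analyst’s question in a schema-aware prompt and validates the returned query before it runs against the SIEM.

```python
# Conceptual natural-language-to-SIEM sketch (prompt format and stubbed model call
# are placeholders): build a schema-aware prompt, get a query back, validate it.
SCHEMA_HINT = "index=auth fields: user, src_ip, action, _time"

def build_prompt(question: str) -> str:
    return (f"Translate the analyst question into a single SPL-style search.\n"
            f"Schema: {SCHEMA_HINT}\nQuestion: {question}\nQuery:")

def call_llm(prompt: str) -> str:
    # Placeholder for the model call; a real copilot would send the prompt to an LLM.
    return "index=auth action=failure | stats count by user, src_ip | where count > 20"

def translate(question: str) -> str:
    query = call_llm(build_prompt(question)).strip()
    if not query.startswith("index="):          # simple guardrail before execution
        raise ValueError("model returned something that is not a search")
    return query

print(translate("Which users had more than 20 failed logins, and from where?"))
```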

Identity risk reduction: Oleria Copilot continuously scans for dormant accounts, excessive access rights, and unlinked entitlements. These copilots auto-generate cleanup plans and enforce least-privilege policies, helping reduce insider threat surface in hybrid environments.
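The following sketch captures the spirit of that workflow with illustrative account fields and thresholds: accounts dormant for 90 days, or holding entitlements beyond their role baseline, get a generated cleanup plan.

```python
# Minimal identity-hygiene sketch (account fields and thresholds are illustrative):
# flag dormant accounts and excess entitlements, then emit a cleanup plan.
from datetime import datetime, timedelta, timezone

ROLE_BASELINE = {"engineer": {"git", "ci"}, "finance": {"erp"}}   # assumed role baselines

def cleanup_plan(accounts: list[dict], dormant_days: int = 90) -> list[dict]:
    now = datetime.now(timezone.utc)
    plan = []
    for acct in accounts:
        actions = []
        if now - acct["last_login"] > timedelta(days=dormant_days):
            actions.append("disable dormant account")
        extra = acct["entitlements"] - ROLE_BASELINE.get(acct["role"], set())
        if extra:
            actions.append(f"revoke excess entitlements: {sorted(extra)}")
        if actions:
            plan.append({"user": acct["user"], "actions": actions})
    return plan

accounts = [{"user": "svc-build", "role": "engineer",
             "last_login": datetime.now(timezone.utc) - timedelta(days=200),
             "entitlements": {"git", "ci", "prod-db"}}]
print(cleanup_plan(accounts))
```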

Bottom line: Copilots don’t replace analysts; they amplify and scale their experience and strengths

By integrating identity, endpoint and network telemetry, copilots reduce the time it takes to identify lateral movement and privilege escalation, two of the most dangerous phases in an attack chain. As Elia Zaitsev, CTO of CrowdStrike, explained to VentureBeat in an earlier conversation: it’s less about substituting human roles, and more about supporting and augmenting them.

AI-powered tools should be viewed as collaborative partners for people — a concept that is especially crucial in cybersecurity.  Zaitsev cautioned that focusing on completely replacing human professionals rather than working alongside them is a misguided strategy.
