Stay Ahead, Stay ONMINE

From alerts to autonomy: How leading SOCs use AI copilots to fight signal overload and staffing shortfalls

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Thanks to the rapid advances in AI-powered security copilots, security operations centers (SOCs) are seeing false positive rates drop by up to 70% while saving over 40 hours a week of manual triage. The latest generation of copilots has […]

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More


Thanks to the rapid advances in AI-powered security copilots, security operations centers (SOCs) are seeing false positive rates drop by up to 70% while saving over 40 hours a week of manual triage.

The latest generation of copilots has moved far beyond chat interfaces. These agentic AI systems are capable of real-time remediation, automated policy enforcement and integrated triage across cloud, endpoint and network domains. Purpose-built to integrate within SIEM, SOAR and XDR pipelines, they’re making solid contributions to improving SOC accuracy, efficiency and speed of response.

Microsoft launched six new Security Copilot agents today—including ones for phishing triage, insider risk, conditional access, vulnerability remediation, and threat intelligence—alongside five partner-built agents, as detailed in Vasu Jakkal’s blog post.

Quantifiable gains in SOC performance are growing. Mean-time-to-restore is improving by 20% or more, and threat detection times have dropped by at least 30% in SOCs deploying these technologies. When copilots are used, KPMG reports a 43% boost in triage accuracy among junior analysts.

SOC analysts tell VentureBeat on condition of anonymity how frustrating their jobs are when they have to interpret multiple systems’ alerts and manually triage every intrusion alert.

Swivel chair integration is alive and well in many SOCs today, and while it saves on software costs, it burns out the best analysts and leaders. Burnout should not be dismissed as an isolated issue that only happens in SOCs that have analysts doing back-to-back shifts because they’re short-handed. It’s far more pervasive than security leaders realize.  

More than 70% of SOC analysts say they’re burned out, with 66% reporting that half their work is repetitive enough to be automated. Additionally, nearly two-thirds are planning to switch roles by 2025 and the need to make the most of AI’s rapid gains in automating SOCs becomes unavoidable.

AI security copilots are gaining traction as more organizations confront the challenges of keeping their SOCs efficient and staffed well enough to contain threats. The latest generation of AI security copilots don’t just accelerate response, they’re proving indispensable in training and retaining staff eliminating rote, routine work while opening new opportunities for SOC analysts to learn and earn more.

“I do get asked a lot well does that mean you know what SOC analysts are gonna be out of business? No. You know what it means? It means that you can take tier one analysts and turn them into tier three, you can take the eight hours of mundane work and turn it into 10 minutes,” George Kurtz, founder and CEO of CrowdStrike said at the company’s Fal.Con event last year.

“The way forward is not to eliminate the human element, but to empower humans with AI assistants,” says Ivanti CIO Robert Grazioli, emphasizing how AI copilots reduce repetitive tasks and free analysts to focus on complex threats. Grazioli added, “analyst burnout is driven by repetitive tasks and a continuous flood of low-fidelity alerts. AI copilots cut through this noise, letting experts tackle the toughest issues.” Ivanti’s research finds that organizations embracing AI triage can reduce false positives by up to 70%.

Vineet Arora, CTO for WinWire agrees, telling VentureBeat that, “the ideal approach is typically to use AI as a force multiplier for human analysts rather than a replacement. For example, AI can handle initial alert triage and routine responses to security issues, allowing analysts to focus their expertise on sophisticated threats and strategic work. The human team should maintain oversight of AI systems while leveraging them to reduce mundane workload.”

Ivanti’s 2025 State of Cybersecurity Report found that despite 89% of boards calling security a priority, their latest research reveals gaps in organizations’ ability to defend against high-risk threats. About half of the security executives interviewed, 54%, say generative ATI (gen AI) security is their top budget priority for this year.

The goal: turn massive amounts of real-time, raw telemetry into insights

By their nature, SOCs are continually flooded with data comprised mainly of endpoint logs, firewall events logs, identity change notices and logs and, for many, new behavioral analytics reports.

AI security copilots are proving effective in separating the signals that matter from noise. Controlling the signal-to-noise ratio increases a SOC team’s accuracy, insights and speed of response.

Instead of drowning in alerts, SOC teams are responding to prioritized, high-fidelity incidents that can be triaged automatically.

CrowdStrike’s Charlotte AI processes over 1 trillion high-fidelity signals daily from the Falcon platform and is trained on millions of real-world analyst decisions. It autonomously triages endpoint detections with over 98% agreement with human experts, saving teams an average of 40+ hours of manual work per week.

Microsoft Security Copilot customers are reporting that they’re saving up to 40% of their security analysts’ time on foundational tasks including investigation and response, threat hunting and threat intelligence assessments. On more mundane tasks such as preparing reports or troubleshooting minor issues, Security Copilot delivered gains in efficiency up to and above 60%.

In the following diagram, Gartner defines how Microsoft Copilot for Security manages user prompts, built-in and third-party security plugins, in addition to large language model (LLM) processing within a responsible AI framework.

High-level workflow of Microsoft Copilot for Security, highlighting encryption, grounding, plugin support, and responsible AI considerations. Source:Gartner, Microsoft Copilot for Security Adoption Considerations, Oct.2023

Like CrowdStrike, nearly every AI security copilot provider emphasizes using AI to augment and strengthen the SOC team’s skills rather than replacing people with copilots.

Nir Zuk, founder and CTO of Palo Alto Networks told VentureBeat recently that “our AI-powered platforms don’t aim to remove analysts from the loop; they unify the SOC workflow so analysts can do their jobs more strategically.” Similarly, Jeetu Patel, Cisco’s EVP and GM of security and collaboration, said, “AI’s real value is how it narrows the talent gap in cybersecurity—not by automating analysts out of the picture, but by making them exponentially more effective.”

Charting the rapid rise of AI security copilots

AI security copilots are rapidly reshaping how mid-sized enterprises detect, investigate and neutralize threats. VentureBeat tracks this expanding ecosystem, where each solution advances automated triage, cloud-native coverage and predictive threat intelligence.

Below is a snapshot of today’s top copilots, highlighting their differentiators, telemetry focus and real-world gains. VentureBeat’s Security Copilot Guide (Google Sheet) provides a complete matrix with 16 vendors’ AI security copilots.

Source: VentureBeat Analysis

CrowdStrike Charlotte, SentinelOne’s Purple AI and Trellix WISE are already triaging, isolating and remediating threats without human intervention. Google and Microsoft are embedding risk scoring, auto-mitigation and cross-cloud attack surface mapping into their copilots.

 Google’s recent acquisition of Wiz will significantly impact AI security copilot adoption as part of a broader CNAPP strategy in many organizations.

Platforms such as Observo Orion illustrate what’s next: agentic copilots unifying DevOps, observability, and security data to deliver proactive, automated defenses. Rather than just detecting threats, they orchestrate complex workflows, including code rollbacks or node isolation, bridging security, development and operations in the process.

The endgame isn’t just about smart, prompt-driven personal programming assistants; it’s about integrating AI-driven decision-making across SOC workflows.

AI security copilots’ leading use cases today   

The better a given use case can integrate into SOC analysts’ workflows, the greater its potential to scale and deliver strong value. Core to the scale of an AI security copilot’s architecture is the ability to ingest data from heterogeneous telemetry sources and identify decisions early in the process, keeping them in context.

Here’s where adoption is scaling the fastest:

Accelerating triage: Tier-1 analysts using copilots, including Microsoft Security Copilot and Charlotte AI, can reduce triage to minutes instead of many hours. This is possible due to pre-trained models that flag known tactics, techniques and procedures (TTPs), cross-reference threat intel and summarize findings with confidence scores.

Alert de-duplication and noise suppression: Observo Orion and Trellix WISE use contextual filtering to correlate multi-source telemetry, eliminating low-priority noise. This reduces alert fatigue by as much as 70%, freeing teams to focus on high-fidelity signals. Sophos XDR AI Assistant achieves similar results for mid-sized SOCs with smaller teams.

Policy enforcement and firewall tuning: Cisco AI Assistant and Palo Alto’s Cortex copilots dynamically suggest and auto-implement policy changes based on telemetry thresholds and anomaly detection. This is critical for SOCs with complex, distributed firewall topologies and zero-trust mandates.

Cross-domain correlation: Security Copilot (Microsoft) and SentinelOne Purple AI integrate identity telemetry, SIEM logs and endpoint data to detect lateral movement, privilege escalation, or suspicious multi-hop activity. Analysts receive contextual playbooks that reduce root cause analysis by over 40%.

Exposure validation and breach simulation: Cymulate AI Copilot emulates red-team logic and tests exposure against new CVEs, enabling SOCs to validate controls proactively. This replaces manual validation steps with automated posture testing integrated into SOAR workflows.

Natural language SIEM interaction: Exabeam Copilot and Splunk AI Assistant allow analysts to convert natural language queries into executable SIEM commands. This democratizes investigation capabilities, especially for less technical staff, and reduces dependency on deep query language knowledge.

Identity risk reduction: Oleria Copilot continuously scans for dormant accounts, excessive access rights, and unlinked entitlements. These copilots auto-generate cleanup plans and enforce least-privilege policies, helping reduce insider threat surface in hybrid environments.

Bottom Line: Copilots don’t replace analysts, they amplify and scale their experience and strengths

By integrating identity, endpoint and network telemetry, copilots reduce the time it takes to identify lateral movement and privilege escalation, two of the most dangerous phases in an attack chain. As Elia Zaitsev, CTO of CrowdStrike, explained to VentureBeat in an earlier conversation: it’s less about substituting human roles, and more about supporting and augmenting them.

AI-powered tools should be viewed as collaborative partners for people — a concept that is especially crucial in cybersecurity.  Zaitsev cautioned that focusing on completely replacing human professionals rather than working alongside them is a misguided strategy.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

HPE Aruba boosts NAC security, adds GreenLake ‘kill switch’

In addition, HPE Aruba tightened the integration between HPE Aruba Networking Central and HPE OpsRamp, the technology HPE bought in 2023 to manage hybrid and multicloud environments. OpsRamp monitors elements such as third-party switches, access points, firewalls, and routers. Tighter integration expands the ability to natively monitor third-party devices from vendors such as Cisco, Arista,

Read More »

GeoPark Names Felipe Bayon as New CEO

Latin American oil and gas firm GeoPark Limited has appointed Felipe Bayon as its new CEO and director, effective June 1. Bayon succeeds Andres Ocampo, who is stepping down from the role due to personal reasons, the company said in a news release. Bayon is recognized as one of the most effective energy executives in Latin America with more than three decades of accomplishments in the international oil and gas industry, GeoPark said. From 2017 to 2023, Bayon was CEO of Bogota, Colombia-based Ecopetrol, where he led 18,000 employees, oversaw production of approximately 700,000 barrels of oil equivalent per day (boepd) and revenues of over $30 billion. He brought Ecopetrol into the unconventional Permian Basin in the USA in partnership with Occidental Petroleum—a project that grew from 0 to around 150,000 barrels per day (bpd) gross in four years, as well as into the Brazilian ultra-deep water pre-salt play in partnership with Shell, according to the release. Bayon is a mechanical engineer who began his career in 1991 with Shell in field operations and projects and then moved to BP plc where he worked for 21 years. He also served as the CEO of Pan American Energy, one of the private hydrocarbon producers in Argentina, from 2005 to 2010. He has served on multiple boards across the energy, utilities, education, and technology sectors, GeoPark said. Ocampo, who served as the company’s CEO for three years and CFO for more than eight years, will continue to support the company and ensure a seamless handover, GeoPark said. Sylvia Escovar, Chair of GeoPark’s board said, “The board is very pleased to welcome Felipe Bayon to GeoPark. We believe he will be a catalyst to unlock the abundant opportunities in our region and drive us to transformational growth. Felipe is a true explorer, operator,

Read More »

Afreximbank Forms $3B Fund for Africa, Caribbean Oil Import Needs

The African Export-Import Bank (Afreximbank) has launched a $3-billion financing platform to help countries in the continent and the Caribbean bridge the cost of importing refined petroleum products. The Revolving Intra-African Oil Trade Financing Program only supports importation from African refineries. Afreximbank expects the program to back about $10-14 billion of imports. The Cairo-based bank said refined products have accounted for $30 billion of annual oil import costs in Africa, driven by inadequate refining capacity. “This program seeks to leverage the growing refining capacity that Afreximbank has helped establish across the continent, while aligning with the objectives of the African Continental Free Trade Area agreement, which includes facilitating intra-African trade, promoting industrialization, and creating jobs on the continent”, it said in an online statement. “Afreximbank is on its way to creating over 1.3 million bpd refining capacity and helping to convert the Gulf of Guinea from an exporter of crude oil into an important refining hub for the continent and the world”, the bank said. It said it has financed refineries in Angola, Cote d’Ivoire and Nigeria, including the Dangote refinery, which started production last year as Africa’s biggest refinery with a capacity of 650,000 barrels per day (bpd). The funding program will “mainly provide critical trade finance to oil traders (both African and international), banks, and Governments – represented by their Ministry of Finance or Ministry of Petroleum Resources/Energy – and state-owned enterprises mandated to import refined petroleum products, who seek to source refined products from African Refineries for onward consumption within the continent and export opportunities as may be applicable”, the bank said. Afreximbank president and chair Benedict Oramah said, “Whilst the program will have a direct impact on the volume of the refined petroleum products produced and consumed in Africa, it will also have a multiplier effect on the

Read More »

UK’s energy transition a ‘massive opportunity’, says NSTA chief

The UK’s transition towards zero emissions presents a “massive opportunity”, according to the chief of the body responsible for the North Sea transition. North Sea Transition Authority (NSTA) chief executive Stuart Payne said, while speaking at the Innovation Zero conference at Kensington Olympia in London on Tuesday, that carbon capture and storage will enable the country to realise its climate targets. The NSTA has a mandate to enable the orderly transition of oil and gas projects in the North Sea, and it can issue fines to operators that fail to decommission old assets, but it is ultimately responsible for ensuring that value is extracted from the region’s natural resources. While the body has the capacity to fine operators up to £1 million for failing to decommission old oil and gas platforms, to date it has taken only limited action to clamp down on delayed decommissioning. “The government is currently consulting on its vision for the future of the North Sea,” said Payne. “The consultation covers many things including licensing, skills and the workforce. But at its core is a recognition of the importance of managing the transition from oil and gas. “If we get it right, the North Sea can have a prosperous future which creates and safeguards employment and generates multibillion pound investments in all offshore energy projects. If we get it wrong, we risk losing the vital support of the public and from investors and risk missing out on opportunities for growth, jobs and energy security.” The regulator’s chief said that despite efforts to decarbonise, he expects oil and gas will remain “part of the picture for decades to come”. Two carbon capture and storage projects have been permitted in five months, the NSTA said, which in Payne’s view will enable the UK to grow its economy on

Read More »

TotalEnergies Launches Another $2B Buyback Plan even as Profit Falls

TotalEnergies SE said Wednesday it would repurchase up to $2 billion worth of shares in the second quarter (Q2), even as its Q1 earnings dropped. The French energy giant redeemed 33.3 million shares in the January-March 2025 quarter for $2 billion, according to quarterly results it published online. TotalEnergies’ board also approved a first interim dividend for 2025 of EUR 0.85 per share ($0.97). That is the same as the prior quarter’s rate but up 7.6 percent from the first three interim dividends of 2024. Chief executive Patrick Pouyanne said the newly declared payouts reflect TotalEnergies’ confidence in its balance sheet despite “a softening price environment with Brent below $70/b [per barrel] since the beginning of April and an uncertain geopolitical and macroeconomic context”. Adjusted net income declined 18 percent against Q1 2024 to $4.19 billion as weaker oil prices offset higher natural gas and liquefied natural gas prices, as well as higher production. Adjusted net profit per share post-dilution was $1.83. TotalEnergies opened lower at EUR 51.19 in Paris on results day. Hydrocarbon output totaled 2.56 million barrels of oil equivalent a day (MMboed), up 4 percent year-over-year thanks to “the continued ramp-up of projects in Brazil, the United States, Malaysia, Argentina and Denmark”, Pouyanne said. Production comprised 1.36 MMbd of oil including bitumen and 1.2 MMboed of gas including condensates and associated natural gas liquids. “The start-ups of the Ballymore offshore field in the United States during the second quarter and Mero-4 in Brazil expected in the third quarter will continue to add high-margin barrels and further reinforce the Company’s 2025 hydrocarbon production growth objective of more than 3 percent”, Pouyanne added. The exploration and production segment generated $2.45 billion in adjusted net operating profit, down 4 percent year-over-year. Integrated LNG logged $1.29 billion in adjusted net operating profit,

Read More »

Aberdeen drilling services firm Enteq Technologies enters administration

Engineering services firm Enteq Technologies has entered administration after failing to find a buyer. The AIM-listed company experienced a sharp drop in its share price in recent weeks after warning of cash flow issues from the development of its SABER drilling technology. The SABER (Steer-At-Bit Enteq Rotary Tool) is an alternative to traditional rotary steerable systems. Enteq acquired an exclusive licence for the SABER technology from Shell in 2019, before embarking on efforts to commercialise the technology. Alongside oil and gas applications, the SABER tool can also be used in geothermal drilling and methane capture. A year ago, the Enteq’s shares traded a £9, but this had fallen to 43p before trading was suspended. In a statement to the market, Enteq said while the company “continues to require funding” the board “now no longer considers that suitable funding can be realistically raised”. “The board has continued to seek advice on its appropriate next steps, and regrettably has concluded that, after detailed consideration of the company’s current financial situation, it will not be able to meet its liabilities as they fall due and is therefore required to take the necessary steps to seek to preserve value for creditors,” the statement continued. According to documents submitted to Companies House, Enteq reported a $3.2 million (£2.4m) loss in 2024, which came after a $1.7m (£1.27m) loss in 2023. Enteq expanded into Aberdeen in 2023 in an attempt to drive sales of the SABER product in the North Sea. At the time, the company employed 11 people across the UK and the US. The company also maintained offices in Cheltenham and Houston alongside its London headquarters.

Read More »

Scale, Speed of Spain, Portugal Power Outage Raises Concerns

The scale and speed of the power outage that brought much of Spain and Portugal to a standstill on Monday have raised significant concerns both regionally and internationally. That’s what Rystad Energy stated in a market update sent to Rigzone by the Rystad team on Tuesday, which was penned by the company’s senior analyst Pratheeksha Ramdas. “Spain’s national grid operator, Red Electrica (REE), and Portugal’s E-Redes are investigating the exact causes that led to abnormal oscillations in the high-voltage lines and synchronization failures across the interconnected power grid into France,” Rystad said in the update. “As power has now returned to large parts of the region, the generation mix in each country played a significant role in both the failure and the recovery, preliminary Rystad Energy analysis shows,” it added. Rystad noted in the update that the blackout began with a sharp fluctuation in the Spanish electricity grid, which it said caused the entire electricity system to disconnect from the rest of the European system at around 12.30pm on Monday. “Within minutes, the intense fluctuations led to a complete collapse of the Spanish mainland’s electricity transmission grid,” Rystad highlighted. “The failure had an immediate impact on everyday life, with metro systems grounding to a halt across multiple cities, forcing emergency evacuations of underground transport, while airports, traffic signals, and communications networks ceased functioning,” it added. Rigzone experienced the power outage in Lisbon, Portugal, first-hand, seeing several nonfunctional traffic lights and several petrol stations closed around the area on Monday. In a statement posted on its official X account on Monday, which was translated from Portuguese, the Portuguese government said the GNR (Guarda Nacional Republicana) “appeals to the population to remain calm and serene, paying special attention to road traffic due to the lack of traffic lights, and remains available to

Read More »

Nvidia AI supercluster targets agents, reasoning models on Oracle Cloud

Oracle has previously built an OCI Supercluster with 65,536 Nvidia H200 GPUs using the older Hopper GPU technology and no CPU that offers up to 260 exaflops of peak FP8 performance. According to the blog post announcing the availability, the Blackwell GPUs are available via Oracle’s public, government, and sovereign clouds, as well as in customer-owned data centers through its OCI Dedicated Region and Alloy offerings. Oracle joins a growing list of cloud providers that have made the GB200 NVL72 system available, including Google, CoreWeave and Lambda. In addition, Microsoft offers the GB200 GPUs, though they are not deployed as an NVL72 machine.

Read More »

Deep Data Center: Neoclouds as the ‘Picks and Shovels’ of the AI Gold Rush

In 1849, the discovery of gold in California ignited a frenzy, drawing prospectors from around the world in pursuit of quick fortune. While few struck it rich digging and sifting dirt, a different class of entrepreneurs quietly prospered: those who supplied the miners with the tools of the trade. From picks and shovels to tents and provisions, these providers became indispensable to the gold rush, profiting handsomely regardless of who found gold. Today, a new gold rush is underway, in pursuit of artificial intelligence. And just like the days of yore, the real fortunes may lie not in the gold itself, but in the infrastructure and equipment that enable its extraction. This is where neocloud players and chipmakers are positioned, representing themselves as the fundamental enablers of the AI revolution. Neoclouds: The Essential Tools and Implements of AI Innovation The AI boom has sparked a frenzy of innovation, investment, and competition. From generative AI applications like ChatGPT to autonomous systems and personalized recommendations, AI is rapidly transforming industries. Yet, behind every groundbreaking AI model lies an unsung hero: the infrastructure powering it. Enter neocloud providers—the specialized cloud platforms delivering the GPU horsepower that fuels AI’s meteoric rise. Let’s examine how neoclouds represent the “picks and shovels” of the AI gold rush, used for extracting the essential backbone of AI innovation. Neoclouds are emerging as indispensable players in the AI ecosystem, offering tailored solutions for compute-intensive workloads such as training large language models (LLMs) and performing high-speed inference. Unlike traditional hyperscalers (e.g., AWS, Azure, Google Cloud), which cater to a broad range of use cases, neoclouds focus exclusively on optimizing infrastructure for AI and machine learning applications. This specialization allows them to deliver superior performance at a lower cost, making them the go-to choice for startups, enterprises, and research institutions alike.

Read More »

Soluna Computing: Innovating Renewable Computing for Sustainable Data Centers

Dorothy 1A & 1B (Texas): These twin 25 MW facilities are powered by wind and serve Bitcoin hosting and mining workloads. Together, they consumed over 112,000 MWh of curtailed energy in 2024, demonstrating the impact of Soluna’s model. Dorothy 2 (Texas): Currently under construction and scheduled for energization in Q4 2025, this 48 MW site will increase Soluna’s hosting and mining capacity by 64%. Sophie (Kentucky): A 25 MW grid- and hydro-powered hosting center with a strong cost profile and consistent output. Project Grace (Texas): A 2 MW AI pilot project in development, part of Soluna’s transition into HPC and machine learning. Project Kati (Texas): With 166 MW split between Bitcoin and AI hosting, this project recently exited the Electric Reliability Council of Texas, Inc. planning phase and is expected to energize between 2025 and 2027. Project Rosa (Texas): A 187 MW flagship project co-located with wind assets, aimed at both Bitcoin and AI workloads. Land and power agreements were secured by the company in early 2025. These developments are part of the company’s broader effort to tackle both energy waste and infrastructure bottlenecks. Soluna’s behind-the-meter design enables flexibility to draw from the grid or directly from renewable sources, maximizing energy value while minimizing emissions. Competition is Fierce and a Narrower Focus Better Serves the Business In 2024, Soluna tested the waters of providing AI services via a  GPU-as-a-Service through a partnership with HPE, branded as Project Ada. The pilot aimed to rent out cloud GPUs for AI developers and LLM training. However, due to oversupply in the GPU market, delayed product rollouts (like NVIDIA’s H200), and poor demand economics, Soluna terminated the contract in March 2025. The cancellation of the contract with HPE frees up resources for Soluna to focus on what it believes the company does best: designing

Read More »

Quiet Genius at the Neutral Line: How Onics Filters Are Reshaping the Future of Data Center Power Efficiency

Why Harmonics Matter In a typical data center, nonlinear loads—like servers, UPS systems, and switch-mode power supplies—introduce harmonic distortion into the electrical system. These harmonics travel along the neutral and ground conductors, where they can increase current flow, cause overheating in transformers, and shorten the lifespan of critical power infrastructure. More subtly, they waste power through reactive losses that don’t show up on a basic utility bill, but do show up in heat, inefficiency, and increased infrastructure stress. Traditional mitigation approaches—like active harmonic filters or isolation transformers—are complex, expensive, and often require custom integration and ongoing maintenance. That’s where Onics’ solution stands out. It’s engineered as a shunt-style, low-pass filter: a passive device that sits in parallel with the circuit, quietly siphoning off problematic harmonics without interrupting operations.  The result? Lower apparent power demand, reduced electrical losses, and a quieter, more stable current environment—especially on the neutral line, where cumulative harmonic effects often peak. Behind the Numbers: Real-World Impact While the Onics filters offer a passive complement to traditional mitigation strategies, they aren’t intended to replace active harmonic filters or isolation transformers in systems that require them—they work best as a low-complexity enhancement to existing power quality designs. LoPilato says Onics has deployed its filters in mission-critical environments ranging from enterprise edge to large colos, and the data is consistent. In one example, a 6 MW data center saw a verified 9.2% reduction in energy consumption after deploying Onics filters at key electrical junctures. Another facility clocked in at 17.8% savings across its lighting and support loads, thanks in part to improved power factor and reduced transformer strain. The filters work by targeting high-frequency distortion—typically above the 3rd harmonic and up through the 35th. By passively attenuating this range, the system reduces reactive current on the neutral and helps stabilize

Read More »

New IEA Report Contrasts Energy Bottlenecks with Opportunities for AI and Data Center Growth

Artificial intelligence has, without question, crossed the threshold—from a speculative academic pursuit into the defining infrastructure of 21st-century commerce, governance, and innovation. What began in the realm of research labs and open-source models is now embedded in the capital stack of every major hyperscaler, semiconductor roadmap, and national industrial strategy. But as AI scales, so does its energy footprint. From Nvidia-powered GPU clusters to exascale training farms, the conversation across boardrooms and site selection teams has fundamentally shifted. It’s no longer just about compute density, thermal loads, or software frameworks. It’s about power—how to find it, finance it, future-proof it, and increasingly, how to generate it onsite. That refrain—“It’s all about power now”—has moved from a whisper to a full-throated consensus across the data center industry. The latest report from the International Energy Agency (IEA) gives this refrain global context and hard numbers, affirming what developers, utilities, and infrastructure operators have already sensed on the ground: the AI revolution will be throttled or propelled by the availability of scalable, sustainable, and dispatchable electricity. Why Energy Is the Real Bottleneck to Intelligence at Scale The major new IEA report puts it plainly: The transformative promise of AI will be throttled—or unleashed—by the world’s ability to deliver scalable, reliable, and sustainable electricity. The stakes are enormous. Countries that can supply the power AI craves will shape the future. Those that can’t may find themselves sidelined. Importantly, while AI poses clear challenges, the report emphasizes how it also offers solutions: from optimizing energy grids and reducing emissions in industrial sectors to enhancing energy security by supporting infrastructure defenses against cyberattacks. The report calls for immediate investments in both energy generation and grid capabilities, as well as stronger collaboration between the tech and energy sectors to avoid critical bottlenecks. The IEA advises that, for countries

Read More »

Colorado Eyes the AI Data Center Boom with Bold Incentive Push

Even as states work on legislation to limit data center development, it is clear that some locations are looking to get a bigger piece of the huge data center spending that the AI wave has created. It appears that politicians in Colorado took a look around and thought to themselves “Why is all that data center building going to Texas and Arizona? What’s wrong with the Rocky Mountain State?” Taking a page from the proven playbook that has gotten data centers built all over the country, Colorado is trying to jump on the financial incentives for data center development bandwagon. SB 24-085: A Statewide Strategy to Attract Data Center Investment Looking to significantly boost its appeal as a data center hub, Colorado is now considering Senate Bill 24-085, currently making its way through the state legislature. Sponsored by Senators Priola and Buckner and Representatives Parenti and Weinberg, this legislation promises substantial economic incentives in the form of state sales and use tax rebates for new data centers established within the state from fiscal year 2026 through 2033. Colorado hopes to position itself strategically to compete with neighboring states in attracting lucrative tech investments and high-skilled jobs. According to DataCenterMap.com, there are currently 53 data centers in the state, almost all located in the Denver area, but they are predominantly smaller facilities. In today’s era of massive AI-driven hyperscale expansion, Colorado is rarely mentioned in the same breath as major AI data center markets.  Some local communities have passed their own incentive packages, but SB 24-085 aims to offer a unified, statewide framework that can also help mitigate growing NIMBY (Not In My Backyard) sentiment around new developments. The Details: How SB 24-085 Works The bill, titled “Concerning a rebate of the state sales and use tax paid on new digital infrastructure

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »