Winning the war against adversarial AI needs to start with AI-native SOCs

Faced with increasingly sophisticated multi-domain attacks slipping through due to alert fatigue, high turnover and outdated tools, security leaders are embracing AI-native security operations centers (SOCs) as the future of defense.

This year, attackers are setting new speed records for intrusions by capitalizing on the weaknesses of legacy systems designed for perimeter-only defenses and, worse, of trusted connections across networks.

Attackers cut the average breakout time for eCrime intrusions from 79 minutes to 62 minutes in just one year, a 17-minute improvement. The fastest observed breakout time was just two minutes and seven seconds.

Attackers are combining generative AI, social engineering, interactive intrusion campaigns and an all-out assault on cloud vulnerabilities and identities. With this playbook, they seek to capitalize on the weaknesses of organizations whose cybersecurity arsenals are outdated, or missing entirely.

“The speed of today’s cyberattacks requires security teams to rapidly analyze massive amounts of data to detect, investigate and respond to threats faster. This is the failed promise of SIEM [security information and event management]. Customers are hungry for better technology that delivers instant time-to-value and increased functionality at a lower total cost of ownership,” said George Kurtz, president, CEO and cofounder of cybersecurity company CrowdStrike.

“SOC leaders must find the balance in improving their detection and blocking capabilities. This should reduce the number of incidents and improve their response capabilities, ultimately reducing attacker dwell time,” Gartner writes in its report, Tips for Selecting the Right Tools for Your Security Operations Center.

AI-native SOCs: The sure cure for swivel-chair integration

Visit any SOC, and it’s clear most analysts are being forced to rely on “swivel-chair integration” because legacy systems weren’t designed to share data in real time with each other.

That means analysts are often swiveling their rolling chairs from one monitor to another, checking on alerts and clearing false positives. Accuracy and speed are lost in the fight against growing multi-domain attacks, which rarely stand out as obvious or distinct amid the real-time torrent of alerts streaming in.

Here are just a few of the many challenges that SOC leaders are looking to an AI-native SOC to help solve:

Chronic levels of alert fatigue: Legacy systems, including SIEMs, are producing an overwhelming number of alerts for SOC analysts to track and analyze. SOC analysts who spoke on condition of anonymity said that four out of every 10 alerts their systems produce are false positives. Analysts often spend more time triaging false positives than investigating actual threats, which severely affects productivity and response time. Making a SOC AI-native would make an immediate dent in this burden, which every SOC analyst and leader deals with daily.

Ongoing talent shortage and churn: Experienced SOC analysts who excel at what they do, and whose leaders can influence budgets to get them raises and bonuses, are for the most part staying put in their current roles. Kudos to the organizations that realize investing in retaining talented SOC teams is core to their business. A commonly cited statistic puts the global cybersecurity workforce gap at 3.4 million professionals. There is indeed a chronic shortage of SOC analysts, so it's up to organizations to close pay gaps and double down on training to grow their teams internally. Burnout is pervasive on understaffed teams forced to rely on swivel-chair integration to get their jobs done.

Multi-domain threats growing exponentially: Adversaries, including cybercrime gangs, nation-states and well-funded cyber-terror organizations, are doubling down on exploiting gaps in endpoint security and identities. Malware-free attacks have grown throughout the past year in variety, volume and ingenuity of attack strategies. SOC teams protecting enterprise software companies developing AI-based platforms, systems and new technologies are being especially hard-hit. Malware-free attacks are often undetectable, trading on trust in legitimate tools, rarely generating a unique signature and relying on fileless execution. Kurtz told VentureBeat that attackers who target endpoint and identity vulnerabilities frequently move laterally within systems in under two minutes. Their advanced techniques, including social engineering, ransomware-as-a-service (RaaS) and identity-based attacks, demand faster and more adaptive SOC responses.

Increasingly complex cloud configurations: Cloud intrusions have surged by 75% year-over-year, with adversaries exploiting native cloud vulnerabilities such as insecure APIs and identity misconfigurations. SOCs often struggle with limited visibility and inadequate tools to mitigate threats in complex multicloud environments.

Data overload and tool sprawl creating defense gaps: Legacy perimeter-based systems, including many decades-old SIEM platforms, struggle to process and analyze the immense volume of telemetry generated by modern infrastructure and endpoints. Asking SOC analysts to stay on top of multiple alert sources and reconcile data across disparate tools slows their effectiveness, leads to burnout and holds them back from the accuracy, speed and performance their jobs demand.
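Taken together, these challenges reduce to a triage problem: suppress likely false positives and surface the alerts that matter first. A minimal sketch in Python of what AI-assisted triage looks like; the field names and the model-derived false-positive score are hypothetical illustrations, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str           # e.g. "endpoint", "identity", "cloud"
    severity: float       # 0.0-1.0, from the detection engine
    fp_likelihood: float  # model-estimated probability this is a false positive

def triage(alerts, fp_threshold=0.8):
    """Drop likely false positives, then rank what remains by severity."""
    kept = [a for a in alerts if a.fp_likelihood < fp_threshold]
    return sorted(kept, key=lambda a: a.severity, reverse=True)

queue = triage([
    Alert("endpoint", 0.9, 0.1),
    Alert("cloud", 0.4, 0.95),   # likely false positive: filtered out
    Alert("identity", 0.7, 0.2),
])
# Highest-severity real alerts reach the analyst first
print([a.source for a in queue])  # ['endpoint', 'identity']
```

The point of the sketch is the ordering of work: the model absorbs the false-positive churn so the analyst's queue starts with the highest-severity genuine threats.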

How AI is improving SOC accuracy, speed and performance

“AI is already being used by criminals to overcome some of the world’s cybersecurity measures,” warns Johan Gerber, executive vice president of security and cyber innovation at MasterCard. “But AI has to be part of our future, of how we attack and address cybersecurity.”

“It’s extremely hard to go out and do something if AI is thought about as a bolt-on; you have to think about it [as integral],” Jeetu Patel, EVP and GM of security and collaboration for Cisco, told VentureBeat, citing findings from the 2024 Cisco Cybersecurity Readiness Index. “The operative word over here is AI being used natively in your core infrastructure.”

Given the many accuracy, speed and performance advantages of transitioning to an AI-native SOC, it’s understandable why Gartner is supportive of the idea. The research firm predicts that by 2028, multi-agent AI in threat detection and incident response (including within SOCs) will increase from 5% to 70% of AI implementations — primarily augmenting, not replacing, staff.
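Gartner's multi-agent prediction can be illustrated with a toy pipeline, with entirely hypothetical function names and thresholds, in which a detection agent hands findings to an investigation agent and a response agent, and a human analyst stays in the loop to approve any action:

```python
def detection_agent(event):
    """Flags suspicious events (stand-in for a trained detection model)."""
    if event.get("failed_logins", 0) > 5:
        return {"event": event, "verdict": "suspicious"}
    return None

def investigation_agent(finding):
    """Enriches a finding with context before any action is proposed."""
    finding["context"] = {"asset_criticality": "high"}  # e.g. a CMDB lookup
    return finding

def response_agent(finding):
    """Proposes a containment action; a human must approve it."""
    return {"proposed_action": "isolate_host", "requires_approval": True, **finding}

event = {"host": "hr-laptop-07", "failed_logins": 9}
finding = detection_agent(event)
if finding:
    plan = response_agent(investigation_agent(finding))
    # plan["requires_approval"] is True: the agents augment, not replace, staff
```

The `requires_approval` flag is the sketch's version of "augmenting, not replacing": agents compress detection, enrichment and planning, while the final decision remains human.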

Chatbots making an impact

Core to the value that AI-driven SOCs bring to cybersecurity and IT teams are accelerated threat detection and triage based on improved predictive accuracy using real-time telemetry data.

SOC teams report that AI-based tools, including chatbots, are providing faster turnarounds on a broad spectrum of queries, from simple analysis to more complex analysis of anomalies. The latest generation of chatbots designed to streamline SOC workflows and assist security analysts include CrowdStrike’s Charlotte AI, Google’s Threat Intelligence Copilot, Microsoft Security Copilot, Palo Alto Networks’ series of AI Copilots, and SentinelOne Purple AI.

Graph databases are core to SOCs’ future

Graph database technologies are helping defenders see their vulnerabilities as attackers do. Attackers think in terms of traversing the system graph of a business, while SOC defenders have traditionally relied on lists they use to cycle through deterrent-based actions. The graph database arms race aims to get SOC analysts to parity with attackers when it comes to tracking threats, intrusions and breaches across the graph of their identities, systems and networks.  

AI is already proving effective in reducing false positives, automating incident responses, enhancing threat analysis and continually finding new ways to streamline SOC operations.

Combining AI with graph databases is also helping SOCs track and stop multi-domain attacks. Graph databases are core to SOCs' future because they excel at visualizing and analyzing interconnected data in real time, enabling faster and more accurate threat detection, attack path analysis and risk prioritization.

John Lambert, corporate vice president for Microsoft Security Research, underscored the critical importance of graph-based thinking for cybersecurity, explaining to VentureBeat, “Defenders think in lists, cyberattackers think in graphs. As long as this is true, attackers win.”
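Lambert's point can be made concrete: model assets and the trust relationships between them as a graph, and an attack path becomes a shortest-path search from an attacker's foothold to a crown-jewel asset. A minimal breadth-first-search sketch over an invented asset graph (not any product's query language):

```python
from collections import deque

# Hypothetical asset graph: an edge is a trust relationship an attacker can traverse
asset_graph = {
    "phished-laptop": ["svc-account"],
    "svc-account": ["file-server", "ci-runner"],
    "ci-runner": ["cloud-admin-role"],
    "file-server": [],
    "cloud-admin-role": ["prod-database"],
    "prod-database": [],
}

def attack_path(graph, start, target):
    """Breadth-first search: shortest chain of hops from foothold to target."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # target unreachable from this foothold

print(attack_path(asset_graph, "phished-laptop", "prod-database"))
# ['phished-laptop', 'svc-account', 'ci-runner', 'cloud-admin-role', 'prod-database']
```

A defender working from lists sees five assets to patch; a defender working from the graph sees that severing one edge, such as the service account's path to the CI runner, breaks the entire chain.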

AI-native SOCs need humans in the middle to reach their potential

SOCs that are deliberate in designing human-in-the-middle workflows as a core part of their AI-native SOC strategies are best positioned for success. The overarching goal needs to be strengthening SOC analysts' knowledge and providing them with the data, insights and intelligence they need to excel and grow in their roles. Retention, too, is implicit in human-in-the-middle workflow design.

Organizations that have created a culture of continuous learning, and that see AI as a tool for accelerating training and on-the-job results, are already ahead of competitors. VentureBeat continues to see that SOCs which prioritize freeing analysts for complex, strategic tasks, while AI manages routine operations, retain their teams. There are many stories of small wins, like stopping an intrusion or a breach. AI should not be seen as a replacement for SOC analysts or experienced human threat hunters. Instead, AI apps and platforms are tools that threat hunters need to protect enterprises better.

AI-driven SOCs can significantly reduce incident response times, with some organizations reporting up to a 50% decrease. This acceleration enables security teams to address threats more promptly, minimizing potential damage.

AI’s role in SOCs is expected to expand, incorporating proactive adversary simulations, continuous health monitoring of SOC ecosystems, and advanced endpoint and identity security through zero-trust integration. These advancements will further strengthen organizations’ defenses against evolving cyber threats.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Nvidia is still working with suppliers on RAM chips for Rubin

Nvidia changed its requirements for suppliers of the next generation of high-bandwidth memory, HBM4, but is close to certifying revised chips from Samsung Electronics for use in its AI systems, according to reports. Nvidia revised its specifications for memory chips for its Rubin platform in the third quarter of 2025,

Read More »

Storage shortage may cause AI delays for enterprises

Higher prices ahead All indicators are showing a steep price increase for memory and storage in 2026. Brad Gastwirth, for example, says he met with many of the most important players in the market at CES earlier this month, and his analysis suggests there will be a 50% or more

Read More »

Baker Hughes Sees Record Year for Industrial, Energy Tech Bookings

Baker Hughes Co has reported record orders of $14.87 billion from its industrial and energy technology (IET) business for 2025, including $4.02 billion for the fourth quarter. “IET achieved a record backlog of $32.4 billion at year-end, and book-to-bill exceeded 1x”, chair and chief executive Lorenzo Simonelli said in an online statement. “For the second consecutive year, non-LNG equipment orders represented approximately 85 percent of total IET orders, which highlights the end-market diversity and versatility of our IET portfolio”. IET delivered $3.81 billion in revenue for October-December 2025, up 13 percent from the prior quarter and nine percent year-on-year. “The increase was driven by gas technology equipment, up $189 million, or 11 percent year-over-year, [and] gas technology services, up $86 million, or 11 percent year-over-year”, Baker Hughes said. Q4 2025 IET orders totaled $4.02 billion, down three percent against the prior three-month period but up seven percent compared to Q4 2024. “The [year-over-year] increase was driven by continued strength in climate technology solutions, industrial technology, and gas technology services”, the Houston, Texas-based company said. Segment EBITDA came at $761 million, up 20 percent sequentially and 19 percent year-on-year. “The year-over-year increase in EBITDA was driven by productivity, volume, price and FX [foreign exchange], partially offset by inflation”, Baker Hughes said. Its other segment, oilfield services and equipment (OFSE), logged $3.57 billion in revenue for Q4 2025, down two percent quarter-on-quarter and eight percent year-on-year. That was driven by declines in its main markets, North America and the Middle East/Asia, with both regions registering quarter-on-quarter and year-on-year drops in revenue. OFSE orders in Q4 2025 totaled $3.86 billion, down five percent quarter-on-quarter but up three percent year-on-year. 
OFSE EBITDA landed at $647 million, down four percent quarter-on-quarter and 14 percent year-on-year. IET “more than offset continued macro‑driven softness in OFSE, where margins remained resilient

Read More »

Analysts Explain Tuesday’s USA NatGas Price Drop

In separate exclusive interviews with Rigzone on Tuesday, Phil Flynn, a senior market analyst at the PRICE Futures Group, and Art Hogan, Chief Market Strategist at B. Riley Wealth, explained today’s U.S. natural gas price drop. “Natural gas is pulling back after the worst of the cold has passed,” Flynn told Rigzone. “We’ve lifted some of the winter storm warnings, and this should allow some of the freeze-offs in the basins to get production back up,” he added. “We saw [a] significant drop in production because of the cold weather and now some of that will be coming back online,” he continued. In his interview with Rigzone, Flynn warned that the weather is still going to be “key”. “Some forecasters are predicting a warm-up, but then after that another blast of the cold,” he said. “If that’s the case … these huge moves in natural gas may be far from over”, Flynn told Rigzone. He added, however, that, “at least in the short term, [a] return to more moderate temperatures from what we had experienced should allow for the market to recover as far as production goes, and exports”. When he was asked to explain the U.S. natural gas price drop today, Hogan told Rigzone that “trees don’t grow to the sky”. “U.S. natural gas prices dipped today amid profit-taking by traders, after soaring by over 117 percent in the five days to Monday,” he said. “The benchmark jumped by 30 percent on Monday alone. Last week, gas prices went up by as much as 70 percent amid frigid weather that apparently took gas traders by surprise,” he added. “This surprise led to frantic short-covering and position exits at a hefty loss. Currently, natural gas is trading at over $6.60 per million British thermal units [MMBtu], which is the highest in

Read More »

In 2026, virtual power plants must scale or risk being left behind

Listen to the article 13 min This audio is auto-generated. Please let us know if you have feedback. Rising demand and new technologies are forcing utilities to coordinate distributed energy resources on an unprecedented scale, a trend likely to continue in 2026, analysts and stakeholders say. But intimidating demand forecasts from power-hungry data centers, coupled with aggressive policy shifts away from renewables and efficiency standards, are turning power providers toward large-scale generation like nuclear, geothermal, gas and coal — possibly to the detriment of aggregation and demand response programs, they say.  “Utilities are shifting away from DER to focus on [utility-scale] wind and solar in the near term and then new natural gas, [extending the life of] aging coal, and [restarting] shuttered nuclear plants,” said Sally Jacquemin, vice president of power and utilities at AspenTech Digital Grid Management, Emerson.  Investment in distribution system modernization is also growing, but DER “is a lower priority,” she added. But grid advocates and utility leaders say distributed resources could provide crucial benefits at a time of rising prices and accelerate the interconnection of large loads, which is a priority of the Trump administration. In order to do that, virtual power plants must evolve and scale more rapidly or skyrocketing electricity demand and costs will force attention to traditional resources, industry sources say. The value of DER to the system will be determined by policies set by states, grid operators, federal regulators and officials in the Trump administration. Allison Wannop, vice president of regulatory affairs and wholesale markets for Sparkfund, predicted that demand growth and affordability challenges will drive innovation to make the most of distribution system resources. “20th century solutions will not build a 21st century grid,” she said.   ‘Visibility will be key to VPP proliferation’ 2025 was a good year for distributed energy

Read More »

3D Energi Runs Out of Cash for Victoria Drill Campaign, Suspends Trading

3D Energi Ltd said Tuesday it has voluntarily halted trading on the Australian Securities Exchange (ASX), having defaulted on the payment of its share of costs in a ConocoPhillips-led exploration campaign in the Otway basin offshore Victoria. “Joint venture cash calls for the drilling program are higher than originally forecast and a balance of approximately $2.5 million remains outstanding by the company which it does not currently have”, Melbourne-based 3D Energi said in a stock filing. “A default notice has been issued by the joint venture operator to the company with a remedy period to 6th February. “Additional forecast company drilling program expenditure subject to cash calls due on 6th February is currently estimated at approximately $5.3 million, which if not paid by that date may well become the subject of an additional default notice and remedy period. “Consequently, the company is implementing a suspension of the trading of its shares on ASX while it addresses its funding position and the implications of payment default on the level of its ongoing interest in the permit”. 3D Energi plans to resume ASX trading in the first week of February. Earlier this month it announced the Charlemont-1 gas discovery, the joint venture’s second discovery under the VIC/P79 exploration after Essington-1. The newest well targeted the penultimate prospect in the Charlemont trend, which culminates with the La Bella discovery, according to 3D Energi. “Phase 1 of the Otway Exploration Drilling Program has identified important new natural gas resources close to existing offshore gas production and processing infrastructure in the Otway basin, supplying the Australian domestic gas market”, 3D Energi executive chair Noel Newell said in a statement January 14 announcing the second discovery. “This enhances the strategic significance of the discovery and supports future development optionality, subject to further technical and commercial evaluation,

Read More »

EIA Sees NatGas Price Dropping in 2026 and Rising in 2027

The U.S. Energy Information Administration (EIA) projected that the U.S. natural gas Henry Hub spot price will drop this year and rise next year in its latest short term energy outlook (STEO). In the EIA’s January STEO, which was released on January 13 and completed its forecast on January 8, the EIA forecast that the commodity will average $3.46 per million British thermal units (MMBtu) in 2026 and $4.59 per MMBtu in 2027. The U.S. natural gas Henry Hub spot price averaged $3.53 per MMBtu in 2025, the EIA’s latest STEO showed. According to a quarterly breakdown included in its latest STEO, the EIA sees the U.S. natural gas Henry Hub spot price coming in at $3.38 per MMBtu in the first quarter of 2026, $2.75 per MMBtu in the second quarter, $3.42 per MMBtu in the third quarter, $4.28 per MMBtu in the fourth quarter, $4.78 per MMBtu in the first quarter of 2027, $4.30 per MMBtu in the second quarter, $4.43 per MMBtu in the third quarter, and $4.84 per MMBtu in the fourth quarter of next year. Last year, the Henry Hub spot price averaged $4.15 per MMBtu in the first quarter, $3.19 per MMBtu in the second quarter, $3.03 per MMBtu in the third quarter, and $3.75 per MMBtu in the fourth quarter, the EIA’s January STEO showed. “On an annual basis, U.S. natural gas prices are relatively flat in 2026 before rising in 2027 as market conditions tighten,” the EIA said in its latest STEO. “We expect the Henry Hub natural gas spot price will average just under $3.50 per million British thermal units (MMBtu) this year, a two percent decrease from 2025, and then rise by 33 percent in 2027 to an annual average of almost $4.60 per MMBtu,” it added. In its STEO,

Read More »

Aramco Raises $4B in 1st Bond Sale of Year

(Update) January 26, 2026, 9:39 PM GMT: Article updated with pricing details in first three paragraphs. Saudi Aramco priced a $4 billion bond sale, its first note sale this year, as the world’s largest oil producer steps up borrowing to fund investment and dividends. The government-owned oil producer sold four bonds maturing in three to 30 years, according to a person familiar with the matter. The longest portion of the deal will pay 1.3 percentage point above Treasuries, said the person, who asked not to be identified because they are not authorized to speak publicly. That’s roughly a quarter-point less than initial pricing discussions. Overall, the sale attracted more than $22 billion of bids at the peak, with final books over $14 billion, the person said. Aramco is a key contributor to Saudi state finances, with large dividend payments supplementing royalties linked to crude sales. As oil prices have dipped and OPEC+ policy limited Saudi production, cash flows lagged payouts before a rebound in the third quarter. Aramco’s $17 billion in debt sales over the last two years helped support payouts. Saudi Arabia’s budget remains heavily dependent on oil revenue as the kingdom pursues an ambitious modernization drive. Crude prices remain well below levels needed to balance the state budget, forcing the government to project spending shortfalls for the coming years. Aramco has turned to debt to augment its cash flow and plans to invest more than $50 billion this year in oil and natural gas production, while maintaining its high base dividend of $21 billion. In November, the company reported a surprise jump in third-quarter profit as rising production outweighed lower crude prices. Earnings are set to slip for the full year, estimates compiled by Bloomberg show. While Brent crude has risen this year amid geopolitical tensions, including US attacks on

Read More »

Gauging the real impact of AI agents

That creates the primary network issue for AI agents, which is dealing with implicit and creeping data. There’s a singular important difference between an AI agent component and an ordinary software component. Software is explicit in its use of data. The programming includes data identification. AI is implicit in its data use; the model was trained on data, and there may well be some API linkage to databases that aren’t obvious to the user of the model. It’s also often true that when an agentic component is used, it’s determined that additional data resources are needed. Are all these resources in the same place? Probably not. The enterprises with the most experience with AI agents say it would be smart to expect some data center network upgrades to link agents to databases, and if the agents are distributed away from the data center, it may be necessary to improve the agent sites’ connection to the corporate VPN. As agents evolve into real-time applications, this requires they also be proximate to the real-time system they support (a factory or warehouse), so the data center, the users, and any real-time process pieces all pull at the source of hosting to optimize latency. Obviously, they can’t all be moved into one place, so the network has to make a broad and efficient set of connections. That efficiency demands QoS guarantees on latency as well as on availability. It’s in the area of availability, with a secondary focus on QoS attributes like latency, that the most agent-experienced enterprises see potential new service opportunities. Right now, these tend to exist within a fairly small circle—a plant, a campus, perhaps a city or town—but over time, key enterprises say that their new-service interest could span a metro area. They point out that the real-time edge applications

Read More »

Photonic chip vendor snags Gates investment

“Moore’s Law is slowing, but AI can’t afford to wait. Our breakthrough in photonics unlocks an entirely new dimension of scaling, by packing massive optical parallelism on a single chip,” said Patrick Bowen, CEO of Neurophos. “This physics-level shift means both efficiency and raw speed improve as we scale up, breaking free from the power walls that constrain traditional GPUs.” The new funding includes investments from Microsoft’s investment fund M12 that will help speed up delivery of Neurophos’ first integrated photonic compute system, including datacenter-ready OPU modules. Neurophos is not the only company exploring this field. Last April, Lightmatter announced the launch of photonic chips to tackle data center bottlenecks, And in 2024, IBM said its researchers were exploring optical chips and developing a prototype in this area.

Read More »

Intel wrestling with CPU supply shortage

“We have important customers in the data center side. We have important OEM customers on both data center and client and that needs to be our priority to get the limited supply we have to those customers,” he added. CEO Lip-Bu Tan added that the continuing proliferation and diversification of AI workloads is placing significant capacity constraints on traditional and new hardware infrastructure, reinforcing the growing and essential role CPUs play in the AI era. Because of this, Intel decided to simplify its server road map, focusing resources on the 16-channel Diamond Rapids product and accelerate the introduction of Coral Rapids. Intel had removed multithreading from diamond Rapids, presumably to get rid of the performance bottlenecks. With each core running two threads, they often competed for resources. That’s why, for example, Ampere does not use threading but instead applies many more cores per CPU. With Coral Rapids, Intel is not only reintroducing multi-threading back into our data center road map but working closely with Nvidia to build a custom Xeon fully integrated with their NVLink technology to Build the tighter connection between Intel Xeon processors and Nvidia GPUs. Another aspect impacting supply has been yields or the new 18A process node. Tan said he was disappointed that the company could not fully meet the demand of the markets, and that while yields are in line with internal plans, “they’re still below where I want them to be,” Tan said.  Tan said yields for 18A are improving month-over-month and Intel is targeting a 7% to 8% improvement each month.

Read More »

Intel’s AI pivot could make lower-end PCs scarce in 2026

However, he noted, “CPUs are not being cannibalized by GPUs. Instead, they have become ‘chokepoints’ in AI infrastructure.” For instance, CPUs such as Granite Rapids are essential in GPU clusters, and for handling agentic AI workloads and orchestrating distributed inference. How pricing might increase for enterprises Ultimately, rapid demand for higher-end offerings resulted in foundry shortages of Intel 10/7 nodes, Bickley noted, which represent the bulk of the company’s production volume. He pointed out that it can take up to three quarters for new server wafers to move through the fab process, so Intel will be “under the gun” until at least Q2 2026, when it projects an increase in chip production. Meanwhile, manufacturing capacity for Xeon is currently sold out for 2026, with varying lead times by distributor, while custom silicon programs are seeing lead times of 6 to 8 months, with some orders rolling into 2027, Bickley said. In the data center, memory is the key bottleneck, with expected price increases of more than 65% year over year in 2026 and up to 25% for NAND Flash, he noted. Some specific products have already seen price inflation of over 1,000% since 2025, and new greenfield capacity for memory is not expected until 2027 or 2028. Moor’s Sag was a little more optimistic, forecasting that, on the client side, “memory prices will probably stabilize this year until more capacity comes online in 2027.” How enterprises can prepare Supplier diversification is the best solution for enterprises right now, Sag noted. While it might make things more complex, it also allows data center operators to better absorb price shocks because they can rebalance against suppliers who have either planned better or have more resilient supply chains.

Read More »

Reports of SATA’s demise are overblown, but the technology is aging fast

The SATA 1.0 interface made its debut in 2003. It was developed by a consortium consisting of Intel, Dell, and storage vendors like Seagate and Maxtor. It quickly advanced to SATA III in 2009, but there never was a SATA IV. There was just nibbling around the edges with incremental updates as momentum and emphasis shifted to PCI Express and NVMe. So is there any life to be had in the venerable SATA interface? Surprisingly, yes, say the analysts. “At a high level, yes, SATA for consumer is pretty much a dead end, although if you’re storing TB of photos and videos, it is still the least expensive option,” said Bob O’Donnell, president and chief analyst with TECHnalysis Research. Similarly for enterprise, for massive storage demands, the 20 and 30 TB SATA drives from companies like Seagate and WD are apparently still in wide use in cloud data centers for things like cold storage. “In fact, both of those companies are seeing recording revenues based, in part, on the demand for these huge, high-capacity low-cost drives,” he said. “SATA doesn’t make much sense anymore. It underperforms NVMe significantly,” said Rob Enderle, principal analyst with The Enderle Group. “It really doesn’t make much sense to continue make it given Samsung allegedly makes three to four times more margin on NVMe.” And like O’Donnell, Enderle sees continued life for SATA-based high-capacity hard drives. “There will likely be legacy makers doing SATA for some time. IT doesn’t flip technology quickly and SATA drives do wear out, so there will likely be those producing legacy SATA products for some time,” he said.

Read More »

DCN becoming the new WAN for AI-era applications

“DCN is increasingly treated as an end-to-end operating model that standardizes connectivity, security policy enforcement, and telemetry across users, the middle mile, and cloud/application edges,” Sanchez said. Dell’Oro defines DCN as platforms and services that deliver consistent connectivity, policy enforcement, and telemetry from users, across the WAN, to distributed cloud and application edges spanning branch sites, data centers and public clouds. The category is gaining relevance as hybrid architectures and AI-era traffic patterns increase the operational penalty for fragmented control planes. DCN buyers are moving beyond isolated upgrades and are prioritizing architectures that reduce operational seams across connectivity, security and telemetry so that incident response and change control can follow a single thread, according to Dell’Oro’s research. What makes DCN distinct is that it links user-to-application experience with where policy and visibility are enforced. This matters as application delivery paths become more dynamic and workloads shift between on-premises data centers, public cloud, and edge locations. The architectural requirement is eliminating handoffs between networking and security teams rather than optimizing individual network segments. Where DCN is growing the fastest Cloud/application edge is the fastest-growing DCN pillar. This segment deploys policy enforcement and telemetry collection points adjacent to workloads rather than backhauling traffic to centralized security stacks. “Multi-cloud remains a reality, but it is no longer the durable driver by itself,” Sanchez said. “Cloud/application edge is accelerating because enterprises are trying to make application paths predictable and secure across hybrid environments, and that requires pushing application-aware steering, policy enforcement, and unified telemetry closer to workloads.”

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote $200 billion between them to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet it’s been a regular among non-tech companies showing off technology at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.) John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation. AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
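The LLM-as-judge pattern mentioned above — asking three or more cheap models to grade an output and taking a vote — can be sketched generically. The judge functions below are hypothetical stand-ins for real model API calls:

```python
from collections import Counter
from typing import Callable

# A judge takes a candidate answer and returns a verdict: "pass" or "fail".
# In practice each judge would be a call to a different (cheap) model.
Judge = Callable[[str], str]

def majority_verdict(answer: str, judges: list[Judge]) -> str:
    """Collect one verdict per judge and return the majority vote."""
    votes = Counter(judge(answer) for judge in judges)
    return votes.most_common(1)[0][0]

# Toy judges with fixed heuristics, standing in for real model calls.
strict = lambda a: "pass" if len(a) > 20 else "fail"
lenient = lambda a: "pass"
non_empty = lambda a: "pass" if a.strip() else "fail"

print(majority_verdict("A short reply.", [strict, lenient, non_empty]))  # pass
```

Using an odd number of independent judges avoids ties, and cheaper models make running several judges per output economically viable — which is the cost trend the article alludes to.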

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models using these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends. It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see whether knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »