Stay Ahead, Stay ONMINE

Anthropic’s chief scientist on 5 ways agents will be even better in 2025

Agents are the hottest thing in tech right now. Top firms from Google DeepMind to OpenAI to Anthropic are racing to augment large language models with the ability to carry out tasks by themselves. Known as agentic AI in industry jargon, such systems have fast become the new target of Silicon Valley buzz. Everyone from Nvidia to Salesforce is talking about how they are going to upend the industry.  “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies,” Sam Altman claimed in a blog post last week. In the broadest sense, an agent is a software system that goes off and does something, often with minimal to zero supervision. The more complex that thing is, the smarter the agent needs to be. For many, large language models are now smart enough to power agents that can do a whole range of useful tasks for us, such as filling out forms, looking up a recipe and adding the ingredients to an online grocery basket, or using a search engine to do last-minute research before a meeting and producing a quick bullet-point summary. In October, Anthropic showed off one of the most advanced agents yet: an extension of its Claude large language model called computer use. As the name suggests, it lets you direct Claude to use a computer much as a person would, by moving a cursor, clicking buttons, and typing text. Instead of simply having a conversation with Claude, you can now ask it to carry out on-screen tasks for you. Anthropic notes that the feature is still cumbersome and error-prone. But it is already available to a handful of testers, including third-party developers at companies such as DoorDash, Canva, and Asana. Computer use is a glimpse of what’s to come for agents. To learn what’s coming next, MIT Technology Review talked to Anthropic’s cofounder and chief scientist Jared Kaplan. Here are five ways that agents are going to get even better in 2025. (Kaplan’s answers have been lightly edited for length and clarity.) 1/ Agents will get better at using tools “I think there are two axes for thinking about what AI is capable of. One is a question of how complex the task is that a system can do. And as AI systems get smarter, they’re getting better in that direction. But another direction that’s very relevant is what kinds of environments or tools the AI can use.  “So, like, if you go back almost 10 years now to [DeepMind’s Go-playing model] AlphaGo, we had AI systems that were superhuman in terms of how well they could play board games. But if all you can work with is a board game, then that’s a very restrictive environment. It’s not actually useful, even if it’s very smart. With text models, and then multimodal models, and now computer use—and perhaps in the future with robotics—you’re moving toward bringing AI into different situations and tasks, and making it useful.  “We were excited about computer use basically for that reason. Until recently, with large language models, it’s been necessary to give them a very specific prompt, give them very specific tools, and then they’re restricted to a specific kind of environment. What I see is that computer use will probably improve quickly in terms of how well models can do different tasks and more complex tasks. And also to realize when they’ve made mistakes, or realize when there’s a high-stakes question and it needs to ask the user for feedback.” 2/ Agents will understand context   “Claude needs to learn enough about your particular situation and the constraints that you operate under to be useful. Things like what particular role you’re in, what styles of writing or what needs you and your organization have. “I think that we’ll see improvements there where Claude will be able to search through things like your documents, your Slack, etc., and really learn what’s useful for you. That’s underemphasized a bit with agents. It’s necessary for systems to be not only useful but also safe, doing what you expected. “Another thing is that a lot of tasks won’t require Claude to do much reasoning. You don’t need to sit and think for hours before opening Google Docs or something. And so I think that a lot of what we’ll see is not just more reasoning but the application of reasoning when it’s really useful and important, but also not wasting time when it’s not necessary.” 3/ Agents will make coding assistants better “We wanted to get a very initial beta of computer use out to developers to get feedback while the system was relatively primitive. But as these systems get better, they might be more widely used and really collaborate with you on different activities. “I think DoorDash, the Browser Company, and Canva are all experimenting with, like, different kinds of browser interactions and designing them with the help of AI. “My expectation is that we’ll also see further improvements to coding assistants. That’s something that’s been very exciting for developers. There’s just a ton of interest in using Claude 3.5 for coding, where it’s not just autocomplete like it was a couple of years ago. It’s really understanding what’s wrong with code, debugging it—running the code, seeing what happens, and fixing it.” 4/ Agents will need to be made safe “We founded Anthropic because we expected AI to progress very quickly and [thought] that, inevitably, safety concerns were going to be relevant. And I think that’s just going to become more and more visceral this year, because I think these agents are going to become more and more integrated into the work we do. We need to be ready for the challenges, like prompt injection.  [Prompt injection is an attack in which a malicious prompt is passed to a large language model in ways that its developers did not foresee or intend. One way to do this is to add the prompt to websites that models might visit.] “Prompt injection is probably one of the No.1 things we’re thinking about in terms of, like, broader usage of agents. I think it’s especially important for computer use, and it’s something we’re working on very actively, because if computer use is deployed at large scale, then there could be, like, pernicious websites or something that try to convince Claude to do something that it shouldn’t do. “And with more advanced models, there’s just more risk. We have a robust scaling policy where, as AI systems become sufficiently capable, we feel like we need to be able to really prevent them from being misused. For example, if they could help terrorists—that kind of thing. “So I’m really excited about how AI will be useful—it’s actually also accelerating us a lot internally at Anthropic, with people using Claude in all kinds of ways, especially with coding. But, yeah, there’ll be a lot of challenges as well. It’ll be an interesting year.”

Agents are the hottest thing in tech right now. Top firms from Google DeepMind to OpenAI to Anthropic are racing to augment large language models with the ability to carry out tasks by themselves. Known as agentic AI in industry jargon, such systems have fast become the new target of Silicon Valley buzz. Everyone from Nvidia to Salesforce is talking about how they are going to upend the industry. 

“We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies,” Sam Altman claimed in a blog post last week.

In the broadest sense, an agent is a software system that goes off and does something, often with minimal to zero supervision. The more complex that thing is, the smarter the agent needs to be. For many, large language models are now smart enough to power agents that can do a whole range of useful tasks for us, such as filling out forms, looking up a recipe and adding the ingredients to an online grocery basket, or using a search engine to do last-minute research before a meeting and producing a quick bullet-point summary.

In October, Anthropic showed off one of the most advanced agents yet: an extension of its Claude large language model called computer use. As the name suggests, it lets you direct Claude to use a computer much as a person would, by moving a cursor, clicking buttons, and typing text. Instead of simply having a conversation with Claude, you can now ask it to carry out on-screen tasks for you.

Anthropic notes that the feature is still cumbersome and error-prone. But it is already available to a handful of testers, including third-party developers at companies such as DoorDash, Canva, and Asana.

Computer use is a glimpse of what’s to come for agents. To learn what’s coming next, MIT Technology Review talked to Anthropic’s cofounder and chief scientist Jared Kaplan. Here are five ways that agents are going to get even better in 2025.

(Kaplan’s answers have been lightly edited for length and clarity.)

1/ Agents will get better at using tools

“I think there are two axes for thinking about what AI is capable of. One is a question of how complex the task is that a system can do. And as AI systems get smarter, they’re getting better in that direction. But another direction that’s very relevant is what kinds of environments or tools the AI can use. 

“So, like, if you go back almost 10 years now to [DeepMind’s Go-playing model] AlphaGo, we had AI systems that were superhuman in terms of how well they could play board games. But if all you can work with is a board game, then that’s a very restrictive environment. It’s not actually useful, even if it’s very smart. With text models, and then multimodal models, and now computer use—and perhaps in the future with robotics—you’re moving toward bringing AI into different situations and tasks, and making it useful. 

“We were excited about computer use basically for that reason. Until recently, with large language models, it’s been necessary to give them a very specific prompt, give them very specific tools, and then they’re restricted to a specific kind of environment. What I see is that computer use will probably improve quickly in terms of how well models can do different tasks and more complex tasks. And also to realize when they’ve made mistakes, or realize when there’s a high-stakes question and it needs to ask the user for feedback.”

2/ Agents will understand context  

“Claude needs to learn enough about your particular situation and the constraints that you operate under to be useful. Things like what particular role you’re in, what styles of writing or what needs you and your organization have.

Jared Kaplan

“I think that we’ll see improvements there where Claude will be able to search through things like your documents, your Slack, etc., and really learn what’s useful for you. That’s underemphasized a bit with agents. It’s necessary for systems to be not only useful but also safe, doing what you expected.

“Another thing is that a lot of tasks won’t require Claude to do much reasoning. You don’t need to sit and think for hours before opening Google Docs or something. And so I think that a lot of what we’ll see is not just more reasoning but the application of reasoning when it’s really useful and important, but also not wasting time when it’s not necessary.”

3/ Agents will make coding assistants better

“We wanted to get a very initial beta of computer use out to developers to get feedback while the system was relatively primitive. But as these systems get better, they might be more widely used and really collaborate with you on different activities.

“I think DoorDash, the Browser Company, and Canva are all experimenting with, like, different kinds of browser interactions and designing them with the help of AI.

“My expectation is that we’ll also see further improvements to coding assistants. That’s something that’s been very exciting for developers. There’s just a ton of interest in using Claude 3.5 for coding, where it’s not just autocomplete like it was a couple of years ago. It’s really understanding what’s wrong with code, debugging it—running the code, seeing what happens, and fixing it.”

4/ Agents will need to be made safe

“We founded Anthropic because we expected AI to progress very quickly and [thought] that, inevitably, safety concerns were going to be relevant. And I think that’s just going to become more and more visceral this year, because I think these agents are going to become more and more integrated into the work we do. We need to be ready for the challenges, like prompt injection. 

[Prompt injection is an attack in which a malicious prompt is passed to a large language model in ways that its developers did not foresee or intend. One way to do this is to add the prompt to websites that models might visit.]

“Prompt injection is probably one of the No.1 things we’re thinking about in terms of, like, broader usage of agents. I think it’s especially important for computer use, and it’s something we’re working on very actively, because if computer use is deployed at large scale, then there could be, like, pernicious websites or something that try to convince Claude to do something that it shouldn’t do.

“And with more advanced models, there’s just more risk. We have a robust scaling policy where, as AI systems become sufficiently capable, we feel like we need to be able to really prevent them from being misused. For example, if they could help terrorists—that kind of thing.

“So I’m really excited about how AI will be useful—it’s actually also accelerating us a lot internally at Anthropic, with people using Claude in all kinds of ways, especially with coding. But, yeah, there’ll be a lot of challenges as well. It’ll be an interesting year.”

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

TotalEnergies farms out 40% participating interest in certain licenses offshore Nigeria to Chevron

@import url(‘https://fonts.googleapis.com/css2?family=Inter:[email protected]&display=swap’); a { color: var(–color-primary-main); } .ebm-page__main h1, .ebm-page__main h2, .ebm-page__main h3, .ebm-page__main h4, .ebm-page__main h5, .ebm-page__main h6 { font-family: Inter; } body { line-height: 150%; letter-spacing: 0.025em; font-family: Inter; } button, .ebm-button-wrapper { font-family: Inter; } .label-style { text-transform: uppercase; color: var(–color-grey); font-weight: 600; font-size: 0.75rem; } .caption-style

Read More »

AI-driven network management gains enterprise trust

The way the full process works is that the raw data feed comes in, and machine learning is used to identify an anomaly that could be a possible incident. That’s where the generative AI agents step up. In addition to the history of similar issues, the agents also look for

Read More »

Chinese cyberspies target VMware vSphere for long-term persistence

Designed to work in virtualized environments The CISA, NSA, and Canadian Cyber Center analysts note that some of the BRICKSTORM samples are virtualization-aware and they create a virtual socket (VSOCK) interface that enables inter-VM communication and data exfiltration. The malware also checks the environment upon execution to ensure it’s running

Read More »

Russian Crude Output Lags OPEC+ Quota

Russia’s crude oil output last month was well below its OPEC+ quota, as the country struggled to find buyers for its sanctioned barrels and Ukrainian drone attacks hampered refineries. An average 9.43 million barrels a day were pumped in November, according to people with knowledge of the data, who asked not to be identified discussing confidential information. While that was 19,000 barrels a day above the October level, it lagged the nation’s November target by more than 100,000 barrels a day, Bloomberg calculations show. That’s also the most that Russia’s actual reported production has lagged its monthly OPEC+ quota, including compensation cuts, in over two years. It’s another sign that Moscow faces a challenge in offloading its oil, a key source of revenue that the Kremlin uses to fund its war with Ukraine. Russia had historically been one of the biggest laggards in complying with OPEC+ output agreements, pumping above the targets, and even had to make additional cuts to compensate for overproduction. The final cuts were made in October, according to the Organization of Petroleum Exporting Countries. US sanctions that hit oil giants Rosneft PJSC and Lukoil PJSC have in recent weeks reduced appetite for Russian barrels in key buyer India. Still, traders and refiners have said that volumes could rebound as unsanctioned suppliers and new trading intermediaries appear. The difficulty in finding buyers has led to the amount of Russian oil on water growing. In addition to vessels idling for long periods, some are taking longer voyages. Drone Attacks Meanwhile, Ukraine carried out record attacks on Russian refineries last month, pressuring crude-processing volumes as refinery owners rushed to repair infrastructure. The two sides are fighting an increasingly intense energy war as they attempt to gain a meaningful advantage as peace efforts drag on. The Energy Ministry didn’t immediately respond to a request for comment on

Read More »

Oil Falls Again on Oversupply Signs

Oil declined for a second day, dragged lower by weakness in refined products as traders await data expected to shed light on the extent of crude surpluses. West Texas Intermediate dipped 1.1% to settle near $58 a barrel, pressured by routs in diesel, gasoline and other products. The difference between the price of US gasoline and crude oil, known as a crack spread, fell to the weakest since February, while a comparable gauge for diesel also slid. Refined products had been one of few tailwinds for crude this year, and the recent demand-driven weakness is exacerbating a sense of bearish gloom ahead of a widely telegraphed glut. Some trend-following commodity trading advisers were selling positions in the products, according to data from Bridgeton Research Group. Such market participants can intensify price momentum. Traders are looking ahead to a slew of reports from the International Energy Agency and OPEC set to be published this week, as well as a Wednesday decision on monetary policy from the Federal Reserve. US crude output is expected to hit a record 13.61 million barrels a day this year, according to the Energy Information Administration’s Short-Term Energy Outlook released Tuesday, adding to short-term oversupply concerns. The IEA has predicted a record oil surplus next year, and the volume of crude crossing oceans is rising. Fuel prices have softened in recent days, removing one factor that had supported crude during the past few weeks. Still, the US oil benchmark remains in the tight $4-a-barrel range it has traded in since the start of November. “Eventually, the current huge blob of oil at sea will move onshore where the sensation of rising crude oil stocks will be more tangible,” said Bjarne Schieldrop, chief commodities analyst at SEB AB. “The only reason why Brent crude hasn’t fallen faster and

Read More »

Morocco Gets Closer to Creating $1B LNG Import Hub

Morocco is getting closer to creating an almost $1 billion liquefied natural gas hub at a new deep-sea port on its Mediterranean coast, as it plans to boost imports to curb the use of dirtier fuels. The nation this week issued a tender for a company to supply a floating storage and regasification unit that will be moored at the Nador West Med port that’s due to start operating next year. It’s also looking to pick firms to build, finance and operate new pipelines connecting the port to major industrial areas.  Morocco aims to become a player in LNG imports, with the government planning to spend $3.5 billion to boost gas consumption from 1.2 billion cubic meters to 12 billion cubic meters by 2030. The new projects will help counter the loss of Algerian supplies in 2021 following a diplomatic dispute, while gas is an important bridge fuel for manufacturing industries that export goods to Europe. The Ministry of Energy Transition and Sustainable Development estimated the FSRU would cost about $273 million, while the new pipelines would require investments of $681 million. The pipelines will be connected to the Maghreb-Europe link, through which Morocco imports gas from Europe, as the projects will also form the backbone of a gas network that may one day carry green hydrogen both home and abroad. The country’s gas plans involve spending $1.5 billion on infrastructure to import LNG to replace dirtier feedstocks such as fuel oil and coal in the industrial sector, and investing $2 billion to construct gas-fired plants that would triple the amount of power generated by gas. Morocco plans to decarbonize its economy by 2050 — phasing out coal along the way — including by expanding in solar and wind generation as well as battery-storage facilities. Authorities expect about $11 billion in investment to add

Read More »

YPF lets contract for Vaca Muerta drilling in Argentina

@import url(‘https://fonts.googleapis.com/css2?family=Inter:[email protected]&display=swap’); a { color: var(–color-primary-main); } .ebm-page__main h1, .ebm-page__main h2, .ebm-page__main h3, .ebm-page__main h4, .ebm-page__main h5, .ebm-page__main h6 { font-family: Inter; } body { line-height: 150%; letter-spacing: 0.025em; font-family: Inter; } button, .ebm-button-wrapper { font-family: Inter; } .label-style { text-transform: uppercase; color: var(–color-grey); font-weight: 600; font-size: 0.75rem; } .caption-style { font-size: 0.75rem; opacity: .6; } #onetrust-pc-sdk [id*=btn-handler], #onetrust-pc-sdk [class*=btn-handler] { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-policy a, #onetrust-pc-sdk a, #ot-pc-content a { color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-pc-sdk .ot-active-menu { border-color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-accept-btn-handler, #onetrust-banner-sdk #onetrust-reject-all-handler, #onetrust-consent-sdk #onetrust-pc-btn-handler.cookie-setting-link { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-consent-sdk .onetrust-pc-btn-handler { color: #c19a06 !important; border-color: #c19a06 !important; } YPF SA has let a 5-year contract to Archer Ltd. for drilling services in the Vaca Muerta area of Argentina. Under the terms of the agreement, Archer will provide and operate seven drilling rigs, equipped with integrated Managed Pressure Drilling (MPD) systems. Two of these rigs will be leased internationally, bringing additional drilling capacity to Argentina. In addition to the firm 5-year term, the contract includes a 2-year extension option. The contract has a total estimated value of $600 million. The Vaca Muerta formation is the main source rock and one of the largest areal stratigraphic units in Neuquén basin, onshore Argentina. The formation trends to calcareous sandstones on the western and middle sections of the basin and towards limestones to the east in a shelf. Depositional depths are less than 300 m and extend about 90,000 sq km, of which 30,000 sq km are prospective for unconventional exploitation. On Aug. 6, 2025, YPF acquired interest in two unconventional oil and gas blocks in Vaca Muerta.

Read More »

OPEC+ keeps output increase on hold, approves new quota system

OPEC and its allies (OPEC+) agreed on Sunday, Nov. 30, to keep oil production policy unchanged into early 2026 while approving a new capacity-based quota system that will reshape how the group allocates output from 2027 onward. Meeting virtually, the eight participating OPEC+ members—Algeria, Iraq, Kazakhstan, Kuwait, Oman, Russia, Saudi Arabia, and the UAE—reaffirmed their Nov. 2 decision to maintain current production levels through first-quarter 2026. They will continue meeting monthly to track adherence and discuss any need for additional action, with the next gathering set for Jan. 4, 2026. Since April 2025, the OPEC+ group has introduced about 2.9 million b/d into the market, while continuing to restrict around 3.24 million b/d of supply, which accounts for roughly 3% of global demand. The meeting took place amid renewed US efforts to negotiate a peace agreement between Russia and Ukraine. A successful deal could potentially increase global oil supply if sanctions on Russia are lifted. In parallel, the broader OPEC+ ministerial meeting confirmed group-wide 2026 quotas previously agreed earlier this year, signaling that no fresh changes in baseline targets are planned before the end of next year unless market conditions deteriorate sharply. New Maximum Sustainable Capacity audits Beyond near-term policy, the most consequential move from the Nov. 30 meetings was approval of a new quota framework based on audited Maximum Sustainable Production Capacity (MSC), which will be used to set production baselines starting in 2027. Under the mechanism, OPEC+ will commission third-party audits of most its members’ sustainable production capacity between January and September 2026. A US consultancy, DeGolyer and MacNaughton, will assess most producers, while separate arrangements will be used for Russia and Venezuela and domestic figures for Iran because of sanctions and data-sharing constraints. MSC is defined as the level of output a country can sustain for a

Read More »

EIA: US crude inventories up 600,000 bbl

US crude oil inventories for the week ended Nov. 28, excluding the Strategic Petroleum Reserve, increased by 600,000 bbl from the previous week, according to data from the US Energy Information Administration. At 427.5 million bbl, US crude oil inventories are about 3% below the 5-year average for this time of year, the EIA report indicated. EIA said total motor gasoline inventories increased by 4.5 million bbl from last week and are about 2% below the 5-year average for this time of year. Finished gasoline inventories and blending components inventories both increased last week. Distillate fuel inventories increased by 2.1 million bbl last week and are about 7% below the 5-year average for this time of year. Propane-propylene inventories decreased by 700,000 bbl from last week and are about 15% above the 5-year average for this time of year, EIA said. US crude oil refinery inputs averaged 16.9 million b/d for the week ended Nov. 28, which was 433,000 b/d more than the previous week’s average. Refineries operated at 94.1% of capacity. Gasoline production increased, averaging 9.8 million b/d. Distillate fuel production increased by 53,000 b/d, averaging 5.1 million b/d. US crude oil imports averaged 6.0 million b/d, down 456,000 b/d from the previous week. Over the last 4 weeks, crude oil imports averaged about 5.9 million b/d, 14.4% less than the same 4-week period last year. Total motor gasoline imports averaged 772,000 b/d. Distillate fuel imports averaged 190,000 b/d.

Read More »

Aviz Networks launches enterprise-grade community SONiC distribution

First, the company enabled FRR (Free Range Routing) features that exist in the community code but aren’t consistently implemented across different ASICs. VRRP (Virtual Router Redudancy Protocol) provides router redundancy for high availability. Spanning tree variants prevent network loops in layer 2 topologies. MLAG allows two switches to act as a single logical device for link aggregation. EVPN enhancements support layer 2 and layer 3 VPN services over VXLAN overlays. These protocols work differently depending on the underlying silicon, so Aviz normalized their implementation across Broadcom, Nvidia, Cisco and Marvell chips. Second, Aviz fixed bugs discovered in production deployments. One customer deployed community SONiC with OpenStack and started migrating virtual machines between hosts. The network fabric couldn’t handle the workload and broke. Aviz identified the failure modes and patched them.  Third, Aviz built a software component that normalizes monitoring data across vendors. Broadcom’s Tomahawk ASIC generates different telemetry formats than Nvidia’s Spectrum or Cisco’s Silicon One. Network operators need consistent data for troubleshooting and capacity planning. The software collects ASIC-specific logs and network operating system telemetry, then translates them into a standardized format that works the same way regardless of which silicon vendor’s chips are running in the switches. Validated for enterprise deployment scenarios The distribution supports common enterprise network architectures.  IP CLOS provides the leaf-spine topology used in modern data centers for predictable latency and scalability. EVPN/VXLAN creates layer 2 and layer 3 overlay networks that span physical network boundaries. MLAG configurations provide link redundancy without spanning tree limitations. Aviz provides validated runbooks for these deployments across data center, edge and AI fabric use cases. 

Read More »

US approves Nvidia H200 exports to China, raising questions about enterprise GPU supply

Shifting demand scenarios What remains unclear is how much demand Chinese firms will actually generate, given Beijing’s recent efforts to steer its tech companies away from US chips. Charlie Dai, VP and principal analyst at Forrester, said renewed H200 access is likely to have only a modest impact on global supply, as China is prioritizing domestic AI chips and the H200 remains below Nvidia’s latest Blackwell-class systems in performance and appeal. “While some allocation pressure may emerge, most enterprise customers outside China will see minimal disruption in pricing or lead times over the next few quarters,” Dai added. Neil Shah, VP for research and partner at Counterpoint Research, agreed that demand may not surge, citing structural shifts in China’s AI ecosystem. “The Chinese ecosystem is catching up fast, from semi to stack, with models optimized on the silicon and software,” Shah said. Chinese enterprises might think twice before adopting a US AI server stack, he said. Others caution that even selective demand from China could tighten global allocation at a time when supply of high-end accelerators remains stretched, and data center deployments continue to rise.

Read More »

What does Arm need to do to gain enterprise acceptance?

But in 2017, AMD released the Zen architecture, which was equal if not superior to the Intel architecture. Zen made AMD competitive, and it fueled an explosive rebirth for a company that was near death a few years prior. AMD now has about 30% market share, while Intel suffers from a loss of technology as well as corporate leadership. Now, customers have a choice of Intel or AMD, and they don’t have to worry about porting their applications to a new platform like they would have to do if they switched to Arm. Analysts weigh in on Arm Tim Crawford sees no demand for Arm in the data center. Crawford is president of AVOA, a CIO consultancy. In his role, he talks to IT professionals all the time, but he’s not hearing much interest in Arm. “I don’t see Arm really making a dent, ever, into the general-purpose processor space,” Crawford said. “I think the opportunity for Arm is special applications and special silicon. If you look at the major cloud providers, their custom silicon is specifically built to do training or optimized to do inference. Arm is kind of in the same situation in the sense that it has to be optimized.” “The problem [for Arm] is that there’s not necessarily a need to fulfill at this point in time,” said Rob Enderle, principal analyst with The Enderle Group. “Obviously, there’s always room for other solutions, but Arm is still going to face the challenge of software compatibility.” And therein lies what may be Arm’s greatest challenge: software compatibility. Software doesn’t care (usually) if it’s on Intel or AMD, because both use the x86 architecture, with some differences in extensions. But Arm is a whole new platform, and that requires porting and testing. Enterprises generally don’t like disruption —

Read More »

Intel decides to keep networking business after all

That doesn’t explain why Intel made the decision to pursue spin-off in the first place. In July, NEX chief Sachin Katti issued a memo that outlined plans to establish key elements of the Networking and Communications business as a stand-alone company. It looked like a done deal, experts said. Jim Hines, research director for enabling technologies and semiconductors at IDC, declined to speculate on whether Intel could get a decent offer but noted NEX is losing ground. IDC estimates Intel’s market share in overall semiconductors at 6.8% in Q3 2025, which is down from 7.4% for the full year 2024 and 9.2% for the full year 2023. Intel’s course reversal “is a positive for Intel in the long term, and recent improvements in its financial situation may have contributed to the decision to keep NEX in house,” he said. When Tan took over as CEO earlier this year, prioritized strengthening the balance sheet and bringing a greater focus on execution. Divest NEX was aligned with these priorities, but since then, Intel has secured investments from the US Government, Nvidia and SoftBank that have reduced the need to raise cash through other means, Hines notes. “The NEX business will prove to be a strategic asset for Intel as it looks to protect and expand its position in the AI datacenter market. Success in this market now requires processor suppliers to offer a full-stack solution, not just silicon. Scale-up and scale-out networking solutions are a key piece of the package, and Intel will be able to leverage its NEX technologies and software, including silicon photonics, to develop differentiated product offerings in this space,” Hines said.

Read More »

At the Crossroads of AI and the Edge: Inside 1623 Farnam’s Rising Role as a Midwest Interconnection Powerhouse

That was the thread that carried through our recent conversation for the DCF Show podcast, where Severn walked through the role Farnam now plays in AI-driven networking, multi-cloud connectivity, and the resurgence of regional interconnection as a core part of U.S. digital infrastructure. Aggregation, Not Proximity: The Practical Edge Severn is clear-eyed about what makes the edge work and what doesn’t. The idea that real content delivery could aggregate at the base of cell towers, he noted, has never been realistic. The traffic simply isn’t there. Content goes where the network already concentrates, and the network concentrates where carriers, broadband providers, cloud onramps, and CDNs have amassed critical mass. In Farnam’s case, that density has grown steadily since the building changed hands in 2018. At the time an “underappreciated asset,” the facility has since become a meeting point for more than 40 broadband providers and over 60 carriers, with major content operators and hyperscale platforms routing traffic directly through its MMRs. That aggregation effect feeds on itself; as more carrier and content traffic converges, more participants anchor themselves to the hub, increasing its gravitational pull. Geography only reinforces that position. Located on the 41st parallel, the building sits at the historical shortest-distance path for early transcontinental fiber routes. It also lies at the crossroads of major east–west and north–south paths that have made Omaha a natural meeting point for backhaul routes and hyperscale expansions across the Midwest. AI and the New Interconnection Economy Perhaps the clearest sign of Farnam’s changing role is the sheer volume of fiber entering the building. More than 5,000 new strands are being brought into the property, with another 5,000 strands being added internally within the Meet-Me Rooms in 2025 alone. These are not incremental upgrades—they are hyperscale-grade expansions driven by the demands of AI traffic,

Read More »

Schneider Electric’s $2.3 Billion in AI Power and Cooling Deals Sends Message to Data Center Sector

When Schneider Electric emerged from its 2025 North American Innovation Summit in Las Vegas last week with nearly $2.3 billion in fresh U.S. data center commitments, it didn’t just notch a big sales win. It arguably put a stake in the ground about who controls the AI power-and-cooling stack over the rest of this decade. Within a single news cycle, Schneider announced: Together, the deals total about $2.27 billion in U.S. data center infrastructure, a number Schneider confirmed in background with multiple outlets and which Reuters highlighted as a bellwether for AI-driven demand.  For the AI data center ecosystem, these contracts function like early-stage fuel supply deals for the power and cooling systems that underpin the “AI factory.” Supply Capacity Agreements: Locking in the AI Supply Chain Significantly, both deals are structured as supply capacity agreements, not traditional one-off equipment purchase orders. Under the SCA model, Schneider is committing dedicated manufacturing lines and inventory to these customers, guaranteeing output of power and cooling systems over a multi-year horizon. In return, Switch and Digital Realty are providing Schneider with forecastable volume and visibility at the scale of gigawatt-class campus build-outs.  A Schneider spokesperson told Reuters that the two contracts are phased across 2025 and 2026, underscoring that this arrangement is about pipeline, as opposed to a one-time backlog spike.  That structure does three important things for the market: Signals confidence that AI demand is durable.You don’t ring-fence billions of dollars of factory output for two customers unless you’re highly confident the AI load curve runs beyond the current GPU cycle. Pre-allocates power & cooling the way the industry pre-allocated GPUs.Hyperscalers and neoclouds have already spent two years locking up Nvidia and AMD capacity. These SCAs suggest power trains and thermal systems are joining chips on the list of constrained strategic resources.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »