
Salesforce used AI to cut support load by 5% — but the real win was teaching bots to say ‘I’m sorry’


Salesforce has crossed a significant threshold in the enterprise AI race, surpassing 1 million autonomous agent conversations on its help portal — a milestone that offers a rare glimpse into what it takes to deploy AI agents at massive scale and the surprising lessons learned along the way.

The achievement, confirmed by company executives in exclusive interviews with VentureBeat, comes just nine months after Salesforce launched Agentforce on its Help Portal in October. The platform now resolves 84% of customer queries autonomously, has led to a 5% reduction in support case volume, and enabled the company to redeploy 500 human support engineers to higher-value roles.

But perhaps more valuable than the raw numbers are the hard-won insights Salesforce gleaned from being what executives call “customer zero” for their own AI agent technology — lessons that challenge conventional wisdom about enterprise AI deployment and reveal the delicate balance required between technological capability and human empathy.

“We started really small. We launched basically to a cohort of customers on our Help Portal. It had to be English to start with. You had to be logged in and we released it to about 10% of our traffic,” explains Bernard Shaw, SVP of Digital Customer Success at Salesforce, who led the Agentforce implementation. “The first week, I think there was 126 conversations, if I remember rightly. So me and my team could read through each one of them.”

This methodical approach — starting with a controlled rollout before expanding to handle the current average of 45,000 conversations weekly — stands in stark contrast to the “move fast and break things” ethos often associated with AI deployment. The phased release allowed Salesforce to identify and fix critical issues before they could impact the broader customer base.
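The gating criteria Shaw describes (logged-in users, English only, 10% of traffic) can be sketched as a simple rollout check. This is a hypothetical illustration of percentage-based gating, not Salesforce's actual code; the stable hash keeps each user's assignment consistent across visits.

```python
import hashlib

def in_rollout(user_id: str, percent: int, logged_in: bool = True,
               language: str = "en") -> bool:
    """Decide whether a user sees the agent during a phased rollout.

    Mirrors the criteria described in the article: logged-in users,
    English only, and a fixed percentage of traffic. Hashing the user ID
    into a 0-99 bucket keeps each user's assignment stable as the
    percentage grows. (Illustrative sketch only.)
    """
    if not logged_in:
        return False
    if language != "en":
        return False
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent
```

Expanding the rollout is then just raising `percent`: users already in the 10% bucket stay in, and new buckets are added without reshuffling anyone.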

The technical foundation proved crucial. Unlike traditional chatbots that rely on decision trees and pre-programmed responses, Agentforce leverages Salesforce’s Data Cloud to access and synthesize information from 740,000 pieces of content across multiple languages and product lines.

“The biggest difference here is, coming back to my data cloud thing is we were able to go out the gate and answer pretty much any question about any Salesforce product,” Shaw notes. “I don’t think we could have done it without data cloud.”
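The retrieve-then-synthesize pattern Shaw alludes to, where relevant chunks of help content ground the model's answer, can be sketched minimally. The keyword-overlap scoring below is a stand-in for the vector retrieval a platform like Data Cloud would actually perform; the function names and prompt wording are invented for illustration.

```python
def retrieve(query: str, articles: dict[str, str], k: int = 2) -> list[str]:
    """Rank help articles by naive keyword overlap with the query and
    return the top-k titles. A toy stand-in for production vector
    retrieval (illustrative only)."""
    q = set(query.lower().split())
    scored = sorted(articles,
                    key=lambda t: -len(q & set(articles[t].lower().split())))
    return scored[:k]

def build_prompt(query: str, articles: dict[str, str]) -> str:
    """Ground the generation step in the retrieved chunks so the model
    answers from knowledge-base content rather than from memory."""
    chunks = "\n".join(f"- {t}: {articles[t]}" for t in retrieve(query, articles))
    return f"Answer using only these sources:\n{chunks}\n\nQuestion: {query}"
```

The point of the pattern is that coverage comes from the content store, not the model: any product documented in the 740,000 articles is answerable on day one.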

Why Salesforce taught its AI agents empathy after customers rejected cold, robotic responses

One of the most striking revelations from Salesforce’s journey involves what Joe Inzerillo, the company’s Chief Digital Officer, calls “the human part” of being a support agent.

“When we first launched the agent, we were really concerned about, like, data factualism, you know, what is it getting the right data? Is it given the right answers and stuff like that? And what we realized is we kind of forgot about the human part,” Inzerillo reveals. “Somebody calls down and they’re like, hey, my stuff’s broken. I have a sub one incident right now, and you just come into like, ‘All right, well, I’ll open a ticket for you.’ It doesn’t feel great.”

This realization led to a fundamental shift in how Salesforce approached AI agent design. The company took its existing soft skills training program for human support engineers—what they call “the art of service” — and integrated it directly into Agentforce’s prompts and behaviors.

“If you come now and say, ‘Hey, I’m having a Salesforce outage,’ Agentforce will apologize. ‘I’m so sorry. Like, that’s terrible. Let me get you through,’ and we’ll get that through to our engineering team,” Shaw explains. The impact on customer satisfaction was immediate and measurable.
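Folding soft-skills training into an agent's prompts, as Salesforce describes, amounts to prepending behavioural guidance to the task instructions. The wording below is invented for illustration and is not Salesforce's actual "art of service" prompt.

```python
# Illustrative system-prompt fragment encoding soft-skills guidance of
# the kind the article describes. The exact wording is hypothetical.
SERVICE_STYLE = """
When a customer reports an outage or urgent incident:
1. Acknowledge the impact and apologize before troubleshooting.
2. Mirror the customer's urgency in your tone.
3. State the concrete next step (escalation, ticket, fix) you will take.
"""

def build_system_prompt(base_instructions: str) -> str:
    """Prepend the behavioural guidance to the task instructions so tone
    is part of the prompt, not an afterthought."""
    return SERVICE_STYLE.strip() + "\n\n" + base_instructions
```

Because the guidance lives in the prompt rather than in code, the same training material written for human engineers can be reused nearly verbatim.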

The surprising reason Salesforce increased human handoffs from 1% to 5% for better customer outcomes

Perhaps no metric better illustrates the complexity of deploying enterprise AI agents than Salesforce’s evolving approach to human handoffs. Initially, the company celebrated a 1% handoff rate — meaning only 1% of conversations were escalated from AI to human agents.

“We were literally high fiving each other, going, ‘oh my god, like only 1%,’” Shaw recalls. “And then we look at the actual conversation. Was terrible. People were frustrated. They wanted to go to a human. The agent kept trying. It was just getting in the way.”

This led to a counterintuitive insight: making it harder for customers to reach humans actually degraded the overall experience. Salesforce adjusted its approach, and the handoff rate rose to approximately 5%.

“I actually feel really good about that,” Shaw emphasizes. “If you want to create a case, you want to talk to a support engineer, that’s fine. Go ahead and do that.”

Inzerillo frames this as a fundamental shift in thinking about service metrics: “At 5% you really did get the vast, vast, vast majority in that 95% solved, and the people who didn’t got to a human faster. And so therefore their CSAT went up in the hybrid approach, where you had an agent and a human working together, you got better results than each of them had independently.”
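The revised handoff policy, escalating the moment a customer asks for a human instead of letting the agent "keep trying", can be sketched as a small decision function. The phrase list and turn threshold are illustrative, not Salesforce's production logic.

```python
# Phrases that signal an explicit request for a human (illustrative).
HANDOFF_PHRASES = ("talk to a human", "speak to a person", "real person",
                   "support engineer", "create a case")

def should_escalate(message: str, failed_turns: int, max_failed: int = 2) -> bool:
    """Escalate when the customer explicitly asks for a human, or when
    the agent has failed to resolve the issue for several turns, rather
    than getting in the way. (Hypothetical policy sketch.)"""
    msg = message.lower()
    if any(phrase in msg for phrase in HANDOFF_PHRASES):
        return True
    return failed_turns >= max_failed
```

The counterintuitive lesson is encoded in the first branch: an explicit request for a human wins immediately, even if the agent believes it could still answer.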

How ‘content collisions’ forced Salesforce to delete thousands of help articles for AI accuracy

Salesforce’s experience also revealed critical lessons about content management that many enterprises overlook when deploying AI. Despite having 740,000 pieces of content across multiple languages, the company discovered that abundance created its own problems.

“There’s this words my team has been using that are new words to me, of content collisions,” Shaw explains. “Loads of password reset articles. And so it struggles on what’s the right article for me to take the chunks into Data Cloud and go to OpenAI and back and answer?”

This led to an extensive “content hygiene” initiative where Salesforce deleted outdated content, fixed inaccuracies, and consolidated redundant articles. The lesson: AI agents are only as good as the knowledge they can access, and sometimes less is more.
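A content-hygiene pass for "collisions" can be approximated by flagging pairs of articles that are nearly identical and therefore compete at retrieval time. The word-level Jaccard similarity below is a toy proxy; a production pass would compare embeddings.

```python
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two article bodies."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def find_collisions(articles: dict[str, str],
                    threshold: float = 0.6) -> list[tuple[str, str]]:
    """Flag article pairs similar enough to 'collide' at retrieval time,
    as candidates for consolidation or deletion. (Illustrative hygiene
    pass; the threshold is an assumption.)"""
    titles = list(articles)
    return [(a, b) for i, a in enumerate(titles) for b in titles[i + 1:]
            if jaccard(articles[a], articles[b]) >= threshold]
```

Running such a pass over a corpus of redundant password-reset articles surfaces exactly the clusters Shaw describes, so editors can keep one canonical version.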

The Microsoft Teams integration that exposed why rigid AI guardrails backfire

One of the most enlightening mistakes Salesforce made involved being overly restrictive with AI guardrails. Initially, the company instructed Agentforce not to discuss competitors, listing every major rival by name.

“We were worried people were going to come in and go, ‘is HubSpot better than Salesforce’ or something like that,” Shaw admits. But this created an unexpected problem: when customers asked legitimate questions about integrating Microsoft Teams with Salesforce, the agent refused to answer because Microsoft was on the competitor list.

The solution was elegantly simple: instead of rigid rules, Salesforce replaced the restrictive guardrails with a single instruction to “act in Salesforce’s best interest in everything you do.”

“We realized we were still treating it like an old school chatbot, and what we needed to do is we needed to let the LLM be an LLM,” Shaw reflects.
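The failure mode is easy to reproduce: a keyword blocklist cannot distinguish "is HubSpot better than Salesforce?" from "how do I integrate Microsoft Teams?". The competitor list below is illustrative (the article names only HubSpot and Microsoft), as is the replacement principle's exact wording.

```python
# The over-restrictive approach (paraphrased): block any turn that
# mentions a name on a competitor list.
COMPETITORS = {"microsoft", "hubspot"}

def blocked_by_old_guardrail(query: str) -> bool:
    """Old-style keyword guardrail. Note it also blocks legitimate
    integration questions that happen to mention a competitor."""
    return any(name in query.lower() for name in COMPETITORS)

# The replacement: drop the keyword list and steer the model with a
# single principle in the system prompt instead.
PRINCIPLE = "Act in Salesforce's best interest in everything you do."
```

The first assertion in practice is the bug: a valid Teams-integration question trips the blocklist, which is exactly what pushed Salesforce to "let the LLM be an LLM" and reason from the principle instead.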

Voice interfaces and multilingual support drive Salesforce’s next phase of AI agent evolution

Looking ahead, Salesforce is preparing for what both executives see as the next major evolution in AI agents: voice interfaces.

“I actually believe voice is the UX of agents,” Shaw states. The company is developing iOS and Android native apps with voice capabilities, with plans to showcase them at Dreamforce later this year.

Inzerillo, drawing on his experience leading digital transformation at Disney, adds crucial context: “What’s important about voice is to understand that the chat is really foundational to the voice. Because chat, like, you still have to have all your information, you still have to have all those rules… If you jump right to voice, the real problem with voice is it’s got to be very fast and it’s got to be very accurate.”

The company has already expanded Agentforce to support Japanese using an innovative approach—rather than translating content, the system translates customer queries to English, retrieves relevant information, and translates responses back. With 87% resolution rates in Japanese after just three weeks, Salesforce plans to add French, German, Italian, and Spanish support by the end of July.
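The pivot-through-English pipeline described above translates the query in, answers against the English knowledge base, and translates the response out. In this sketch `translate` and `retrieve_answer` are injected stand-ins for the real translation and retrieval services.

```python
from typing import Callable

def answer_multilingual(query: str, lang: str,
                        translate: Callable[[str, str, str], str],
                        retrieve_answer: Callable[[str], str]) -> str:
    """Pivot-through-English pipeline as the article describes:
    translate the customer's query to English, answer against the
    English content, then translate the response back. The callables
    are hypothetical stand-ins for real services."""
    english_query = query if lang == "en" else translate(query, lang, "en")
    english_answer = retrieve_answer(english_query)
    return english_answer if lang == "en" else translate(english_answer, "en", lang)
```

The design choice is what makes the three-week Japanese launch plausible: no content is re-authored per language, so adding French or German is a translation-quality problem rather than a content problem.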

Four critical lessons from Salesforce’s million-conversation journey for enterprise AI deployment

For enterprises considering their own AI agent deployments, Salesforce’s journey offers several critical insights:

  • Start Small, Think Big: “Start small and then grow it out,” Shaw advises. The ability to review every conversation in early stages provides invaluable learning opportunities that would be impossible at scale.
  • Data Hygiene Matters: “Be really conscious of your data,” Inzerillo emphasizes. “Don’t over curate your data, but also don’t under curate your data and really think through, like, how do you best position the company?”
  • Embrace Flexibility: Traditional organizational structures may not align with AI capabilities. As Inzerillo notes, “If they try to take an agentic future and shove it into yesterday’s org chart, it’s going to be a very frustrating experience.”
  • Measure What Matters: Success metrics for AI agents differ from traditional support metrics. Response accuracy is important, but so are empathy, appropriate escalation, and overall customer satisfaction.
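The metrics the article keeps returning to, autonomous resolution, handoff rate, and CSAT, can be aggregated from conversation logs. The record shape below is hypothetical; it simply shows how the three numbers relate.

```python
def support_metrics(conversations: list[dict]) -> dict[str, float]:
    """Aggregate the metrics the article says matter beyond raw
    accuracy: autonomous-resolution rate, human-handoff rate, and mean
    CSAT. Each conversation dict is a hypothetical log record with
    'resolved', 'handed_off', and 'csat' fields."""
    n = len(conversations)
    resolved = sum(c["resolved"] and not c["handed_off"] for c in conversations)
    handed_off = sum(c["handed_off"] for c in conversations)
    return {"resolution_rate": resolved / n,
            "handoff_rate": handed_off / n,
            "mean_csat": sum(c["csat"] for c in conversations) / n}
```

Tracked together, these expose the trade-off Salesforce found: pushing handoff_rate toward zero can drag mean_csat down, so the right target is a balance, not a minimum.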

The billion-dollar question: what happens after you beat human performance?

As Salesforce’s AI agents now outperform human agents on key metrics like resolution rate and handle time, Inzerillo poses a thought-provoking question: “What do you measure after you beat the human?”

This question gets to the heart of what may be the most significant implication of Salesforce’s million-conversation milestone. The company isn’t just automating customer service—it’s redefining what good service looks like in an AI-first world.

“We wanted to be the showcase to our customers and how we use Agentforce in our own experiences,” Shaw explains. “Part of why we do this… is so that we can learn these things, feed it back into our product teams, into our engineering teams to improve the product and then share these learnings with our customers.”

With enterprise spending on generative AI solutions projected to reach $143 billion by 2027, according to forecasts from International Data Corporation (IDC), Salesforce’s real-world lessons from the frontlines of deployment offer a crucial roadmap for organizations navigating their own AI transformations. Deloitte also estimates that global enterprise investments in generative AI could surpass $150 billion by 2027, reinforcing the scale and urgency of this technological shift.

The message is clear: success in the AI agent era requires more than just sophisticated technology. It demands a fundamental rethinking of how humans and machines work together, a commitment to continuous learning and iteration, and perhaps most surprisingly, a recognition that the most advanced AI agents are those that remember to be human.

As Shaw puts it: “You now have two employees. You have an agentic AI agent, and you have a human employee. You need to train both on the soft skills, the art of service.”

In the end, Salesforce’s million conversations may be less about the milestone itself and more about what it represents: the emergence of a new paradigm where digital labor doesn’t replace human work but transforms it, creating possibilities that neither humans nor machines could achieve alone.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Kyndryl service aims to control agentic AI across the enterprise

Kyndryl has launched a new service aimed at helping customers manage the growing use of AI agents across the enterprise. Its Agentic AI Framework is an orchestration platform built to deploy and manage autonomous, self-learning agents across business workflows in on-prem, cloud, or hybrid IT environments, according to the company. 

Read More »

Why enterprises need to drive telecom standards

Cutting access costs by supporting VPN-over-FWA or standardizing SD-WAN interconnects could save enterprises as much as a quarter of their VPN costs, but neither is provided in 5G or assured in 6G. Enterprises could change that if they applied appropriate pressure. Reason No. 3: Satellite, private mobile, public mobile, and

Read More »

SLB Sees ‘Constructive’ Second Half of 2025

SLB, the world’s largest oil-services provider, sees resiliency in the industry and remains constructive about the second half of 2025 despite uncertainties in customer demand.  “Despite pockets of activity adjustments in key markets, the industry has shown that it can operate through uncertainty without a significant drop in upstream spending,” SLB Chief Executive Officer Olivier Le Peuch said in a statement Friday. “This has been driven by the combination of capital discipline and the need for energy security.” His comments came as SLB posted second-quarter adjusted profit of 74 cents a share, exceeding analyst expectations. SLB, which gets about 82% of its revenue from international markets, has mitigated some of the negative impacts facing smaller peers that are more levered to domestic production. The company is seen as a gauge for the health of the sector through its broad footprint in all major crude-producing theaters.  US oil drilling has dropped 12% this year to the lowest since September 2021, driven by demand concerns triggered by US President Donald Trump’s tariff proposals and faster-than-expected increases in OPEC+ production. Government forecasters have trimmed domestic crude-production estimates for 2025, signaling a lower-for-longer activity environment for service companies. “Looking ahead, assuming commodity prices stay range bound, we remain constructive for the second half of the year,” Le Peuch said. Traders and analysts will also be listening closely to SLB’s quarterly conference call Friday for more details on the completion of the merger with ChampionX Corp. which the company announced Wednesday, according to a statement. SLB is a “leader in digital services for the energy industry and could soon become a leader in production services and equipment post the close of the acquisition,” Citigroup Global Markets Inc. analyst Scott Gruber wrote in a note to clients. SLB is the first of the biggest oilfield contractors

Read More »

WTI Flat as EU Targets Russian Refined Fuels

Oil ended the day little changed as traders weighed fresh efforts from the European Union to crimp Russian energy exports. West Texas Intermediate crude held steady to close near $67 a barrel after the EU agreed to a lower price cap for Moscow’s crude as part of a package of sanctions on Moscow. The measures include curbs on fuels made from Russian petroleum, additional banking limitations and a ban on a large oil refinery in India. The Asian country, which buys large amounts of Russian crude, is a major exporter of refined products to Europe, where markets for fuels like diesel have been tight. “While the EU measures may not drastically impact crude flows, the restrictions on refined products and expanded shadow fleet targeting are fueling concern in the diesel complex,” said Rebecca Babin, a senior energy trader at CIBC Private Wealth Group. Oil has trended higher since early May, with both Morgan Stanley and Goldman Sachs Group Inc. making the case that a buildup in global crude stockpiles has occurred in regions that don’t hold much sway in price-setting. Meanwhile, spreads in the diesel market are indicating tightness. The gap between the first and second month of New York heating oil futures climbed to $4.17 a gallon at one point in the session, up from $2.99 on Thursday. (Diesel and heating oil are the same product in the US, just taxed differently.) “The logic of diesel tightness propping up crude flat prices remains unchanged,” said Huang Wanzhe, an analyst at Dadi Futures Co., who added that the peak-demand season had seen a solid start. “The key question is how long this strength can last,” she said. In wider markets, strong US data on consumer sentiment eased concerns about the world’s largest economy, helping to underpin a risk-on mood. Crude

Read More »

EU Slaps New Sanctions on Russia and Its Oil Trade

European Union states have approved a fresh sanctions package on Russia over its war against Ukraine including a revised oil price cap, new banking restrictions, and curbs on fuels made from Russian petroleum.  The package, the bloc’s 18th since Moscow’s full scale invasion, will see about 20 more Russian banks cut off the international payments system SWIFT and face a full transaction ban, as well as restrictions imposed on Russian petroleum refined in third countries. A large oil refinery in India, part-owned by Russia’s state-run oil company, Rosneft PJSC, was also blacklisted. The cap on Russian oil, currently set at $60 per barrel, will be set dynamically at 15 percent below market rates moving forward. The new mechanism will see the threshold start off somewhere between $45-$50 and automatically revised at least twice a year based on market prices, Bloomberg previously reported. The latest sanctions by the European Union are aimed at further crimping the Kremlin’s energy revenue, the bulk of which comes from oil exports to India and China.  However, the original price cap imposed by the Group of Seven has had a limited impact on Russia’s oil flows, as the nation has built up a huge shadow fleet of tankers to haul its oil without using western services. The EU has also so far failed to convince the US to offer crucial support to the lower cap. Discussions are ongoing with other G-7 members but the US opposition is making it hard to reach agreement, according to people familiar with the matter. The UK, however, is expected to be on board with the move, the people said. The EU’s move to restrict fuels such as diesel made from Russian crude could have some market impact, as Europe imports the fuel from India, which in turn buys large amounts of

Read More »

Aramco Nears $10B Jafurah Pipeline Stake Sale to GIP

Saudi Aramco is in advanced talks to sell a roughly $10 billion stake in midstream infrastructure serving the giant Jafurah natural gas project to a group led by BlackRock Inc., according to people with knowledge of the matter.  The consortium is backed by BlackRock’s Global Infrastructure Partners unit and could reach an agreement as soon as the coming days, said the people, who asked not to be identified discussing confidential information.  The deal will involve pipelines and other infrastructure serving the $100 billion-plus Jafurah project, which Aramco is developing to supply domestic power plants as well as for export. It’s an unconventional field, meaning the gas is trapped in hard-to-access rock formations and requires special techniques to extract. Reuters reported on Thursday that GIP was nearing a deal, citing unidentified people. Aramco didn’t respond to emailed queries outside regular business hours in Saudi Arabia.  Bloomberg News first revealed in 2021 that Aramco was considering introducing outside investors into parts of the Jafurah project. Aramco was approaching infrastructure funds to gauge their interest in the midstream assets, people with knowledge of the matter said the next year.  State-controlled Aramco has been seeking to bring in international capital and sell stakes in some assets as the government pursues massive projects to build futuristic cities and diversify its economy. The kingdom is pushing ahead with a vast expansion, including developing new tourism destinations and building up a manufacturing base, to prepare for a future in which oil demand will begin to wane. BlackRock was earlier among investors that bought stakes in Aramco’s national gas pipeline network.  WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Read More »

Germany’s Top Performing Smallcap Surges Again

A breakneck rally in the shares of a German pipeline builder accelerated this week after the company won a role plugging LNG terminals on the coast into the nation’s gas grid.  Friedrich Vorwerk Group SE’s stock is up 24% since last Friday’s close, the biggest gain on Germany’s small-cap SDAX index. The bulk of the advance came after it secured a contract valued in the hundreds of millions of euros to build a 86km-long pipeline with a consortium of companies.  It’s an example of how European firms are benefiting from the wall of money Chancellor Friedrich Merz has unleashed to overhaul the nation’s infrastructure and military. The contract is the latest deal to help revive the fortunes of the builder of underground gas, electricity and hydrogen pipes, sending its stock price to a record high.  It’s “more like an add-on. It’s just nice to have,” said Nikolas Demeter, an analyst at B Metzler Seel Sohn & Co AG. For now, the company still has three buy ratings out of five from analysts. That may change because their targets trail the company’s current share price after this week’s contract win took its advance in the year past 200%. The shares now trade at almost 32 times forward blended earnings, compared with about 14 times for the SDAX index and the Stoxx 600 Index, the European benchmark. Labor Challenge Leon Mühlenbruch at mwb research AG, who has a valuation-driven sell rating on the stock, warns that Vorwerk’s full order book could become a problem. “Capacity constraints are becoming increasingly relevant,” Mühlenbruch said. “Further growth depends on expanding that capacity, a challenge due to the persistent shortage of specialized skilled labor.” But for now the Tostedt-based company is on a roll, and its rebound in recent years has been dramatic. After an initial

Read More »

Trump wants to use AI to prevent wildfires. Utilities are trying. Will it work?

The United States has already experienced more wildfires this year than it has over same period in any other year this decade, according to the National Interagency Fire Center. With the risk of fire expected to grow due to climate change and other factors, utilities have increasingly turned to technology to help them keep up. And those efforts could get a boost following President Donald Trump’s June 12 executive order calling on federal agencies to deploy technology to address “a slow and inadequate response to wildfires.” The order directed agencies to create a roadmap for using “artificial intelligence, data sharing, innovative modeling and mapping capabilities, and technology to identify wildland fire ignitions and weather forecasts to inform response and evacuation.” It also told federal authorities to declassify historical satellite datasets that could be used to improve wildfire prediction, and called for strengthening coordination among agencies and improving wildland and vegetation management. Additionally, the order laid out a vision for consolidating federal wildfire prevention and suppression efforts that are currently spread across agencies. The White House’s proposed 2026 budget blueprint would create a new, unified federal wildland fire service under the Department of Interior. So far, Trump’s directive has drawn a mixed response from wildfire experts. While some said it could empower local governments and save utilities money, others said the order’s impact will be limited. “I think some people read into the order more than is there, and some people read less,” said Chet Wade, a spokesperson for the Partners in Wildfire Prevention coalition. “I don’t know exactly what will come of it, but getting technology into the right hands could be very helpful.” Fire prevention goes high tech Since the 2018 Camp Fire that bankrupted PG&E and set a nationwide precedent for suing utilities that trigger large fires, energy companies around

Read More »

Cisco upgrades 400G optical receiver to boost AI infrastructure throughput

“In the data center, what’s really changed in the last year or so is that with AI buildouts, there’s much, much more optics that are part of 400G and 800G. It’s not so much using 10G and 25G optics, which we still sell a ton of, for campus applications. But for AI infrastructure, the 400G and 800G optics are really the dominant optics for that application,” Gartner said. Most of the AI infrastructure builds have been for training models, especially in hyperscaler environments, Gartner said. “I expect, towards the tail end of this year, we’ll start to see more enterprises deploying AI infrastructure for inference. And once they do that, because it has an Nvidia GPU attached to it, it’s going to be a 400G or 800G optic.” Core enterprise applications – such as real-time trading, high-frequency transactions, multi-cloud communications, cybersecurity analytics, network forensics, and industrial IoT – can also utilize the higher network throughput, Gartner said. 

Read More »

Supermicro bets big on 4-socket X14 servers to regain enterprise trust

In April, Dell announced its PowerEdge R470, R570, R670, and R770 servers with Intel Xeon 6 Processors with P-cores, but with single and double-socket servers. Similarly, Lenovo’s ThinkSystem V4 servers are also based on the Intel Xeon 6 processor but are limited to dual socket configurations. The launch of 4-socket servers by Supermicro reflects a growing enterprise need for localized compute that can support memory-bound AI and reduce the complexity of distributed architectures. “The modern 4-socket servers solve multiple pain points that have intensified with GenAI and memory-intensive analytics. Enterprises are increasingly challenged by latency, interconnect complexity, and power budgets in distributed environments. High-capacity, scale-up servers provide an architecture that is more aligned with low-latency, large-model processing, especially where data residency or compliance constraints limit cloud elasticity,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “Launching a 4-socket Xeon 6 platform and packaging it within their modular ‘building block’ strategy shows Supermicro is focusing on staying ahead in enterprise and AI data center compute,” said Devroop Dhar, co-founder and MD at Primus Partner. A critical launch after major setbacks Experts peg this to be Supermicro’s most significant product launch since it became mired in governance and regulatory controversies. In 2024, the company lost Ernst & Young, its second auditor in two years, following allegations by Hindenburg Research involving accounting irregularities and the alleged export of sensitive chips to sanctioned entities. Compounding its troubles, Elon Musk’s AI startup xAI redirected its AI server orders to Dell, a move that reportedly cost Supermicro billions in potential revenue and damaged its standing in the hyperscaler ecosystem. Earlier this year, HPE signed a $1 billion contract to provide AI servers for X, a deal Supermicro was also bidding for. 
“The X14 launch marks a strategic reinforcement for Supermicro, showcasing its commitment

Read More »

Moving AI workloads off the cloud? A hefty data center retrofit awaits

“If you have a very specific use case, and you want to fold AI into some of your processes, and you need a GPU or two and a server to do that, then, that’s perfectly acceptable,” he says. “What we’re seeing, kind of universally, is that most of the enterprises want to migrate to these autonomous agents and agentic AI, where you do need a lot of compute capacity.” Racks of brand-new GPUs, even without new power and cooling infrastructure, can be costly, and Schneider Electric often advises cost-conscious clients to look at previous-generation GPUs to save money. GPU and other AI-related technology is advancing so rapidly, however, that it’s hard to know when to put down stakes. “We’re kind of in a situation where five years ago, we were talking about a data center lasting 30 years and going through three refreshes, maybe four,” Carlini says. “Now, because it is changing so much and requiring more and more power and cooling you can’t overbuild and then grow into it like you used to.”

Read More »

My take on the Gartner Magic Quadrant for LAN infrastructure? Highly inaccurate

Fortinet being in the leader quadrant may surprise some given they are best known as a security vendor, but the company has quietly built a broad and deep networking portfolio. I have no issue with them being considered a leader and believe for security conscious companies, Fortinet is a great option. Challenger Cisco is the only company listed as a challenger, and its movement out of the leader quadrant highlights just how inaccurate this document is. There is no vendor that sells more networking equipment in more places than Cisco, and it has led enterprise networking for decades. Several years ago, when it was a leader, I could argue the division of engineering between Meraki and Catalyst could have pushed them out, but it didn’t. So why now? At its June Cisco Live event, the company launched a salvo of innovation including AI Canvas, Cisco AI Assistant, and much more. It’s also continually improved the interoperability between Meraki and Catalyst and announced several new products. AI Canvas is a completely new take, was well received by customers at Cisco Live, and reinvents the concept of AIOps. As I stated above, because of the December cutoff time for information gathering, none of this was included, but that makes Cisco’s representation false. Also, I find this MQ very vague in its “Cautions” segment. As an example, it states: “Cisco’s product strategy isn’t well-aligned with key enterprise needs.” Some details here would be helpful. In my conversations with Cisco, which includes with Chief Product Officer and President Jeetu Patel, the company has reiterated that its strategy is to help customers be AI-ready with products that are easier to deploy and manage, more automated, and with a lower cost to run. That seems well-aligned with customer needs. If Gartner is hearing customers want networks

Read More »

Equinix, AWS embrace liquid cooling to power AI implementations

With AWS, it deployed In-Row Heat Exchangers (IRHX), a custom-built liquid cooling system designed specifically for servers using Nvidia’s Blackwell GPUs, its most powerful but also hottest-running processors, used for AI training and inference. The IRHX unit has three components: a water-distribution cabinet, an integrated pumping unit, and in-row fan-coil modules. It uses direct-to-chip liquid cooling, just like the Equinix servers: cold plates attached to the chips draw off heat, which is carried away by the liquid. The warmed coolant then flows through the coils of heat exchangers, where high-speed fans blow across the coils to cool them, like a car radiator. This type of cooling is nothing new; Vertiv, CoolIT, Motivair, and Delta Electronics all sell direct-to-chip liquid cooling options. But AWS separates the pumping unit from the fan-coil modules, letting a single pumping system support a large number of fan units. These modular fans can be added or removed as cooling requirements evolve, giving AWS the flexibility to adjust the system per row and site. This led to some concern that Amazon would disrupt the market for liquid cooling, but as a Dell’Oro Group analyst put it, Amazon develops custom technologies for itself and does not go into competition or business with other data center infrastructure companies.
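To see why liquid loops like these handle GPU heat so much better than air, a back-of-envelope sanity check helps: the heat a coolant loop removes follows Q = ṁ · c_p · ΔT. The numbers below are illustrative assumptions, not AWS or Equinix specifications:

```python
# Back-of-envelope heat removal for a water coolant loop: Q = m_dot * c_p * dT.
# Illustrative only; real systems depend on coolant mix, plate design, and flow losses.

def heat_removed_kw(flow_lpm: float, delta_t_c: float,
                    c_p: float = 4.186, density: float = 1.0) -> float:
    """Heat removed in kW for a water loop.

    flow_lpm:  coolant flow in liters per minute
    delta_t_c: temperature rise across the cold plates, in deg C
    c_p:       specific heat of water, kJ/(kg*K)
    density:   kg per liter (~1.0 for water)
    """
    m_dot = flow_lpm * density / 60.0  # mass flow in kg/s
    return m_dot * c_p * delta_t_c     # kW, since kJ/s == kW

# A modest 60 L/min loop with a 10 C rise carries away roughly 42 kW,
# far beyond what the same volume of airflow could absorb.
print(round(heat_removed_kw(60, 10), 2))
```

This is why a single pumping unit feeding many fan-coil modules, as in the IRHX design described above, can scale per row: the water does the transport, and the fans only have to reject the heat at the exchanger.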

Read More »

Intel CEO: We are not in the top 10 semiconductor companies

The Q&A session came on the heels of layoffs across the company. Tan was hired in March, and almost immediately he began to promise to divest and reduce non-core assets. Gelsinger had also begun divesting the company of losers, but those were nibbles around the edges. Tan is promising to take an axe to the place. In addition to discontinuing products, the company has outsourced marketing and media relations — for the first time in more than 25 years of covering this company, I have no internal contacts at Intel. Many more workers are going to lose their jobs in coming weeks. So far about 500 have been cut in Oregon and California, but many more are expected — as much as 20% of the overall company staff may go, and Intel has over 100,000 employees, according to published reports. Tan believes the company is bloated and too bogged down with layers of management to be reactive and responsive in the same way that AMD and Nvidia are. “The whole process of that (deciding) is so slow and eventually nobody makes a decision,” he is quoted as saying. Something he has decided on is AI, and he seems to have decided to give up. “On training, I think it is too late for us,” Tan said, adding that Nvidia’s position in that market is simply “too strong.” So there goes what sales Gaudi3 could muster. Instead, Tan said Intel will focus on “edge” artificial intelligence, where AI capabilities are brought to PCs and other remote devices rather than to big AI processors in data centers, as Nvidia and AMD are doing. “That’s an area that I think is emerging, coming up very big and we want to make sure that we capture,” Tan said.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet as a non-tech company it has become a regular at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as the frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
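The LLM-as-judge pattern mentioned above can be sketched in a few lines. In this sketch, `call_llm` is a stand-in for whatever chat-completion client you use, and the `SCORE:` reply format is an assumption for illustration, not any provider’s specification:

```python
# Sketch of LLM-as-judge with voting across several judge models.
# Assumptions: a hypothetical call_llm(model, prompt) client, and judges
# instructed to answer with a single "SCORE: <1-5>" line.

JUDGE_TEMPLATE = (
    "You are a strict evaluator. Rate the answer to the question on a 1-5 scale.\n"
    "Question: {question}\n"
    "Answer: {answer}\n"
    "Reply with exactly one line: SCORE: <1-5>"
)

def build_judge_prompt(question: str, answer: str) -> str:
    return JUDGE_TEMPLATE.format(question=question, answer=answer)

def parse_score(judge_reply: str) -> int:
    # Pull the integer out of a "SCORE: n" line; reject malformed replies.
    for line in judge_reply.splitlines():
        if line.strip().upper().startswith("SCORE:"):
            score = int(line.split(":", 1)[1].strip())
            if 1 <= score <= 5:
                return score
    raise ValueError("judge reply had no valid SCORE line")

def median_score(judge_replies: list[str]) -> int:
    # With three or more cheap judge models, the median damps a single
    # judge's outlier verdict.
    scores = sorted(parse_score(r) for r in judge_replies)
    return scores[len(scores) // 2]
```

As models get cheaper, running three judges and taking the median costs little while smoothing out the noise of any single model’s verdict, which is exactly the economics the trend above points at.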

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends. It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see whether knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »