Zip debuts 50 AI agents to kill procurement inefficiencies—OpenAI is already on board

Zip, the $2.2 billion procurement platform startup, unveiled a suite of 50 specialized artificial intelligence agents on Tuesday designed to automate the tedious manual work that plagues enterprise purchasing departments worldwide. This marks what industry analysts call the most significant advancement in procurement technology in decades.

The AI agents, announced at Zip’s inaugural AI Summit in New York, can autonomously handle complex tasks ranging from contract reviews and tariff assessments to regulatory compliance checks — work that currently consumes millions of hours across corporate America. Early adopters, including OpenAI, Canva and Webflow, are already testing the technology, which Zip says represents a fundamental shift from AI-assisted workflows to fully autonomous task completion.

“Today Zip is cutting through the agentic AI hype with AI agents that actually work,” said Rujul Zaparde, Zip’s co-founder and CEO, in an exclusive interview with VentureBeat. “Not vague chatbots. Not generic assistants. Real, specialized AI agents that do one job and do it perfectly.”

The announcement comes as enterprises increasingly struggle with procurement bottlenecks that can involve 30 or more approval steps for major purchases, particularly in heavily regulated industries like financial services. Procurement represents the second-largest corporate expense category after payroll, yet it is still handled largely through manual, error-prone processes that leave trillions of dollars in spending inefficiently managed.

How AI agents tackle the 30-step procurement approval process

Zip’s approach centers on what the company calls “agentic procurement orchestration”—embedding specialized AI agents directly into existing procurement workflows rather than requiring employees to adopt separate AI tools. The system addresses a critical enterprise challenge: while companies have top-down mandates to adopt AI, most employees don’t know how to integrate tools like ChatGPT effectively into their daily procurement tasks.

“The unique insight we’ve had is that the technology actually is good enough to solve very specific tasks,” explained Lu Cheng, Zip’s co-founder and chief technology officer, in an interview with VentureBeat. “It’s effectively a junior level employee that’s very good at following specific instructions.”

The agents tackle diverse procurement pain points with surgical precision. A tariff analysis agent dynamically assesses how global trade policies affect vendor pricing, while a GDPR compliance agent flags potential privacy risks in vendor documents. An intake validation agent can spot discrepancies in purchase requests — for instance, catching when an employee claims a software purchase won’t involve customer data sharing while the vendor’s documentation indicates otherwise.
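To make that pattern concrete, here is a minimal sketch of the kind of claim-versus-document check an intake validation agent might run. Zip has not published its implementation; the `ValidationFinding` structure, the field names and the keyword comparison below are hypothetical stand-ins for what would in practice be an LLM-driven review with retrieval over vendor documents.

```python
from dataclasses import dataclass

@dataclass
class ValidationFinding:
    """One checked claim from an intake request (hypothetical structure)."""
    field: str            # which intake answer was checked
    claim: str            # what the requester asserted
    evidence: str         # what the vendor documentation says
    citation: str         # pointer to the supporting source passage
    is_discrepancy: bool

def validate_intake_request(request: dict, vendor_docs: list[str]) -> list[ValidationFinding]:
    """Compare intake answers against vendor documentation.

    Sketch only: the keyword test below stands in for what would in
    practice be an LLM comparison grounded in the vendor's contracts
    and security documentation.
    """
    findings: list[ValidationFinding] = []
    says_no_data_sharing = request.get("shares_customer_data") == "no"
    for doc in vendor_docs:
        if says_no_data_sharing and "customer data" in doc.lower():
            findings.append(ValidationFinding(
                field="shares_customer_data",
                claim="Requester said no customer data is shared",
                evidence="Vendor documentation references customer data",
                citation=doc[:80],  # in practice, a precise document location
                is_discrepancy=True,
            ))
    return findings
```

Fed the example from the article, a request claiming no customer data sharing alongside a vendor security document that mentions customer data processing would surface one flagged finding for a human reviewer.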

One enterprise customer processed 1,410 procurement requests in its first month on Zip. Traditionally, every one of those requests would have required human review of its pricing, categorization and compliance details; with Zip’s agents, that work happens automatically.

From 4.6 million AI insights to $4.4 billion in enterprise savings

Zip’s aggressive push into AI automation builds on substantial momentum. Since its founding in 2020, the company has delivered more than 4.6 million AI insights to customers and helped enterprises save $4.4 billion in trackable procurement costs. In 2024 alone, Zip processed 14 million reviews across its customer base — work that previously required human analysts to manually examine contracts, security documentation and compliance materials.

“We had a customer just went live — an 8,000 person, well-regarded tech company — in their first month they processed 1,410 requests,” Zaparde said. “The first step for all 1,410 requests was someone in procurement checking if the price was correct, if the categories aligned. With this agent, they basically don’t have to do that 1,410 times.”

The company has set an ambitious goal: within five years, as Zip processes over one billion reviews annually, 90% should be handled entirely by AI agents. That scale of automation could reshape how enterprises manage supplier relationships and spending decisions.

Why Zip’s data access gives it an edge over SAP and Oracle

Zip’s agents gain effectiveness through privileged access to comprehensive enterprise data that competitors cannot easily replicate. As the orchestration layer connecting finance, legal, procurement, IT and security teams, Zip already integrates with an average of seven enterprise systems per customer, including contract management platforms, risk assessment tools and ERP financial systems.

“We have a really deep understanding of what a legal review, what a security review actually constitutes because we literally have the documents that they’re reviewing thousands or hundreds of thousands of times across our customer base,” Zaparde explained. This data advantage allows Zip agents to access contract renewal dates, payment histories, vendor relationships spanning decades, and real-time regulatory changes — context that isolated enterprise systems cannot provide.

The company built its agents using a no-code platform that enterprise customers can customize for their specific needs. Configuration typically takes two to four hours per agent, though complex implementations can require up to 20 hours for customers with intricate approval processes.
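Zip has not published its configuration schema, but the setup it describes implies something like the following shape once a no-code agent definition is serialized. Every field name here is an illustrative assumption, not Zip’s actual API.

```python
# Hypothetical serialized form of a no-code agent configuration.
# Zip customers build this through a UI; nothing below is Zip's schema.
price_validation_agent = {
    "name": "intake-price-validation",
    "task": "Check that quoted price and spend category match the purchase request",
    "inputs": ["purchase_request", "vendor_quote", "category_taxonomy"],
    "output_mode": "insights_for_human_review",  # or "auto_approve" per policy
    "escalation_rules": [
        "price_variance_above_threshold",
        "category_mismatch",
    ],
}
```

A simple agent of this kind plausibly maps to the two-to-four-hour setup Zip describes; the 20-hour cases presumably involve wiring escalation rules into intricate, multi-team approval chains.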

OpenAI and Canva lead early adoption of automated procurement

OpenAI, which has partnered closely with Zip through the startup’s AI Lab initiative, exemplifies the early adoption trend. “We’ve worked closely with the Zip team to power their agentic platform and it’s been really exciting to see how quickly they’ve turned real-world procurement pain points into focused AI task agents,” said Kathryn Devlin, Head of Procure-to-Pay Operations, Travel and Expense at OpenAI.

The collaboration reflects a broader enterprise imperative: procurement automation has become strategically critical as companies face mounting pressure to optimize spending and control costs. Research firm IDC projects the global procurement software market will grow from $8.03 billion in 2024 to $18.28 billion by 2032, with AI-powered solutions driving much of that expansion.

Wiz Technology Procurement leader Idan Cohen highlighted the strategic shift AI enables: “We’ll save so much time on the technical work and day-to-day tasks that we need to do as part of the procurement process, and be enabled to really focus on what we’re supposed to do — being a true partner to the business and to our vendors.”

Building enterprise trust with citations and human oversight

Zip has designed its agent architecture to address enterprise concerns about AI accuracy and data security. The system provides detailed citations for every recommendation, allowing human reviewers to verify the sources behind AI decisions. Customers can configure agents to either provide insights for human review or automatically approve certain transactions based on predefined parameters.
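As a rough sketch of that architecture, an agent’s output can be modeled as a recommendation object that carries its citations, with a routing rule deciding between auto-approval and human review. The structures and the confidence threshold below are assumptions for illustration; the article says only that routing follows predefined parameters.

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    document: str   # e.g. the MSA or contract the agent relied on
    excerpt: str    # the passage supporting the recommendation

@dataclass
class AgentRecommendation:
    decision: str                        # e.g. "no_security_risk"
    rationale: str
    citations: list[Citation] = field(default_factory=list)
    confidence: float = 0.0              # hypothetical score, not Zip's API

def route(rec: AgentRecommendation, auto_approve_threshold: float = 0.9) -> str:
    """Auto-approve only well-cited, high-confidence recommendations;
    everything else is queued for a human reviewer with citations attached."""
    if rec.citations and rec.confidence >= auto_approve_threshold:
        return "auto_approve"
    return "human_review"
```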

“If our agent is saying we don’t believe there is security risk because of X, Y and Z, it’s forced to actually cite where it got X from,” Zaparde said. “You can see, ‘Oh, in the MSA and the contract, it says this. That’s why I think it’s not risky.’”

The company maintains strict data isolation, never training its AI models on customer data to prevent cross-company information leakage — a critical consideration for enterprises handling sensitive supplier relationships and pricing negotiations.

A $2.2 billion valuation positions Zip against procurement giants

Zip’s agent launch comes from a position of significant market strength. The company raised $190 million in Series D funding in October 2024 at a $2.2 billion valuation, marking what the company called the largest investment in procurement technology in over two decades. The funding round, led by BOND, attracted new investors including DST Global and Alkeon alongside existing backers Y Combinator and CRV.

Rather than competing directly with enterprise resource planning giants like SAP and Oracle, Zip positions itself as a complementary orchestration layer. “Those systems are great systems, but they’re systems of record,” Zaparde explained. “Zip is really the orchestration, the procurement orchestration layer that sits on top of those systems.”

The company’s customer roster includes hundreds of large enterprises across technology, financial services, and healthcare sectors. Notable clients include Snowflake, Discover, Reddit, Northwestern Mutual and Arm Holdings, collectively processing over $107 billion in spending through Zip’s platform.

The future of enterprise automation beyond procurement

Industry analysts view Zip’s agent suite as validation of a broader shift toward task-specific AI automation in enterprise software. “Zip created an entirely new category of procurement applications, so it is appropriate to see them pressing forward and launching a suite of AI Agents,” said Patrick Reymann, Research Director for Procurement and Enterprise Applications at IDC.

The agents will become generally available in late 2025; select early-access customers are already beta testing them. Zip plans to expand beyond its initial 50 agents, developing new capabilities in partnership with consulting firms KPMG and The Hackett Group.

Zip’s Q1 2025 was the company’s largest quarter ever, with 155% growth in its strategic enterprise segment, and the startup’s trajectory suggests AI-powered procurement automation has moved from experimental to essential. The success of Zip’s specialized agent architecture could accelerate similar automation initiatives across other enterprise functions, potentially reshaping how large organizations handle complex, multi-stakeholder business processes far beyond purchasing departments.

But perhaps the most telling indicator of the technology’s transformative potential lies in a simple prediction from Zaparde: “I think in 10 years, people are going to look back and be like, ‘Wait, humans were approving all this stuff?’” If Zip’s vision proves correct, the question won’t be whether AI agents will automate enterprise workflows — it will be why it took so long to deploy them.


Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Meter secures $170 million to scale NaaS stack from the ground up

The architecture extends beyond traditional switching and routing. It encompasses power distribution units, security appliances, wireless access points and cellular connectivity under a single management plane. This integration enables custom protocols for inter-device communication across the entire infrastructure stack. “We have something that’s our own secure protocol between all of

Read More »

At Cisco Live, it’s all about AI for networking and security

Key features of the Cisco Deep Network Model include:

- Purpose-built for networking: 20% more precise reasoning for troubleshooting, configuration, and automation.
- Trusted training: Fine-tuned on 40+ years of expertise and expert-vetted for accuracy.
- Continuous learning: Evolves with live telemetry and real-world Cisco TAC and CX services insights.

Another new Agentic

Read More »

Unleashing the Demand-Side Revolution: The Case for a Unified VPP Platform

As we accelerate the energy transition, we face a critical juncture. While significant investment and policy focus have been directed towards strengthening renewable generation — from large-scale solar and wind farms to battery storage — the reliance of these endeavors on subsidies highlights fundamental vulnerabilities. Accelerating clean energy depends not solely on what we generate, but on how intelligently and efficiently we manage what we consume. Consumers are bringing more low-carbon devices into their homes, with household spending on these types of assets reaching $184 billion in 2023, a 340% increase from the year before. Utilities need a way to manage these distributed energy resources effectively.

Virtual Power Plants (VPPs) are an undeniable powerhouse of the future, particularly when managed within a unified, intelligent platform. The potential of VPPs, which aggregate diverse distributed energy resources (DERs) like smart thermostats, electric vehicles (EVs), and water heaters, is immense and largely untapped. In the U.S., a mere 20% of eligible devices are currently enrolled in VPP programs, compared with the 50% enrollment rate the UK’s largest energy supplier, Octopus Energy, sees among its customers with smart meters. By closing this gap, utilities could unlock gigawatts of flexible capacity, alleviate grid strain, and accelerate decarbonization without the same policy uncertainties facing large-scale generation projects. And utilities are taking notice: just last month, the Mercury Consortium met to discuss the importance of bringing customers along on this journey, and how interoperability is paramount to unlocking the true potential of consumer devices.

Rising demand, driven by factors like the electrification of transportation and the growing power needs of emerging technologies such as AI, characterizes the current energy landscape. This escalating demand, coupled with limitations on conventional supply-side expansion, highlights the urgent need for demand-side innovation. VPPs offer a powerful

Read More »

Oil Steadies as Traders Weigh Tariff Threats Against Iran Risk

Oil held steady as traders weighed renewed tariff threats from the US against the potential for widespread conflict in the Middle East. West Texas Intermediate traded in a roughly $2.50 range before closing the session with a small drop to near $68 a barrel. The commodity temporarily inched into positive territory on an ABC report that Israel is considering taking military action against Iran in the coming days. Traders have been on edge since Iran threatened to strike American bases if nuclear talks fell through. Weighing on prices were earlier comments from President Donald Trump that he intended to set unilateral tariff rates on trading partners in the next one to two weeks, which blunted appetite for risk assets.

Iran’s threats on Wednesday jolted crude out of the narrow range it had traded in for most of the past month, highlighting oil’s sensitivity to geopolitical tensions. The Middle East produces about a third of the world’s oil, led by OPEC+ members Iran, Saudi Arabia and Iraq. Prices are up about 12% this month, and JPMorgan Chase & Co. on Thursday said oil could reach $130 in a worst-case scenario. The move has also been coupled with big shifts in options pricing as traders assess the risk of escalation. Bullish call options on the global Brent benchmark are trading at premiums to bearish puts, and volatility spiked. Oil is still down for the year on expectations the US-led trade war would erode demand as OPEC+ revives idled production.

On Iran, Trump has consistently said he wants an agreement that curbs the nation’s atomic activities and that the US could strike Iran if talks break down, before saying that he would “love to avoid conflict” with the country. Tehran says it is preparing a fresh proposal regarding the program before a sixth round of negotiations in

Read More »

Solar industry posts record Q1 growth but projects longer-term decline

Dive Brief: The U.S. doubled its solar cell manufacturing capacity and added 8.6 GW of solar module manufacturing capacity in the first quarter of 2025, marking the third-largest quarter for new solar manufacturing capacity on record, according to a report by Wood Mackenzie for the Solar Energy Industries Association. Despite the strong first quarter figures, Wood Mackenzie expects the solar industry to contract about 2% annually between 2025 and 2030, adding an average of 43 GW of new solar generation per year in that time. New solar installations are expected to decline 7% between 2025 and 2027. Wood Mackenzie’s projections for the solar industry do not take into account the proposed wind-down of clean energy tax credits that has passed the House. Cutting the tax credits could trigger project cancellations and a possible energy shortage, according to Sean Gallagher, senior vice president of policy for SEIA.

Dive Insight: Despite growing demand for energy, the solar industry faces a rocky road over the next few years — particularly if the Senate concurs with renewable energy tax credit cuts that have already passed the House, according to this week’s report from Wood Mackenzie and SEIA. Solar manufacturing posted particularly strong growth in the first quarter of 2025, though Wood Mackenzie notes that growth in upstream manufacturing of solar components, especially polysilicon and wafers, “remains slow or non-existent.” New solar generation capacity totaled 10.8 GW, 7% lower than first quarter installations in 2024 and 43% lower than the fourth quarter of 2024 — but still the fourth-largest quarter for deployment on record, according to the report. The first quarter records don’t appear to represent an attempt by the industry to wrap up projects before the potential application of new tariffs or cuts to applicable tax credits, Gallagher said. Most of the projects that came online in the

Read More »

US Moves Some Diplomats as Iran Tensions Rise

The US ordered some staff to leave its embassy in Baghdad, officials said, after Iran threatened to strike American assets in the Middle East in the event it’s attacked over its nuclear program. The decision to reduce staffing in Iraq was “based on our latest analysis,” according to the US State Department. Defense Secretary Pete Hegseth authorized family members of US military stationed across the region to leave, according to a Pentagon statement. The State Department also said US government employees and family members in Israel are restricted from traveling outside major cities such as Tel Aviv and Jerusalem until further notice. None of the statements cited a specific threat. But they came after the New York Post published an interview with President Donald Trump in which he said he was less confident the US will reach a deal with Iran. The countries are negotiating an agreement that would curb the Islamic Republic’s nuclear activities in return for sanctions relief. Trump has consistently said he wants an agreement with Iran and to avoid a war, but that the US could resort to military action if Tehran doesn’t accept a deal. “I sincerely hope it won’t come to that and that the talks reach a resolution,” Iran’s Defense Minister Aziz Nasirzadeh said on Wednesday. “But if they don’t, and conflict is imposed on us, the other side will undoubtedly suffer greater losses.” Iran announced the start of military drills on Thursday, “with a focus on enemy movements.” The head of its Islamic Revolutionary Guard Corps said the force was “ready for any scenario.” The same day, the Islamic Republic said it would establish a new uranium-enrichment center in response to a decision by the United Nations atomic watchdog to censure it. Oil prices jumped on the reports the US would move

Read More »

JERA Enters Long-Term Deals to Buy Up To 5.5 MMtpa of US LNG

Six liquefied natural gas (LNG) projects in the United States have secured agreements with JERA Co. Inc. for the sale of up to 5.5 million metric tons per annum (MMtpa) over 20 years to the Japanese power utility.

JERA signed a sale and purchase agreement (SPA) for about two MMtpa from NextDecade Corp.’s Rio Grande LNG in Brownsville, Texas. The agreement is for train 5; NextDecade has yet to make an FID (final investment decision) on the fourth and fifth trains. The three-train first phase of the project is under construction. Rio Grande LNG holds a permit to export to FTA and non-FTA countries.

Another SPA involves around one MMtpa from Kimmeridge’s proposed Commonwealth LNG in Calcasieu Pass, Louisiana. In February the Department of Energy (DOE) granted the project a conditional permit to export to countries with no free trade agreement (FTA) with the U.S. Kimmeridge expects to make an FID in the third quarter of 2025 and start production in 2029.

JERA also signed an SPA with Venture Global Inc.’s CP2 LNG in Cameron Parish, Louisiana, for approximately one MMtpa. The project recently started site work after completing clearance from the Federal Energy Regulatory Commission. In March the DOE granted a conditional permit for CP2 LNG to export to non-FTA countries. The project had already secured authorization for its export volume when it received FTA approval in April 2022.

Meanwhile, Sempra’s Port Arthur LNG phase 2 in Jefferson County, Texas, secured heads of agreement with JERA for about 1.5 MMtpa. Late last month the DOE granted Port Arthur LNG Phase 2 a non-FTA permit, marking the resumption of the issuance of final orders for LNG exports following the previous administration’s pause.

JERA also penned heads of agreement with Cheniere Energy Inc.’s Corpus Christi LNG and Sabine Pass LNG. Train 3 of Corpus

Read More »

GOP lawmakers reiterate asks for clean energy credit tweaks in reconciliation bill

Dive Brief: A group of 13 House Republicans sent a letter to their Senate counterparts Friday asking the Senate to further revise the House’s changes to the Inflation Reduction Act’s clean energy tax credits as the chamber takes up its portion of the budget reconciliation process. The Republicans were led by Virginia Rep. Jen Kiggans, who said in a June 6 press release that though she supported the House-passed “One Big Beautiful Bill Act” as it stands, “there remains significant room for improvement in preserving the clean energy tax credits.” The requested changes largely mirror those requested by Kiggans and her colleagues prior to the House’s passage of the bill. The group’s letter cites more than $14 billion in clean energy projects that have been canceled this year, with signatories stating that “project cancellations will continue to snowball” if Congress does not provide “business certainty as these provisions are phased out.”

Dive Insight: Kiggans and a nearly identical group of House Republicans wrote to their House colleagues last month seeking favorable changes to the clean energy tax credit phaseout plan that came out of committee work on the reconciliation bill. Instead, the House-passed version of the bill imposes a faster phaseout timeline, including a repeal of tax credits for projects that don’t begin construction within 60 days of the bill’s enactment. The Republican lawmakers in support of the IRA’s clean energy credits asked that the Senate alter that provision — along with a foreign entity of concern provision that was again called “overly prescriptive” — and maintain tax credit transferability throughout the credits’ lifetime. The group said that though they were “proud to have worked to ensure that the bill did not include a full repeal of the clean energy tax credits,” they remain “deeply concerned about those provisions.” The

Read More »

Winners and losers in the Top500 supercomputer ranking

GPU winner: AMD

AMD is finally making a showing for itself, albeit modestly, in GPU accelerators. For the June 2025 edition of the list, AMD Instinct accelerators are in 23 systems, a nice little jump from the 10 systems on the June 2024 list. Of course, it helps with the sales pitch when AMD processors and coprocessors can be found powering the No. 1 and No. 2 supercomputers in the world.

GPU loser: Intel

Intel’s GPU efforts have been a disaster. It failed to make a dent in the consumer space with its Arc GPUs, and it isn’t making much headway in the data center, either. There were only four systems running GPU Max processors on the list, and that’s up from three a year ago. Still, it’s a pitiful showing given the effort Intel made.

Server winners: HPE, Dell, Eviden, Nvidia

The four server vendors — servers, not component makers — all saw share increases. Nvidia is also a server vendor, selling its SuperPOD AI servers directly to customers. They all gained at the expense of Lenovo and Arm.

Server loser: Lenovo

It saw the sharpest drop in server share, going from 163 systems in June of 2024 to 136 in this most recent listing.

Loser: Arm

Other than the 13 Nvidia Grace chips, the Arm architecture was completely absent from this spring’s list.

Read More »

Micron joins HBM4 race with 36GB 12-high stack, eyes AI and data center dominance

Race to power the next generation of AI

By shipping HBM4 samples to key customers, Micron has joined SK hynix in the HBM4 race. In March this year, SK hynix shipped 12-layer HBM4 samples to customers. SK hynix’s HBM4 has implemented bandwidth capable of processing more than 2TB of data per second, processing data equivalent to more than 400 full-HD movies (5GB each) in a second, said the company.

“[In the] HBM competitive landscape, SK hynix has already sampled and secured approval of HBM4 12-high stack memory early Q1’2025 to NVIDIA for its next generation Rubin product line and plans to mass produce HBM4 in 2H 2025,” said Danish Faruqui, CEO, Fab Economics. “Closely following, Micron is pending Nvidia’s tests for its latest HBM4 samples, and Micron plans to mass produce HBM4 in 1H 2026. On the other hand, the last contender, Samsung is struggling with yield ramp in the HBM4 technology development stage, and so has to delay the customer samples milestones to Nvidia and other players while it earlier shared an end of 2025 milestone for mass producing HBM4.”

Faruqui noted another key differentiator among SK hynix, Micron, and Samsung: the base die that anchors the 12-high DRAM stack. For the first time, both SK hynix and Samsung have introduced a logic-enabled base die on 3nm and 4nm process technology to enable HBM4 product for efficient and faster product performance via base logic-driven memory management. Both Samsung and SK hynix rely on TSMC for the production of their logic-enabled base die. However, it remains unclear whether Micron is using a logic base die, as the company lacks in-house capability to fabricate at 3nm.

Read More »

Cisco reinvigorates data center, campus, branch networking with AI demands in mind

“We have a number of … enterprise data center customers that have been using bi-directional optics for many generations, and this is the next generation of that feature,” said Bill Gartner, senior vice president and general manager of Cisco’s optical systems and optics business. “The 400G lets customers use their existing fiber infrastructure and reduces fiber count for them so they can use one fiber instead of two, for example,” Gartner said. “What’s really changed in the last year or so is that with AI buildouts, there’s much, much more optics that are part of 400G and 800G, too. For AI infrastructure, the 400G and 800G optics are really the dominant optics going forward,” Gartner said.

New AI Pods

Taking aim at next-generation interconnected compute infrastructures, Cisco expanded its AI Pod offering with the Nvidia RTX 6000 Pro and Cisco UCS C845A M8 server package. Cisco AI Pods are preconfigured, validated, and optimized infrastructure packages that customers can plug into their data center or edge environments as needed. The Pods include Nvidia AI Enterprise, which features pretrained models and development tools for production-ready AI, and are managed through Cisco Intersight. The Pods are based on Cisco Validated Design principles, which offer customers pre-tested and validated network designs that provide a blueprint for building reliable, scalable, and secure network infrastructures, according to Cisco. Building out the kind of full-scale AI infrastructure compute systems that hyperscalers and enterprises will utilize is a huge opportunity for Cisco, said Daniel Newman, CEO of The Futurum Group. “These are full-scale, full-stack systems that could land in a variety of enterprise and enterprise service application scenarios, which will be a big story for Cisco,” Newman said.

Campus networking

For the campus, Cisco has added two new programmable SiliconOne-based Smart Switches: the C9350 Fixed Access Smart Switches and C9610

Read More »

Qualcomm’s $2.4B Alphawave deal signals bold data center ambitions

Qualcomm says its Oryon CPU and Hexagon NPU processors are “well positioned” to meet growing demand for high-performance, low-power compute as AI inferencing accelerates and more enterprises move to custom CPUs housed in data centers. “Qualcomm’s advanced custom processors are a natural fit for data center workloads,” Qualcomm president and CEO Cristiano Amon said in the press release. Alphawave’s connectivity and compute technologies can work well with the company’s CPU and NPU cores, he noted. The deal is expected to close in the first quarter of 2026.

Complementing the ‘great CPU architecture’ Qualcomm has been amassing

Client CPUs have been a “big play” for Qualcomm, Moor’s Kimball noted; the company acquired chip design company Nuvia in 2021 for $1.4 billion and has also announced that it will be designing data center CPUs with Saudi AI company Humain. “But there was a lot of data center IP that was equally valuable,” he said. This acquisition of Alphawave will help Qualcomm complement the “great CPU architecture” it acquired from Nuvia with the latest in connectivity tools that link a compute complex with other devices, as well as with chip-to-chip communications, and all of the “very low level architectural goodness” that allows compute cores to deliver “absolute best performance.” “When trying to move data from, say, high bandwidth memory to the CPU, Alphawave provides the IP that helps chip companies like Qualcomm,” Kimball explained. “So you can see why this is such a good complement.”

Read More »

LiquidStack launches cooling system for high density, high-powered data centers

The CDU is serviceable from the front of the unit, with no rear or end access required, allowing the system to be placed against the wall. The skid-mounted system can come with rail and overhead piping pre-installed or shipped as separate cabinets for on-site assembly. The single-phase system has high-efficiency dual pumps designed to protect critical components from leaks, and its centralized design with separate pump and control modules reduces both the number of components and complexity. “AI will keep pushing thermal output to new extremes, and data centers need cooling systems that can be easily deployed, managed, and scaled to match heat rejection demands as they rise,” said Joe Capes, CEO of LiquidStack, in a statement. “With up to 10MW of cooling capacity at N, N+1, or N+2, the GigaModular is a platform like no other—we designed it to be the only CDU our customers will ever need. It future-proofs design selections for direct-to-chip liquid cooling without traditional limits or boundaries.”

Read More »

Enterprises face data center power design challenges

“Now, with AI, GPUs need data to do a lot of compute and send that back to another GPU. That connection needs to be close together, and that is what’s pushing the density; the chips are more powerful and so on, but the necessity of everything being close together is what’s driving this big revolution,” he said. That revolution in new architecture is new data center designs.

Cordovil said that instead of putting the power shelves within the rack, system administrators are putting a sidecar next to those racks and loading the sidecar with the power system, which serves two to four racks. This allows for more compute per rack and lower latency, since the data doesn’t have to travel as far. The problem is that 1 MW racks are uncharted territory and no one knows how to manage the power, which is considerable now. “There’s no user manual that says, hey, just follow this and everything’s going to be all right. You really need to push the boundaries of understanding how to work. You need to start designing something somehow, so that is a challenge to data center designers,” he said.

And this brings up another issue: many corporate data centers have power plugs that are like the ones that you have at home, more or less, so technicians didn’t need to have an advanced electrician certification. “We’re not playing with that power anymore. You need to be very aware of how to connect something. Some of the technicians are going to need to be certified electricians, which is a skills gap in the market that we see in most markets out there,” said Cordovil. A CompTIA A+ certification will teach you the basics of power, but not the advanced skills needed for these increasingly dense racks. Cordovil

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has been a regular at the big tech trade show in Las Vegas, a non-tech company showing off technology, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »