Out-analyzing analysts: OpenAI’s Deep Research pairs reasoning LLMs with agentic RAG to automate work — and replace jobs

Enterprise companies need to take note of OpenAI's Deep Research. It is a powerful product built on new capabilities, and one so good that it could put a lot of people out of jobs.

Deep Research is on the bleeding edge of a growing trend: integrating large language models (LLMs) with search engines and other tools to greatly expand their capabilities. (Just as this article was being reported, for example, Elon Musk’s xAI unveiled Grok 3, which claims similar capabilities, including a Deep Search product. However, it’s too early to assess Grok 3’s real-world performance, since most subscribers haven’t actually gotten their hands on it yet.)

OpenAI’s Deep Research, released on February 3, requires a Pro account with OpenAI, costing $200 per month, and is currently available only to U.S. users. So far, this restriction may have limited early feedback from the global developer community, which is typically quick to dissect new AI advancements.

With Deep Research mode, users can ask OpenAI’s leading o3 model any question. The result? A report often superior to what human analysts produce, delivered faster and at a fraction of the cost.

How Deep Research works

While Deep Research has been widely discussed, its broader implications have yet to fully register. Initial reactions praised its impressive research capabilities, despite occasional hallucinations in its citations. One user said he relied on it to help his wife, who has breast cancer: it provided deeper analysis than her oncologists had on why radiation therapy was the right course of action, he said. The consensus, summarized by Wharton AI professor Ethan Mollick, is that its advantages far outweigh the occasional inaccuracies, because fact-checking the output takes less time than the AI saves overall. Based on my own usage, I agree.

Financial institutions are already exploring applications. BNY Mellon, for instance, sees potential in using Deep Research for credit risk assessments. Its impact will extend across industries, from healthcare to retail, manufacturing, and supply chain management — virtually any field that relies on knowledge work.

A smarter research agent

Unlike traditional AI models that attempt one-shot answers, Deep Research first asks clarifying questions. It might ask four or more questions to make sure it understands exactly what you want. It then develops a structured research plan, conducts multiple searches, revises its plan based on new insights, and iterates in a loop until it compiles a comprehensive, well-formatted report. This can take between a few minutes and half an hour. Reports range from 1,500 to 20,000 words, and typically include citations from 15 to 30 sources with exact URLs, at least according to my usage over the past week and a half.
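To make that loop concrete, here is a minimal sketch in Python of how such a plan-search-revise agent could be structured. Every function name here is a hypothetical stand-in, stubbed so the script runs; a real system would back each one with an LLM or search-API call. This is not OpenAI's implementation, just the shape of the behavior described above.

```python
# Minimal sketch of an iterative research-agent loop (hypothetical,
# not OpenAI's actual implementation). Each helper would call an LLM
# or a search API in a real system; here they are simple stubs.

def ask_clarifying_questions(query: str) -> str:
    # An LLM would ask the user questions to pin down intent; we echo.
    return query

def llm_plan(query: str, findings: list[str]) -> list[str]:
    # An LLM would propose the next searches given what is known so far.
    return [f"{query} (follow-up {len(findings)})"] if len(findings) < 3 else []

def web_search(search_query: str) -> str:
    # A real agent would call a search tool and read the results.
    return f"summary of results for: {search_query}"

def write_report(query: str, findings: list[str]) -> str:
    # An LLM would synthesize a long-form, cited report.
    return f"Report on '{query}' based on {len(findings)} findings."

def deep_research(query: str, max_iterations: int = 10) -> str:
    query = ask_clarifying_questions(query)   # step 1: clarify intent
    findings: list[str] = []
    for _ in range(max_iterations):
        plan = llm_plan(query, findings)      # step 2: (re)plan searches
        if not plan:                          # agent decides it has enough
            break
        for search_query in plan:             # step 3: execute searches
            findings.append(web_search(search_query))
    return write_report(query, findings)      # step 4: synthesize report

print(deep_research("How do reasoning LLMs differ from standard LLMs?"))
```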

The technology behind Deep Research: reasoning LLMs and agentic RAG

Deep Research does this by merging two technologies in a way we haven’t seen before in a mass-market product. 

Reasoning LLMs: The first is OpenAI’s cutting-edge model, o3, which leads in logical reasoning and extended chain-of-thought processes. When it was announced in December 2024, o3 scored an unprecedented 87.5% on the super-difficult ARC-AGI benchmark designed to test novel problem-solving abilities. What’s interesting is that o3 hasn’t been released as a standalone model for developers to use. Indeed, OpenAI’s CEO Sam Altman announced last week that the model instead would be wrapped into a “unified intelligence” system, which would unite models with agentic tools like search, coding agents and more. Deep Research is an example of such a product. And while competitors like DeepSeek-R1 have approached o3’s capabilities (one of the reasons why there was so much excitement a few weeks ago), OpenAI is still widely considered to be slightly ahead.

Agentic RAG: The second, agentic RAG, is a technology that has been around for about a year now. It uses agents to autonomously seek out information and context from other sources, including the internet. This can include other tool-calling agents that find non-web information via APIs, coding agents that can complete complex sequences more efficiently, and database searches. Initially, OpenAI's Deep Research is primarily searching the open web, but company leaders have suggested it will be able to search more sources over time.
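A rough sketch of the tool-routing idea at the heart of agentic RAG, under the assumption of three tools (web search, an API call, a database query). The keyword heuristic below stands in for the LLM's tool-selection step; none of this describes OpenAI's internals.

```python
# Illustrative sketch of agentic RAG tool dispatch (an assumption,
# not OpenAI's implementation). A real agent lets the LLM choose the
# tool and its arguments; a keyword heuristic stands in for that here.

from typing import Callable

def search_web(q: str) -> str:
    return f"[web results for '{q}']"

def call_api(q: str) -> str:
    return f"[API data for '{q}']"

def query_database(q: str) -> str:
    return f"[database rows matching '{q}']"

TOOLS: dict[str, Callable[[str], str]] = {
    "web": search_web,        # open-web search (Deep Research's main source today)
    "api": call_api,          # non-web information via tool-calling agents
    "db": query_database,     # structured or internal data sources
}

def route(question: str) -> str:
    # Stand-in for the LLM's tool-selection step.
    if "price" in question or "ticker" in question:
        tool = "api"
    elif "internal" in question:
        tool = "db"
    else:
        tool = "web"
    context = TOOLS[tool](question)
    return f"Answer grounded in {tool}: {context}"

print(route("What are analysts saying about credit risk?"))
```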

OpenAI’s competitive edge (and its limits)

While these technologies are not entirely new, OpenAI's refinements — enabled by its head start on the research, massive funding, and its closed-source development model — have taken Deep Research to a new level. The company can work behind closed doors and leverage feedback from the more than 300 million active users of its popular ChatGPT product. OpenAI has also led research in these areas, for example on verifying reasoning step by step to get better results. And it has clearly implemented search in an interesting way, perhaps borrowing from Microsoft's Bing and other technologies.

While it still hallucinates some results from its searches, it does so less than competitors, perhaps in part because the underlying o3 model has set an industry low for hallucinations at 8%. And there are ways to reduce mistakes further, through mechanisms like confidence thresholds, citation requirements and other credibility checks.
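To illustrate, a confidence threshold combined with a citation requirement can be sketched as a simple post-processing filter over generated claims. This is an assumption about how such checks might work in general, not a description of OpenAI's pipeline.

```python
# Illustrative sketch of claim filtering with a confidence threshold
# and a citation requirement (an assumption about how such checks
# could work, not OpenAI's actual method).

from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    confidence: float     # model-reported or verifier-scored, 0.0 to 1.0
    citations: list[str]  # source URLs supporting the claim

def keep_claim(claim: Claim, threshold: float = 0.8) -> bool:
    # Require both sufficient confidence and at least one citation.
    return claim.confidence >= threshold and len(claim.citations) > 0

claims = [
    Claim("o3 scored 87.5% on ARC-AGI.", 0.95, ["https://arcprize.org"]),
    Claim("Unsupported speculation.", 0.55, []),
]
report = [c.text for c in claims if keep_claim(c)]
print(report)  # only the cited, high-confidence claim survives
```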

At the same time, there are limits to OpenAI's lead and capabilities. Within two days of Deep Research's launch, Hugging Face introduced an open-source AI research agent called Open Deep Research that got results not far off OpenAI's, similarly merging leading models with freely available agentic capabilities. There are few moats. Open-source competitors like DeepSeek appear set to stay close in reasoning models, and Microsoft's Magentic-One offers a framework for most of OpenAI's agentic capabilities, to name just two more examples.

Furthermore, Deep Research has limitations. The product excels at researching obscure information that can be found on the web. But in areas where little is online and domain expertise is largely private — whether in people's heads or in private databases — it doesn't work at all. So it won't threaten the jobs of high-end hedge-fund researchers, for example, who are paid to talk with real experts in an industry to obtain otherwise very hard-to-find information, as Ben Thompson argued in a recent post (see graphic below). In most cases, OpenAI's Deep Research is going to affect lower-skilled analyst jobs.

Deep Research’s value first increases as information online gets scarce, then drops off when it gets really scarce. Source: Stratechery.

The most intelligent product yet

When you merge top-tier reasoning with agentic retrieval, it's not really surprising that you get such a powerful product. OpenAI's Deep Research scored 26.6% on Humanity's Last Exam, arguably the best benchmark for intelligence. This relatively new AI benchmark is designed to be as difficult as possible for any model, covering 3,000 questions across 100 subjects. On it, OpenAI's Deep Research significantly outperforms Perplexity's Deep Research (20.5%) and earlier models like o3-mini (13%) and DeepSeek-R1 (9.4%) that weren't hooked up with agentic RAG. Google's Deep Research has yet to be tested against this benchmark, but early reviews suggest OpenAI leads in both quality and depth.

How it’s different: the first mass-market AI that could displace jobs

What’s different with this product is its potential to eliminate jobs. Sam Witteveen, cofounder of Red Dragon and a developer of AI agents, observed in a deep-dive video discussion with me that a lot of people are going to say: “Holy crap, I can get these reports for $200 that I could get from some top-4 consulting company that would cost me $20,000.” This, he said, is going to cause some real changes, including likely putting people out of jobs.

Which brings me back to my interview last week with Sarthak Pattanaik, head of engineering and AI at BNY Mellon, a major U.S. bank.

To be sure, Pattanaik didn’t say anything about the product’s ramifications for actual job counts at his bank. That’s going to be a particularly sensitive topic that any enterprise is probably going to shy away from addressing publicly. But he said he could see OpenAI’s Deep Research being used for credit underwriting reports and other “topline” activities, and having significant impact on a variety of jobs: “Now that doesn’t impact every job, but that does impact a set of jobs around strategy [and] research, like comparison vendor management, comparison of product A versus product B.” He added: “So I think everything which is more on system two thinking — more exploratory, where it may not have a right answer, because the right answer can be mounted once you have that scenario definition — I think that’s an opportunity.”

A historical perspective: job loss and job creation

Technological revolutions have historically displaced workers in the short term while creating new industries in the long run. From automobiles replacing horse-drawn carriages to computers automating clerical work, job markets evolve. New opportunities created by the disruptive technologies tend to spawn new hiring. Companies that fail to embrace these advances will fall behind their competitors.

OpenAI’s Altman acknowledged the link, even if indirect, between Deep Research and labor. At the AI Summit in Paris last week, he was asked about his vision for artificial general intelligence (AGI), or the stage at which AI can perform pretty much any task that a human can. As he answered, his first reference was to Deep Research: “It’s a model I think is capable of doing like a low-single-digit percentage of all the tasks in the economy in the world right now, which is a crazy statement, and a year ago I don’t think something that people thought is going to be coming.” (See minute three of this video). He continued: “For 50 cents of compute, you can do like $500 or $5,000 of work. Companies are implementing that to just be way more efficient.” 

The takeaway: a new era for knowledge work

Deep Research represents a watershed moment for AI in knowledge-based industries. By integrating cutting-edge reasoning with autonomous research capabilities, OpenAI has created a tool that is smarter, faster and significantly more cost-effective than human analysts.

The implications are vast, from financial services to healthcare to enterprise decision-making. Organizations that leverage this technology effectively will gain a significant competitive edge. Those that ignore it do so at their peril.

For a deeper discussion on how OpenAI’s Deep Research works, and how it is reshaping knowledge work, check out my in-depth conversation with Sam Witteveen in our latest video:
