Nvidia launches research center to accelerate quantum computing breakthrough

The new research center aims to tackle quantum computing’s most significant challenges, including qubit noise reduction and the transformation of experimental quantum processors into practical devices.

“By combining quantum processing units (QPUs) with state-of-the-art GPU technology, Nvidia hopes to accelerate the timeline to practical quantum computing applications,” the statement added.

Several prominent quantum computing companies, including Quantinuum, Quantum Machines, and QuEra Computing, will collaborate with the center. It will also have academic partnerships, including with the Harvard Quantum Initiative in Science and Engineering (HQI) and the Engineering Quantum Systems (EQuS) group at the Massachusetts Institute of Technology (MIT).

The center will leverage Nvidia’s CUDA-Q quantum development platform to facilitate the creation of hybrid quantum algorithms and applications, addressing the complex integration of GPU and QPU hardware, the statement added.
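For readers unfamiliar with CUDA-Q, the hybrid model works roughly like this: a quantum kernel is written in Python, developed against a GPU-accelerated simulator, and later retargeted to a vendor's QPU without changing the kernel. The snippet below is a minimal illustrative sketch using CUDA-Q's public Python API; the backend names shown are examples, not a statement of what the new center will deploy.

import cudaq

@cudaq.kernel
def bell():
    # Prepare a two-qubit entangled (Bell) state and measure it.
    qubits = cudaq.qvector(2)
    h(qubits[0])
    x.ctrl(qubits[0], qubits[1])
    mz(qubits)

# Develop against the GPU-accelerated simulator...
cudaq.set_target("nvidia")
print(cudaq.sample(bell, shots_count=1000))
# ...then retarget the same kernel to a hardware backend, e.g.:
# cudaq.set_target("quantinuum")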

Industry implications

Analysts suggest Nvidia’s approach represents a strategic departure from how other tech giants are tackling quantum computing challenges.

“Nvidia’s approach differentiates from peers like IBM, Google, and Microsoft by focusing on integration rather than qubit development,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “While others focus on quantum hardware and error correction, Nvidia is doubling down on hybrid quantum-classical computing architectures. Their CUDA framework provides a unified programming model that works across quantum simulators, GPUs, and QPUs regardless of vendor — creating an integration-first approach that leverages their existing strength in AI and accelerated computing.”

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin, and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

HPE unveils AI-powered network security and data protection technology

Also announced at Black Hat is the HPE Zerto integration hub, which the company says will simplify system security and disaster recovery. The integration hub connects HPE Zerto with cybersecurity software and enterprise networking devices to generate insights, automate workflows, and simplify data protections, HPE says. “HPE Zerto will streamline

Read More »

Cisco teams with Hugging Face for AI model anti-malware

ClamAV can now detect malicious code in AI models: “We are releasing this capability to the world. For free. In addition to its coverage of traditional malware, ClamAV can now detect deserialization risks in common model file formats such as .pt and .pkl (in milliseconds, not minutes). This enhanced functionality
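The deserialization risk referenced here comes from the pickle format that underlies .pkl files and many .pt checkpoints: a pickle can instruct the loader to call arbitrary functions. The sketch below is illustrative only and is not ClamAV's implementation; it simply shows why such files can be flagged statically, without ever loading them.

import pickle
import pickletools

class Payload:
    # pickle serializes this object as "call os.system('echo pwned')",
    # which would execute on pickle.loads() -- the core deserialization risk.
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

blob = pickle.dumps(Payload())
pickletools.dis(blob)  # static disassembly exposes the os.system reference and REDUCE opcode
# A scanner can flag suspicious imports/opcodes like these without calling pickle.loads().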

Read More »

Riverbed banks on AI-driven network observability

“So, in the world of AI, as you know, data is everything, right?” Donatelli said. “So good data makes good AI, bad data makes bad AI.” Data is at the foundation of Riverbed’s product updates, with its April release of the Aternity Digital Experience Management (DEM) technology, which is all

Read More »

Data neutrality: Safeguarding your AI’s competitive edge

I recently had a discussion on this topic with Amith Nair, global vice president and general manager of AI service delivery for TELUS Digital, one of the leading global providers of AI infrastructure and services. Nair reaffirmed the importance of data: “Data is the core of everything that happens in

Read More »

MISO could save $27B in system costs by 2035 with 11 GW of batteries: report

Adding about 11 GW of batteries in the Midcontinent Independent System Operator footprint could save about $27 billion in system costs by 2035, according to a study prepared for the American Clean Power Association. The batteries would store low-cost, excess electricity and deliver it to the grid during peak demand periods, helping lower costs in those hours, Aurora Energy Research said in the July 29 report. Aurora’s modeling found that under its “central” battery scenario, in which batteries are added based on projects’ economic viability, MISO peak power prices on a day in May 2035 would hit $85.90/MWh compared to $245.30/MWh if no more batteries are added to the grid operator’s system. Without the battery storage, average wholesale power prices would increase by $1.40/MWh by 2035, adding about $1.2 billion to overall electricity costs, according to the report. MISO — with a 127 GW record peak load — has about 125 MW of battery storage on its system, according to the report. The California Independent System Operator — with a 52 GW record peak load — had about 13 GW at the end of last year, according to a May 29 report from the grid operator. Aurora said it expects MISO will have almost 1 GW of battery storage by next spring.  MISO has about 60 GW of standalone storage in its interconnection queue, along with about 175 GW of solar and 42 GW of wind, according to the report. “Market drivers such as retiring thermal assets, rising demand and high renewables deployment create a favorable environment for battery buildout, coupled with declining [capital expenditures], strong federal clean energy tax credits and the relative ease of deployment of energy storage,” Aurora said in the report. Batteries in MISO can “stack” revenues from wholesale energy arbitrage, ancillary services and capacity payments, Aurora
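As a quick sanity check on the figures quoted above, the $1.40/MWh average price increase and the roughly $1.2 billion in added costs together imply annual energy on the order of 860 TWh, which is the right order of magnitude for MISO's footprint. This is a back-of-envelope calculation, not part of the Aurora report:

# Back-of-envelope check on the report's figures (illustrative only).
price_increase_per_mwh = 1.40      # $/MWh average wholesale increase without new batteries
added_annual_cost = 1.2e9          # $ overall electricity-cost increase cited in the report
implied_load_mwh = added_annual_cost / price_increase_per_mwh
print(f"Implied annual energy: {implied_load_mwh / 1e6:.0f} TWh")   # ~857 TWh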

Read More »

Essential Utilities Logs YoY Increase in Profit

Essential Utilities Inc. has reported a net income of $107.8 million for the second quarter of 2025, up from $75.4 million a year prior. The company said in its quarterly report that earnings had been helped by increased rates across both water and gas segments, partially offset by increases in depreciation and amortization expenses, interest expenses, and operations and maintenance expenses. “With both our water and gas divisions firing on all cylinders, we delivered strong second-quarter results and reaffirmed our commitment to growth, innovation and community”, Christopher Franklin, Essential Utilities Chairman and Chief Executive Officer, said. Revenues for the quarter were $514.9 million, up 18.5 percent from $434.4 million for the second quarter of 2024. Operations and maintenance expenses increased to $148.5 million for the second quarter of 2025, compared to $142.5 million for the second quarter of 2024, primarily due to increases in employee-related costs, bad debt expense, materials and supplies and other costs. Essential Utilities said its regulated water segment generated quarterly revenues of $332.3 million, up 9.9 percent from $302.5 million for the second quarter of 2024. The primary drivers of the revenue increase were water and wastewater rates. Operations and maintenance expenses for this segment rose to $100.1 million for the second quarter of 2025, compared to $95.6 million for the same period in 2024. Essential Utilities’ regulated natural gas segment had quarterly revenues of $177.3 million, up 38.3 percent from $128.2 million for Q2 2024. This growth was mainly due to higher purchased gas costs, increased rates, and additional surcharges. Meanwhile, operations and maintenance expenses for this segment rose slightly to $49.8 million, remaining relatively unchanged compared to the previous year. Revenues for the first half of the year reached $1.3 billion, 24.1 percent higher than the $1 billion reported for the first half of

Read More »

Aramco Profit Falls for 10th Straight Quarter

Saudi Aramco reported a decline in profit for a 10th straight quarter as lower oil prices outweighed the impact of higher production. Net income attributable to shareholders dropped 19% to a four-year low of 85.63 billion riyals ($22.8 billion) in the second quarter from a year earlier, according to a statement. That missed analysts’ estimates compiled by Bloomberg. Free cash flow again failed to cover the dividend and debt rose. The numbers are the latest sign of pressure on Aramco’s balance sheet. Earlier this year, the company said it would lower its dividend for 2025 by a third to about $85 billion, but it’s still struggling to churn out enough cash to cover the distribution. The smaller payout and weaker oil are cutting into Saudi government revenues just as Crown Prince Mohammed bin Salman pushes ahead with ambitious plans to transform the economy. Oil prices in London were on average almost $20 a barrel lower in the second quarter compared with a year earlier. Brent crude traded near $68 a barrel on Tuesday, below the more than $90 that the International Monetary Fund says the Saudi government needs to balance its budget. The company’s total dividend for the quarter was $21.36 billion, almost unchanged from the first quarter but lower than the $31 billion a year earlier. That reduction is primarily because Aramco decided to vastly reduce the performance-linked component of the payout after completing the distribution of the bumper profits from 2022. Free cash flow — the funds left over from operations after accounting for investments and expenses — fell 20% to $15.2 billion in the second quarter. That wasn’t enough to cover the dividend. Net debt rose to $30.8 billion from $24.7 billion at the end of the first quarter. The higher borrowing drove up the gearing ratio to 6.5% from

Read More »

Oil Sinks as Russia Mulls Truce Deal

Oil fell for the fourth straight session as Russia weighed concessions to US President Donald Trump that may include an air truce with Ukraine. West Texas Intermediate crude slid 1.7% to settle near $65 a barrel, adding to declines over the previous three sessions. Bloomberg reported that the Kremlin is weighing options, including a pause on air strikes, to try to fend off Trump’s threat of secondary sanctions. Crude bounced from intraday lows after the Financial Times reported that Trump is considering blacklisting Russia’s so-called “shadow fleet” of oil tankers if Putin doesn’t agree to a ceasefire by Friday. The developments come just days ahead of the Aug. 8 deadline for Russia to reach a truce with Ukraine. US Special Envoy Steve Witkoff is expected to visit the nation this week. “Trump’s sanctions against Russia are mostly noise, as the only thing that will impact flows against a geographically large, top three oil producer with heavy economic links to India and China is a physical blockade,” said Joe DeLaura, global energy strategist at Rabobank. Trump earlier said he would raise tariffs on India substantially, accusing the country of helping to prolong Russia’s war against Ukraine by purchasing Moscow’s crude. New Delhi slammed the move as unjustified. Oil has been on a round trip, rising a few dollars to trade around $70 and then falling back, as traders try to gauge whether Trump will follow through on his threats to punish Russian oil buyers. Crude prices have held up in recent months in part because inventory buildups haven’t appeared near vital pricing points and instead have been concentrated in China. “It’s pretty hard to predict what’s going to happen between Russian sanctions, Iranian sanctions, Chinese storage, and then the underlying fundamentals of the oil markets,” BP Plc Chief Executive Officer

Read More »

SLB Launches Well Logging Derisking Platform

Global energy technology company Schlumberger Limited (SLB) has unveiled an autonomous logging platform, OnWave. The company said in a media release that the platform enables more efficient and reliable acquisition of formation evaluation measurements in any well condition. This technology autonomously collects multiple high-fidelity measurements downhole without using a wireline unit or cable. SLB said that the OnWave platform’s cable-free design allows for deployment in less than half the time of traditional wireline systems. It also supports drill pipe rotation and mud circulation during logging, improving well safety and reducing the risk of stuck pipe events, SLB said. “The OnWave platform marks the beginning of a new era in formation evaluation”, Frederik Majkut, president of Reservoir Performance at SLB, said. “By streamlining how we gather high-fidelity measurements downhole, we are opening up key opportunities for our customers to integrate data-driven decision making into their workflows across the well life cycle – from exploration through to production and recovery”. The OnWave platform can be deployed in any well trajectory without requiring onsite SLB crew; it carries out downhole tasks that would usually be done manually by engineers on the surface, such as acquiring borehole measurements and conducting data quality checks. Additionally, SLB said the platform ensures the tool’s position and functionality downhole by maintaining continuous communication with the surface – something that most traditional cableless logging platforms lack. This guarantees confidence in the quality of data acquisition and prevents the need for remedial logging runs, the company said. SLB said OnWave has been implemented in various basins, including those in the United States and the Middle East, showcasing substantial efficiency improvements in complex well trajectories. In South Texas, the platform decreased the time taken to reach the total depth of a well from several hours to merely 27 minutes, a reduction of 70

Read More »

ExxonMobil completes first jumpers for Yellowtail

ExxonMobil Corp.’s Yellowtail development offshore Guyana has advanced with installation of the first water alternating gas injection (WAG) jumpers. Strohm fabricated the first 13 thermoplastic composite pipe (TCP) jumpers as part of its jumper on demand concept. The first two TCP jumpers were integrated with vertical connections, pressure tested, and installed subsea at depths of more than 1,700 m earlier this month. The jumpers, installed by spreader bar, were locked in and back seal tested, the manufacturer said in a release Aug. 1. “This first Jumper on Demand campaign for ExxonMobil applies a high-volume fabrication method, proving that onsite fabrication of TCP jumpers has the flexibility to scale up or down as the installation schedule demands,” said Gavin Leiper, vice-president Americas & Global Field Services Group, Strohm. Leiper said the company will collaborate with ExxonMobil during the next fabrication campaign in 2026. Production from the Yellowtail project in the Stabroek block is expected to start late-2025 following completion of installation and well activities, bringing daily production capacity across ExxonMobil Guyana operations to about 900,000 bbl (OGJ Online, Apr. 17, 2025). ExxonMobil Guyana is operator of the block with 45% interest. Partners are Chevron (30%) and CNOOC Petroleum Guyana Ltd. (25%).

Read More »

LiquidStack CEO Joe Capes on GigaModular, Direct-to-Chip Cooling, and AI’s Thermal Future

In this episode of the Data Center Frontier Show, Editor-in-Chief Matt Vincent speaks with LiquidStack CEO Joe Capes about the company’s breakthrough GigaModular platform — the industry’s first scalable, modular Coolant Distribution Unit (CDU) purpose-built for direct-to-chip liquid cooling. With rack densities accelerating beyond 120 kW and headed toward 600 kW, LiquidStack is targeting the real-world requirements of AI data centers while streamlining complexity and future-proofing thermal design. “AI will keep pushing thermal output to new extremes,” Capes tells DCF. “Data centers need cooling systems that can be easily deployed, managed, and scaled to match heat rejection demands as they rise.” LiquidStack’s new GigaModular CDU, unveiled at the 2025 Datacloud Global Congress in Cannes, delivers up to 10 MW of scalable cooling capacity. It’s designed to support single-phase direct-to-chip liquid cooling — a shift from the company’s earlier two-phase immersion roots — via a skidded modular design with a pay-as-you-grow approach. The platform’s flexibility enables deployments at N, N+1, or N+2 resiliency. “We designed it to be the only CDU our customers will ever need,” Capes says.

From Immersion to Direct-to-Chip

LiquidStack first built its reputation on two-phase immersion cooling, which Joe Capes describes as “the highest performing, most sustainable cooling technology on Earth.” But with the launch of GigaModular, the company is now expanding into high-density, direct-to-chip cooling, helping hyperscale and colocation providers upgrade their thermal strategies without overhauling entire facilities. “What we’re trying to do with GigaModular is simplify the deployment of liquid cooling at scale — especially for direct-to-chip,” Capes explains. “It’s not just about immersion anymore. The flexibility to support future AI workloads and grow from 2.5 MW to 10 MW of capacity in a modular way is absolutely critical.” GigaModular’s components — including IE5 pump modules, dual BPHx heat exchangers, and intelligent control systems —
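To make the pay-as-you-grow idea concrete, here is a hypothetical sizing sketch. The 2.5 MW increment is inferred from the capacity range Capes describes; the helper function, module cap, and redundancy rule are assumptions for illustration, not LiquidStack specifications.

import math

MODULE_MW = 2.5            # assumed per-module increment (platform scales 2.5 MW -> 10 MW)
MAX_MODULES = 4            # 4 x 2.5 MW = 10 MW per CDU (assumed)

def modules_for_load(it_load_mw: float, spares: int = 1) -> int:
    """Modules needed for an IT heat load plus N+spares redundancy (hypothetical rule)."""
    needed = math.ceil(it_load_mw / MODULE_MW) + spares
    if needed > MAX_MODULES:
        raise ValueError("load exceeds a single 10 MW unit; add another CDU")
    return needed

print(modules_for_load(7.0, spares=1))   # 3 modules for the load + 1 spare (N+1) = 4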

Read More »

Oracle’s Global AI Infrastructure Strategy Takes Shape with Bloom Energy and Digital Realty

Bloom Energy: A Leading Force in On-Site Power

As of mid‑2025, Bloom Energy has deployed over 400 MW of capacity at data centers worldwide, working with partners including Equinix, American Electric Power (AEP), and Quanta Computing. In total, Bloom has delivered more than 1.5 GW of power across 1,200+ global installations, a tripling of its customer base in recent years. Several key partnerships have driven this rapid adoption. A decade-long collaboration with Equinix, for instance, began with a 1 MW pilot in 2015 and has since expanded to more than 100 MW deployed across 19 IBX data centers in six U.S. states, providing supplemental power at scale. Even public utilities are leaning in: in late 2024, AEP signed a deal to procure up to 1 GW of Bloom’s solid oxide fuel cell (SOFC) systems for fast-track deployments aimed at large data centers and commercial users facing grid connection delays. More recently, on July 24, 2025, Bloom and Oracle Cloud Infrastructure (OCI) announced a strategic partnership to deploy SOFC systems at select U.S. Oracle data centers. The deployments are designed to support OCI’s gigawatt-scale AI infrastructure, delivering clean, uninterrupted electricity for high-density compute workloads. Bloom has committed to providing sufficient on-site power to fully support an entire data center within 90 days of contract signing. With scalable, modular, and low-emissions energy solutions, Bloom Energy has emerged as a key enabler of next-generation data center growth. Through its strategic partnerships with Oracle, Equinix, and AEP, and backed by a rapidly expanding global footprint, Bloom is well-positioned to meet the escalating demand for multi-gigawatt on-site generation as the AI era accelerates.

Oracle and Digital Realty: Accelerating the AI Stack

Oracle, which continues to trail hyperscale cloud providers like Google, AWS, and Microsoft in overall market share, is clearly betting big on AI to drive its next phase of infrastructure growth.

Read More »

From Brownfield to Breakthrough: Aligned Data Centers Extends Its AI-First Infrastructure Vision from Ohio to the Edge of Innovation

In an AI-driven world of exponential compute demand, Aligned Data Centers is meeting the moment not just with scale, but with intent. The company’s recent blitz of strategic announcements, led by plans for a transformative new campus on legacy industrial land in Ohio, offers a composite image of what it means to build data center infrastructure for the AI era: rapid, resilient, regionally targeted, and relentlessly sustainable. From converting a former coal power plant site into a hub for digital progress in Coshocton County, to achieving new heights of energy efficiency in Phoenix, to enabling liquid-cooled, NVIDIA-accelerated AI deployments with Lambda in Dallas, Aligned is assembling a modular, AI-optimized framework designed to meet both today’s and tomorrow’s computational extremes.

Ohio Expansion: A New Chapter for Conesville, and for Aligned

Announced July 24, Aligned’s newest mega-scale data center campus in Central Ohio will rise on a 197-acre parcel adjacent to the retired AEP Conesville coal-fired power plant, a brownfield site that once symbolized legacy energy and is now poised to power the future of digital infrastructure. As noted by Andrew Schaap, CEO of Aligned Data Centers: “Through this strategic expansion, Aligned not only reinforces its commitment to providing future-ready digital infrastructure in vital growth markets but also directly catalyzes billions of dollars in investment for the state of Ohio and the Coshocton County community.” It’s a project with deep regional implications. The phased, multi-billion dollar development is expected to create thousands of construction jobs and hundreds of high-quality, long-term operational roles, while generating significant tax revenues that will support local services and infrastructure improvements. The campus has already secured a foundational customer, with the first facility targeting initial capacity delivery in mid-2026. This marks Aligned’s third campus in Ohio, a clear indication that the company sees the Buckeye State, with its

Read More »

Oklo Accelerates Aurora SMR Deployment with Nuclear-Backed Infrastructure Alliances Poised to Transform Data Center Power

In a coordinated wave of announcements, July 2025 marks a decisive pivot for Oklo as it moves its Aurora Powerhouse small modular reactor (SMR) from visionary concept to near-term reality. The company’s integrated momentum across licensing, construction, and commercial partnerships underscores its transition from development-stage innovator to first-mover in next-generation nuclear deployment. Strategic alliances with infrastructure leaders Vertiv and Liberty Energy reveal a clear market play: providing clean, high-availability energy solutions designed for hyperscale, colocation, and industrial-scale users. Meanwhile, the successful completion of a key NRC readiness assessment and the selection of Kiewit as lead constructor for the inaugural Aurora plant at Idaho National Laboratory (INL) indicate a strong glide path toward commissioning by late 2027 or early 2028. Major execution risks remain, including regulatory pacing, supply chain timing, and demonstration outcomes, but the foundational pieces for commercial deployment are now publicly locked into place.

Oklo & Vertiv: Delivering Power and Cooling for High-Density Data Centers

On July 22, 2025, Oklo and Vertiv (NYSE: VRT) announced a strategic collaboration to co-develop integrated power and thermal management systems tailored to the needs of hyperscale and colocation data centers. Under the agreement, Oklo will provide both electricity and high-temperature steam from its Aurora fast fission reactor, while Vertiv contributes its advanced portfolio of digital infrastructure and cooling systems. The goal: to tightly couple power generation with thermal management by leveraging reactor heat in Vertiv’s absorption chillers and thermal loops, significantly improving energy efficiency and sustainability. At the core of the partnership is a joint plan to deliver end-to-end reference design packages as blueprints for future-ready data centers that integrate Oklo’s nuclear powerhouses directly into the facility infrastructure. These designs capitalize on Oklo’s distinctive role not only as developer but as owner and operator of its power plants, enabling deeper coordination between energy

Read More »

Inside the DCF Trends Summit 2025: Power Moves, AI Factories, and Moonshots to Watch

As the AI era pushes digital infrastructure into overdrive, the 2025 Data Center Frontier Trends Summit (Aug. 26–28, Reston, VA) returns with its boldest and most consequential agenda yet. From power procurement and adaptive reuse to agentic supply chains, modular energy, and moonshot innovation, the Summit reflects an inflection point for the entire industry, where site constraints, grid bottlenecks, and high-density AI workloads are forcing operators to write a new playbook on the fly. In a QuickChat video leading up to the event, I sat down with longtime DCF contributor and Apolo CEO Bill Kleyman, who is not only moderating our flagship AI Factory panel but also serving as a judge in our closing Moonshot Trends session. Together, we unpacked the themes, tensions, and vision shaping this year’s gathering. With a little help from AI itself, here’s your companion guide to the Trends Summit 2025 agenda: a curated tour of the key sessions, standout speakers, and emerging priorities at the heart of the next data center frontier.

📘 Day 1: A New Playbook Begins (Tuesday, August 26)

🔑 Opening Keynote: “Playbook Interrupted”

Chris Downie, CEO of Flexential, kicks off the event with a keynote that pulls no punches. Power scarcity, global policy, and AI’s ravenous infrastructure appetite are cracking old strategies wide open. Downie will chart out the new mandates facing operators, where infrastructure is not just a support system—it’s the bottleneck and the breakthrough.

🤖 AI for Good: Smarter Data Centers and Smarter AI Workloads

What happens when Schneider Electric, Compass Datacenters, and Motivair sit down to talk AI? You get both sides of the coin—AI for data centers (via predictive maintenance, energy optimization) and data centers for AI (validated reference designs and liquid-cooled GPU clusters). A standout session on future-proofing both infrastructure and operations. Speakers: Steve Carlini (Schneider Electric), Sudhir

Read More »

Broadcom expands AI networking portfolio with Jericho4 Ethernet fabric router

According to Broadcom, a single Jericho4 system can scale to 36,000 HyperPorts, each running at 3.2 Tbps, with deep buffering, line-rate MACsec encryption, and RoCE transport over distances greater than 100 kilometers.

HBM powers distributed AI

Improving on previous designs, Jericho4’s use of HBM can significantly increase total memory capacity and reduce the power consumed by the memory I/O interface, enabling faster data processing than traditional buffering methods, according to Lian Jie Su, chief analyst at Omdia. While this may raise costs for data center interconnects, Su said higher-speed data processing and transfer can remove bottlenecks and improve AI workload distribution, increasing utilization of data centers across multiple locations. “Jericho4 is very different from Jericho3,” Su said. “Jericho4 is designed for long-haul interconnect, while Jericho3 focuses on interconnect within the same data center. As enterprises and cloud service providers roll out more AI data centers across different locations, they need stable interconnects to distribute AI workloads in a highly flexible and reliable manner.” Others pointed out that Jericho4, built on Taiwan Semiconductor Manufacturing Company’s (TSMC) 3‑nanometer process, increases transistor density to support more ports, integrated memory, and greater power efficiency, features that may be critical for handling large AI workloads. “It enables unprecedented scalability, making it ideal for coordinating distributed AI processing across expansive GPU farms,” said Manish Rawat, semiconductor analyst at TechInsights. “Integrated HBM facilitates real-time, localized congestion management, removing the need for complex signaling across nodes during high-traffic AI operations. Enhanced on-chip encryption ensures secure inter-data center traffic without compromising performance.”
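For scale, the port figures quoted above multiply out to roughly 115 petabits per second of aggregate fabric capacity in a fully scaled system. This is simple arithmetic on the stated numbers, not an additional Broadcom specification:

# Arithmetic on the figures quoted above (illustrative only).
hyperports = 36_000
port_tbps = 3.2
total_tbps = hyperports * port_tbps
print(f"{total_tbps:,.0f} Tbps aggregate, i.e. about {total_tbps / 1000:.0f} Pbps")
# 36,000 x 3.2 Tbps = 115,200 Tbps, or roughly 115 Pbps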

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd).

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
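The LLM-as-judge idea mentioned above is straightforward to sketch: several cheaper models grade a candidate answer and the majority verdict wins. The snippet below is a generic, provider-agnostic illustration; call_model is a hypothetical stand-in for whatever chat-completion API you use, not a specific product.

from collections import Counter

def call_model(model: str, prompt: str) -> str:
    # Hypothetical stand-in for a provider's chat-completion call.
    raise NotImplementedError("wire this to your model provider's API")

def majority_judge(question: str, answer: str, judge_models: list[str]) -> str:
    """Ask several judge models to grade an answer and return the majority verdict."""
    rubric = (f"Question: {question}\nAnswer: {answer}\n"
              "Reply with exactly PASS or FAIL based on factual accuracy.")
    votes = [call_model(m, rubric).strip().upper() for m in judge_models]
    return Counter(votes).most_common(1)[0][0]

# Example: majority_judge("What is 2+2?", "4", ["judge-a", "judge-b", "judge-c"])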

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »