Stay Ahead, Stay ONMINE

Two Lenses on One Market: JLL and CBRE Show Data Centers in a Pinch

The two dominant real estate research houses, JLL and CBRE, have released midyear snapshots of the North American data center market, and both paint the same picture in broad strokes: demand remains insatiable, vacancy has plunged to record lows, and the growth of AI and hyperscale deployments is reshaping every aspect of the business.

But their lenses capture different angles of the same story: one emphasizing preleasing and capital flows, the other highlighting hyperscale requirements and regional shifts.

Vacancy Falls Through the Floor

JLL sets the stage with a stark headline: colocation vacancy is nearing 0%. The JLL Midyear 2025 North America Data Center report warns that this scarcity “is constraining economic growth and undermining national security,” underscoring the role of data centers as critical infrastructure.

CBRE’s North American Data Center Trends H1 2025 numbers back this up, recording an all-time-low North American vacancy rate of 1.6%, the tightest reading in more than a decade. Both agree that market loosening is years away, with JLL projecting vacancy hovering around 2% through 2027 and CBRE noting 74.3% of new capacity already spoken for.

The takeaway seems clear: without preleasing, operators and tenants alike are effectively shut out of core markets.

Absorption and Preleasing Drive Growth

JLL drills down into the mechanics. With virtually all absorption tied to preleasing, the firm points to Northern Virginia (647 MW) and Dallas (575 MW) as the twin engines of growth in H1, joined by Chicago, Austin/San Antonio, and Atlanta.

CBRE’s absorption math is slightly different, but the conclusion aligns: Northern Virginia again leads the nation, with 538.6 MW net absorption and a remarkable 80% surge in under-construction capacity.

CBRE sharpens the view by noting that the fiercest competition is at the top end: single-tenant requirements of 10 MW or more are setting pricing records as hyperscalers lock down scarce power and space.

Regional Expansion and Market Differentiation

JLL flags the gradual shift of hyperscale projects into secondary and tertiary markets, though it cautions that colocation demand remains concentrated in the Big Five hubs.

CBRE highlights which regions are winning new builds: Atlanta posted nearly 1 GW of new inventory in H1, while Dallas-Fort Worth is on pace to double in size by 2026.

CBRE also points to Charlotte-Raleigh, Austin, and San Antonio as fast-growth destinations where cheaper power and faster delivery timelines are pulling development.

The nuance here is important: JLL emphasizes the enduring dominance of core markets, while CBRE emphasizes the acceleration of emerging ones.

Both views are true, depending on whether you’re a hyperscaler chasing land and megawatts, or a colocation provider chasing enterprise proximity.

Financing and the New Capital Stack

JLL goes deep into capital flows, noting that data centers remain a favored asset class with expanding lender pools ranging from CRE banks to debt funds. Financing needs are evolving as utilities demand earlier deposits and developers pursue behind-the-meter solutions.

CBRE’s analysis, by contrast, centers on pricing: lease rates have surged to levels not seen since 2011–2012 — $200+ per kW per month for large requirements — though CBRE suggests double-digit annual increases are unsustainable long term.

Together, these insights sketch the same truth: money is available and tenants are willing to pay, but cost structures are becoming more complex, with power procurement at the heart of every deal.

The Power Imperative

Perhaps the sharpest throughline in both reports is the scarcity of power.

JLL cites rising capital requirements to secure utility commitments. CBRE frames power availability as the single largest factor shaping site selection, pricing, and delivery.

Both firms agree: without faster, more predictable power pathways, the market risks gridlock.

Conclusion: One Market, Two Angles

Step back, and JLL and CBRE are telling the same story in complementary ways.

JLL offers a macro portrait of vacancy, absorption, and capital appetite, while CBRE zooms into the hyperscale/AI effect, regional breakouts, and pricing escalation.

Together, they illuminate a market running at full tilt, with virtually no slack in the system.

For operators, investors, and end users, the message is the same. Success in 2025 and beyond will hinge on early commitments, creative financing, and a willingness to explore new geographies.

On the frontier of digital infrastructure, it is no longer enough to wait for supply. The future belongs to those who can pre-lease it, pre-fund it, and pre-power it.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Network discovery gets a boost from Intel-spinout Articul8

Technical architecture: beyond traditional monitoring Weave’s technical foundation relies on a hybrid knowledge graph architecture. It processes different data types through specialized analytical engines. It does not attempt to force all network data through large language models (LLM). This design choice addresses accuracy concerns inherent in applying generative AI to

Read More »

JF Acquires 4 Corners

JF Petroleum Group (JF) is continuing its expansion with the acquisition of 4 Corners Petroleum Equipment, a service contractor based in Texarkana, Texas. 4 Corners was founded in 2015 by industry veteran Kenny Allen and it has since served Northeast Texas and Southwest Arkansas, JF noted in a media release.

Read More »

Oil Drops to Lowest Since May Ahead of OPEC+ Talks

Oil fell to the lowest since May ahead of a weekend OPEC+ meeting where Saudi Arabia will seek to steer the group toward more production increases in the coming months. West Texas Intermediate slid 2.5% to settle below $62 a barrel, down 3.3% this week. The alliance will hold a virtual meeting Sept. 7 to decide its next move after completing the restart of 2.5 million barrels a day of idled supply at its previous gathering. Saudi Arabia wants to boost production further in a bid to offset lower prices with higher volumes, people familiar with the matter said. No decision has been made, and it’s not clear whether any increase would be agreed upon as soon as Sunday or only in later months. “If the eight OPEC+ countries were to agree on another production increase, we believe this would place significant downward pressure on oil prices,” Commerzbank analysts Barbara Lambrecht and Carsten Fritsch wrote in a note. “After all, there is already a significant risk of a supply surplus.” West Texas Intermediate crude futures have retreated about 14% this year after the shift by OPEC+ — coupled with supply increases from drillers outside the group — exacerbated concerns about a global glut. Market sentiment has also been weighed down by mounting worries over the health of the US economy, where job growth slowed last month. Geopolitical tensions have also been in focus this week, with the US looking to pressure buyers of Russian crude to push Moscow into agreeing on a truce in Ukraine. As part of that effort, Washington has imposed a 50% levy on some imports from India. President Donald Trump said Friday that the US seems to have “lost India and Russia to deepest, darkest China.” “Sentiment in crude markets is poor,” said Daniel Ghali, a

Read More »

USA LNG Exporters Race to Tie Up Financing

US developers are racing to cash in on the nation’s natural-gas export boom while they still can.  The massive US buildout of terminals that process and ship liquefied natural gas, or LNG, has transformed the nation into the world’s top exporter of the fuel. But plants still in development are facing a tight deadline: By 2027, global LNG supply will exceed demand, BloombergNEF estimates. By 2030, US rival Qatar will have finished its own years-long LNG buildout, further damping appetite for new terminals. And by 2031, a massive pipeline expansion by Gazprom PJSC could begin funneling more of Russia’s natural gas to China, possibly displacing as much as 40 million metric tons of LNG demand per year, according to BloombergNEF. Four US projects with the capacity to export 63 million tons of LNG a year are still awaiting final investment decisions. Even the $35 billion in US plants already under construction face headwinds amid a tight labor market that’s threatening to push back timelines. Golden Pass LNG, being jointly developed in Texas by Exxon Mobil Corp. and QatarEnergy LNG, is coming online in 2025, one year later than scheduled following a worker shortage and the bankruptcy of one of its contractors.  Here are the projects to watch. Louisiana LNG (Under construction) Developer: Woodside Energy Capacity: 27.6 million tons per year Woodside Energy announced its $17.5 billion final investment decision to build Louisiana LNG in late April, after the company acquired Tellurian Inc. in 2024. The facility is under construction in Calcasieu Pass, Louisiana, and targeted to come online by 2029. Corpus Christi LNG Expansion (Under construction) Developer: Cheniere Energy Inc.  Capacity: 3 million tons per year Cheniere Energy Inc., the largest American exporter, last month announced a $2.9 billion expansion of its Corpus Christi plant in south Texas. Two new production trains are slated to start toward the end of the decade, which

Read More »

Plains to acquire 55% interest in EPIC Crude from Diamondback, Kinetik

A Plains All American Pipeline LP and Plains GP Holdings subsidiary has agreed to acquire from subsidiaries of Diamondback Energy Inc. and Kinetik Holdings Inc. a 55% non-operated interest in EPIC Crude Holdings LP, the entity that owns and operates the EPIC crude oil pipeline, in a deal valued at about $1.57 billion, inclusive of about $600 million of debt. “By further linking our Permian and Eagle Ford gathering systems to Corpus Christi, we are enhancing market access and ensuring our customers have reliable, cost-effective routes to multiple demand centers,” said Plains chairman, chief executive officer, and president, Willie Chiang.  Plains also has agreed to a potential $193 million earnout payment should an expansion of the pipeline to a capacity of at least 900,000 b/d be formally sanctioned before yearend 2027. Diamondback Energy and Kinetik Holdings each agreed to sell their respective 27.5% equity interest, which they reached with acquisitions in September 2024, for about $500 million in net upfront cash and a $96 million share of the total potential $193-million contingent cash payment related to the potential expansion. Diamondback will maintain its commercial relationship with the EPIC Crude and Plains teams as an anchor shipper on the EPIC Crude pipeline, said Kaes Van’t Hof, chief executive officer and director of Diamondback Energy, in a separate release Sept. 2. The remaining 45% interest in EPIC Crude Holdings is owned by a portfolio company of Ares Management Corp. (EPIC Management), which also serves as operator. The EPIC assets include over 800 miles of long-haul crude oil takeaway from the Permian and Eagle Ford basins to the Gulf Coast market at Corpus Christi, Tex., with current operating capacity over 600,000 b/d. Other assets include total operational storage of about 7 million bbl and over 200,000 b/d of export capacity. EPIC Crude includes terminals

Read More »

Equinor signs heads of agreement for Bay du Nord FPSO

Equinor Canada Ltd. signed a head of agreement (HoA) with BW Offshore, confirming its selection as preferred bidder for the floating production, storage and offloading (FPSO) unit for the Bay du Nord deepwater oil project offshore Newfoundland and Labrador, Canada. Equinor operates Bay du Nord, Canada’s first deepwater oil project, in partnership with bp plc. The project holds an estimated 400 million bbl of recoverable light crude in its initial phase. The oil discovery lies in the Jurassic reservoirs of the Flemish Pass basin, about 500 km east of St. John’s in 1,170 m of water. Later discoveries, and potential tie-ins, lie in adjacent exploration licence EL1156 (Cappahayden and Cambriol) in waters about 650 m deep.  Development of  the project was postposedin 2023 for up to 3 years due to “changing market conditions and subsequent high cost inflation,” according to Equinor. During that time, however, Equinor and bp have advanced work to actively mature the project toward future development.  Under the newly signed HoA, Equinor and BW Offshore will continue to advance discussions on all technical and commercial aspects of the FPSO project. These include further maturation of design through front-end engineering design (FEED) work, and agreeing on a commercial solution. The FPSO will be tailored for the harsh environment of the sub-Arctic. The unit is expected to support production of up to 160,000 b/d of oil and will feature a disconnectable turret system and extensive winterization, BW Offshore said. The topside will include emission reduction initiatives such as high-efficiency power generation and heat recovery, variable speed drives and a closed flare system. The FPSO also will be designed for future tiebacks. Following pre-FEED completion mid-September, the two companies are expected to enter into a bridging phase to prepare for FEED in early 2026, subject to approvals by Equinor and bp.

Read More »

Chevron takes over operatorship of block offshore Uruguay

Chevron Corp. has officially taken over operatorship of AREA OFF-1 block in Uruguay and 3D seismic acquisition on the block is expected late in this year’s fourth quarter. Handover of the South American block occurred in first-half 2025, partner Challenger Energy Group PLC said in a half-year report Sept. 3. In November 2024, Chevron completed a farm-in with Challenger to acquire a 60% interest in the offshore block, along with “various work streams necessary to prepare for 3D seismic acquisition,” Challenger’s chief executive officer Eytan Uliel told stakeholders. Uliel noted the start of seismic acquisition is still subject to finalization of permitting by the Uruguayan Ministry of Environment, “a process which is well advanced,” he said. In July, Challenger said the Ministry has consultations planned ahead of permit issuances, and that a final consultation was expected late that month. At the time, Challenger said it expected permits to be granted in August/September.  Chevron will carry the full cost of the seismic campaign up to a total program cost of $37.5 million. The 14,557-sq km block lies about 100 km offshore in water depths of 80-1,000 m, and holds prospective inventory of about 2 billion bbl of recoverable resource (Pmean) through multiple prospects (Teru Teru, Anapero, Lenteja) in a range of play types, according to Challenger.  Elsewhere in Uruguay, Challenger progressed work at the 13,000 sq km AREA OFF-3 block, substantially completing its planned technical work program in August. The primary geotechnical work focused on the licensing, reprocessing, and interpretation of a 1,250 sq km 3D seismic data set. Other subsurface studies addressed the geochemistry and further de-risked AREA OFF-3 exploration potential, the company said. The company began a formal farmout process for the block on Sept. 1.

Read More »

TPAO lets contract for Sakarya gas field FPU

Turkish Petroleum Corp. (TPAO) has let an engineering, procurement, construction, installation, and commissioning (EPIC) contract to Wison New Energies Ltd. for the floating production unit (FPU) to be utilized as part of Phase 3 of Sakarya gas field development. Sakarya lies about 170 km offshore the western Black Sea in block AR/TPO/KD/C26-C27-D26-D27 in 2,150 m of water. It is Türkiye’s largest-ever natural gas discovery and contains 405 billion cu m proven gas reserves. It was discovered in August 2020 and is being developed in three phases by TPAO. The gas field development project includes an offshore production system on the seabed, an onshore gas processing unit, and a 170-km pipeline system connecting the two.  The FPU must comply with deepwater operations and navigate the Bosphorus Strait’s 56-m air draft restriction. It will be anchored to the seabed in four groups with a total of 20 rope systems in an area about 180 km offshore where it will remain stationary. The unit is designed with a gas export rate of 25 MM std cu m/d (883 MMscfd), a produced water treatment capacity of 1,350 cu m/d, and a MEG regeneration and injection capacity of 2,503 cu m/d for hydrate inhibition, with a minimum 30-year design life. It is expected to be commissioned in mid-2028 with Phase 3 starting commercial production in 2030. Total Phase 3 production is expected to reach 40 million cu m/d. TPAO is operator of the field and holds 100% interest.

Read More »

Google adds Gemini to its on-prem cloud for increased data protection

Google has announced the general availability of its Gemini artificial intelligence models on Google Distributed Cloud (GDC), making its generative AI product available on enterprise and government data centers. GDC is an on-premises implementation of Google Cloud, aimed at heavily regulated industries like medical and financial services to bring Google Cloud services within company firewalls rather than the public cloud. The launch of Gemini on GDC allows organizations with strict data residency and compliance requirements to deploy generative AI without compromising control over sensitive information. GDC uses Nvidia Hopper and Blackwell 0era GPU accelerators with automated load balancing and zero-touch updates for high availability. Security features include audit logging and access control capabilities that provide full transparency for customers. The platform also features Confidential Computing support for both CPUs (with Intel TDX) and GPUs (with Nvidia’s confidential computing) to secure sensitive data and prevent tampering or exfiltration.

Read More »

Nvidia networking roadmap: Ethernet, InfiniBand, co-packaged optics will shape data center of the future

Nvidia is baking into its Spectrum-X Ethernet platform a suite of algorithms that can implement networking protocols to allow Spectrum-X switches, ConnectX-8 SuperNICs, and systems with Blackwell GPUs to connect over wider distances without requiring hardware changes. These Spectrum-XGS algorithms use real-time telemetry—tracking traffic patterns, latency, congestion levels, and inter-site distances—to adjust controls dynamically. Ethernet and InfiniBand Developing and building Ethernet technology is a key part of Nvidia’s roadmap. Since it first introduced Spectrum-X in 2023, the vendor has rapidly made Ethernet a core development effort. This is in addition to InfiniBand development, which is still Nvidia’s bread-and-butter connectivity offering. “InfiniBand was designed from the ground up for synchronous, high-performance computing — with features like RDMA to bypass CPU jitter, adaptive routing, and congestion control,” Shainer said. “It’s the gold standard for AI training at scale, connecting more than 270 of the world’s top supercomputers. Ethernet is catching up, but traditional Ethernet designs — built for telco, enterprise, or hyperscale cloud — aren’t optimized for AI’s unique demands,” Shainer said. Most industry analysts predict Ethernet deployment for AI networking in enterprise and hyperscale deployments will increase in the next year; that makes Ethernet advancements a core direction for Nvidia and any vendor looking to offer AI connectivity options to customers. “When we first initiated our coverage of AI back-end Networks in late 2023, the market was dominated by InfiniBand, holding over 80% share,” wrote Sameh Boujelbene, vice president of Dell ’Oro Group, in a recent report. “Despite its dominance, we have consistently predicted that Ethernet would ultimately prevail at scale. What is notable, however, is the rapid pace at which Ethernet gained ground in AI back-end networks. 
As the industry moves to 800 Gbps and beyond, we believe Ethernet is now firmly positioned to overtake InfiniBand in these high-performance deployments.”

Read More »

Inside the AI-optimized data center: Why next-gen infrastructure is non-negotiable

How are AI data centers different from traditional data centers? AI data centers and traditional data centers can be physically similar, as they contain hardware, servers, networking equipment, and storage systems. The difference lies in their capabilities: Traditional data centers were built to support general computing tasks, while AI data centers are specifically designed for more sophisticated, time and resource-intensive workloads. Conventional data centers are simply not optimized for AI’s advanced tasks and necessary high-speed data transfer. Here’s a closer look at their differences: AI-optimized vs. traditional data centers Traditional data centers: Handle everyday computing needs such as web browsing, cloud services, email and enterprise app hosting, data storage and retrieval, and a variety of other relatively low-resource tasks. They can also support simpler AI applications, such as chatbots, that do not require intensive processing power or speed. AI data centers: Built to compute significant volumes of data and run complex algorithms, ML and AI tasks, including agentic AI workflows. They feature high-speed networking and low-latency interconnects for rapid scaling and data transfer to support AI apps and edge and internet of things (IoT) use cases. Physical infrastructure Traditional data centers: Typically composed of standard networking architectures such as CPUs suitable for handling networking, apps, and storage. AI data centers: Feature more advanced graphics processing units (GPU) (popularized by chip manufacturer Nvidia), tensor processing units (TPUs) (developed by Google), and other specialized accelerators and equipment. Storage and data management Traditional data centers: Generally, store data in more static cloud storage systems, databases, data lakes, and data lakehouses. AI data centers: Handle huge amounts of unstructured data including text, images, video, audio, and other files. 
They also incorporate high-performance tools including parallel file systems, multiple network servers, and NVMe solid state drives (SSDs). Power consumption Traditional data centers: Require robust cooling

Read More »

From Cloud to Concrete: How Explosive Data Center Demand is Redefining Commercial Real Estate

The world will generate 181 ZB of data in 2025, an increase of 23.13% year over year and 2.5 quintillion bytes (a quintillion byte is also called an exabyte, EB) created daily, according to a report from Demandsage. To put that in perspective: One exabyte is equal to 1 quintillion bytes, which is 1,000,000,000,000,000,000 bytes. That’s 29 TB every second, or 2.5 million TB per day. It’s no wonder data centers have become so crucial for creating, consuming, and storing data — and no wonder investor interest has skyrocketed.  The surging demand for secure, scalable, high-performance retail and wholesale colocation and hyperscale data centers is spurred by the relentless, global expansion of cloud computing and demand for AI as data generation from businesses, governments, and consumers continues to surge. Power access, sustainable infrastructure, and land acquisition have become critical factors shaping where and how data center facilities are built.  As a result, investors increasingly view these facilities not just as technology assets, but as a unique convergence of real estate, utility infrastructure, and mission-critical systems. Capitalizing on this momentum, private equity and real estate investment firms are rapidly expanding into the sector through acquisitions, joint ventures, and new funds—targeting opportunities to build and operate facilities with a focus on energy efficiency and scalability.

Read More »

Ai4 2025 Navigates Rapid Change in AI Policy, Education

The pace of innovation in artificial intelligence is fundamentally reshaping the landscape of education, and the changes are happening rapidly. At the forefront of this movement stand developers, policy makers, educational practitioners, and associated experts at the recent Ai4 2025 conference (Aug. 11-13) in Las Vegas, where leading voices such as Geoffrey Hinton “The Godfather of AI,” top executives from Google and U.S. Bank, and representatives from multiple government agencies gathered to chart the future of AI development. Importantly, educators and academic institutions played a central role, ensuring that the approach to AI in schools is informed by those closest to the classroom. Key discussions at Ai4 and recent educator symposia underscored both the promise and peril of swift technological change. Generative AI, with its lightning-fast adoption since the advent of tools like ChatGPT, is opening new possibilities for personalized learning, skills development, and operational efficiency. But participants were quick to note that acceleration brings good and bad consequences. On one hand, there’s excitement about practical classroom implementations and the potential for students to engage with cutting-edge technology. On the other, concerns about governance, ethics, safety, and the depth of genuine learning remain at the forefront. This urgency to “do this right” is echoed by teachers, unions, and developers who are united by the challenges and opportunities on the ground. Their voices highlight the need for agreement on education policy and associated regulations to keep pace with technological progress, create frameworks for ethical and responsible use, and ensure that human agency remains central in shaping the future of childhood and learning. In this rapidly evolving environment, bringing all stakeholders to the table is no longer optional; it is essential for steering AI in education toward outcomes that benefit both students and society. 
Global Context: America, China, and the AI Race

Read More »

Two Lenses on One Market: JLL and CBRE Show Data Centers in a Pinch

The two dominant real estate research houses, JLL and CBRE, have released midyear snapshots of the North American data center market, and both paint the same picture in broad strokes: demand remains insatiable, vacancy has plunged to record lows, and the growth of AI and hyperscale deployments is reshaping every aspect of the business. But their lenses capture different angles of the same story: one emphasizing preleasing and capital flows, the other highlighting hyperscale requirements and regional shifts. Vacancy Falls Through the Floor JLL sets the stage with a stark headline: colocation vacancy is nearing 0%. The JLL Midyear 2025 North America Data Center report warns that this scarcity “is constraining economic growth and undermining national security,” underscoring the role of data centers as critical infrastructure. CBRE’s North American Data Center Trends H1 2025 numbers back this up, recording an all-time low North America vacancy rate of 1.6%, the tightest in more than a decade. Both agree that market loosening is years away — JLL projecting vacancy hovering around 2% through 2027, CBRE noting 74.3% of new capacity already spoken for. The takeaway seems clear: without preleasing, operators and tenants alike are effectively shut out of core markets. Absorption and Preleasing Drive Growth JLL drills down into the mechanics. With virtually all absorption tied to preleasing, the firm points to Northern Virginia (647 MW) and Dallas (575 MW) as the twin engines of growth in H1, joined by Chicago, Austin/San Antonio, and Atlanta. CBRE’s absorption math is slightly different, but the conclusion aligns: Northern Virginia again leads the nation, with 538.6 MW net absorption and a remarkable 80% surge in under-construction capacity. CBRE sharpens the view by noting that the fiercest competition is at the top end: single-tenant requirements of 10 MW or more are setting pricing records as hyperscalers

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »
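The "LLM as a judge" idea above — using several cheap models to score an agent's output instead of one — can be sketched in a few lines. This is a minimal, hypothetical illustration, not code from the article: `aggregate_judgments` and `passes` are invented names, and the per-judge scores stand in for real model API calls.

```python
# Hedged sketch of multi-model "LLM as a judge" aggregation.
# Assumption: each judge model has already returned a quality score
# on a 0-10 scale; real systems would call model APIs to get these.
from statistics import median

def aggregate_judgments(scores: list[int]) -> float:
    """Combine per-judge scores into one verdict via the median,
    which tolerates a single outlier judge among three or more."""
    return median(scores)

def passes(scores: list[int], threshold: float = 7) -> bool:
    """Accept the agent's output if the aggregated score clears the bar."""
    return aggregate_judgments(scores) >= threshold

# Three hypothetical judge models score one agent response;
# the third judge is an outlier and is outvoted by the median.
judge_scores = [8, 9, 4]
print(aggregate_judgments(judge_scores))  # 8
print(passes(judge_scores))               # True
```

Using the median rather than the mean is one reason to run three or more judges: a single erratic or hallucinating judge cannot flip the verdict on its own.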

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams' advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability, and safety of AI models through these techniques and more.

The first paper, "OpenAI's Approach to External Red Teaming for AI Models and Systems," reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model, because in-house testing techniques may have missed them. In the second paper, "Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning," OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It is encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization.

OpenAI's paper on external red teaming provides a detailed analysis of how the company strives to assemble specialized external teams that include cybersecurity and subject matter experts. The goal is to see whether knowledgeable external teams can defeat models' security perimeters and find gaps in their security, biases, and controls that prompt-based testing could not find. What makes OpenAI's recent papers noteworthy is how well they define using human-in-the-middle …

Read More »