
Wonder Valley and the Great AI Pivot: Kevin O’Leary’s Bold Data Center Play


Data Center World 2025 drew record-breaking attendance, underscoring the AI-fueled urgency transforming infrastructure investment. But no session captivated the crowd quite like Kevin O’Leary’s electrifying keynote on Wonder Valley—his audacious plan to build the world’s largest AI compute data center campus.

In a sweeping narrative that ranged from pandemic pivots to stranded gas and Branson-brand inspiration, O’Leary laid out a real estate and infrastructure strategy built for the AI era.

A Pandemic-Era Pivot Becomes a Case Study in Digital Resilience

O’Leary opened with a Shark Tank success story that doubled as a business parable. In 2019, a woman-led startup called Blueland raised $50 million to eliminate plastic cleaning bottles by shipping concentrated cleaning tablets in reusable kits. When COVID-19 shut down retail in 2020, her inventory was stuck in limbo—until she made an urgent call to O’Leary.

What followed was a high-stakes, last-minute pivot: a union-approved commercial shoot in Brooklyn the night SAG-AFTRA shut down television production. The direct response ad campaign that resulted would not only liquidate the stranded inventory at full margin, but deliver something more valuable—data.

By targeting locked-down consumers through local remnant TV ad slots and optimizing by conversion, Blueland saw unheard-of response rates as high as 17%. The campaign turned into a data goldmine: buyer locations, tablet usage patterns, household sizes, and contact details. Follow-up SMS campaigns would drive 30% reorders.

“It built such a franchise in those 36 months,” O’Leary said, “with no retail. Now every retailer wants in.” The lesson? Build your infrastructure to control your data, and you build a business that scales even in chaos.

This anecdote set the tone for the keynote: in a volatile world, infrastructure resilience and data control are the new core competencies.

The Data Center Power Crisis: “There Is Not a Gig on the Grid”

O’Leary didn’t mince words about the current state of power availability. “There is not a gig on the grid anywhere in any state today,” he stated. Demand is outstripping the ability of utilities and regulators to respond. “Anybody that tells me otherwise is full of it.”

He painted a stark contrast between hyperscaler ambitions and the grid’s reality. “We used to get 500-megawatt commitments from states—but the queue was seven years long. You can’t build a business on a seven-year promise.”

O’Leary emphasized that the speed of the AI boom is incompatible with legacy utility planning cycles. He quoted Amazon CEO Andy Jassy, who had said just 48 hours prior: “We have such high demand right now for AWS and AI growth is so significant that we don’t see any attenuation.”

“Every time Amazon lights up a region, it gets swallowed up by training runs in 30 days,” O’Leary added. “We are not building fast enough.”

Of course, Wall Street didn’t get the memo. Just days after O’Leary’s keynote, a report from Wells Fargo suggested AWS may actually be seeing AI growth slowing, with capex falling short of expectations and signs of digestion among hyperscale buyers.

Whether that’s a blip or a broader recalibration, O’Leary wasn’t buying it: the momentum, he contended, still lies with those who can scale off-grid and out front.

Nvidia’s 12-Month Upgrade Cycle and the CapEx Crunch

AI infrastructure isn’t just power-hungry—it’s refresh-hungry. O’Leary noted that GPU stacks are now refreshing every 12 to 18 months. “Nvidia’s upscaling cycle used to be 18 months. I think within a year or two, it’ll be 12.”

This isn’t just a technical challenge. It’s a budgetary earthquake. “You’re going to take those stacks and refresh them every 12 to 18 months. That’s huge CapEx,” he said. “So every state has to have a tax holiday on that wherever you’re building. And they’re getting the joke—they’re doing it.”
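The arithmetic behind that “budgetary earthquake” is straightforward. The sketch below is illustrative only; the fleet size and per-GPU price are hypothetical assumptions, not figures from the keynote, chosen simply to show how shortening the refresh interval inflates annual spend.

```python
# Illustrative only: annualized GPU refresh CapEx under different refresh cycles.
# Fleet size and per-GPU cost are assumptions, not figures from O'Leary's keynote.

def annualized_refresh_capex(gpu_count: int, cost_per_gpu: float, refresh_months: int) -> float:
    """Spread the cost of one full GPU refresh evenly over its service life, per year."""
    refresh_years = refresh_months / 12
    return gpu_count * cost_per_gpu / refresh_years

# Hypothetical 50,000-GPU campus at $30,000 per accelerator.
for months in (18, 12):
    annual = annualized_refresh_capex(gpu_count=50_000, cost_per_gpu=30_000, refresh_months=months)
    print(f"{months}-month refresh cycle: ~${annual / 1e9:.1f}B per year in GPU spend")
```

On those assumed numbers, compressing the cycle from 18 months to 12 lifts annualized GPU spending from roughly $1.0 billion to $1.5 billion, which is why O’Leary treats state tax treatment of that CapEx as a site-selection criterion.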

He also noted how this rapid cycle intensifies site selection and vendor risk. “You can’t just pick a site and hope—it has to be engineered to turn over gear at scale.”

The core message: AI infrastructure isn’t just about servers—it’s about structural agility in policy, planning, and procurement.

Wonder Valley: A Unicorn in the Forest

To address these compounding demands, O’Leary unveiled what he called the “unicorn of North America”—Wonder Valley, a 7,000-acre AI compute park in Alberta. Entirely off-grid, Wonder Valley will be powered by stranded natural gas, a massively underutilized resource that Alberta holds in abundance.

“This is the single largest AI compute data center park on Earth,” O’Leary declared. “Completely independent, with enough water, land, and fiber.”

He described the deal structure as a joint venture between energy, capital, and infrastructure players. “We’re not just building data centers—we’re building energy platforms around AI.”

O’Leary also detailed how Wonder Valley draws inspiration from Richard Branson, who once advised him, “Why not call everything ‘Wonder’? It worked for Virgin.” Now, O’Leary is branding everything from Wonder Valley to Wonder Capital, using the theme to cross-pollinate visibility and value.

“This isn’t just a project—it’s a platform,” he emphasized.

From New York to Norway: Lessons in Global Energy Policy

O’Leary’s path to Wonder Valley was one of lessons learned. His first project began in upstate New York, where policy and permitting delays forced him to abandon the site. “I don’t invest in New York anymore,” he said bluntly.

Instead, he shifted the operation to Norway, where a combination of hydropower and favorable energy policy helped the site flourish. That project has since doubled in size and now serves as a template for sustainable builds.

His teams also explored nuclear power in Finland, further sharpening their playbook for energy-resilient builds. “We learned to de-risk through energy first, not real estate.”

“These are smaller projects,” he said, “but they gave us our engineering chops.”

The Playbook: Power Deals, Turbine Deliveries, and Government Cooperation

What makes Wonder Valley replicable? It’s not just the land or gas—it’s the ecosystem. O’Leary stressed the importance of community infrastructure: “You need people, you need a polytechnical institute, you need a hospital, a dry cleaner, parking—you need all of that.”

He spoke in detail about turbine procurement as a bottleneck: “There’s a massive problem in turbine delivery. If you want to finance $20 billion worth of turbines, you better have a power deal before the first brick is laid.”

To solve this, O’Leary’s team has created a three-legged stool: vendor consortia, power strategy, and local alignment. “These projects can’t happen without state and local governments who get the joke.”

He also noted Wonder Valley has secured buy-in from First Nations and Alberta’s provincial leadership. “If you don’t have political cover, you don’t have a project.”

Two Models: Build-to-Suit vs. Lease-Back for Hyperscalers

O’Leary outlined two paths for hyperscaler participation. Some, like G42, want to buy the land and build once the permitting and power are secured. Others prefer a lease-back model, taking full occupancy of a prebuilt facility in a “five-nines” (99.999% availability) configuration.
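For context on that shorthand, a five-nines target leaves only minutes of allowable downtime per year. A quick, purely illustrative sketch of the arithmetic:

```python
# Downtime budget implied by common availability targets; "five nines" = 99.999%.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for label, availability in [("three nines", 0.999), ("four nines", 0.9999), ("five nines", 0.99999)]:
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label} ({availability * 100:.3f}%): ~{downtime_minutes:,.1f} minutes of downtime per year")
```

That works out to roughly five minutes of downtime a year at five nines, which is the reliability bar a developer takes on when leasing a prebuilt facility to a hyperscaler.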

“As a developer, you need to be ready for both,” he said. “You need to aggregate land, power, and water, but also know how to monetize through either model.”

He highlighted how structuring pre-leases around power and modularity helps derisk capital. “No one’s waiting 36 months anymore. You need to have capacity in the ground ready to flip.”

This bifurcation is reshaping how developers approach risk, capital structure, and long-term value creation.

Conclusion: The Patriots Playbook for Data Centers

With Wonder Valley, O’Leary isn’t just building one site—he’s creating a repeatable playbook. “When Brady was quarterback,” he quipped, “you just move the Patriots from location to location. That’s what Wonder Valley is.”

He’s now scouting 6,000+ acre sites with similar profiles: stranded gas, supportive governments, and willing communities.

He closed by challenging the audience: “There’s $1.6 trillion of AI infrastructure coming in the next decade. You’re either building it—or you’re watching it.”

In a year of historic industry momentum, O’Leary’s keynote at Data Center World 2025 was a definitive moment. Infrastructure is no longer background—it’s center stage, and the next decade will be shaped by who can build it, power it, and scale it fast enough to meet AI’s unrelenting demand.
