
Google DeepMind says its new AI can map the entire planet with unprecedented accuracy


Google DeepMind announced today a breakthrough artificial intelligence system that transforms how organizations analyze Earth’s surface, potentially revolutionizing environmental monitoring and resource management for governments, conservation groups, and businesses worldwide.

The system, called AlphaEarth Foundations, addresses a critical challenge that has plagued Earth observation for decades: making sense of the overwhelming flood of satellite data streaming down from space. Every day, satellites capture terabytes of images and measurements, but connecting these disparate datasets into actionable intelligence has remained frustratingly difficult.

“AlphaEarth Foundations functions like a virtual satellite,” the research team writes in their paper. “It accurately and efficiently characterizes the planet’s entire terrestrial land and coastal waters by integrating huge amounts of Earth observation data into a unified digital representation.”

The AI system reduces error rates by approximately 23.9% compared to existing approaches while requiring 16 times less storage space than other AI systems. This combination of accuracy and efficiency could dramatically lower the cost of planetary-scale environmental analysis.




How the AI compresses petabytes of satellite data into manageable intelligence

The core innovation lies in how AlphaEarth Foundations processes information. Rather than treating each satellite image as a separate piece of data, the system creates what researchers call “embedding fields” — highly compressed digital summaries that capture the essential characteristics of Earth’s surface in 10-meter squares.

“The system’s key innovation is its ability to create a highly compact summary for each square,” the research team explains. “These summaries require 16 times less storage space than those produced by other AI systems that we tested and dramatically [reduce] the cost of planetary-scale analysis.”

This compression doesn’t sacrifice detail. The system maintains what the researchers describe as “sharp, 10×10 meter” precision while tracking changes over time. For context, that resolution allows organizations to monitor individual city blocks, small agricultural fields, or patches of forest — critical for applications ranging from urban planning to conservation.
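To make the idea concrete, here is a small illustrative sketch, not DeepMind's actual code, of what an embedding field looks like in practice: each 10×10 meter cell is summarized as a short unit-length vector, and similar land cover shows up as high cosine similarity between cells. The 64-dimensional size and float32 storage below are assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical sketch: each 10 m x 10 m cell is summarized by a
# unit-length 64-dimensional embedding vector (dimension assumed here).
rng = np.random.default_rng(0)
grid = rng.normal(size=(100, 100, 64)).astype(np.float32)
grid /= np.linalg.norm(grid, axis=-1, keepdims=True)  # normalize to unit length

# Similar land cover would yield high cosine similarity, which for
# unit vectors is just the dot product.
cell_a, cell_b = grid[10, 10], grid[10, 11]
similarity = float(cell_a @ cell_b)

# Storage footprint per cell: 64 float32 values, i.e. 256 bytes,
# versus the raw multi-band, multi-date imagery it summarizes.
bytes_per_cell = grid[0, 0].nbytes
print(bytes_per_cell)
```

A compact per-cell summary like this is what makes planetary-scale comparison queries cheap: similarity between any two cells is a single dot product.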

Brazilian researchers use the system to track Amazon deforestation in near real-time

More than 50 organizations have been testing the system over the past year, with early results suggesting transformative potential across multiple sectors.

In Brazil, MapBiomas uses the technology to understand agricultural and environmental changes across the country, including within the Amazon rainforest. “The Satellite Embedding dataset can transform the way our team works,” Tasso Azevedo, founder of MapBiomas, said in a statement. “We now have new options to make maps that are more accurate, precise and fast to produce — something we would have never been able to do before.”

The Global Ecosystems Atlas initiative employs the system to create what it calls the first comprehensive resource for mapping the world’s ecosystems. The project helps countries classify unmapped regions into categories like coastal shrublands and hyper-arid deserts — crucial information for conservation planning.

“The Satellite Embedding dataset is revolutionizing our work by helping countries map uncharted ecosystems — this is crucial for pinpointing where to focus their conservation efforts,” said Nick Murray, Director of the James Cook University Global Ecology Lab and Global Science Lead of Global Ecosystems Atlas.

The system solves satellite imagery’s biggest problem: clouds and missing data

The research paper reveals sophisticated engineering behind these capabilities. AlphaEarth Foundations processes data from multiple sources — optical satellite images, radar, 3D laser mapping, climate simulations, and more — weaving them together into a coherent picture of Earth’s surface.

What sets the system apart technically is its handling of time. “To the best of our knowledge, AEF is the first EO featurization approach to support continuous time,” the researchers note. This means the system can create accurate maps for any specific date range, even interpolating between observations or extrapolating into periods with no direct satellite coverage.
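The continuous-time claim can be pictured with a toy example: blend the embeddings from two observation dates to estimate the state on an intermediate date. The linear blend below is a stand-in assumption for illustration, not the model's actual mechanism.

```python
import datetime as dt
import numpy as np

def interpolate_embedding(e0, t0, e1, t1, t):
    """Linearly blend two embeddings for an intermediate date, then
    renormalize to unit length. A simple stand-in for the model's
    native continuous-time support."""
    w = (t - t0).days / (t1 - t0).days  # fraction of the way from t0 to t1
    e = (1 - w) * e0 + w * e1
    return e / np.linalg.norm(e)

rng = np.random.default_rng(1)
e_jan = rng.normal(size=64); e_jan /= np.linalg.norm(e_jan)
e_dec = rng.normal(size=64); e_dec /= np.linalg.norm(e_dec)

# Estimate the embedding for a mid-year date with no direct observation.
e_jul = interpolate_embedding(e_jan, dt.date(2023, 1, 1),
                              e_dec, dt.date(2023, 12, 31),
                              dt.date(2023, 7, 2))
```

The same idea extends past the observed window (extrapolation), which is what lets the system produce maps for date ranges with no direct satellite coverage.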

The model architecture, dubbed “Space Time Precision” or STP, simultaneously maintains highly localized representations while modeling long-distance relationships through time and space. This allows it to overcome common challenges like cloud cover that often obscures satellite imagery in tropical regions.

Why enterprises can now map vast areas without expensive ground surveys

For technical decision-makers in enterprise and government, AlphaEarth Foundations could fundamentally change how organizations approach geospatial intelligence.

The system excels particularly in “sparse data regimes” — situations where ground-truth information is limited. This addresses a fundamental challenge in Earth observation: while satellites provide global coverage, on-the-ground verification remains expensive and logistically challenging.

“High-quality maps depend on high-quality labeled data, yet when working at global scales, a balance must be struck between measurement precision and spatial coverage,” the research paper notes. AlphaEarth Foundations’ ability to extrapolate accurately from limited ground observations could dramatically reduce the cost of creating detailed maps for large areas.
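A sketch of what the sparse-label workflow looks like in practice: a handful of expensive ground-truth points become reference embeddings, and a simple nearest-neighbor rule classifies every other cell. The labels, dimensions, and classifier choice here are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical: two land-cover classes, each with five labeled
# field observations standing in for costly ground surveys.
centers = {"forest": rng.normal(size=16), "water": rng.normal(size=16)}
labels, refs = zip(*[(name, c + 0.1 * rng.normal(size=16))
                     for name, c in centers.items() for _ in range(5)])
refs = np.stack(refs)

def classify(embedding):
    # 1-nearest-neighbor in embedding space: cheap enough to run
    # over millions of unlabeled cells once the references exist.
    d = np.linalg.norm(refs - embedding, axis=1)
    return labels[int(np.argmin(d))]

# An unlabeled cell whose embedding resembles the water class.
query = centers["water"] + 0.1 * rng.normal(size=16)
print(classify(query))
```

The design point is that the labels stay scarce; only the unlabeled embeddings scale with the map.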

The research demonstrates strong performance across diverse applications, from crop type classification to estimating evapotranspiration rates. In one particularly challenging test involving evapotranspiration — the process by which water transfers from land to atmosphere — AlphaEarth Foundations achieved an R² value of 0.58, while all other methods tested produced negative values, indicating they performed worse than simply guessing the average.
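For readers unfamiliar with the metric, R² compares a model's squared error against that of always predicting the mean; the toy numbers below show how a poor model lands below zero, which is what the competing methods did on the evapotranspiration test.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.
    Negative values mean the model is worse than predicting the mean."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0])
good = np.array([1.1, 1.9, 3.2, 3.8])  # close to the truth
bad = np.array([4.0, 1.0, 4.5, 0.5])   # worse than the mean

print(r_squared(y, good))  # 0.98, near-perfect fit
print(r_squared(y, bad))   # -3.9, worse than guessing the average
```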

Google positions Earth monitoring AI alongside its weather and wildfire systems

The announcement places Google at the forefront of what the company calls “Google Earth AI” — a collection of geospatial models designed to tackle planetary challenges. This includes weather predictions, flood forecasting, and wildfire detection systems that already power features used by millions in Google Search and Maps.

“We’ve spent years building powerful AI models to solve real-world problems,” write Yossi Matias, VP & GM of Google Research, and Chris Phillips, VP & GM of Geo, in an accompanying blog post published this morning. “These models already power features used by millions, like flood and wildfire alerts in Search and Maps; they also provide actionable insights through Google Earth, Google Maps Platform and Google Cloud Platform.”

The release includes the Satellite Embedding dataset, described as “one of the largest of its kind with over 1.4 trillion embedding footprints per year,” available through Google Earth Engine. This dataset covers annual snapshots from 2017 through 2024, providing historical context for tracking environmental changes.

The 10-meter resolution protects privacy while enabling environmental monitoring

Google emphasizes that the system operates at a resolution designed for environmental monitoring rather than individual tracking. “The dataset cannot capture individual objects, people, or faces, and is a representation of publicly available data sources, such as meteorological satellites,” the company clarifies.

The 10-meter resolution, while precise enough for most environmental applications, intentionally limits the ability to identify individual structures or activities — a design choice that balances utility with privacy protection.

A new era of planetary intelligence arrives through Google Earth Engine

The availability of AlphaEarth Foundations through Google Earth Engine could democratize access to sophisticated Earth observation capabilities. Previously, creating detailed maps of large areas required significant computational resources and expertise. Now, organizations can leverage pre-computed embeddings to generate custom maps rapidly.

“This breakthrough enables scientists to do something that was impossible until now: create detailed, consistent maps of our world, on-demand,” the research team writes. “Whether they are monitoring crop health, tracking deforestation, or observing new construction, they no longer have to rely on a single satellite passing overhead.”

For enterprises involved in supply chain monitoring, agricultural production, urban planning, or environmental compliance, the technology offers new possibilities for data-driven decision-making. The ability to track changes at 10-meter resolution globally, with annual updates, provides a foundation for applications ranging from verifying sustainable sourcing claims to optimizing agricultural yields.
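One way such change tracking might be built on top of annual embeddings is a simple similarity threshold between consecutive snapshots; the 8-dimensional vectors and 0.9 cutoff below are arbitrary choices for illustration, not parameters from the dataset.

```python
import numpy as np

rng = np.random.default_rng(3)

def unit(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def changed_cells(year_a, year_b, threshold=0.9):
    """Flag cells whose embedding similarity between two annual
    snapshots falls below a threshold: a crude change detector."""
    sim = np.sum(year_a * year_b, axis=-1)  # cosine sim of unit vectors
    return sim < threshold

y2023 = unit(rng.normal(size=(50, 50, 8)))  # one embedding per cell
y2024 = y2023.copy()
y2024[0, 0] = -y2023[0, 0]                  # simulate a drastic change

mask = changed_cells(y2023, y2024)
print(int(mask.sum()))  # 1, only the altered cell is flagged
```

Running this over pre-computed embeddings avoids reprocessing raw imagery at all, which is where the claimed cost savings come from.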

The Satellite Embedding dataset is available now through Google Earth Engine, with AlphaEarth Foundations continuing development as part of Google’s broader Earth AI initiative. As one researcher noted during the press briefing, the question facing organizations isn’t whether they need planetary-scale intelligence anymore — it’s whether they can afford to operate without it.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

TechnipFMC Sees Surge in Q2 Profit

TechnipFMC PLC has reported $285.5 million in adjusted net income for the second quarter, up 99.8 percent from the prior three-month period and 51.1 percent against Q2 2024. The adjusted diluted earnings per share of 68 cents beat the Zacks Consensus Estimate of $0.57. TechnipFMC kept its dividend at $0.05

Read More »

Shell Profit Beats Estimates in Volatile Quarter

Shell Plc reported second-quarter profit that dropped by 32 percent but beat analyst expectations, which had been lowered after a bearish trading update earlier this month. Shell’s shares were up 2.7 percent in London on Thursday morning, after the company reported the earnings beat and maintained its pace of buying back $3.5 billion of shares in the quarter. Analysts noted that Shell’s cash flow from operations of $12.3 billion was well ahead of consensus expectations of $10.1 billion. The drop in profit from a year earlier reflected lower oil and gas prices, as well as weaker performance from the company’s oil traders. Shell Chief Executive Officer Wael Sawan has spent the past two years seeking to cut costs, improve reliability and shed underperforming assets in an effort to close a valuation gap with Shell’s US rivals. The “sprint” has begun to pay off, as the company has outperformed its Big Oil peers so far in 2025.  Maintaining the buyback level “should be well-received,” said Jefferies analyst Giacomo Romeo said. “It’s been quarter after quarter of just steady delivery,” Sawan said in an interview with Bloomberg Television. “That’s 15 quarters in a row where we are delivering 3 or more billion dollars per quarter buybacks and that’s key for us.” Shell reduced its structural costs by a further $800 million in the first half, bringing the pre-tax total since 2022 to $3.9 billion, it said. Net debt rose to $43.2 billion from $41.5 billion in the first quarter. Analysts had cut their forecasts for earnings after Shell warned July 7 that earnings from its fabled trading division would be “significantly lower” than the prior quarter. Shell’s traders are often one of its biggest profit boosters, and Sawan said in March that its traders haven’t lost money in a single quarter over the past

Read More »

Shell Profit Falls as Traders Grapple with Volatility

Shell Plc reported second-quarter profit that dropped by 32 percent but beat analyst expectations, which had been lowered after a bearish trading update earlier this month. Shell’s shares were up 2.7 percent in London on Thursday morning, after the company reported the earnings beat and maintained its pace of buying back $3.5 billion of shares in the quarter. Analysts noted that Shell’s cash flow from operations of $12.3 billion was well ahead of consensus expectations of $10.1 billion. The drop in profit from a year earlier reflected lower oil and gas prices, as well as weaker performance from the company’s oil traders. Shell Chief Executive Officer Wael Sawan has spent the past two years seeking to cut costs, improve reliability and shed underperforming assets in an effort to close a valuation gap with Shell’s US rivals. The “sprint” has begun to pay off, as the company has outperformed its Big Oil peers so far in 2025.  Maintaining the buyback level “should be well-received,” said Jefferies analyst Giacomo Romeo said. “It’s been quarter after quarter of just steady delivery,” Sawan said in an interview with Bloomberg Television. “That’s 15 quarters in a row where we are delivering 3 or more billion dollars per quarter buybacks and that’s key for us.” Shell reduced its structural costs by a further $800 million in the first half, bringing the pre-tax total since 2022 to $3.9 billion, it said. Net debt rose to $43.2 billion from $41.5 billion in the first quarter. Analysts had cut their forecasts for earnings after Shell warned July 7 that earnings from its fabled trading division would be “significantly lower” than the prior quarter. Shell’s traders are often one of its biggest profit boosters, and Sawan said in March that its traders haven’t lost money in a single quarter over the past

Read More »

UK Oil Regulator Fines Chrysaor

UK oil and gas regulator the North Sea Transition Authority (NSTA) announced in a statement posted on its site this week that Chrysaor has been fined GBP 150,000 ($200,591) for “vent breaches”. The NSTA noted in its statement that the company exceeded its consent by more than 145 tons and that breaches took place the Armada hub in the central North Sea. The NSTA added in the statement that Chrysaor “failed to identify and then inform the NSTA of the breach for seven months, despite the regulator’s repeated messaging to industry that production needs to become increasingly clean”.   In its statement, the NSTA highlighted that Chrysaor, which it pointed out was acquired by Harbour Energy in 2021, blamed the breach on high winds preventing it from relighting the flare on the Armada platform. The Armada processing hub serves Hawkins, Fleming, Drake, Maria, and Seymour fields and has a capacity of more than 20,000 barrels per day, the NSTA noted. “In January 2022 an unplanned shut-in led to one vent event,” the NSTA said in the statement. “This was followed in August the same year when there was a further event after a start-up from a shutdown; in October high winds caused the flare to extinguish with the operator being unable to relight the flare for three days,” it added. “And in November, the flare was extinguished due to a depressurization and there was a delay in relighting due to the weather conditions. This venting continued for three days,” it continued. “In total, Chrysaor vented 370.046 tons at Armada from 1 January 2022 to 31 December 2022, exceeding its consent by 145.566 tons, almost 65 percent. Venting consent was breached in October 2022 and Chrysaor failed to inform the NSTA until May 2023, which indicated that it was unaware of

Read More »

Valaris Revenue Slips

Valaris Ltd. on Wednesday reported $615.2 million in revenue for the second quarter, down one percent from the prior three-month period due to fewer operating days and lower amortized revenue for its floater fleet. That was partially offset by more operating days and higher average daily revenue for the jackup fleet. Revenue from floaters was $362.9 million, down 10 percent against Q1. Revenue from jackups was $238 million, up 11 percent. ARO Drilling, Valaris’ 50-50 venture with Saudi Arabian Oil Co., contributed $139.9 million, down four percent. Total revenue exclusive of reimbursable items came at $572.3 million, compared to $577.8 million for Q1. Reimbursable revenue was $42.9 million. “Since reporting our first quarter results, we have secured new contracts with associated revenue backlog of more than $1 billion, increasing our total backlog to approximately $4.7 billion”, president and chief executive Anton Dibowitz said. “These awards include attractive contracts for three seventh-generation drillships, and we have now secured work for three of our four drillships with near-term availability”. “As expected, the pipeline of floater opportunities we have discussed in recent quarters are [sic] converting into contracts, and we anticipate additional awards across the industry in the coming months”, Dibowitz added. Valaris noted, “Exclusive of reimbursable items, contract drilling expense decreased to $355 million from $374 million in the first quarter 2025 primarily due to a favorable arbitration outcome related to previously disclosed patent license litigation, which led to a $17 million accrual reversal, as well as lower amortized expense for the floater fleet and a reduction in costs associated with three retired semisubmersibles that were sold for recycling during the quarter”.  While revenue fell, the Hamilton, Bermuda-based driller rebounded from a net loss of $39.2 million for Q1 to a net profit of $114.2 million for Q2. “Net income included tax expense

Read More »

Japan’s Inpex Acquires Stakes in Several Norwegian Sea Assets

Tokyo, Japan-based Inpex Corporation said it has acquired stakes in several oil and gas assets in the Norwegian Sea. The company’s subsidiary Inpex Norway Co. Ltd., through its local Norwegian entity, Inpex Idemitsu Norge AS (IIN), has entered into an agreement with Pandion Energy AS to acquire 10 percent participating interest in each of the Valhall and Hod oil and gas fields and 20 percent participating interest in each of the Mistral and Slagugle oil and gas discoveries. The Valhall and Hod fields are currently producing hydrocarbons, while the Mistral and Slagugle oil and gas discoveries have yet to be developed, Inpex said in a news release. IIN holds numerous licenses in the northern North Sea, the northern Norwegian Sea, and the Barents Sea, and has participated in steady production operations from fields in the North Sea, according to the release. With the acquisition of the new stakes, IIN’s oil and gas production volume will increase to about 27,000 barrels per day (bpd) from approximately 23,000 bpd, the company said. Further, the acquisitions are expected to expand Inpex’s business portfolio in the future through the development of the Mistral and Slagugle oil and gas discoveries, as well as the potential for exploration and development in the surrounding areas, the release said. Bonaparte CCS Project Awarded Major Project Status Earlier in the month, the Bonaparte CCS Assessment Joint Venture was awarded “Major Project” status by the Australian government. The project, which is operated by company subsidiary Inpex Browse E&P Pty Ltd holding a 53 percent stake, is the first offshore carbon capture and storage (CCS) project to receive the designation in the country, according to an earlier statement. TotalEnergies CCS Australia Pty Ltd holds a 26 percent stake while Woodside Energy Ltd holds 21 percent. The award “highlights the project’s recognized potential to

Read More »

Uniper, Tourmaline Ink 234 Bcf Gas Deal

Canada’s Tourmaline Oil Corp. has signed an agreement with German power and gas utility Uniper SE for an eight-year supply of natural gas totaling 234 billion cubic feet (Bcf). “Under the LNG Netback Supply Agreement, Tourmaline will deliver gas to the ANR SE trading hub in southeast Louisiana, USA”, said a joint statement Wednesday. “The contract is based on TTF (Dutch Title Transfer Facility) pricing, providing Tourmaline with international price exposure”. Tourmaline president and chief executive Mike Rose said, “This long-term supply agreement with Uniper supports the continued execution of our market diversification strategy. We’re proud to be supplying Canadian natural gas to meet rising demand in international markets and to enhance European energy security”. Uniper chief commercial officer Carsten Poppinga said the deal showcases Uniper’s “further diversifies Uniper’s LNG supply sourcing portfolio, an important aspect of our European security of supply objectives”. North American Power Exit In the first quarter Uniper sold its North American power assets but retained its gas portfolio and hydrogen-related activities. The divestment contributes to the fulfillment of fair-competition guardrails imposed by the European Commission in approving Uniper’s bailout by the German government late 2022. The sale covered “power purchase and sale contracts and energy management agreements in the North American power markets ERCOT (North, South, West and Houston), WEST (WECC and CAISO) and CENTRAL (MISO and SPP) through a number of transactions with several counterparties”, Uniper said in a press release February 5. It did not name its buyers. Woodside Deal Uniper’s gas deal with Tourmaline comes three months after Uniper committed to one million metric tons per annum (MMtpa) of liquefied natural gas (LNG) for 13 years from Woodside Energy Group Ltd.’s Louisiana LNG. 
The agreement with the Australian company also secures an additional supply of up to one MMtpa from the rest

Read More »

Data center survey: AI gains ground but trust concerns persist

Cost issues: 76% Forecasting future data center capacity requirements: 71% Improving energy performance for facilities equipment: 67% Power availability: 63% Supply chain disruptions: 65% A lack of qualified staff: 67% With respect to capacity planning, there’s been a notable increase in the number of operators who describe themselves as “very concerned” about forecasting future data center capacity requirements. Andy Lawrence, Uptime’s executive director of research, said two factors are contributing to this concern: ongoing strong growth for IT demand, and the often-unpredictable demand that AI workloads are creating. “There’s great uncertainty about … what the impact of AI is going to be, where it’s going to be located, how much of the power is going to be required, and even for things like space and cooling, how much of the infrastructure is going to be sucked up to support AI, whether it’s in a colocation, whether it’s in an enterprise or even in a hyperscale facility,” Lawrence said during a webinar sharing the survey results. The survey found that roughly one-third of data center owners and operators currently perform some AI training or inference, with significantly more planning to do so in the future. As the number of AI-based software deployments increases, information about the capabilities and limitations of AI in the workplace is becoming available. The awareness is also revealing AI’s suitability for certain tasks. According to the report, “the data center industry is entering a period of careful adoption, testing, and validation. Data centers are slow and careful in adopting new technologies, and AI will not be an exception.”

Read More »

Micron unveils PCIe Gen6 SSD to power AI data center workloads

Competitive positioning With the launch of the 9650 SSD PCIe Gen 6, Micron competes with Samsung and SK Hynix enterprise SSD offerings, which are the dominant players in the SSD market. In December last year, SK Hynix announced the development of PS1012 U.2 Gen5 PCIe SSD, for massive high-capacity storage for AI data centers.  The PM1743 is Samsung’s PCIe Gen5 offering in the market, with 14,000 MBps sequential read, designed for high-performance enterprise workloads. According to Faruqui, PCIe Gen6 data center SSDs are best suited for AI inference performance enhancement. However, we’re still months away from large-scale adoption as no current CPU platforms are available with PCIe 6.0 support. Only Nvidia’s Blackwell-based GPUs have native PCIe 6.0 x16 support with interoperability tests in progress. He added that PCIe Gen 6 SSDs will see very delayed adoption in the PC segment and imminent 2025 2H adoption in AI, data centers, high-performance computing (HPC), and enterprise storage solutions. Micron has also introduced two additional SSDs alongside the 9650. The 6600 ION SSD delivers 122TB in an E3.S form factor and is targeted at hyperscale and enterprise data centers looking to consolidate server infrastructure and build large AI data lakes. A 245TB variant is on the roadmap. The 7600 PCIe Gen5 SSD, meanwhile, is aimed at mixed workloads that require lower latency.

Read More »

AI Deployments are Reshaping Intra-Data Center Fiber and Communications

Artificial Intelligence is fundamentally changing the way data centers are architected, with a particular focus on the demands placed on internal fiber and communications infrastructure. While much attention is paid to the fiber connections between data centers or to end-users, the real transformation is happening inside the data center itself, where AI workloads are driving unprecedented requirements for bandwidth, low latency, and scalable networking. Network Segmentation and Specialization Inside the modern AI data center, the once-uniform network is giving way to a carefully divided architecture that reflects the growing divergence between conventional cloud services and the voracious needs of AI. Where a single, all-purpose network once sufficed, operators now deploy two distinct fabrics, each engineered for its own unique mission. The front-end network remains the familiar backbone for external user interactions and traditional cloud applications. Here, Ethernet still reigns, with server-to-leaf links running at 25 to 50 gigabits per second and spine connections scaling to 100 Gbps. Traffic is primarily north-south, moving data between users and the servers that power web services, storage, and enterprise applications. This is the network most people still imagine when they think of a data center: robust, versatile, and built for the demands of the internet age. But behind this familiar façade, a new, far more specialized network has emerged, dedicated entirely to the demands of GPU-driven AI workloads. In this backend, the rules are rewritten. Port speeds soar to 400 or even 800 gigabits per second per GPU, and latency is measured in sub-microseconds. The traffic pattern shifts decisively east-west, as servers and GPUs communicate in parallel, exchanging vast datasets at blistering speeds to train and run sophisticated AI models. 
The design of this network is anything but conventional: fat-tree or hypercube topologies ensure that no single link becomes a bottleneck, allowing thousands of

Read More »

ABB and Applied Digital Build a Template for AI-Ready Data Centers

Toward the Future of AI Factories The ABB–Applied Digital partnership signals a shift in the fundamentals of data center development, where electrification strategy, hyperscale design and readiness, and long-term financial structuring are no longer separate tracks but part of a unified build philosophy. As Applied Digital pushes toward REIT status, the Ellendale campus becomes not just a development milestone but a cornerstone asset: a long-term, revenue-generating, AI-optimized property underpinned by industrial-grade power architecture. The 250 MW CoreWeave lease, with the option to expand to 400 MW, establishes a robust revenue base and validates the site’s design as AI-first, not cloud-retrofitted. At the same time, ABB is positioning itself as a leader in AI data center power architecture, setting a new benchmark for scalable, high-density infrastructure. Its HiPerGuard Medium Voltage UPS, backed by deep global manufacturing and engineering capabilities, reimagines power delivery for the AI era, bypassing the limitations of legacy low-voltage systems. More than a component provider, ABB is now architecting full-stack electrification strategies at the campus level, aiming to make this medium-voltage model the global standard for AI factories. What’s unfolding in North Dakota is a preview of what’s coming elsewhere: AI-ready campuses that marry investment-grade real estate with next-generation power infrastructure, built for a future measured in megawatts per rack, not just racks per row. As AI continues to reshape what data centers are and how they’re built, Ellendale may prove to be one of the key locations where the new standard was set.

Read More »

Amazon’s Project Rainier Sets New Standard for AI Supercomputing at Scale

Supersized Infrastructure for the AI Era As AWS deploys Project Rainier, it is scaling AI compute to unprecedented heights, while also laying down a decisive marker in the escalating arms race for hyperscale dominance. With custom Trainium2 silicon, proprietary interconnects, and vertically integrated data center architecture, Amazon joins a trio of tech giants, alongside Microsoft’s Project Stargate and Google’s TPUv5 clusters, who are rapidly redefining the future of AI infrastructure. But Rainier represents more than just another high-performance cluster. It arrives in a moment where the size, speed, and ambition of AI infrastructure projects have entered uncharted territory. Consider the past several weeks alone: On June 24, AWS detailed Project Rainier, calling it “a massive, one-of-its-kind machine” and noting that “the sheer size of the project is unlike anything AWS has ever attempted.” The New York Times reports that the primary Rainier campus in Indiana could include up to 30 data center buildings. Just two days later, Fermi America unveiled plans for the HyperGrid AI campus in Amarillo, Texas on a sprawling 5,769-acre site with potential for 11 gigawatts of power and 18 million square feet of AI data center capacity. And on July 1, Oracle projected $30 billion in annual revenue from a single OpenAI cloud deal, tied to the Project Stargate campus in Abilene, Texas. As Data Center Frontier founder Rich Miller has observed, the dial on data center development has officially been turned to 11. Once an aspirational concept, the gigawatt-scale campus is now materializing—15 months after Miller forecasted its arrival. “It’s hard to imagine data center projects getting any bigger,” he notes. “But there’s probably someone out there wondering if they can adjust the dial so it goes to 12.” Against this backdrop, Project Rainier represents not just financial investment but architectural intent. Like Microsoft’s Stargate buildout in

Read More »

Google and CTC Global Partner to Fast-Track U.S. Power Grid Upgrades

On June 17, 2025, Google and CTC Global announced a joint initiative to accelerate the deployment of high-capacity power transmission lines using CTC’s U.S.-manufactured ACCC® advanced conductors. The collaboration seeks to relieve grid congestion by rapidly upgrading existing infrastructure, enabling greater integration of clean energy, improving system resilience, and unlocking capacity for hyperscale data centers. The effort represents a rare convergence of corporate climate commitments, utility innovation, and infrastructure modernization aligned with the public interest. As part of the initiative, Google and CTC issued a Request for Information (RFI) with responses due by July 14. The RFI invites utilities, state energy authorities, and developers to nominate transmission line segments for potential fast-tracked upgrades. Selected projects will receive support in the form of technical assessments, financial assistance, and workforce development resources. While advanced conductor technologies like ACCC® can significantly improve the efficiency and capacity of existing transmission corridors, technological innovation alone cannot resolve the grid’s structural challenges. Building new or upgraded transmission lines in the U.S. often requires complex permitting from multiple federal, state, and local agencies, and frequently faces legal opposition, especially from communities invoking Not-In-My-Backyard (NIMBY) objections. Today, the average timeline to construct new interstate transmission infrastructure stretches between 10 and 12 years, an untenable lag in an era when grid reliability is under increasing stress. In 2024, the Federal Energy Regulatory Commission (FERC) reported that more than 2,600 gigawatts (GW) of clean energy and storage projects were stalled in the interconnection queue, waiting for sufficient transmission capacity. 
The consequences affect not only industrial sectors like data centers but also residential areas vulnerable to brownouts and peak load disruptions.

What is the New Technology?

At the center of the initiative is CTC Global’s ACCC® (Aluminum Conductor Composite Core) advanced conductor, a next-generation overhead transmission technology engineered to boost grid

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Microsoft president Brad Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet this non-tech company has become a regular at showing off technology at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work its customers need. That has been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, while the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.) John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year saw rapid innovation, and this year will see the same. That makes it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize in their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as the frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for businesses and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
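The “LLM as a judge” idea mentioned above can be sketched in a few lines. This is an illustrative sketch only, not something from the article: the three model functions are hypothetical stand-ins for real LLM API calls, and the judge here is reduced to a simple majority vote over the candidate answers, the cheapest baseline for combining several models.

```python
# Illustrative sketch of a multi-model ensemble with a "judge" step.
# The model_* functions are hypothetical stand-ins for real LLM calls.
from collections import Counter

def model_a(prompt: str) -> str:
    return "Paris"

def model_b(prompt: str) -> str:
    return "Paris"

def model_c(prompt: str) -> str:
    return "Lyon"  # one model disagrees

def judge(prompt: str, candidates: list[str]) -> str:
    # A real judge would be another LLM scoring each candidate;
    # here we approximate it with a majority vote.
    return Counter(candidates).most_common(1)[0][0]

def ensemble_answer(prompt: str, models) -> str:
    # Query every model, then let the judge pick the winner.
    candidates = [m(prompt) for m in models]
    return judge(prompt, candidates)

answer = ensemble_answer("Capital of France?", [model_a, model_b, model_c])
```

In practice the judge model and the answering models can be different (and differently priced), which is why falling model costs make three-or-more-model setups like this economical.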

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models using these techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model, because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had already released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see whether knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »