Informatica advances its AI to transform 7-day enterprise data mapping nightmares into 5-minute coffee breaks

Data platform vendor Informatica is expanding its AI capabilities as generative AI continues to raise enterprise data requirements.

Informatica is no stranger to the world of AI; the company debuted its first Claire AI tool for data in 2018. In the modern generative AI era, it has expanded the technology with improved natural language capabilities in Claire GPT, which debuted in 2023 as part of Informatica’s Intelligent Data Management Cloud (IDMC). The fundamental premise is making it easier, faster and more intelligent to access and use data. It’s a value proposition that has made the company an attractive acquisition target, with Salesforce announcing in May that it intends to acquire the company for $8 billion.

While that acquisition proceeds through approvals and regulatory processes, enterprises still face data challenges that need to be addressed. Today, Informatica announced its Summer 2025 release, showcasing how the company’s AI journey over the past seven years has evolved to address enterprise data needs.

The update introduces natural language interfaces that can build complex data pipelines from simple English commands, AI-powered governance that automatically tracks data lineage through to machine learning models, and auto-mapping capabilities that compress week-long schema mapping projects into minutes.


The release addresses a persistent enterprise data challenge that generative AI has made more urgent. 

“The thing that has not changed is the data continues to be fragmented in the enterprise and that fragmentation is still at a rapid scale, it’s not converging whatsoever,” Pratik Parekh, SVP and GM of Cloud Integration at Informatica, told VentureBeat. “So that means that you have to bring all of this data together.”

From machine learning to gen AI for enterprise data

To better understand what Informatica is doing now, it’s critical to understand how the company got to this point.

Informatica’s initial Claire implementation in 2018 focused on practical machine learning (ML) problems that plagued enterprise data teams. The platform used accumulated metadata from thousands of customer implementations to provide design-time recommendations, runtime optimizations and operational insights.

The foundation was built on what Parekh calls a “metadata system of intelligence” containing 40 petabytes of enterprise data patterns. This wasn’t abstract research, but instead applied machine learning that addressed specific bottlenecks in data integration workflows.

That metadata system of intelligence has continued to improve over the years, and in the summer 2025 release, the platform includes auto-mapping capabilities that solve a persistent data problem. This feature automatically maps fields between different enterprise systems using machine learning algorithms trained on millions of existing data integration patterns.

“If you have worked with data management, you know mapping is pretty time-consuming work,” Parekh said.

Auto-mapping takes data from a source system, such as SAP, and combines it with other enterprise data to create a master data management (MDM) record. For enterprise data professionals, the MDM record is the so-called ‘golden record’: the intended source of truth about a given entity. The auto-mapping feature understands the schemas of the different systems and creates the correct data fields in the MDM record.
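
Informatica has not published the internals of its auto-mapping models, but the baseline version of the problem is simple to illustrate: score every source field against every target field and keep the best match above a threshold. The sketch below is a minimal illustration using plain string similarity; the field names, the 0.7 cutoff and the helper functions are all hypothetical, and a string metric is a deliberately crude stand-in for models trained on millions of prior integration patterns.

```python
# Minimal auto-mapping sketch: match source fields to target fields by
# normalized-name similarity. Hypothetical throughout; not Informatica's model.
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase and strip separators so CUST_NAME can match customer_name."""
    return name.lower().replace("_", "").replace("-", "")

def auto_map(source_fields: list[str], target_fields: list[str],
             threshold: float = 0.7) -> dict[str, str]:
    """Return a source -> target mapping, keeping best matches above threshold."""
    mapping: dict[str, str] = {}
    for src in source_fields:
        best, score = None, 0.0
        for tgt in target_fields:
            s = SequenceMatcher(None, normalize(src), normalize(tgt)).ratio()
            if s > score:
                best, score = tgt, s
        if best is not None and score >= threshold:
            mapping[src] = best
    return mapping

# Example: an SAP-style extract mapped onto an MDM golden-record schema.
print(auto_map(["KUNNR", "CUST_NAME", "EMAIL_ADDR"],
               ["customer_id", "customer_name", "email_address"]))
# {'CUST_NAME': 'customer_name', 'EMAIL_ADDR': 'email_address'}
```

Note what the sketch cannot do: the SAP customer key KUNNR bears no lexical resemblance to customer_id and stays unmapped. Closing exactly that semantic gap is what models trained on accumulated mapping metadata are for.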

The results demonstrate the value of Informatica’s long-term investment in AI. Tasks that previously required deep technical expertise and significant time investment now happen automatically with high accuracy rates.

“Our professional services have done some work mapping that typically takes seven days to build,” Parekh said. “This is now being done in less than five minutes.”

A core element of any modern AI system is a natural language interface, typically accompanied by some form of copilot to assist users in executing tasks. In that regard, Informatica is no different from any other enterprise software vendor. Where it differs, though, is in its underlying metadata and machine learning technology.

The summer 2025 release enhances Claire Copilot for Data Integration, which became generally available in May 2025 after nine months in early access and preview. The copilot enables users to type requests, such as “bring all Salesforce data into Snowflake,” and have the system orchestrate the necessary pipeline components. 

The summer 2025 release adds new interactive capabilities to the copilot, including enhanced question-and-answer features that help users understand how to use the product, with answers sourced directly from documentation and help articles.

The technical implementation required developing specialized language models fine-tuned for data management tasks, using what Parekh calls “Informatica grammar.”

“The natural language translated into Informatica grammar is where our secret sauce comes in,” Parekh explained. “Our whole platform is a metadata driven platform. So underneath we have our own grammar as to how this describes the mapping, what describes the data quality rule, what describes an MDM asset.”
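
The grammar itself is proprietary, but the pattern Parekh describes, compiling a natural-language request into declarative metadata rather than into executable code, can be sketched. Everything below, including the spec schema and its field names, is a hypothetical illustration of that idea:

```python
# Hypothetical sketch: the copilot's job is translation, not code generation.
# A natural-language request compiles into a declarative metadata object that
# the platform can validate, version and execute. The schema is invented for
# illustration; Informatica's actual grammar is not public.
request = "bring all Salesforce data into Snowflake"

pipeline_spec = {
    "kind": "mapping",
    "source": {"connector": "salesforce", "objects": ["*"]},
    "target": {"connector": "snowflake", "schema": "RAW"},
    "steps": [
        {"op": "extract", "mode": "full"},
        {"op": "map_fields", "strategy": "auto"},  # auto-mapping, as above
        {"op": "load", "write_mode": "upsert"},
    ],
}
```

Targeting metadata instead of generated code has a governance payoff: a spec like this can be inspected, diffed and policy-checked before anything touches production data.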

Market timing: Enterprise AI demands explode

The timing of Informatica’s AI evolution aligns with fundamental changes in how enterprises consume data. 

Brett Roscoe, SVP and GM of Cloud Data Governance and Cloud Ops at Informatica, noted that the biggest change in the enterprise data landscape over the past several years has been scale: more people than ever need access to more data. Previously, data requests came primarily from centralized analytics teams with technical expertise; in the gen AI era, those requests come from everywhere.

“All of a sudden, with the world of gen AI, you’ve got your marketing team and your finance team all asking for data to go drive their generative AI projects,” Roscoe explained.

The summer release’s AI Governance Inventory and Workflows capabilities tackle this challenge directly. The platform now automatically catalogs AI models, tracks their data sources and maintains lineage from source systems through to AI applications. This addresses enterprise concerns about maintaining visibility and control as AI projects proliferate beyond traditional analytics teams.
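
The article does not show the inventory schema, but the feature description implies what a cataloged entry must capture: the model, the datasets that feed it, and lineage edges back to source systems. Here is a minimal hypothetical record, with every name invented for illustration:

```python
# Hypothetical shape of an AI-inventory record: the model, its training data
# and the lineage hops back to source systems. Not Informatica's schema.
from dataclasses import dataclass, field

@dataclass
class LineageEdge:
    upstream: str     # e.g. "sap.erp.customers"
    downstream: str   # e.g. "snowflake.analytics.customer_features"
    produced_by: str  # the mapping or job that created the hop

@dataclass
class AIModelRecord:
    name: str
    owner: str
    training_datasets: list[str] = field(default_factory=list)
    lineage: list[LineageEdge] = field(default_factory=list)

churn_model = AIModelRecord(
    name="churn-predictor-v3",
    owner="marketing-analytics",
    training_datasets=["snowflake.analytics.customer_features"],
    lineage=[LineageEdge("sap.erp.customers",
                         "snowflake.analytics.customer_features",
                         "m_customer_golden_record")],
)
```

With records like this, “which source systems does this model depend on?” becomes a graph query rather than an archaeology project.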

The release also introduces data quality rules as an API, enabling real-time data validation within AI applications rather than batch processing after data movement. This architectural shift allows AI applications to verify data quality at the point of consumption, addressing governance challenges that emerge when non-technical teams launch AI projects.
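
No endpoint details appear in the release description, so the sketch below only illustrates the architectural shift: the consuming application asks a rules API for a verdict at read time instead of trusting a nightly batch pass. The URL, payload and response fields are all assumptions.

```python
# Hypothetical point-of-consumption validation: call a data-quality-rule
# endpoint before handing a record to an AI application. Endpoint, payload
# and response shape are illustrative, not a documented Informatica API.
import requests

def is_fit_for_use(record: dict) -> bool:
    resp = requests.post(
        "https://dq.example.com/api/v1/rules/customer_golden_record/evaluate",
        json={"record": record},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json().get("passed", False)

record = {"customer_id": "C-1042", "email": "ana@example.com"}
if is_fit_for_use(record):
    pass  # hand the record to the AI application
else:
    pass  # quarantine the record and alert a data steward
```

The difference from batch validation is where a failure surfaces: a bad record is stopped before the model sees it, rather than being discovered later in a downstream report.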

Technical evolution: From automation to orchestration

The summer 2025 release demonstrates how Informatica’s AI capabilities have evolved from simple automation to sophisticated orchestration. The enhanced Claire copilot system can break down complex natural language requests into multiple coordinated steps while maintaining human oversight throughout the process.

The system also provides summarization capabilities for existing data workflows, addressing knowledge transfer challenges that plague enterprise data teams. Users can ask the copilot to explain complex integration flows built by previous developers, reducing institutional knowledge dependencies.

The release’s support for Model Context Protocol (MCP) and new generative AI connectors for Nvidia NIM, Databricks Mosaic AI and Snowflake Cortex AI demonstrate how the company’s AI infrastructure adapts to emerging technologies while maintaining enterprise governance standards.

Strategic implications: Maturity wins in enterprise AI for data

Informatica’s seven-year AI journey, culminating in the enhancements for the summer 2025 release, illustrates a fundamental truth about enterprise AI adoption: sustained domain expertise matters.

The company’s approach validates the strategy of building specialized AI capabilities for specific enterprise problems rather than pursuing general-purpose AI solutions. The summer release’s AI-powered lineage discovery and governance workflows represent capabilities that emerge only from years of understanding how enterprises actually manage data at scale.

“If you didn’t have data management practice before gen AI came around, you’re hurting,” Roscoe noted. “And if you had a data management practice when gen AI came around, you’re still scrambling.”

As enterprises move from AI experimentation to production deployment, Informatica’s approach validates a fundamental truth: in enterprise AI, maturity and specialization matter more than novelty. Enterprises shouldn’t just consider new AI-powered features, but AI capabilities that understand and solve the complex realities of enterprise data management.
