
Carney, Poilievre Scrap Over Energy and Housing in Canada Debate

Liberal Party Leader Mark Carney argued that he represents change from Justin Trudeau’s nine years in power as he fended off attacks from his rivals during the final televised debate of Canada’s election.

“Look, I’m a very different person from Justin Trudeau,” Carney said in response to comments from Conservative Leader Pierre Poilievre, his chief opponent in the election campaign that concludes April 28.

Carney’s Liberals lead by several percentage points in most polls, marking a stunning reversal from the start of this year, when Trudeau was still the party’s leader and Poilievre’s Conservatives were ahead by more than 20 percentage points in some surveys.

Trudeau’s resignation and US President Donald Trump’s economic and sovereignty threats against Canada have upended the race. Poilievre sought to remind Canadians of their complaints about the Liberal government, while Carney tried to distance himself from Trudeau’s record. 

Poilievre argued that Carney was an adviser to Trudeau’s Liberals during a time when energy projects were stymied and the cost of living soared — especially housing prices.

Carney, 60, responded that he has been prime minister for just a month, and pointed to moves he made to reverse some of Trudeau’s policies, such as scrapping the carbon tax on consumer fuels. As for inflation, Carney noted that it was well under control when he was governor of the Bank of Canada. 

“I know it may be difficult, Mr. Poilievre,” Carney told him. “You spent years running against Justin Trudeau and the carbon tax and they’re both gone.”

“Well, you’re doing a good impersonation of him, with the same policies,” Poilievre shot back.

Trudeau announced in January that he was stepping down as prime minister and Carney was sworn in as his replacement on March 14. He triggered an election nine days later.

“The question you have to ask is: after a decade of Liberal promises, can you afford food?” Poilievre said during one segment. “Is your housing more affordable than it used to be? What is your cost of living like compared to what it was a decade ago?”

When it comes to issues such as housing, “We need a change, and you, sir, are not a change,” Poilievre told Carney.

Trade Retaliation 

Polls suggest Trump’s aggression toward Canada is a major issue for voters. The debate opened with a segment on the trade war and candidates broadly agreed on Canada taking a tough response. 

Carney made clear that in negotiating with Trump, his government has already moved off the principle of “dollar-for-dollar” counter-tariffs as retaliation. Instead, Carney said, he’s focusing on measures that will have maximum impact in the US but minimum impact on Canada.

“We have to recognize, and I think we all do, the United States economy is more than 10 times the size of the Canadian economy,” Carney said.

The nationally televised event was a critical opportunity for Carney’s opponents to make up ground. That put him under attack from all sides, including from the leader of the left-wing New Democratic Party, Jagmeet Singh, and the head of the sovereigntist Bloc Quebecois, Yves-Francois Blanchet.

During a lengthy segment on oil infrastructure, Poilievre charged that the Liberals had not done enough to get energy exports to markets other than the US. He said the government’s regulatory regime makes building pipelines too difficult, which “effectively empowers Donald Trump to have a total monopoly on our single biggest export.”

Singh, looking incredulous, pointed out that Trudeau’s government had nationalized the Trans Mountain pipeline, which exports oil from Canada’s west coast, and spent tens of billions of dollars to expand it. “The Liberals bought a pipeline, they built a pipeline,” Singh said. “I don’t know what Pierre is complaining about.”

“I’m interested in getting energy infrastructure built,” Carney insisted. “That means pipelines, that means carbon capture storage, that means electricity grids.”

The first debate, which took place Wednesday, was conducted entirely in French — making it a test for Carney, who is weaker than his opponents in the language. The French-speaking province of Quebec is Canada’s second largest by population and an important battleground region in the election. The Liberal leader made it through that debate largely unscathed.

Still, there were a few notable moments. In one exchange, Carney pledged that his government would produce more oil in order to reduce Canada’s reliance on the US, but that it would need to be “low-carbon” oil.

Carney also said his government would maintain “a cap on all types of immigration for a period of time in order to increase our capacity,” particularly around housing and other social supports for immigrants.

Last fall, Trudeau’s government slashed its permanent immigration target by 21%, aiming for a total of 395,000 for 2025. It also put a limit on international student visas and added restrictions to the use of foreign labor. 


Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Agentic AI: What now, what next?

Agentic AI burst onto the scene with its promises of streamlining operations and accelerating productivity. But what’s real and what’s hype when it comes to deploying agentic AI? This Special Report examines the state of agentic AI, the challenges organizations are facing in deploying it, and the lessons learned from success

Read More »

AMD to build two more supercomputers at Oak Ridge National Laboratory

Lux is engineered to train, refine, and deploy AI foundation models that accelerate scientific and engineering progress. Its advanced architecture supports data-intensive and model-centric workloads, thereby enhancing AI-driven research capabilities. Discovery differs from Lux in that it uses Instinct MI430X GPUs instead of the 300 series. The MI400 Series is

Read More »

OPEC+ 8 Decide to Implement Output Adjustment

A statement posted on OPEC’s website on Sunday revealed that Saudi Arabia, Russia, Iraq, the United Arab Emirates (UAE), Kuwait, Kazakhstan, Algeria, and Oman “decided to implement a production adjustment of 137,000 barrels per day” in a virtual meeting held that day. “The eight OPEC+ countries, which previously announced additional voluntary adjustments in April and November 2023 … met virtually on 2 November 2025, to review global market conditions and outlook,” the statement noted. “In view of a steady global economic outlook and current healthy market fundamentals, as reflected in the low oil inventories, the eight participating countries decided to implement a production adjustment of 137,000 barrels per day from the 1.65 million barrels per day additional voluntary adjustments announced in April 2023,” it added. The statement said this adjustment will be implemented in December 2025. It also announced that, “beyond December, due to seasonality, the eight countries … decided to pause the production increments in January, February, and March 2026”. According to a table accompanying the statement, Saudi Arabia and Russia’s December adjustment amounts to 41,000 barrels per day, each. Iraq’s comes to 18,000 barrels per day, the UAE’s is 12,000 barrels per day, Kuwait’s is 10,000 barrels per day, Kazakhstan’s is 7,000 barrels per day, Algeria’s is 4,000 barrels per day, and Oman’s is 4,000 barrels per day, the table outlined. The table highlighted that December 2025, January 2026, February 2026, and March 2026  “required production” is 10.103 million barrels per day for Saudi Arabia, 9.574 million barrels per day for Russia, 4.273 million barrels per day for Iraq, 3.411 million barrels per day for the UAE, 2.580 million barrels per day for Kuwait, 1.569 million barrels per day for Kazakhstan, 971,000 barrels per day for Algeria, and 811,000 barrels per day for Oman. “The eight participating countries

Read More »

The people’s power plant: How Puerto Rico turned home batteries into a reliable grid asset

In 2017, Puerto Rico was plunged into darkness. Hurricane Maria devastated the island, collapsing the power grid and triggering the longest blackout in American history. Millions were left without electricity. This event exposed the deep vulnerabilities of an electrical system strained by decades of underinvestment, aging infrastructure and increased exposure to extreme weather. Rather than rebuild what was lost, Puerto Ricans turned the devastation into an opportunity for growth and innovation. In 2019, the island enacted Act 17, restructuring its energy system and setting an ambitious target of 100% renewable energy by 2050. This shift spurred a rapid surge in home solar panel and battery installations, as the population sought greater energy resilience and independence. Act 17 also paved the way for grid modernization by separating the island’s power generation assets from its transmission and distribution system (T&D), a move that allowed LUMA Energy, the island’s grid operator, to assume control of the T&D system in 2021, under a 15-year public-private partnership. This brought the private expertise and resources needed to upgrade the island’s grid. As more homes became “mini power plants”, generating electricity from rooftop solar and storing it in batteries, the island’s grid faced a new challenge: modernizing to manage a decentralized, variable system. The energy landscape was rapidly shifting away from the traditional model – dominated by a few large power plants – toward thousands of smaller, distributed generation sources. From crisis to opportunity: The challenges of grid modernization The challenges faced by Puerto Rico are not unique. Across the U.S., much of the power grid was built for a different era, one defined by a few large fossil-fuel power plants, predictable energy demand and a more stable climate. Today, the grid is under severe pressure from rising electricity consumption, volatile weather and the rapid integration of

Read More »

Beyond savings: How behavioral energy programs are powering peak demand flexibility

For more than a decade, behavioral energy efficiency programs—like Home Energy Reports (HERs)—have helped millions of households use energy more wisely. These simple, data-driven communications offer personalized insights, neighbor comparisons and practical tips that nudge everyday behavior in a more efficient direction. But as the energy grid faces new pressures, these programs are evolving—and their impact is growing. At Franklin Energy, we’re leaning into that evolution. Because in today’s energy landscape, it’s not just about how much energy we use. It’s about when. Meeting the moment: Behavioral flexibility in action Grid operators and utilities are under increasing strain as demand spikes during key hours—especially on hot summer afternoons or cold winter evenings. At the same time, more renewables are coming online, adding variability to the system. Behavioral programs can help bridge this gap, not just by reducing consumption, but by shifting it. We’re now using NGAGE Discover, our advanced analytics platform, to assess how HERs and similar tools influence when people use electricity, not just how much. Through rigorous modeling, we’ve uncovered real, measurable changes in peak demand, especially when reports are designed with timing in mind. What the data shows In regions across the country, we analyzed high-frequency interval data from residential behavioral programs. Our analysis combined econometrics with machine learning to capture subtle shifts in usage patterns. The takeaway? Customers respond. In many cases, they’re using less energy during high-stress hours—without any hardware or incentives. Just better information, delivered at the right time. Although this is helpful information, it’s important to note that behavior is not uniform. Results vary by region, household makeup and even messaging style. But with the right design, behavioral programs offer a cost-effective path to demand flexibility. And they’re ready to scale. 
Centering equity from the start To fully unlock this potential, we must design

Read More »

Future-proofing utility communications: The case for multi-carrier SIMs

Every day, the utility sector moves toward a more connected, data-driven environment. Smart meters stand as a foundational element of this transition — but only if they remain reliable in the long term. However, the reliability of these devices is only as strong as their connection.  Early smart meters often relied on single-carrier SIM cards, which left them vulnerable to cellular outages and coverage gaps that could disrupt the flow of essential data. For critical infrastructure, that’s a risk utilities cannot afford. To solve this, leading original equipment manufacturers (OEMs) are now integrating multi-carrier eSIMs. This modern approach ensures resilient, long-lasting connectivity that allows a smart meter to switch between networks to maintain a connection automatically. It’s the key to delivering the dependable performance utilities require from devices expected to last 15 to 20 years in the field. Why smart meter OEMs have hesitated to go multi-carrier Despite the advantages of multi-carrier connectivity, many OEMs have approached multi-IMSI SIM technologies with caution. Varying concerns have been responsible for slowing adoption: Battery drain during network switching. Earlier SIM designs consumed significant power when scanning and switching networks, shortening the device’s battery life — an unacceptable trade-off for meters expected to last 15–20 years. Cost premiums. Multi-carrier SIMs once came with higher costs compared to single-carrier versions, making them harder to justify at scale. Loss of control. Some OEMs with preferred carrier relationships worried that switching to a multi-carrier SIM could impact those arrangements. Regulatory complexity. Permanent roaming restrictions and compliance requirements complicated global rollouts. However, modern eSIM technology has effectively resolved these past challenges and paved the way for wider adoption. 
The advantages of a modern multi-carrier approach For smart gas and water meters, it isn’t just the ability to connect to multiple networks that matters — it’s the ability to

Read More »

How AMI 2.0 is powering the grid of the future

Why AMI 2.0, and why now The electric grid is evolving faster than ever. The rise of Distributed Energy Resources (DERs), Electric Vehicles (EVs) and the electrification of homes are reshaping how energy is produced, delivered and consumed. The traditional AMI 1.0 system designed for a past generation of metering is no longer enough. Experts at EnerNex, which provides consulting, planning and technical analysis for utilities, note that these shifts are driving the need for smarter, faster and more adaptable metering systems. EnerNex works with utilities to assess technology options, design deployment strategies and integrate advanced metering into grid operations. AMI 2.0 represents a significant step forward, offering capabilities that align with today’s increasingly digital and distributed grid. Empowering customers through real-time insights & transforming utility operations AMI 2.0 delivers more than meter readings; it delivers potential data insights that will benefit utility function and customer savings. By providing real-time, detailed energy data, customers can see exactly how and when they use energy. This visibility enables informed decisions, energy efficiency improvements, and cost savings through dynamic pricing and demand response programs. The system’s grid-edge analytics will allow homeowners to identify and address potential issues, such as failing equipment or inefficient energy use, before they become costly problems. Advanced analytics embedded in the meters themselves allow utilities to detect service quality issues before they escalate, manage DERs more effectively and optimize grid operations. The result is greater efficiency, reliability and resilience across the system. With this data-driven visibility, utilities can operate proactive, predictive grids rather than reactive ones. From 1.0 to 2.0: A technological evolution Technology has progressed dramatically since the first generation of smart meters. 
AMI 2.0 devices feature higher sampling rates, near-real-time communication and enhanced data processing that convert raw data into actionable insights. They also enable

Read More »

China Buyers Shun Russian Oil amid Sanctions

Chinese oil refiners are shunning Russian shipments after the US and others blacklisted Moscow’s top producers and some of its customers. State-owned giants such as Sinopec and PetroChina Co. are staying on the sidelines, having canceled some Russian cargoes in the wake of US sanctions on Rosneft PJSC and Lukoil PJSC last month, according to traders. Smaller private refiners, dubbed teapots, are also holding off, fearful of attracting similar penalties to those faced by Shandong Yulong Petrochemical Co., which was recently blacklisted by the UK and European Union. The Russian crudes affected include the widely-favored ESPO grade, which has seen prices plunge. Consultancy Rystad Energy AS estimates some 400,000 barrels a day, or as much as 45 percent of China’s total oil imports from Russia, are affected by the buyers’ strike.  Russia has cemented itself as China’s biggest foreign supplier, in part because its oil is so heavily discounted due to the penalties imposed by other countries after the invasion of Ukraine.  The US and its allies are now ratcheting up those sanctions, on both Russian producers and their customers, in a bid to stop the war by choking off Moscow’s oil revenues. China is the world’s biggest crude importer, and any constraints on sourcing from its neighbor are likely to work to the benefit of other suppliers.  Those could include the US, which agreed a landmark trade truce with Beijing at a meeting last week between leaders Donald Trump and Xi Jinping. But the sanctions aren’t a total loss for Moscow. Blacklisted Yulong, which has had cargoes canceled by western suppliers, has turned heavily to Russian oil because of a lack of other options.  Meanwhile, other private refiners are watching developments and refraining from actions that could trigger similar sanctions, according to Rystad. In any case, teapots are running up

Read More »

Supermicro Unveils Data Center Building Blocks to Accelerate AI Factory Deployment

Supermicro has introduced a new business line, Data Center Building Block Solutions (DCBBS), expanding its modular approach to data center development. The offering packages servers, storage, liquid-cooling infrastructure, networking, power shelves and battery backup units (BBUs), DCIM and automation software, and on-site services into pre-validated, factory-tested bundles designed to accelerate time-to-online (TTO) and improve long-term serviceability. This move represents a significant step beyond traditional rack integration; a shift toward a one-stop, data-center-scale platform aimed squarely at the hyperscale and AI factory market. By providing a single point of accountability across IT, power, and thermal domains, Supermicro’s model enables faster deployments and reduces integration risk—the modern equivalent of a “single throat to choke” for data center operators racing to bring GB200/NVL72-class racks online. What’s New in DCBBS DCBBS extends Supermicro’s modular design philosophy to an integrated catalog of facility-adjacent building blocks, not just IT nodes. By including critical supporting infrastructure—cooling, power, networking, and lifecycle software—the platform helps operators bring new capacity online more quickly and predictably. According to Supermicro, DCBBS encompasses: Multi-vendor AI system support: Compatibility with NVIDIA, AMD, and Intel architectures, featuring Supermicro-designed cold plates that dissipate up to 98% of component-level heat. In-rack liquid-cooling designs: Coolant distribution manifolds (CDMs) and CDUs rated up to 250 kW, supporting 45 °C liquids, alongside rear-door heat exchangers, 800 GbE switches (51.2 Tb/s), 33 kW power shelves, and 48 V battery backup units. Liquid-to-Air (L2A) sidecars: Each row can reject up to 200 kW of heat without modifying existing building hydronics—an especially practical design for air-to-liquid retrofits. 
Automation and management software: SuperCloud Composer for rack-scale and liquid-cooling lifecycle management SuperCloud Automation Center for firmware, OS, Kubernetes, and AI pipeline enablement Developer Experience Console for self-service workflows and orchestration End-to-end services: Design, validation, and on-site deployment options—including four-hour response service levels—for both greenfield builds

Read More »

Investments Anchor Vertiv’s Growth Strategy as AI-Driven Data Center Orders Surge 60% YoY

New Acquisitions and Partner Awards Vertiv’s third-quarter financial performance was underscored by a series of strategic acquisitions and ecosystem recognitions that expand the company’s technological capabilities and market reach amid AI-driven demand. Acquisition of Waylay NV: AI and Hyperautomation for Infrastructure Intelligence On August 26, Vertiv announced its acquisition of Waylay NV, a Belgium-based developer of generative AI and hyperautomation software. The move bolsters Vertiv’s portfolio with AI-driven monitoring, predictive services, and performance optimization for digital infrastructure. Waylay’s automation platform integrates real-time analytics, orchestration, and workflow automation across diverse connected assets and cloud services—enabling predictive maintenance, uptime optimization, and energy management across power and cooling systems. “With the addition of Waylay’s technology and software-focused team, Vertiv will accelerate its vision of intelligent infrastructure—data-driven, proactive, and optimized for the world’s most demanding environments,” said CEO Giordano Albertazzi. Completion of Great Lakes Acquisition: Expanding White Space Integration Just days earlier, as alluded to above, Vertiv finalized its $200 million acquisition of Great Lakes Data Racks & Cabinets, a U.S.-based manufacturer of enclosures and integrated rack systems. The addition expands Vertiv’s capabilities in high-density, factory-integrated white space solutions; bridging power, cooling, and IT enclosures for hyperscale and edge data centers alike. Great Lakes’ U.S. and European manufacturing footprint complements Vertiv’s global reach, supporting faster deployment cycles and expanded configuration flexibility.  
Albertazzi noted that the acquisition “enhances our ability to deliver comprehensive infrastructure solutions, furthering Vertiv’s capabilities to customize at scale and configure at speed for AI and high-density computing environments.” 2024 Partner Awards: Recognizing the Ecosystem Behind Growth Vertiv also spotlighted its partner ecosystem in August with its 2024 North America Partner Awards. The company recognized 11 partners for 2024 performance, growth, and AI execution across segments: Partner of the Year – SHI for launching a customer-facing high-density AI & Cyber Labs featuring

Read More »

QuEra’s Quantum Leap: From Neutral-Atom Breakthroughs to Hybrid HPC Integration

The race to make quantum computing practical – and commercially consequential – took a major step forward this fall, as Boston-based QuEra Computing announced new research milestones, expanded strategic funding, and an accelerating roadmap for hybrid quantum-classical supercomputing. QuEra’s Chief Commercial Officer Yuval Boger joined the Data Center Frontier Show to discuss how neutral-atom quantum systems are moving from research labs into high-performance computing centers and cloud environments worldwide. NVIDIA Joins Google in Backing QuEra’s $230 Million Round In early September, QuEra disclosed that NVentures, NVIDIA’s venture arm, has joined Google and others in expanding its $230 million Series B round. The investment deepens what has already been one of the most active collaborations between quantum and accelerated-computing companies. “We already work with NVIDIA, pairing our scalable neutral-atom architecture with its accelerated-computing stack to speed the arrival of useful, fault-tolerant quantum machines,” said QuEra CEO Andy Ory. “The decision to invest in us underscores our shared belief that hybrid quantum-classical systems will unlock meaningful value for customers sooner than many expect.” The partnership spans hardware, software, and go-to-market initiatives. QuEra’s neutral-atom machines are being integrated into NVIDIA’s CUDA-Q software platform for hybrid workloads, while the two companies collaborate at the NVIDIA Accelerated Quantum Center (NVAQC) in Boston, linking QuEra hardware with NVIDIA’s GB200 NVL72 GPU clusters for simulation and quantum-error-decoder research. Meanwhile, at Japan’s AIST ABCI-Q supercomputing center, QuEra’s Gemini-class quantum computer now operates beside more than 2,000 H100 GPUs, serving as a national testbed for hybrid workflows. 
A jointly developed transformer-based decoder running on NVIDIA’s GPUs has already outperformed classical maximum-likelihood error-correction models, marking a concrete step toward practical fault-tolerant quantum computing. For NVIDIA, the move signals conviction that quantum processing units (QPUs) will one day complement GPUs inside large-scale data centers. For QuEra, it widens access to the

Read More »

How CoreWeave and Poolside Are Teaming Up in West Texas to Build the Next Generation of AI Data Centers

In the evolving landscape of artificial-intelligence infrastructure, a singular truth is emerging: access to cutting-edge silicon and massive GPU clusters is no longer enough by itself. For companies chasing the frontier of multi-trillion-parameter model training and agentic AI deployment, the bottleneck increasingly lies not just in compute, but in the seamless integration of compute + power + data center scale. The latest chapter in this story is the collaboration between CoreWeave and Poolside, culminating in the launch of Project Horizon, a 2-gigawatt AI-campus build in West Texas. Setting the Stage: Who’s Involved, and Why It Matters CoreWeave (NASDAQ: CRWV) has positioned itself as “The Essential Cloud for AI™” — a company founded in 2017, publicly listed in March 2025, and aggressively building out its footprint of ultra-high-performance infrastructure.  One of its strategic moves: in July 2025 CoreWeave struck a definitive agreement to acquire Core Scientific (NASDAQ: CORZ) in an all-stock transaction. Through that deal, CoreWeave gains grip over approximately 1.3 GW of gross power across Core Scientific’s nationwide data center footprint, plus more than 1 GW of expansion potential.  That acquisition underlines a broader trend: AI-specialist clouds are no longer renting space and power; they’re working to own or tightly control it. Poolside, founded in 2023, is a foundation-model company with an ambitious mission: building artificial general intelligence (AGI) and deploying enterprise-scale agents.  According to Poolside’s blog: “When people ask what it takes to build frontier AI … the focus is usually on the model … but that’s only half the story. The other half is infrastructure. 
If you don’t control your infrastructure, you don’t control your destiny—and you don’t have a shot at the frontier.”  Simply put: if you’re chasing multi-trillion-parameter models, you need both the compute horsepower and the power infrastructure; and ideally, tight vertical integration. Together, the

Read More »

Vantage Data Centers Pours $15B Into Wisconsin AI Campus as It Builds Global Giga-Scale Footprint

Expanding in Ohio: Financing Growth Through Green Capital In June 2025, Vantage secured $5 billion in green loan capacity, including $2.25 billion to fully fund its New Albany, Ohio (OH1) campus and expand its existing borrowing base. The 192 MW development will comprise three 64 MW buildings, with first delivery expected in December 2025 and phased completion through 2028. The OH1 campus is designed to come online as Vantage’s larger megasites ramp up, providing early capacity and regional proximity to major cloud and AI customers in the Columbus–New Albany corridor. The site also offers logistical and workforce advantages within one of the fastest-growing data center regions in the U.S. Beyond the U.S. – Vantage Expands Its Global Footprint Moving North: Reinforcing Canada’s Renewable Advantage In February 2025, Vantage announced a C$500 million investment to complete QC24, the fourth and final building at its Québec City campus, adding 32 MW of capacity by 2027. The project strengthens Vantage’s Montreal–Québec platform and reinforces its renewable-heavy power profile, leveraging abundant hydropower to serve sustainability-driven customers. APAC Expansion: Strategic Scale in Southeast Asia In September 2025, Vantage unveiled a $1.6 billion APAC expansion, led by existing investors GIC (Singapore’s sovereign wealth fund) and ADIA (Abu Dhabi Investment Authority). The investment includes the acquisition of Yondr’s Johor, Malaysia campus at Sedenak Tech Park. Currently delivering 72.5 MW, the Johor campus is planned to scale to 300 MW at full build-out, positioning it within one of Southeast Asia’s most active AI and cloud growth corridors. Analysts note that the location’s connectivity to Singapore’s hyperscale market and favorable development economics give Vantage a strong competitive foothold across the region. 
Italy: Expanding European Presence Under National Priority Status Vantage is also adding a second Italian campus alongside its existing Milan site, totaling 32 MW across two facilities. Phase

Read More »

Nvidia GTC show news you need to know round-up

In the case of Flex, it will use digital twins to unify inventory, labor, and freight operations, streamlining logistics across Flex’s worldwide network. Flex’s new 400,000 sq. ft. facility in Dallas is purpose-built for data center infrastructure, aiming to significantly shorten lead times for U.S. customers. The Flex/Nvidia partnership aims to address the country’s labor shortages and drive innovation in manufacturing, pharmaceuticals, and technology. The companies believe the partnership sets the stage for a new era of giga-scale AI factories. Nvidia and Oracle to Build DOE’s Largest AI Supercomputer Oracle continues its aggressive push into supercomputing with a deal to build the largest AI supercomputer for scientific discovery — using Nvidia GPUs, obviously — at a Department of Energy facility. The system, dubbed Solstice, will feature an incredible 100,000 Nvidia Blackwell GPUs. A second system, dubbed Equinox, will include 10,000 Blackwell GPUs and is expected to be available in the first half of 2026. Both systems will be interconnected by Nvidia networking and deliver a combined 2,200 exaflops of AI performance. The Solstice and Equinox supercomputers will be located at Argonne National Laboratory, home to the Aurora supercomputer, built using all Intel parts. They will enable scientists and researchers to develop and train new frontier models and AI reasoning models for open science using the Nvidia Megatron-Core library and scale them using the Nvidia TensorRT inference software stack.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.
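To put the figures cited above in proportion, a small calculation using only the numbers from the article:

```python
# Quick comparison of the capex figures cited in the article ($ billions).
industry_2023 = 110    # Bloomberg Intelligence estimate, six big tech firms
industry_2025 = 200    # Bloomberg Intelligence estimate for 2025
msft_2020 = 17.6       # Microsoft capital expenditure, 2020
msft_fy2025 = 80.0     # Smith's figure for Microsoft's fiscal 2025

industry_growth = (industry_2025 - industry_2023) / industry_2023  # ~82%
msft_multiple = msft_fy2025 / msft_2020                            # ~4.5x
```

In other words, the six companies’ combined capex is projected to grow roughly 82% in two years, and Microsoft’s fiscal-2025 figure is about 4.5 times its 2020 spending.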

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction, and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet it has become a regular among the non-tech companies showing off technology at CES in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will arrive this fall and beyond. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year saw rapid innovation, and this year will see the same. That makes it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize in their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as the frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, which develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that model providers are researching is a way to use the LLM as a judge; as models get cheaper (something we’ll cover below), companies can use three or more models to
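The multi-model “LLM as a judge” idea mentioned above can be illustrated with a toy majority-vote harness. The judge functions below are hypothetical stand-ins written for this sketch; in a real deployment each callable would wrap a call to a different model:

```python
from collections import Counter

def judge_with_ensemble(candidate_answer, judges):
    """Ask several judges to grade an answer and take a majority vote.

    `judges` is a list of callables, each returning "pass" or "fail".
    In practice each callable would wrap an LLM API call.
    """
    verdicts = [judge(candidate_answer) for judge in judges]
    winner, _count = Counter(verdicts).most_common(1)[0]
    return winner, verdicts

# Stand-in judges (a real system would use three or more distinct models).
strict = lambda ans: "pass" if "because" in ans else "fail"
lenient = lambda ans: "pass" if len(ans) > 20 else "fail"
moderate = lambda ans: "pass" if ans.endswith(".") else "fail"

verdict, votes = judge_with_ensemble(
    "The sky appears blue because of Rayleigh scattering.",
    [strict, lenient, moderate],
)
```

The design point is simply that as per-call costs fall, running three cheap judges and taking the majority becomes more robust than trusting a single judgment.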

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability, and safety of AI models through these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model, because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, and OpenAI, as well as the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to assemble specialized external teams that include cybersecurity and subject matter experts. The goal is to see whether knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases, and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »