Stay Ahead, Stay ONMINE

EU Commission Names 5 New RE Projects for Potential CEF Funding


The European Commission has added five projects to the list of cross-border renewable energy (CB RES) projects eligible for financing under the Connecting Europe Facility (CEF) for Energy program, expanding the list to 13.

“The five projects, thanks to the official CB RES status, are now eligible for financial support for studies and works under the CEF Energy Program”, the Commission said in a statement on its website.

“Furthermore, they benefit from higher visibility, increased investor certainty and stronger support from member states”.

The newly recognized projects involve the Baltic states, North Africa, Germany, Poland and Portugal.

Set to rise on the Estonian side of the Baltic Sea, the one-gigawatt (GW) Liivi Bay Offshore Wind Farm will be radially connected to the national grid. It is expected to go online in 2031.

“The wind farm will contribute to Estonia’s goal of producing 100 percent of electricity from renewable sources by 2030, while also supporting Latvia’s energy transition”, the Commission said.

Straddling the Latvia-Lithuania border, the onshore Utilitas Eleja-Jonisķis Wind Park is expected to deliver 200 megawatts from 2028.

“The project is strategically located to connect to the 330 kV Viskaļi-Musa transmission line, enhancing regional grid stability and energy independence”, the Commission said.

Meanwhile, the Twin Heat project will decarbonize the district heating systems of the twin cities of Slubice, Poland, and Frankfurt (Oder), Germany, by installing renewables-based heating infrastructure that enables cross-border heat exchange.

The latest additions to the CB RES list also include a research project that “paves the way for groundwork for future cross-border deployment of floating offshore wind energy in Portugal in a cooperation with Luxembourg”, the Commission said.

“It will assess offshore wind zones, grid reinforcements and auction models, helping to unlock up to 10 GW of offshore wind capacity in Portugal. The project also fosters collaboration around green hydrogen, port infrastructure and energy system planning”.

Medlink Renewable Generation, a “landmark North-South cooperation project”, rounds out the newly recognized projects. It aims to build 10 GW of solar and wind capacity with battery energy storage systems in Algeria and Tunisia.

“Two 2 GW HVDC interconnectors will export up to 22.8 TWh/year of clean electricity to Italy (outside the scope of the CB RES project)”, the Commission said of the North African projects.
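As a rough plausibility check on those figures (our arithmetic, not the Commission's): two 2 GW links running at full load all year could carry about 35 TWh, so 22.8 TWh/year implies roughly 65 percent utilization.

```python
# Implied utilization of the two 2 GW HVDC links at 22.8 TWh/year --
# illustrative arithmetic, not a figure from the Commission's statement.
capacity_gw = 2 * 2                      # two 2 GW interconnectors
max_twh = capacity_gw * 8760 / 1000      # full-load output: 35.04 TWh/year
utilization = 22.8 / max_twh             # roughly 0.65
print(f"max {max_twh:.2f} TWh/year -> {utilization:.0%} implied utilization")
```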

“This adopted list is now submitted to the European Parliament and the Council for a two-month period of scrutiny (this period may be extended by an additional two-month period upon their request) and it will only be formally published in the Official Journal after this period, and enter into force 20 days later”, it said.

On September 2 the Commission was scheduled to launch a new call for applications to join the CB RES list.

To contact the author, email [email protected]





Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Intel details new efficient Xeon processor line

The new chips will be able to support up to 12-channel DDR5 memory at speeds of up to 8000 MT/s, a substantial increase over the eight channels at 6400 MT/s in the prior generation. In addition, the platform will support up to 6 UPI 2.0 links with up to
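A back-of-the-envelope comparison shows what the wider, faster memory buys (our arithmetic, assuming the standard 64-bit, i.e. 8-byte, DDR5 data bus per channel; not an Intel figure):

```python
# Peak theoretical memory bandwidth, assuming a standard 64-bit (8-byte)
# DDR5 data bus per channel -- illustrative arithmetic, not an Intel figure.
bytes_per_transfer = 8

new_gbps = 12 * 8000 * bytes_per_transfer / 1000  # 12 ch x 8000 MT/s -> 768 GB/s
old_gbps = 8 * 6400 * bytes_per_transfer / 1000   # 8 ch x 6400 MT/s -> 409.6 GB/s
print(f"new: {new_gbps} GB/s, old: {old_gbps} GB/s, gain: {new_gbps / old_gbps:.2f}x")
```

Under those assumptions, the new platform offers roughly 1.9x the peak memory bandwidth of the prior generation.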


New York City could face power reliability issues beginning next year: ISO

Dive Brief: The New York electric grid faces increased risk of power shortages over the next five years unless planned projects, including new transmission and offshore wind resources, are brought online, the state’s grid operator said Tuesday. Starting next summer, the Independent System Operator anticipates its reliability margins in New York City will be dangerously thin, making the grid more vulnerable to failures. In addition to potentially losing anticipated wind power, the system is strained by generator deactivations, increasing consumer demand, transmission limitations and difficulties in developing new resources, the ISO said in a statement highlighting a pair of reliability reports.

“The NYISO’s findings should be alarming to residents and serve as another wake-up call for the state,” Gavin Donohue, president of the Independent Power Producers of New York, said in a statement. “Electric demand is continuing to drastically rise, and the state needs to look at all possible resources.”

Dive Insight: The ISO’s Short-Term Assessment of Reliability examined the five-year period from July 2025 to July 2030 and identified reliability weaknesses beginning in New York City in 2026, in Long Island in 2027 and in the Lower Hudson Valley region in 2030. The New York City area will be deficient “through the entire five-year horizon without the completion and energization of future planned projects,” the STAR report concluded. Those projects include the 816-MW Empire Wind offshore project, which was expected to be online by 2027. That timeline was complicated when the Trump administration halted the project in April before allowing it to resume a month later. Since then, it has faced additional complications and delays. The STAR report also cited the importance of the 1,250-MW Champlain Hudson Power Express transmission line, which is slated to bring power to the city from Quebec beginning next year. “Once CHPE, Empire Wind, and


California to invest $226M in offshore wind ports amid federal cuts

Dive Brief: The California Energy Commission authorized $42.75 million in grants for offshore wind port development on Wednesday, in what commission staff described as the first appropriation of the state’s Climate Bond funds for offshore wind. Last month, California state lawmakers also authorized $225.7 million in spending for offshore wind ports and related facilities through June 2030, with funds coming from a $10 billion Climate Bond approved by voters in 2024. The construction of supportive ports and transmission systems in California is critical to deploying commercial floating offshore wind by the mid-2030s, according to a survey by offshore wind industry trade association Oceantic.

Dive Insight: Despite stop-work orders and project cancellations on the East Coast, California’s status as a global leader in floating offshore wind development remains unchanged, according to Oceantic Senior Policy Director Nancy Kirshner-Rodriguez. “States have always been the driving force of our industry, and will continue to be, regardless of who is in power,” she said. “They are creating the demand signals that will pull in investment, and they are underwriting the critical enabling infrastructure investments that make the market move forward.” The Port San Luis Harbor District — one of five entities that received grants from the CEC last week to fund offshore wind port planning, engineering and design — is on track to become the state’s first dedicated offshore wind hub, according to Reid Boggiano, a CEC offshore wind program specialist. The Port of Long Beach is also well on its way to building Pier Wind, a dedicated 400-acre offshore wind terminal that the CEC awarded $20 million to wrap up planning and engineering and to complete environmental assessments. The Humboldt Bay Harbor and the cities of Oakland and Richmond also received grants for feasibility studies to determine if those communities could host offshore wind


Interior denies canceling largest solar project in U.S. after axing review

The U.S. Department of the Interior has canceled its broad environmental review for the seven individual projects that make up the 6.2-GW Esmeralda 7 solar project located on federal land in Nevada and will review and permit each project individually, according to a spokesperson. Esmeralda 7 is set to be the largest solar project in the U.S. by capacity. The project’s National Environmental Policy Act status is listed as canceled on the Bureau of Land Management’s website. In a Tuesday email, the spokesperson said the BLM had not canceled the project. “During routine discussions prior to the lapse in appropriations, the proponents and BLM agreed to change their approach for the Esmeralda 7 Solar Project in Nevada,” they said. “Instead of pursuing a programmatic level environmental analysis, the applicants will now have the option to submit individual project proposals to the BLM to more effectively analyze potential impacts.” According to a draft programmatic environmental impact statement for the project from July of last year, the individual projects are: Lone Mountain Solar, Nivloc Solar, Smoky Valley Solar, Red Ridge 1 Solar, Red Ridge 2 Solar, Esmeralda Energy Center and Gold Dust Solar, which “would be geographically contiguous and encompass approximately 62,300 acres of BLM-administered lands approximately 30 miles west of Tonopah, Nevada.” Developers of the seven projects include Invenergy, Avantus, and NextEra. A spokesperson for NextEra told The Guardian, “We are in the early stage of development and remain committed to pursuing our project’s comprehensive environmental analysis by working closely with the Bureau of Land Management.” All of the projects have pending right-of-way applications before the Bureau of Land Management, the draft PEIS said. As the project’s previous NEPA review process has been canceled, it’s unclear how long it will now take each project to secure approvals. The original draft PEIS said that


Losing power, losing billions: How offshoring grid materials weakens America

Jim Welsh is CEO of Peak Nano. National security requires self-reliance and independence. Today, America’s energy infrastructure and supply chain face a critical test of both. Electrification, AI, data centers and extreme weather are driving unprecedented demand for energy and grid reliability. The U.S. Department of Energy’s latest Grid Reliability Report makes one thing clear: the time for “business-as-usual” is over. The U.S. can’t keep up, let alone grow, on old infrastructure. We need a radical shift to expand, modernize and intelligently manage the grid. Success depends on bold investments in digital technologies, enhanced equipment and advanced grid management. Together, these will enable real-time monitoring, rapid integration of renewables and far less energy lost in transmission, which today can range from 8-12% of power generation. That’s how America will meet surging demand and protect its global leadership in the AI era.

The U.S. has an Achilles’ heel: critical materials. At the heart of grid reliability are magnets, rare-earth minerals and dielectric film. These materials are often overlooked but are vital for our power infrastructure, grid reliability and every part of our energy security. Dielectric films, for example, are used in capacitors that condition power, convert AC/DC power, keep power flowing steadily and even help manage spikes in demand to keep the grid stable and secure.

The problem? The U.S. has outsourced our ability to replace failing transformers, capacitors and other critical grid components. Capacitor film — a highly engineered, ultra-thin plastic that enables power stability and distribution for our grid — is almost entirely made overseas, and 75% is made in China, which dominates the global supply. Every year, we spend nearly $200 billion overseas for this film and other critical materials. Historically, we’ve migrated our capacity to manufacture it overseas. No American manufacturer even builds the equipment to make dielectric films. This isn’t


Kuwait Unveils Major Offshore Discovery

State-owned Kuwait Oil Co. made a “major” discovery in the Jazah natural gas field in the OPEC member’s offshore region. “The initial exploration well recorded the highest production rate from a vertical well in the Minagish formation in Kuwait’s history,” KOC said in a statement on Monday. The company has made similar announcements for oil and gas offshore discoveries since last year. Initial tests from the Jazah-1 well revealed “exceptional production exceeding 29 million cubic feet of gas per day, and more than 5,000 barrels per day of condensate,” KOC, a unit of state-owned Kuwait Petroleum Corp., said. The field’s initial estimated area measures about 40 square kilometers (15.4 square miles), with projections indicating about 1 trillion cubic feet of gas and 120 million barrels of condensate, KOC said. Kuwait is OPEC’s fifth-biggest producer, with a current output of about 2.52 million barrels a day. It aims to boost capacity to 4 million barrels a day by 2035.


IEA Sees Drone Strikes Suppressing Russia Refinery Runs to Mid-2026

The impact from Ukrainian drone strikes will suppress Russia’s refinery processing rates until at least mid-2026, the International Energy Agency says in its latest monthly oil-market report. Kyiv has been intensifying attacks on its foe’s energy infrastructure — including oil refineries, pipelines and sea terminals — in a move to cut the Kremlin’s energy revenue and reduce its ability to supply fuel to the front lines. Since the start of August, Ukraine has launched at least 28 strikes on key Russian refineries — with the range of the attacks increasing — causing gasoline shortages in several regions, including occupied Crimea. This has forced the Kremlin to impose fuel-export restrictions until the end of the year. “Previously, we had assumed a normalization of refining activity as we approached year-end but now embed a more cautious outlook,” the Paris-based agency said in a report on Tuesday. The IEA currently sees Russian processing rates at just under 5 million barrels a day through June 2026, and a recovery toward 5.4 million barrels a day later, with the outlook to be revised as more information becomes available. “The increasingly widespread and significant Ukrainian drone campaign against Russian oil refineries and infrastructure” has so far cut the nation’s crude processing by an estimated 500,000 barrels a day, the agency said. The government in Moscow has classified most energy data, including refinery runs and car-fuel production, which makes it difficult to make a precise assessment of the drone-related damage. Last week, Deputy Prime Minister Alexander Novak said the nation’s refiners have increased their runs, balancing domestic fuel demand and supply.

Lower Revenue

With the drone strikes weighing on refinery runs, Russia raised its crude exports in September to 5.1 million barrels a day, the highest since May 2023, according to the IEA. Still, the nation’s oil-export


Inside Blackstone’s Electrification Push: From Shermco to the Power Backbone of AI Data Centers

According to the National Electrical Manufacturers Association (NEMA), U.S. energy demand is projected to grow 50% by 2050. Electrical manufacturers have invested more than $10 billion since 2021 in new technologies to expand grid and manufacturing capacity, also reducing reliance on materials from China by 32% since 2018. Power access, sustainable infrastructure, and land acquisition have become critical factors shaping where and how data center facilities are built. As we previously reported in Data Center Frontier, investors realized this years ago, viewing these facilities both as technology assets and a unique convergence of real estate, utility infrastructure, and mission-critical systems that can also generate revenue. One of those investors is global asset manager Blackstone, which through its Energy Transition Partners private equity arm, recently acquired Shermco Industries for $1.6 billion. Announced August 21, the deal is part of Blackstone’s strategy to invest in companies that support the growing demand for electrification and a more reliable power grid. The goal is to strengthen data center infrastructure reliability and expand critical electrical services. Founded in 1974, Texas-based Shermco is one of the largest electrical testing organizations accredited by the InterNational Electrical Testing Association (NETA). The company operates in a niche yet important space: providing lifecycle electrical services, including maintenance, testing, commissioning, repair, and design, in support of data centers, utilities, and industrial clients. It has more than 40 service centers in the U.S. and Canada. In addition to helping Blackstone support its electrification and power grid reliability goals, the Shermco purchase is also part of Blackstone’s strategy to increase scale and resources—revenue increases without a substantial increase in resources—thus expanding its footprint and capabilities within the essential energy services sector.  
As data centers expand globally, become more energy intensive, and are pressured to incorporate renewables and modernize grids, Blackstone’s leaders plan to leverage Shermco’s


Cooling, Compute, and Convergence: How Strategic Alliances Are Informing the AI Data Center Playbook

Schneider Electric and Compass Datacenters: Prefabrication Meets the AI Frontier

“We’re removing bottlenecks and setting a new benchmark for AI-ready data centers.” — Aamir Paul, Schneider Electric

In another sign of how collaboration is accelerating the next wave of AI infrastructure, Schneider Electric and Compass Datacenters have joined forces to redefine the data center “white space” build-out: the heart of where power, cooling, and compute converge. On September 9, the two companies unveiled the Prefabricated Modular EcoStruxure™ Pod, a factory-built, fully integrated white space module designed to compress construction timelines, reduce CapEx, and simplify installation while meeting the specific demands of AI-ready infrastructure. The traditional fit-out process for the IT floor (i.e. integrating power distribution, cooling systems, busways, cabling, and network components) has long been one of the slowest and most error-prone stages of data center construction. Schneider and Compass’ new approach tackles that head-on, shifting the entire workflow from fragmented on-site assembly to standardized off-site manufacturing. “The traditional design and approach to building out power, cooling, and IT networking equipment has relied on multiple parties installing varied pieces of equipment,” the companies noted. “That process has been slow, inefficient, and prone to errors. Today’s growing demand for AI-ready infrastructure makes traditional build-outs ripe for improvement.”

Inside the EcoStruxure Pod: White Space as a Product

The EcoStruxure Pod packages every core element of a high-performance white space environment (power, cooling, and IT integration) into a single prefabricated, factory-tested unit. Built for flexibility, it supports hot aisle containment, InRow cooling, and Rear Door Heat Exchanger (RDHx) configurations, alongside high-power busways, complex network cabling, and a technical water loop for hybrid or full liquid-cooled deployments.
By manufacturing these pods off-site, Schneider Electric can deliver a complete, ready-to-install white space module that arrives move-in ready. Once delivered to a Compass Datacenters campus, the


Inside Microsoft’s Global AI Infrastructure: The Fairwater Blueprint for Distributed Supercomputing

Microsoft’s newest AI data center in Wisconsin, known as “Fairwater,” is being framed as far more than a massive, energy-intensive compute hub. The company describes it as a community-scale investment — one that pairs frontier-model training capacity with regional development. Microsoft has prepaid local grid upgrades, partnered with the Root-Pike Watershed Initiative Network to restore nearby wetlands and prairie sites, and launched Wisconsin’s first Datacenter Academy in collaboration with Gateway Technical College, aiming to train more than 1,000 students over the next five years. The company is also highlighting its broader statewide impact: 114,000 residents trained in AI-related skills through Microsoft partners, alongside the opening of a new AI Co-Innovation Lab at the University of Wisconsin–Milwaukee, focused on applying AI in advanced manufacturing.

It’s Just One Big, Happy AI Supercomputer…

The Fairwater facility is not a conventional, multi-tenant cloud region. It’s engineered to operate as a single, unified AI supercomputer, built around a flat networking fabric that interconnects hundreds of thousands of accelerators. Microsoft says the campus will deliver up to 10× the performance of today’s fastest supercomputers, purpose-built for frontier-model training. Physically, the site encompasses three buildings across 315 acres, totaling 1.2 million square feet of floor area, all supported by 120 miles of medium-voltage underground cable, 72.6 miles of mechanical piping, and 46.6 miles of deep foundation piles. At the rack level, each NVL72 system integrates 72 NVIDIA Blackwell GPUs (GB200), fused together via NVLink/NVSwitch into a single high-bandwidth memory domain capable of 1.8 TB/s GPU-to-GPU throughput and 14 TB of pooled memory per rack. This creates a topology that may appear as independent servers but can be orchestrated as a single, giant accelerator. Microsoft reports that one NVL72 can process up to 865,000 tokens per second.
Future Fairwater-class deployments (including those under construction in the UK and Norway)


Powering the AI Era: Innovations in Data Center Power Supply Design and Infrastructure

Recently, Data Center Frontier sister publication Electronic Design (ED) released an eBook curated by ED Senior Editor James Morra titled In the Age of AI, A New Playbook for Power Supply Design, with a collection of detailed technology articles focused on understanding the nuts and bolts of delivering power to AI-centric data centers. This compendium explores how the surge in artificial intelligence (AI) workloads is transforming data center power architectures and includes suggestions for addressing the issues.

Breaking the Power Barrier

As GPUs like NVIDIA’s Blackwell B100 and B200 cross the 1,000-watt threshold per chip, rack power densities are soaring beyond 100 kW, and in some projections, approaching 1 MW per rack. This unprecedented demand is exposing the limits of legacy 12-volt and 48-volt architectures, where inefficient conversion stages and high I²R losses drive up both energy waste and cooling load.

Powering the Next Era of AI Infrastructure

As AI data centers scale toward multi-megawatt clusters and rack densities approach one megawatt, traditional power architectures are straining under the load. The next frontier of efficiency lies in rethinking how electricity is distributed, converted, and protected inside the rack. From high-voltage DC distribution to wide-bandgap semiconductors and intelligent eFuses, a new generation of technologies is reshaping power delivery for AI. The articles in this report drill down into five core themes driving that transformation:

Electronic Fuses (eFuses) for Power Protection

Texas Instruments and others are introducing 48-volt-rated eFuses that integrate current sensing, control, and switching into a single device. These allow hot-swapping of AI servers without dangerous inrush currents, enable intelligent fault detection, and can be paralleled to support rack loads exceeding 100 kW. The result: simplified PCB design, improved reliability, and robust support for AI’s steep and dynamic current requirements.
The Shift from 48 V to 400–800 V High-Voltage DC (HVDC)
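The physics behind that shift can be sketched with hypothetical numbers (ours, not the eBook's): at constant power, current falls as 1/V, so conduction (I²R) loss in the same conductor falls as 1/V².

```python
# Conduction (I^2 R) loss at constant power vs. distribution voltage --
# hypothetical rack and conductor values, for illustration only.
power_w = 100_000          # a 100 kW rack
resistance_ohm = 0.001     # assumed end-to-end conductor resistance

def i2r_loss(voltage_v):
    current_a = power_w / voltage_v        # I = P / V
    return current_a**2 * resistance_ohm   # loss = I^2 * R

loss_48 = i2r_loss(48)     # roughly 4.3 kW dissipated in the conductors
loss_800 = i2r_loss(800)   # under 16 W for the same delivered power
print(f"48 V: {loss_48:.0f} W, 800 V: {loss_800:.1f} W, ratio: {loss_48 / loss_800:.0f}x")
```

Under these assumptions, moving from 48 V to 800 V cuts conduction loss by a factor of (800/48)², or nearly 280x, which is the core argument for HVDC distribution inside the rack.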


Fusion Energy Moves Toward Reality: Strategic Investments by CFS, Google, and Eni Signal Commercial Readiness

Global Fusion Momentum: France, Europe, and a New Competitive Context

As CFS, Google, Eni, and Helion press ahead, other fusion efforts worldwide are also making waves, reminding us this is a global race, not a U.S.-exclusive pursuit. In France, the CEA’s WEST tokamak recently achieved a new benchmark by sustaining plasma for more than 22 minutes (1,337 seconds) at ~50 million °C, breaking previous records and demonstrating improved plasma control and stability. That milestone underscores the incremental but essential progress in continuous operation, one of the key prerequisites for any commercially viable fusion system. Meanwhile, ITER, the international flagship built in southern France, continues its slow-but-steady assembly. Despite years of delays and cost overruns, ITER remains central to global fusion ambitions. It’s not expected to produce significant fusion output until the 2030s, but its role in validating large-scale superconducting magnet systems, remote maintenance, tritium breeding, plasma control, and heat management is essential to de-risking downstream commercial fusion designs. Elsewhere in Europe, Proxima Fusion (Germany) is gaining attention. The company is developing a quasi-isodynamic stellarator design and has recently raised €130 million in its Series A, showing that alternative confinement geometries are earning investor support. While that path is more speculative, it adds needed diversity to the fusion technology portfolio.

Germany’s Wendelstein 7-X Raises the Bar

Germany added another major milestone to the fusion timeline this fall. At the Max Planck Institute for Plasma Physics, researchers operating the Wendelstein 7-X stellarator sustained a high-performance plasma for 43 seconds, setting a new world record for continuous fusion confinement.
The run demonstrated stability and control at temperatures exceeding 30 million °C, proving that stellarators, once viewed mainly as scientific curiosities, can now compete head-to-head with tokamaks in performance. Unlike tokamaks, which rely on strong external currents to confine plasma, stellarators use a twisted


OpenAI–Broadcom alliance signals a shift to open infrastructure for AI

The decision also reflects a future of AI workloads running on heterogeneous computing and networking infrastructure, said Lian Jye Su, chief analyst at Omdia. “While it makes sense for enterprises to first rely on Nvidia’s full stack solution to roll out AI, they will generally integrate alternative solutions such as AMD and self-developed chips for cost efficiency, supply chain diversity, and chip availability,” Su said. “This means data center networking vendors will need to consider interoperability and open standards as ways to address the diversification of AI chip architecture.” Hyperscalers and enterprise CIOs are increasingly focused on how to efficiently scale up or scale out AI servers as workloads expand. Nvidia’s GPUs still underpin most large-scale AI training, but companies are looking for ways to integrate them with other accelerators. Neil Shah, VP for research at Counterpoint Research, said that Nvidia’s recent decision to open its NVLink interconnect to ecosystem players earlier this year gives hyperscalers more flexibility to pair Nvidia GPUs with custom accelerators from vendors such as Broadcom or Marvell. “While this reduces the dependence on Nvidia for a complete solution, it actually increases the total addressable market for Nvidia to be the most preferred solution to be tightly paired with the hyperscaler’s custom compute,” Shah said. Most hyperscalers have moved toward custom compute architectures to diversify beyond x86-based Intel or AMD processors, Shah added. Many are exploring Arm or RISC-V designs that can be tailored to specific workloads for greater power efficiency and lower infrastructure costs.

Shifting AI infrastructure strategies

The collaboration also highlights how networking choices are becoming as strategic as chip design itself, suggesting a change in how AI workloads are powered and connected.


Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.


John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
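The "LLM as a judge" pattern the excerpt mentions can be sketched roughly as follows: several candidate models each answer a prompt, and a separate judge model scores the answers so the best one is selected. This is a minimal illustration only; the model names, the scoring rubric, and the stubbed functions are hypothetical stand-ins for real API calls to hosted LLMs.

```python
# Sketch of the LLM-as-judge pattern: multiple candidate models answer,
# a judge scores each answer, and the highest-scoring answer wins.
# All model calls are stubbed; in practice each would be an API request.

def call_candidate(name: str, prompt: str) -> str:
    # Hypothetical stand-in for a real candidate-model API call.
    canned = {
        "model-a": "Paris is the capital of France.",
        "model-b": "France's capital city is Paris.",
        "model-c": "I am not sure.",
    }
    return canned[name]

def call_judge(prompt: str, answer: str) -> float:
    # Hypothetical stand-in for a judge model that rates an answer
    # against a rubric and returns a numeric score.
    return 1.0 if "Paris" in answer else 0.0

def best_answer(prompt: str, candidates: list[str]) -> tuple[str, str]:
    # Score every candidate's answer and return the top-scoring pair.
    scored = [
        (call_judge(prompt, call_candidate(name, prompt)), name)
        for name in candidates
    ]
    scored.sort(reverse=True)  # highest judge score first
    _, top_model = scored[0]
    return top_model, call_candidate(top_model, prompt)

model, answer = best_answer(
    "What is the capital of France?",
    ["model-a", "model-b", "model-c"],
)
print(model, "->", answer)
```

As models get cheaper, the same loop can fan out to three or more candidates at negligible marginal cost, which is the economics the article is pointing at.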

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »

Can we repair the internet?

From addictive algorithms to exploitative apps, data mining to misinformation, the internet today can be a hazardous place. Books by three influential figures—the intellect behind

Read More »