
MISO proposes framework to speed generation interconnection


Dive Brief:

  • The Midcontinent Independent System Operator on Monday asked federal regulators to approve an Expedited Resource Addition Study process, or ERAS, to provide a framework for the accelerated study of generation projects “that can address urgent resource adequacy and reliability needs in the near term.”
  • MISO asked the Federal Energy Regulatory Commission to approve the ERAS proposal to be effective May 17. The grid operator is on pace for near-term capacity shortfalls, should resource retirements continue as planned, it said.
  • MISO proposed that projects entering the ERAS process, rather than MISO’s standard Generator Interconnection Queue, be studied serially each quarter and granted an Expedited Generator Interconnection Agreement within 90 days. Renewable energy stakeholders, however, warn that the ERAS proposal “adds chaos to an already complex process.”

Dive Insight:

Recent surveys and forecasts demonstrate the urgency with which MISO needs to “address significant resource adequacy needs in its footprint that are compounded by the addition of unexpected large spot loads,” the grid operator told FERC.

NERC’s 2024 Long-Term Reliability Assessment projected MISO will experience a 4.7 GW shortfall by 2028 if currently expected generator retirements occur, the grid operator said. And last year, MISO and the Organization of MISO States published a report warning of possible capacity shortfalls beginning this summer.

The ERAS proposal “is MISO’s answer to addressing these resource adequacy and reliability needs in the near-term,” it said in its proposal. “ERAS is a unique process which recognizes that the responsibility for providing grid reliability and resource adequacy in the MISO region is shared by Load Serving Entities … the states, and MISO.”

According to MISO’s application, as of March 13 its generator interconnection queue contained 1,603 active interconnection requests.

“This considerable backlog of applications is spread over all five of MISO’s study regions and includes queue cycles going back to 2019,” it said. “The queue size continues to be extraordinary and unprecedented — the 2023 queue cycle, the last to close in 2024, alone is 123 GW.”

Importantly, MISO said almost 70% of the total generation capacity that entered the 2017 and 2018 queue cycles was eventually withdrawn and “similar withdrawal rates are occurring in the later cycles as well.”

But the Clean Grid Alliance, which represents renewable energy stakeholders, said the ERAS framework “overcomplicates an already complex system.”

“ERAS has been introduced primarily to address the demands of a few states within the MISO footprint that are seeking to prioritize resources not currently in the existing interconnection queue, despite ample availability of generating resources that have completed the queue and are ready for commercial operations,” CGA Vice President of Transmission and Markets David Sapper said in a statement.

Even with a 21% completion rate, the queue has 18 GW of storage and hybrid capacity, and planned transmission expansion could increase that to 29 GW, “far exceeding the projected shortfall,” CGA said in a statement. “Furthermore, ERAS is moving forward before the full effect of recent queue reforms is seen, which has already reduced the queue by approximately 33%.”

The renewables group said it is advocating for solutions that “maintain open access, avoid delays to existing processes, and leverages faster-constructing resources that are already in the queue.”

“There is no need to upset the apple cart. Rather, we encourage MISO to embrace the simplest solution, which is to stick with their existing tariff because it already allows for expediting serious projects,” said CGA Executive Director Beth Soholt.

MISO’s existing Provisional Generator Integration Agreement “maintains competition, efficiency, and reliability and can quickly interconnect the most certain, non-speculative projects, including gas,” Soholt said. “It’s technology-neutral and inherently prioritizes the need. Existing processes can bring capacity online quickly, while maintaining open access that keep costs down. This fast and fair solution to meeting large load demands is good for everyone.”

MISO’s application assured regulators that “guardrails” will ensure that “only truly necessary and certain projects can enter ERAS.”

Projects must demonstrate 100% site control for the interconnection customer’s interconnection facilities, establish due dates for commercial operation, pay a nonrefundable $100,000 deposit and a $24,000/MW milestone payment, and agree to pay for all necessary network upgrades.
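To put those entry costs in perspective, here is a minimal sketch of the fee arithmetic, using the deposit and milestone figures from MISO's filing; the 300 MW project size is hypothetical, chosen only for illustration:

```python
# ERAS entry-cost sketch, based on the fee levels described in MISO's filing:
# a $100,000 nonrefundable deposit plus a $24,000/MW milestone payment.
DEPOSIT = 100_000          # nonrefundable deposit, in dollars
MILESTONE_PER_MW = 24_000  # milestone payment, in dollars per MW

def eras_entry_cost(capacity_mw: float) -> float:
    """Deposit plus milestone payment for a project of the given capacity."""
    return DEPOSIT + MILESTONE_PER_MW * capacity_mw

# A hypothetical 300 MW project would owe:
print(f"${eras_entry_cost(300):,.0f}")  # → $7,300,000
```

Note that this covers only the entry fees; ERAS projects must also fund all necessary network upgrades on top of these amounts.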

MISO said it wants to sunset ERAS by the end of 2028, reflecting the grid operator’s “intention for these projects to be completed as soon as possible as well as providing MISO with sufficient time to complete other queue process improvements.”

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, Bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Google Cloud partners with mLogica to offer mainframe modernization

Beyond the partnership with mLogica, Google Cloud offers a variety of other mainframe migration tools, including Radis and G4, which can be employed to modernize specific applications. Enterprises can also use a combination of migration tools to modernize their mainframe applications. Some of these tools include the Gemini-powered

Read More »

USA Crude Oil Inventories Down 3.3MM Barrels WoW

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by 3.3 million barrels from the week ending March 14 to the week ending March 21, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report. This report was published on March 26 and included data for the week ending March 21.

The EIA report showed that crude oil stocks, not including the SPR, stood at 433.6 million barrels on March 21, 437.0 million barrels on March 14, and 448.2 million barrels on March 22, 2024. Crude oil in the SPR stood at 396.1 million barrels on March 21, 395.9 million barrels on March 14, and 363.1 million barrels on March 22, 2024, the report outlined. The EIA report highlighted that data may not add up to totals due to independent rounding.

Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.600 billion barrels on March 21, the report showed. Total petroleum stocks were up 3.5 million barrels week on week and up 19.9 million barrels year on year, the report revealed.

“At 433.6 million barrels, U.S. crude oil inventories are about five percent below the five year average for this time of year,” the EIA said in its latest weekly petroleum status report. “Total motor gasoline inventories decreased by 1.4 million barrels from last week and are two percent above the five year average for this time of year. Finished gasoline inventories increased and blending components inventories decreased last week,” it added. “Distillate fuel inventories decreased by 0.4 million barrels last week and are about seven percent below the five year average for this time of year. Propane/propylene inventories decreased by
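The week-on-week and year-on-year moves can be checked directly from the stock levels quoted in the report; a quick sketch (keeping in mind the EIA's caveat that independently rounded figures may not exactly reproduce the headline change):

```python
# Crude oil stocks excluding the SPR, in million barrels, as quoted above.
latest = 433.6       # week ending March 21
prior_week = 437.0   # week ending March 14
prior_year = 448.2   # week ending March 22 of the prior year

wow = round(latest - prior_week, 1)  # week-on-week change
yoy = round(latest - prior_year, 1)  # year-on-year change
print(wow, yoy)  # → -3.4 -14.6
```

The computed 3.4 million barrel draw differs slightly from the reported 3.3 million barrel figure, consistent with the EIA's note on independent rounding.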

Read More »

Schneider Electric to invest $700M in US manufacturing

Dive Brief:

Automation manufacturer Schneider Electric plans to invest $700 million in its U.S. operations through 2027, the company announced Tuesday. The money will go toward facility upgrades, expansions and openings across eight sites in Texas, Tennessee, Ohio, North Carolina, Massachusetts and Missouri. Schneider expects to create more than 1,000 jobs. The move marks Schneider’s largest-ever investment in the U.S., as the company aims to meet rising demand across its data center, utilities, manufacturing and energy infrastructure segments.

Dive Insight:

Schneider’s announcement is part of a larger $1 billion investment the company is making in the U.S. this decade. Artificial intelligence-driven demand for data centers and electrical infrastructure is driving the need for heightened spending on electrical grid-related needs. Data center electricity demand could double by 2030 — consuming up to 9% of the country’s electricity generation, according to a May 2024 study by the Electric Power Research Institute. “We stand at an inflection point for the technology and industrial sectors in the U.S., driven by incredible AI growth and unprecedented energy demand,” Aamir Paul, president of North America Operations for Schneider Electric, said in a statement. Schneider has been pushing a localization strategy in recent months, with a goal to locally source and produce roughly 90% of sales in each region. That push could help the company weather the Trump administration’s tariffs on Mexico, where Schneider has much of its North American production. CFO Hilary Maxson said on a recent earnings call that the company is watching for any reciprocal tariffs that may impact its operations. If the United States-Mexico-Canada Agreement remains in place, Maxson said the impact to Schneider would likely be “immaterial.” If the trade deal and free trade zones are repealed, however, the CFO added that the hit to the company could be greater. “We’re really

Read More »

Utilities should develop data center tariffs to protect consumers, decarbonize: SWEEP

With data center electricity demand on the rise across the U.S., utilities should develop specialized tariffs to protect consumers and keep their grids green when these large load customers interconnect, the Southwest Energy Efficiency Project said Thursday in a report. “While AI offers the potential for significant economic and social benefits, there are growing concerns with the rapid increases in electricity demand from data centers and how they will impact the power sector and state and utility climate goals,” SWEEP said. Data centers today account for about 4.5% of U.S. electricity consumption, according to the analysis. But in its most recent report to Congress, the Lawrence Berkeley National Lab projected data centers could account for up to 12% of U.S. electricity use by 2028, SWEEP said. EPRI recently surveyed 25 utilities nationally and found almost half have received requests for new data center facilities with loads larger than 1,000 MW. And “almost half of the utilities surveyed have received data center requests that exceed 50% of their current system peak demand,” SWEEP said. The potential load additions “pose two types of threats to state greenhouse [gas] emission reduction goals,” SWEEP said: Utilities could add or utilize more fossil-fuel based generation, and they could struggle to add sufficient renewables to meet demand from the electrification of vehicles, buildings and industry, the report warned. To address these risks, SWEEP recommends that utilities ensure new data center customers — and other new industrial or commercial customers with demands over 50 MW, or combined demands from several facilities of more than 100 MW — “commit to providing sufficient revenue, over a contract period such as 12 years, to cover the generation and transmission investments made on their behalf.” Utilities should also “propose and attempt to get approval for tariffs that require new large data center customers, and other new

Read More »

Drax and Power Minerals partner on green cement material

Drax has signed a 20-year joint venture agreement with Power Minerals Ltd (PML) to develop a factory for processing legacy ash into supplementary cementitious material (SCM), or fly ash, an ingredient in low-carbon green cement. The factory will be built next to the Drax power station in Selby, Yorkshire on land leased from Drax. Under the agreement, PML will build, own and operate the new facility, while Drax will sell its legacy pulverised fuel ash (PFA) to the joint venture and provide power to the factory. The factory is due to enter service by the end of 2026, with an initial production capacity of 400,000 tonnes per year. According to the announcement, the facility will ultimately process millions of tonnes of PFA. The move marks the latest shift for Drax, following the power station’s gradual conversion to biomass since the mid-2010s. Drax says it is the single largest generator of renewable electricity in the UK. However, the power station has also been found to be the largest emitter of carbon dioxide (CO2) in the UK by think-tank Ember Energy, based on an analysis of official data from the UK Emissions Trading Scheme registry and company annual reports.  Drax Group has also been fined by UK regulators for misreporting data on the forestry type and sawlog content it uses for its biomass. Nonetheless, Drax is continuing its push to bolster its green credentials. The company is also proposing to add carbon capture and storage (CCS) to the power station site. However, this bioenergy with carbon capture and storage (BECCS) initiative still requires government funding, as well as a confirmed route to carbon storage, in order to go ahead. The announcement on the fly ash partnership represents a further step in Drax’s green push. The company noted in the announcement that cement

Read More »

Moody’s upgrades PG&E on reduced credit risks from wildfires

Moody’s Ratings on Thursday upgraded credit ratings for PG&E Corp. and its Pacific Gas & Electric subsidiary, saying the companies faced reduced financial risks from wildfires. “PG&E’s upgrade reflects the organization’s continued improvement in mitigating wildfire risk over the last few years as well as its ability to strengthen both its financial profile and its relationships with key stakeholders,” Jeff Cassella, Moody’s Ratings vice president and senior credit officer, said in a press release. The upgrade also reflects the credit quality benefits provided by California’s $21 billion wildfire legislation (AB 1054), including continued access to the state’s wildfire insurance fund and credit positive shareholder liability cap and cost recovery provisions, Cassella said. “In the backdrop of the recent LA wildfires, we expect any legislative and regulatory actions resulting from the state’s continued wildfire risk will remain supportive for utilities by protecting them from uncapped liabilities due to inverse condemnation,” Cassella said. Since PG&E emerged from a wildfire-related bankruptcy in 2020, the utility has spent more than $20 billion to reduce wildfire risks, including the risk its equipment will cause wildfires, according to the credit rating agency. The utility hasn’t experienced any wildfires that significantly affected its finances since 2020, Moody’s noted. Also, PG&E’s financial risks are muted by the utility’s demonstrated ability to receive approvals for the utility’s annual wildfire safety certificate, which allows for the presumption of PG&E’s prudence and protects the company with a liability cap on reimbursement to the wildfire fund if it is found imprudent, Moody’s said. PG&E’s liability cap is about $4.1 billion, according to Moody’s. 
Although the state’s wildfire fund may be used to cover damages from January fires in southern California, Moody’s said it expects the remaining amount would be enough to support PG&E’s credit ratings and credit quality. “Further upward movement of

Read More »

Glenfarne to Take Majority Stake in Alaska LNG

Alaska Gasline Development Corp. (AGDC) and Glenfarne Group LLC have signed agreements for the latter to acquire 75 percent of 8 Star Alaska, a company formed by AGDC to manage the planned Alaska LNG project. Alaska LNG, approved by the Federal Energy Regulatory Commission in May 2020, is designed to deliver natural gas from the state’s North Slope to both domestic and global markets. It is the only federally permitted liquefied natural gas (LNG) project on the United States Pacific Coast, according to AGDC. “Glenfarne assumes the role of Alaska LNG’s lead developer and will lead all remaining development work of Alaska LNG from front-end engineering and design through to a final investment decision (‘FID’)”, AGDC said in an online statement. The company said FID is planned for this year. “AGDC remains a 25 percent owner of 8 Star Alaska and a key partner to Glenfarne on the project”, AGDC added. Alaska LNG has three subprojects: an LNG export terminal with a capacity of 20 million metric tons per annum (MMtpa) in Nikiski, an 807-mile 42-inch pipeline and a carbon capture plant with a storage capacity of 7 MMtpa. “In light of steadily declining gas production from Cook Inlet, which has historically been Alaska’s primary in-state natural gas basin, phase one of the project will kick off immediately, prioritizing the development and final investment decision of the pipeline infrastructure needed to deliver North Slope gas to Alaskans as rapidly as possible”, AGDC said. The LNG plant will be built at a later phase, AGDC said previously. “Oil was discovered in Prudhoe Bay almost exactly 57 years ago and since then Alaskans have never given up on finding a way to also benefit from our North Slope natural gas”, commented Governor Mike Dunleavy. “Alaska has made a significant investment to develop Alaska LNG

Read More »

Airtel connects India with 100Tbps submarine cable

“Businesses are becoming increasingly global and digital-first, with industries such as financial services, data centers, and social media platforms relying heavily on real-time, uninterrupted data flow,” Sinha added. The 2Africa Pearls submarine cable system spans 45,000 kilometers, involving a consortium of global telecommunications leaders including Bayobab, China Mobile International, Meta, Orange, Telecom Egypt, Vodafone Group, and WIOCC. Alcatel Submarine Networks is responsible for the cable’s manufacturing and installation, the statement added. This cable system is part of a broader global effort to enhance international digital connectivity. Unlike traditional telecommunications infrastructure, the 2Africa Pearls project represents a collaborative approach to solving complex global communication challenges. “The 100 Tbps capacity of the 2Africa Pearls cable significantly surpasses most existing submarine cable systems, positioning India as a key hub for high-speed connectivity between Africa, Europe, and Asia,” said Prabhu Ram, VP for Industry Research Group at CyberMedia Research. According to Sinha, Airtel’s infrastructure now spans “over 400,000 route kilometers across 34+ cables, connecting 50 countries across five continents. This expansive infrastructure ensures businesses and individuals stay seamlessly connected, wherever they are.” Gogia further emphasizes the broader implications, noting, “What also stands out is the partnership behind this — Airtel working with Meta and center3 signals a broader shift. India is no longer just a consumer of global connectivity. We’re finally shaping the routes, not just using them.”

Read More »

Former Arista COO launches NextHop AI for customized networking infrastructure

Sadana argued that unlike traditional networking, where an IT person can just plug a cable into a port and it works, AI networking requires intricate, custom solutions. The core challenge is creating highly optimized, efficient networking infrastructure that can support massive AI compute clusters with minimal inefficiencies.

How NextHop is looking to change the game for hyperscale networking

NextHop AI is working directly alongside its hyperscaler customers to develop and build customized networking solutions. “We are here to build the most efficient AI networking solutions that are out there,” Sadana said. More specifically, Sadana said that NextHop is looking to help hyperscalers in several ways, including:

  • Compressing product development cycles: “Companies that are doing things on their own can compress their product development cycle by six to 12 months when they partner with us,” he said.
  • Exploring multiple technological alternatives: Sadana noted that hyperscalers might try to build on their own and will often only be able to explore one or two alternative approaches. With NextHop, Sadana said his company will enable them to explore four to six different alternatives.
  • Achieving incremental efficiency gains: At the massive cloud scale that hyperscalers operate, even an incremental one percent improvement can have an outsized outcome. “You have to make AI clusters as efficient as possible for the world to use all the AI applications at the right cost structure, at the right economics, for this to be successful,” Sadana said. “So we are participating by making that infrastructure layer a lot more efficient for cloud customers, or the hyperscalers, which, in turn, of course, gives the benefits to all of these software companies trying to run AI applications in these cloud companies.”

Technical innovations: beyond traditional networking

In terms of what the company is actually building now, NextHop is developing specialized network switches

Read More »

Microsoft abandons data center projects as OpenAI considers its own, hinting at a market shift

A potential ‘oversupply position’

In a new research note, TD Cowen analysts reportedly said that Microsoft has walked away from new data center projects in the US and Europe, purportedly due to an oversupply of the compute clusters that power AI. This follows reports from TD Cowen in February that Microsoft had “cancelled leases in the US totaling a couple of hundred megawatts” of data center capacity. The researchers noted that the company’s pullback was a sign of it “potentially being in an oversupply position,” with demand forecasts lowered. OpenAI, for its part, has reportedly discussed purchasing billions of dollars’ worth of data storage hardware and software to increase its computing power and decrease its reliance on hyperscalers. This fits with its planned Stargate Project, a $500 billion, US President Donald Trump-endorsed initiative to build out its AI infrastructure in the US over the next four years. Based on the easing of exclusivity between the two companies, analysts say these moves aren’t surprising. “When looking at storage in the cloud — especially as it relates to use in AI — it is incredibly expensive,” said Matt Kimball, VP and principal analyst for data center compute and storage at Moor Insights & Strategy. “Those expenses climb even higher as the volume of storage and movement of data grows,” he pointed out. “It is only smart for any business to perform a cost analysis of whether storage is better managed in the cloud or on-prem, and moving forward in a direction that delivers the best performance, best security, and best operational efficiency at the lowest cost.”

Read More »

PEAK:AIO adds power, density to AI storage server

There is also the fact that many people working with AI are not IT professionals, such as professors, biochemists, scientists, doctors and clinicians, and they don’t have a traditional enterprise IT department or a data center. “It’s run by people that wouldn’t really know, nor want to know, what storage is,” he said. While the new AI Data Server is a Dell design, PEAK:AIO has worked with Lenovo, Supermicro and HPE as well as Dell over the past four years, offering to convert their off-the-shelf storage servers into hyper-fast, inexpensive, AI-specific storage servers that work with Nvidia protocols like NVLink, along with NFS and NVMe over Fabric. It also greatly increased storage capacity by going with 61TB drives from Solidigm. SSDs from the major server vendors typically maxed out at 15TB, according to the vendor. PEAK:AIO competes with VAST, WekaIO, NetApp, Pure Storage and many others in the growing AI workload storage arena. PEAK:AIO’s AI Data Server is available now.

Read More »

SoftBank to buy Ampere for $6.5B, fueling Arm-based server market competition

SoftBank’s announcement suggests Ampere will collaborate with other SBG companies, potentially creating a powerful ecosystem of Arm-based computing solutions. This collaboration could extend to SoftBank’s numerous portfolio companies, including Korean/Japanese web giant LY Corp, ByteDance (TikTok’s parent company), and various AI startups. If SoftBank successfully steers its portfolio companies toward Ampere processors, it could accelerate the shift away from x86 architecture in data centers worldwide.

Questions remain about Arm’s server strategy

The acquisition, however, raises questions about how SoftBank will balance its investments in both Arm and Ampere, given their potentially competing server CPU strategies. Arm’s recent move to design and sell its own server processors to Meta signaled a major strategic shift that already put it in direct competition with its own customers, including Qualcomm and Nvidia. “In technology licensing where an entity is both provider and competitor, boundaries are typically well-defined without special preferences beyond potential first-mover advantages,” Kawoosa explained. “Arm will likely continue making independent licensing decisions that serve its broader interests rather than favoring Ampere, as the company can’t risk alienating its established high-volume customers.” Industry analysts speculate that SoftBank might position Arm to focus on custom designs for hyperscale customers while allowing Ampere to dominate the market for more standardized server processors. Alternatively, the two companies could be merged or realigned to present a unified strategy against incumbents Intel and AMD. “While Arm currently dominates processor architecture, particularly for energy-efficient designs, the landscape isn’t static,” Kawoosa added. “The semiconductor industry is approaching a potential inflection point, and we may witness fundamental disruptions in the next 3-5 years — similar to how OpenAI transformed the AI landscape. SoftBank appears to be maximizing its Arm investments while preparing for this coming paradigm shift in processor architecture.”

Read More »

Nvidia, xAI and two energy giants join genAI infrastructure initiative

The new AIP members will “further strengthen the partnership’s technology leadership as the platform seeks to invest in new and expanded AI infrastructure. Nvidia will also continue in its role as a technical advisor to AIP, leveraging its expertise in accelerated computing and AI factories to inform the deployment of next-generation AI data center infrastructure,” the group’s statement said. “Additionally, GE Vernova and NextEra Energy have agreed to collaborate with AIP to accelerate the scaling of critical and diverse energy solutions for AI data centers. GE Vernova will also work with AIP and its partners on supply chain planning and in delivering innovative and high efficiency energy solutions.” The group claimed, without offering any specifics, that it “has attracted significant capital and partner interest since its inception in September 2024, highlighting the growing demand for AI-ready data centers and power solutions.” The statement said the group will try to raise “$30 billion in capital from investors, asset owners, and corporations, which in turn will mobilize up to $100 billion in total investment potential when including debt financing.” Forrester’s Nguyen also noted that the influence of two of the new members — xAI, owned by Elon Musk, along with Nvidia — could easily help with fundraising. Musk, “with his connections, he does not make small quiet moves,” Nguyen said. “As for Nvidia, they are the face of AI. Everything they do attracts attention.” Info-Tech’s Bickley said that the astronomical dollars involved in genAI investments are mind-boggling. And yet even more investment is needed — a lot more.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »