Load forecasts from data centers risk falling into irrational exuberance territory

Todd Snitchler is president and CEO of the Electric Power Supply Association.

In 1996, ahead of the dot-com bubble, Alan Greenspan famously warned that the stock market was experiencing “irrational exuberance,” describing the disconnect that arises when asset prices climb beyond their fundamentals. In many ways, the race to power our artificial intelligence-driven future is showing the same signs. There is no doubt that the growth of AI will be driven or limited by how quickly we can ensure sufficient power for the expansion and operation of data centers, but the magnitude of the energy needed is far less certain than many seem to think.

PJM Interconnection’s recent load forecast indicates a significant increase in demand: summer peak demand is projected to rise by 42 GW and winter peak demand by 43 GW between now and 2039. The incremental growth expected by 2034 is also notable, with increases of 27 GW in summer and 30 GW in winter peak demand.

The North American Electric Reliability Corp. also released its 2024 Long-Term Reliability Assessment, which projects a 15% increase in summer peak demand and an 18% increase in winter peak demand over the next decade. While these timelines and figures vary, they all point to consistent, meaningful load growth in every season across large parts of the country.

There is also a long history of overestimating load growth, which can drive overbuilding and the addition of resources that exceed actual demand. A similar panic about load growth overwhelming the grid occurred in 1999, when worries about early data centers led to alarmed media stories and rosy investor forecasts for utilities. It didn’t happen. Microprocessors, computers and data storage saw huge gains in efficiency that kept pace with their gains in raw power, and the predicted load growth never materialized.

Comparing load projections with actual load growth since 2006 reveals that estimates are often wildly optimistic relative to the demand the system actually experienced over time.

What is the lesson we need to heed? This does not mean that load growth is not coming, or that we do not have an electricity supply issue that needs prompt attention. What it does mean is that we need to be judicious in our thinking, methodical in our planning, and focused on optimizing system performance.

A careful analysis of the availability of high-end processing chips over the next several years shows that they are already “sold out.” That supply constraint limits the ability of the AI industry and data centers to grow exponentially, overconsume energy, and stress the electrical grid to the point of failure.

This fact will help inform the speed and volume of likely data center development, regardless of the region of the country where that development is projected. Advancements in AI may also play a role in making the grid itself more efficient. And additional chip manufacturing capacity is constrained by the time it takes to build and bring online any new fabrication plants.

The tech industry has also acknowledged that demand is far from certain. In a recent filing in Ohio, Amazon Web Services expert Michael Fradette told regulators that asking companies to predict how much power data centers will need over the next decade with a high degree of accuracy is “unreasonable,” because consumption will depend on factors including customer demand and emerging technologies.

Too many families are already struggling with high energy prices. A historic buildout of new generation resources and a binge of utility spending to meet this forecasted demand would be prohibitively expensive for consumers.

Major investments will be needed, but it is incumbent on policymakers to be clear-eyed about the costs and consequences. State and national regulators should be careful to avoid overbuilding, eager to embrace cost-effective and market-based solutions, and committed to ensuring customers are not the ones carrying the burden of investments that don’t turn out to be needed.

Competitive power markets are a key tool at our disposal to ensure those risks and costs do not fall on consumers, because they shift risk from utility customers to shareholders. A more streamlined permitting system can also drastically reduce the cost and time required to build new infrastructure without adding to consumers’ bills.

America’s past experience with ambitious load forecasts should caution against overexuberance and instead encourage a rational, deliberate buildout of the resources needed to power the evolving American economy as we move deeper into the 21st century.
