
Big tech must stop passing the cost of its spiking energy needs onto the public

Julianne Malveaux is an MIT-educated economist, author, educator and political commentator who has written extensively about the critical relationship between public policy, corporate accountability and social equity. 

The rapid expansion of data centers across the U.S. is not only reshaping the digital economy but also threatening to overwhelm our energy infrastructure. These data centers aren’t just heavy on processing power; they’re heavy on the power grid we all share. For Americans, this could mean serious sticker shock on their energy bills.

Across the country, many households are already feeling the pinch as utilities ramp up investments in costly new infrastructure to power these data centers. With costs almost certain to rise as more data centers come online, state policymakers and energy companies must act now to protect consumers. We need new policies that ensure the cost of these projects is carried by the wealthy big tech companies that profit from them, not by regular energy consumers such as family households and small businesses.

According to an analysis from consulting firm Bain & Co., data centers could require more than $2 trillion in new energy resources globally, with U.S. demand alone potentially outpacing supply in the next few years. This unprecedented growth is fueled by the expansion of generative AI, cloud computing and other tech innovations that require massive computing power. Bain’s analysis warns that, to meet this energy demand, U.S. utilities may need to boost annual generation capacity by as much as 26% by 2028 — a staggering jump compared to the 5% yearly increases of the past two decades.

This poses a threat to energy affordability and reliability for millions of Americans. Bain’s research estimates that capital investments required to meet data center needs could incrementally raise consumer bills by 1% each year through 2032. That increase may seem small at first, but it can add up quickly for households already struggling with high energy prices. As utilities attempt to pay for these upgrades, the burden could fall on consumers’ shoulders unless policies are enacted to make the tech companies driving this demand handle the costs.
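To see how Bain’s 1% figure compounds over time, here is a minimal sketch assuming a hypothetical $150 monthly bill and eight annual increases (2025 through 2032). The starting bill and year range are illustrative assumptions, not Bain figures:

```python
# Compounding a 1% annual increase on a hypothetical $150 monthly bill.
start_bill = 150.00
bill = start_bill
for year in range(2025, 2033):  # eight annual increases, 2025 through 2032
    bill *= 1.01

increase = bill - start_bill
print(f"Monthly bill by 2032: ${bill:.2f} (+${increase:.2f}/month)")
print(f"Cumulative increase: {bill / start_bill - 1:.1%}")
```

Even at 1% per year, the increases stack to roughly 8% over the period, on top of any other rate hikes a household faces.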

One example comes from Ohio, where the boom in data centers means central Ohio is on track to use as much power as Manhattan by 2030. There, the state’s largest energy company, American Electric Power, has proposed a new rate structure for data centers that requires them to pay for at least 85% of their predicted energy demand every month, even if they use less, ensuring the utility won’t need to pass the costs of expanded infrastructure on to consumers.
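The arithmetic of such a minimum-demand provision is simple; this sketch shows how the floor guarantees the utility revenue even in low-usage months. The energy rate and contracted volumes here are illustrative assumptions — only the 85% floor comes from the proposal:

```python
# Sketch of a minimum-demand billing rule like the one AEP proposed:
# a data center is billed for at least 85% of its contracted demand,
# even in months when actual usage falls below that floor.
# The $/MWh rate and contracted volumes are hypothetical.

RATE_PER_MWH = 70.0          # illustrative energy rate, not an AEP figure
MIN_DEMAND_SHARE = 0.85      # floor taken from the AEP proposal

def monthly_charge(contracted_mwh: float, actual_mwh: float) -> float:
    """Bill the greater of actual usage or 85% of contracted demand."""
    billable = max(actual_mwh, MIN_DEMAND_SHARE * contracted_mwh)
    return billable * RATE_PER_MWH

# A facility that contracted for 10,000 MWh but used only 6,000 MWh
# is still billed on 8,500 MWh under the floor.
print(monthly_charge(10_000, 6_000))
# Usage above the floor is billed as-is (9,200 MWh here).
print(monthly_charge(10_000, 9_200))
```

The effect is that the customer who asked for the capacity, rather than the wider rate base, carries the fixed cost of building it.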

States could also consider passing legislation to impose a temporary tax on new high-usage energy consumers, like data centers, cryptocurrency miners and chip manufacturing facilities. Those tax dollars could go directly into an energy relief fund, which could be used to offset increased energy costs for current consumers, either through a tax rebate or by funding the construction and maintenance of new infrastructure, so those costs aren’t passed down to consumers in the first place.

There’s opportunity here, too, for policymakers, utilities and data centers to join forces and help drive the clean energy revolution. Policymakers could provide incentives for data centers that adopt energy-saving measures or include renewable energy sources to offset the burden on utilities and consumers. By encouraging tech companies to produce a certain percentage of their own energy on-site, states can reduce the need for costly grid expansions while promoting green energy initiatives.

Tech companies have already pushed back against efforts to implement such policies, with a coalition of data center backers that includes Amazon, Microsoft and Meta claiming in Ohio that requiring them to pay higher rates is discriminatory and unprecedented, and that it could discourage future investment in Ohio.

The reality, however, is that these tech companies can and should carry the burden of the new energy infrastructure they’re demanding. Amazon’s net earnings for 2023 were $30.4 billion. Microsoft brought in $72.4 billion. Meta? $39 billion. Passing on a fraction of these profits to fund the infrastructure that drives this wealth is a small price to pay to ensure fair treatment of energy consumers.

The massive energy demand created by these new data centers is unprecedented. And that’s exactly why it’s important for policymakers and utilities to take action now, and set a precedent that protects average consumers by requiring tech companies to pay their fair share for the electricity they need.

If left unaddressed, the unchecked growth of data centers will continue to threaten energy security and affordability for millions of Americans. States and energy companies must adopt policies to prevent the burden of rising electricity demands and prices from falling disproportionately on everyday energy consumers. By ensuring that tech companies contribute fairly to the infrastructure that sustains them, we can build a more sustainable and equitable energy future for all.


Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


AI plus digital twins could be the pairing enterprises need

3. Link AI with transactional workflows

The third point of cooperation is the linkage of AI with transactional workflows. Companies already have applications that take orders, ship goods, move component parts around, and so forth. It’s these applications that currently drive the commercial side of a business, but taking an

Read More »

Asia-Pacific hits 50% IPv6 capability

Globally, the transition to IPv6 is advancing steadily, with 34% of networks now IPv6-capable. Not all IPv6-capable networks are using it by default, though: capability means the system can use IPv6 — not that it prefers it. Still, the direction is clear. Countries like Vietnam (60% of networks IPv6-capable), Japan
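The capability-versus-default distinction the excerpt draws can be checked on any host with the Python standard library. This sketch tests whether the local stack supports IPv6 at all, then looks at which address family the resolver lists first as a rough proxy for preference; example.com is just an illustrative dual-stacked name:

```python
import socket

# Capability: does the local stack support IPv6 at all?
print("IPv6 capable:", socket.has_ipv6)

# Preference (roughly): which family does getaddrinfo() list first
# for a dual-stacked host? example.com is an illustrative name only.
try:
    infos = socket.getaddrinfo("example.com", 443, proto=socket.IPPROTO_TCP)
    first_family = infos[0][0]
    print("Preferred family:", "IPv6" if first_family == socket.AF_INET6 else "IPv4")
except socket.gaierror as exc:
    print("Resolution failed:", exc)
```

A machine can report `IPv6 capable: True` and still prefer IPv4, which is exactly the gap between the capability statistic and real-world IPv6 traffic.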

Read More »

Valero to shutter at least one of its California refineries

Lengthening legislative shadow

Valero’s proposal for the Benicia refinery follows Phillips 66 Co.’s October 2024 confirmation that it will permanently cease conventional crude oil processing operations at its 138,700-b/d dual-sited refinery in Los Angeles by yearend 2025 amid the operator’s determination that market conditions will prevent the long-term viability and competitiveness of the manufacturing site (OGJ Online, Oct. 17, 2024). Announcement of the Los Angeles refinery closure came on the heels of California Gov. Gavin Newsom’s Oct. 14, 2024, signing of legislation aimed at making the state’s oil refiners manage California’s gasoline supplies more responsibly to prevent price spikes at the pump. The legislation specifically provides the CEC more tools for requiring petroleum refiners to backfill supplies and plan for maintenance downtime as a means of helping prevent gasoline-price spikes that cost Californians upwards of $2 billion in 2023, Newsom’s office said. Introduced in early September 2024 in response to Newsom’s late-August proclamation convening the state’s legislature into special session “to take on Big Oil’s gas-price spikes,” the new legislation allows the state to require that refiners maintain a minimum inventory of fuel to avoid supply shortages that “create higher gasoline prices for consumers and higher profits for the industry,” the governor’s office said. While Valero did not reveal in its April 2025 statement any specific reasons for its decision on the Benicia refinery, in the wake of the market announcement, Brian W. Jones (R-Calif.) and Vince Fong (R-Calif.) both attributed the pending refinery closure to the legislation and policies heralded by Newsom and state regulatory departments. “Valero intends to shut down its Benicia refinery thanks to Newsom and radical Democrats’ extreme regulations and hostile business climate,” Jones said on Apr. 16, citing Phillips 66’s decision on the Los Angeles refinery and Chevron Corp.’s relocation of headquarters from San Ramon, Calif.,

Read More »

US BOEM begins process to replace current OCS lease sale plan

US Interior Secretary Doug Burgum directed the Bureau of Ocean Energy Management (BOEM) to start developing a plan for offshore oil and gas lease sales on the US Outer Continental Shelf (OCS), including likely sales in a newly established 27th OCS planning area offshore Alaska in the High Arctic. The 11th National OCS program will replace the current 10th Program (2024–29), which includes only three oil and gas lease sales over 5 years—all in the Gulf, Burgum said in a release Apr. 18. BOEM will work to complete those sales while it begins to develop the new program, he said. Earlier this month, Burgum directed BOEM to move forward with a lease sale in the Gulf, starting with publication in June 2025 of a notice of sale. BOEM will soon publish in the Federal Register a request for information and comments that starts a 45-day public comment period, the initial step in the multi-year planning process that details lease sales BOEM will hold in the coming years. The Federal Register notice will also outline BOEM’s new jurisdiction over the High Arctic planning area offshore Alaska, as well as new boundaries for existing planning areas, Interior noted. The request for information will not propose a specific timeline for future lease sales or outline the potential sale areas. Instead, it invites stakeholders to provide recommendations for leasing opportunities and raise concerns about offshore leasing. BOEM manages 3.2 billion acres in the OCS, including 2,227 active oil and gas leases covering about 12.1 million acres in OCS regions. Of these, 469 leases are currently producing oil and gas. BOEM earlier in April increased its estimate of oil and gas reserves in the US Gulf’s OCS by 1.30 billion boe from its 2021 estimate, bringing the total reserve estimate to 7.04 billion

Read More »

Chevron adds Gulf of Mexico production with Ballymore subsea tieback startup

Chevron Corp. has started producing oil and natural gas from the Ballymore subsea tieback in the deepwater Mississippi Canyon area of the US Gulf of Mexico, the company said in a release Apr. 21. Chevron sanctioned the Ballymore project in 2018, four years after its discovery. The discovery well, drilled to a final depth of 8,898 m about 120 km offshore Louisiana in 2,000 m of water, encountered 205 m of net oil pay in a high-quality Jurassic Norphlet reservoir (OGJ Online, Jan. 31, 2018; May 12, 2022). Ballymore, the operator’s first development in the Norphlet trend of the Gulf, has been developed as a three-mile subsea tieback to the existing Chevron-operated Blind Faith semisubmersible platform and is expected to produce up to 75,000 gross b/d of oil and 50 MMcfd of gas.

Read More »

Rystad Energy: Extended trade war could wipe out half of China’s anticipated oil demand growth

Uncertainties surrounding US President Donald Trump’s tariff policies disrupted the markets’ initial trajectory and raised concerns about the broader economy and demand prospects. According to Rystad Energy analysis, a prolonged trade war could eliminate up to half of China’s anticipated 2025 oil demand growth of 180,000 b/d if downside risks to the country’s outlook materialize. With tensions between the US and China continuing to simmer, a potential tit-for-tat tariff war is expected to further pressure oil prices, which have already shown signs of weakening, dragging down product prices as well, according to Rystad Energy. China’s first-quarter gross domestic product (GDP) growth beat expectations at 5.4%, and other macroeconomic indicators, including exports, the Purchasing Managers’ Index (PMI), and retail sales, also showed signs of growth. Strong economic growth in the first quarter reflected last September’s stimulus gradually taking effect. Assuming trade relations between China and the US remain disrupted, a mild scenario is likely for this year, with China’s GDP growth slowing by 1 percentage point, Rystad Energy said. Slower GDP growth is expected to reduce Chinese oil demand growth by 0.47 percentage points, given the economy’s continued reliance on industry and exports. However, with the government poised to introduce additional stimulus measures in response to the trade war, there is potential upside that could help counterbalance the negative effects and lessen the decline in oil demand growth. Overall, the current forecast suggests a reduction of 90,000 b/d in oil demand growth, down from an initial estimate of 180,000 b/d. “The biggest loss is in diesel and biggest gain in naphtha – offsetting some demand loss. Petrochemical and diesel demand will bear the most downside pressure because of the trade war, as consumer spending and industry prosperity and industry-related transportation will be damaged by potential trade decline,” said

Read More »

TotalEnergies to shutter ethylene unit at Antwerp platform

TotalEnergies SE plans to permanently shut down the older of two ethylene-producing flexible steam crackers at the operator’s 338,000-b/d integrated refining and petrochemical platform in Antwerp, Belgium. The decision to shutter the steam cracker—which is not integrated into the site’s downstream polymer production—follows nonrenewal of an offtake agreement for the unit’s ethylene production by a long-term, third-party customer by yearend 2027, leaving no outlets for the cracker’s ethylene output, TotalEnergies said on Apr. 22. Given the cancelled offtake contract amid ongoing projections for a sustained oversupply of ethylene to the European market, TotalEnergies said it will cease operating Antwerp’s elder steam cracker by yearend 2027 to focus on operation of the site’s newest cracker, which currently produces ethylene feedstock for the company’s existing Belgian petrochemical plants both at Antwerp and Feluy. The Feluy site consumes ethylene to produce high-performance polymers and includes production units for polypropylene, polyethylene, and expanded polystyrenes. Contingent upon a legally required employee consultation and notification process, which TotalEnergies will begin in late April with representatives of the Antwerp platform’s employees, the proposed cracker shutdown and reconfiguration project would occur without any layoffs of the 253 potentially impacted employees, according to the operator. Each of the affected employees will be “offered a solution aligned with their personal situation: retirement or an internal transfer to another position based at the Antwerp site,” TotalEnergies said. The Antwerp platform’s two steam crackers—the older of which was upgraded in 2017 to flexibly process ethane, butane, or naphtha as feedstock—have combined capacity to produce 1.1 million tonnes/year (tpy) of ethylene (OGJ Online, July 7, 2017). The planned unit closure also aligns with TotalEnergies’ long-term transformational strategy of gradually pivoting operations away from its traditional oil and gas history, in line with its aim to achieve carbon neutrality across the entirety of its businesses

Read More »

WoodMac: Oil sector investment at risk amid tariff uncertainty, price volatility

Wood Mackenzie’s report paints a picture of an industry caught between increasing certainty about longer-term demand for its products and excess supply and uncertainty in the near term. At current prices near $65/bbl, margins are dented but not enough to force dramatic budget or development plan changes, Wood Mackenzie said. Companies are likely to delay growth capex and discretionary spending to preserve financial leverage and shareholder distributions, an approach made possible by increased portfolio and balance sheet flexibility built in since 2021. Meantime, the supply chain is bracing for impact. The service sector is preparing for potentially reduced activity and downward pressure on costs. But tariffs could drive up the sector’s input costs, forcing service companies to choose between market share and margin erosion in well-supplied markets. Depending on how the situation evolves, tariffs could increase costs in the US by up to 6% onshore and 15% offshore, according to the report. Moreover, near-term oil demand and OPEC+ market strategy concerns have caused oil and gas company share prices to fall. Capital allocation decisions are more difficult when prices, costs, and cash flow look volatile. Consequently, Wood Mackenzie anticipates a year-on-year decline in global upstream development spend for the first time since 2020. “US tight oil operators would be among the first to curb investment if prices slide further, given their inherent activity flexibility,” said Ryan Duman, director of Americas upstream at Wood Mackenzie. “But international projects are also feeling the pinch, with some already facing delays and budget revisions. More significant budgetary action would occur if oil settled below $60/bbl for a month or longer. A drop towards or below $50/bbl would prompt decisive action from most operators,” he said.

Read More »

TCS launches SovereignSecure Cloud aligned with India’s data localization needs

Rawat noted that, for emerging markets in Africa, Southeast Asia, and Latin America, where concerns over data sovereignty and dependence on US or China-based cloud providers are growing, the new TCS offering could act as a blueprint for reducing reliance on global cloud giants. India already has many sovereign cloud deployments, but experts believe their capabilities have been limited. While National Informatics Center’s MeghRaj, C-DAC’s PARAMShavak, and hyperscaler-hosted Indian regions (e.g., AWS Hyderabad, Oracle with Airtel) address localization, they lack interoperability and sovereign enforcement, said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. And the present options in India, be they from hyperscalers or data centers, aren’t truly indigenous. “While all leading cloud services, such as AWS, Google, Microsoft, and others, also conform to the laws prescribed, like data localisation for instance, the tech stack isn’t indigenously designed and developed,” said Faisal Kawoosa, chief analyst and co-founder at Techarc. India’s data center players, too, are exploring similar cloud solutions. Late last year, Yotta Data Services acquired IndiQus Technologies to fill the void in made-in-India cloud and AI platforms. Earlier this week, NxtGen announced the launch of its sovereign cloud, built to meet the demands of the banking, financial services, and insurance (BFSI) sector. Bharti Airtel’s B2B arm, in collaboration with Google, is also planning to launch an AI-enabled sovereign cloud solution. But the true test will lie in scaling AI capabilities effectively and proving cost and performance competitiveness against established global hyperscaler ecosystems, said Prabhu Ram, VP for the Industry Research Group at CyberMedia Research.

Read More »

Slowdown in AWS data center leasing plans poses little threat to CIOs

Oracle, according to Westfall, is committed to investing $10 billion in 2025 to build 100 new data centers and expand 66 existing ones, aiming to double its capacity this year. Likewise, Google is investing $75 billion in 2025 for data center construction, focusing on AI and cloud infrastructure, with projects such as a $600 million facility in Mesa, Arizona, and a $2 billion data center in Fort Wayne, Indiana, underway, Westfall said. Meta, too, plans to spend up to $65 billion in 2025, a sizable bump up from $40 billion in 2024, primarily for data center expansion to support AI (Llama models, Meta AI) and metaverse workloads, Westfall added. However, these expansion plans will not result in the relatively smaller players catching up with AWS and Microsoft. “For smaller players like Google and Oracle, catching up with AWS and Microsoft would require historically large capital investments that likely aren’t justified by their current growth rates,” Alletto said.

Read More »

TSMC targets AI acceleration with A14 process and ‘System on Wafer-X’

Nvidia’s flagship GPUs currently integrate two chips, while its forthcoming Rubin Ultra platform will connect four. “The SoW-X delivers wafer-scale compute performance and significantly boosts speed by integrating multiple advanced compute SoC dies, stacked HBM memory, and optical interconnects into a single package,” said Neil Shah, partner and co-founder at Counterpoint Research. “This approach reduces latency, improves power efficiency, and enhances scalability compared to traditional multi-chip setups — giving enterprises and hyperscalers AI servers capable of handling future workloads faster, more efficiently, and in a smaller footprint.” This not only boosts capex savings in the long run but also opex savings in terms of energy and space. “Wafer-X technology isn’t just about bigger chips — it’s a signal that the future of AI infrastructure is being redesigned at the silicon level,” said Abhivyakti Sengar, practice director at Everest Group. “By tightly integrating compute, memory, and optical interconnects within a single wafer-scale package, TSMC targets the core constraints of AI: bandwidth and energy. For hyperscale data centers and frontier model training, this could be a game-changer.”

Priorities for enterprise customers

For enterprises investing in custom AI silicon, choosing the right foundry partner goes beyond performance benchmarks. It’s about finding a balance between cutting-edge capabilities, flexibility, and cost. “First, enterprise buyers need to assess manufacturing process technologies (such as TSMC’s 3nm, 2nm, or Intel’s 18A) to determine if they meet AI chip performance and power requirements, along with customization capabilities,” said Galen Zeng, senior research manager for semiconductor research at IDC Asia Pacific. “Second, buyers should evaluate advanced packaging abilities; TSMC leads in 3D packaging and customized packaging solutions, suitable for highly integrated AI chips, while Intel has advantages in x86 architecture. Finally, buyers should assess pricing structures.”

Read More »

Cloudbrink pushes SASE boundaries with 300 Gbps data center throughput

Those core components are functionally table stakes and don’t really serve to differentiate Cloudbrink against its myriad competitors in the SASE market. Where Cloudbrink looks to differentiate is at a technical level through a series of innovations including:

Distributed edge architecture: The company has decoupled software from hardware, allowing its platform to run across 800 data centers by leveraging public clouds, telco networks and edge computing infrastructure. This approach reduces network latency from 300 milliseconds to between 7 and 20 milliseconds, the company says. This density dramatically improves TCP performance and responsiveness.

Protocol optimization: Cloudbrink developed its own algorithms for SD-WAN optimization that bring enterprise-grade reliability to last mile links. These algorithms significantly improve efficiency on consumer broadband connections, enabling enterprise-grade performance over standard internet links.

Integrated security stack: “We’ve been able to produce secure speeds at line rate on our platform by bringing security to the networking stack itself,” Mana noted. Rather than treating security as a separate overlay that degrades performance, Cloudbrink integrates security functions directly into the networking stack.

The solution consists of three core components: client software for user devices, a cloud management plane, and optional data center connectors for accessing internal applications. The client intelligently connects to multiple edge nodes simultaneously, providing redundancy and application-specific routing optimization.

Cloudbrink expands global reach

Beyond its efforts to increase throughput, Cloudbrink is also growing its global footprint. Cloudbrink today announced a global expansion through new channel agreements and the opening of a Brazil office to serve emerging markets in Latin America, Korea and Africa. The expansion includes exclusive partnerships with WITHX in Korea, BAMM Technologies for Latin America distribution and OneTic for African markets. The company’s software-defined FAST (Flexible, Autonomous, Smart and Temporary) Edges technology enables rapid deployment of points of presence by leveraging existing infrastructure from multiple

Read More »

CIOs could improve sustainability with data center purchasing decisions — but don’t

CIOs can drive change

Even though it’s difficult to calculate an organization’s carbon footprint, CIOs and IT purchasing leaders trying to reduce their environmental impact can influence data center operators, experts say. “Customers have a very large voice,” Seagate’s Feist says. “Don’t underestimate how powerful that CIO feedback loop is. The large cloud accounts are customer-obsessed organizations, so they listen, and they react.” While DataBank began using renewable energy years ago, customer demand can push more data center operators to follow suit, Gerson says. “For sure, if there is a requirement to purchase renewable power, we are going to purchase renewable power,” she adds.

Read More »

Copper-to-optics technology eyed for next-gen AI networking gear

Broadcom’s demonstration and a follow-up session explored the benefits of further developing CPC, such as reduced signal integrity penalties and extended reach, through channel modeling and simulations, Broadcom wrote in a blog about the DesignCon event. “Experimental results showed successful implementation of CPC, demonstrating its potential to address bandwidth and signal integrity challenges in data centers, which is crucial for AI applications,” Broadcom stated. In addition to the demo, Broadcom and Samtec also authored a white paper on CPC that stated: “Co-packaged connectivity (CPC) provides the opportunity to omit loss and reflection penalties from the [printed circuit board (PCB)] and the package. When high-speed I/O is cabled from the top of the package, advanced PCB materials are not necessary. Losses from package vertical paths and PCB routing can be transferred to the longer reach of cables,” the authors stated. “As highly complex systems are challenged to scale the number of I/O and their reach, co-packaged connectivity presents opportunity. As we approach 224G-PAM4 [which uses optical techniques to support 224 Gigabits per second data rates per optical lane] and above, system loss and dominating noise sources necessitate the need to re-consider that which has been restricted in the back of the system architect’s mind for years: What if we attached to the package?” At OFC, Samtec demonstrated its Si-Fly HD co-packaged cable assemblies and Samtec Flyover Octal Small Form-factor Pluggable (OSFP) over the Samtec Eye Speed Hyper Low Skew twinax copper cable. Flyover is Samtec’s proprietary way of addressing signal integrity and reach limitations of routing high-speed signals through traditional printed circuit boards (PCBs). “This evaluation platform incorporates Broadcom’s industry-leading 200G SerDes technology and Samtec’s co-packaged Flyover technology. Si-Fly HD CPC offers the industry’s highest footprint density and robust interconnect which enables 102.4T (512 lanes at 200G) in a 95 x

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »