Stay Ahead, Stay ONMINE

Toyota Motor North America lands $4.5M DOE grant for EV battery recycling

Dive Brief: Toyota Motor North America scored a $4.5 million Department of Energy grant to help advance electric vehicle battery recycling initiatives, the company announced last month. The funding will support a project led by the Toyota Research Institute of North America, whose goal is to develop an industry template for a reduce, reuse, recycle […]

Dive Brief:

  • Toyota Motor North America scored a $4.5 million Department of Energy grant to help advance electric vehicle battery recycling initiatives, the company announced last month.
  • The funding will support a project led by the Toyota Research Institute of North America, whose goal is to develop an industry template for a reduce, reuse, recycle battery facility of the future. 
  • “As it stands, this project and program will highlight avenues for everyone to rethink their approach to battery circularity, and help prioritize the extension of battery life, facilitate battery reuse, and reduce battery waste while unveiling the appropriate pathways to achieve such priorities,” Nik Singh, project leader and principal scientist at TRINA, said in the release. 

Dive Insight:

The DOE’s Advanced Research Projects Agency-Energy initiative provided funding to Toyota to support a circular domestic supply chain for EV batteries.

Singh and his team are leading the development of a robotic disassembly process for batteries in collaboration with Oak Ridge National Laboratory, the National Renewable Energy Laboratory and energy technology firm Baker Hughes’ inspection technology product line, Waygate Technologies.

The project’s goal is to resolve bottlenecks in the current battery supply chain circularity, according to the release. This includes the automation of battery pack disassembly, data-driven battery classification and addressing cell degradation.

The DOE effort is driven by the growing need for EV battery recycling as electric vehicle usage becomes more widespread. Automakers and suppliers have committed billions of dollars in private investment to take advantage of government aid to transition to the technology and foster its adoption.  

Researchers will also develop advanced diagnostic tools and a refabrication method for the recycling of battery cells into new energy systems. 

As end-of-life and battery scrap volumes increase from rising global EV adoption, a new approach is needed to extend the useful life of many standard battery pack components, researchers said. 

“We will generate processes to streamline reusing and refurbishing valuable battery cells and modules from end-of-life packs, without having to scan every single battery pack every single time,” said Marm Dixit, who is co-leading Oak Ridge National Laboratory’s contributions to the project. “By extending the life of the battery components, we reduce their total emissions per mile.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Chevron Joins TotalEnergies in New Nigerian Exploration Blocks

Chevron Corp has signed a deal to acquire 40 percent in Petroleum Prospecting License (PPL) 2000 and PPL 2001 offshore Nigeria from TotalEnergies SE. TotalEnergies will retain operatorship with a 40 percent interest. Local player South Atlantic Petroleum Ltd owns 20 percent. “This new joint venture aims at derisking and

Read More »

Four things AWS needs to fix at re:Invent this week

When it comes to new AI analytics services from AWS, CIOs can expect more of the same, said David Linthicum, independent consultant and retired chief cloud strategy officer at Deloitte Consulting. “Realistically, they can expect AWS to keep integrating its existing services; the key test will be whether this shows

Read More »

Enterprises run into roadblocks with AI implementations

CompTIA estimates a 37% weighted average adoption rate of AI across respondents, but despite the widespread AI adoption, AI skills training strategies remain reactive rather than proactive. Only one in three companies currently mandates AI training for staff, though that figure will change as 85% of respondents are either already

Read More »

Shell, Equinor Launch UK North Sea JV

Equinor ASA and Shell PLC have completed the combination of their oil and gas operations on the United Kingdom’s side of the North Sea. Launched Monday, Adura, the 50-50 joint venture, “will be the UK North Sea’s largest independent producer”, Norway’s majority state-owned Equinor said in an online statement. Adura includes Equinor’s 29.89 percent stake in the CNOOC Ltd-operated Buzzard field, which started production 2007; an operating interest of 65.11 percent in Mariner, online since 2019; and an 80 percent operating stake in Rosebank, expected to come onstream 2026. Shell will contribute its 27.97 percent ownership in BP PLC-operated Clair, which began production 2005; a 50 percent operating stake in Gannet, started up 1992; a 100 percent stake in Jackdaw, for which Shell is seeking new consent following a court nullification; a 21.23 percent operating stake in Nelson, which started production 1994; a 50 percent operating stake in Penguins, which started production 2003; a 92.52 percent operating stake in Pierce, which started production 1999; a 44.9 percent stake in BP-operated Schiehallion, which started production 1998; a 55.5 operating stake in Shearwater, which started production 2000; and a 100 percent stake in Victory, started up earlier this year. Adura expects to produce over 140,000 barrels of oil equivalent a day in 2026, and also has several exploration licenses, Equinor said. “Equinor will retain ownership of its cross-border assets, Utgard, Barnacle and Statfjord and offshore wind portfolio including Sheringham Shoal, Dudgeon, Hywind Scotland and Dogger Bank”, Equinor said. “It will also retain the hydrogen, carbon capture and storage, power generation, battery storage and gas storage assets. “Shell UK Ltd will retain ownership of its interests and projects that are part of the UK SEGAL system, namely Fife NGL Plant, St Fergus Gas Terminal and the Braefoot Bay facility, and in the Bacton

Read More »

Energy Department Announces $134 Million in Funding to Strengthen Rare Earth Element Supply Chains, Advancing American Energy Independence

WASHINGTON—The U.S. Department of Energy’s (DOE) Office of Critical Minerals and Energy Innovation (CMEI) today announced a Notice of Funding Opportunity (NOFO) for up to $134 million to enhance domestic supply chains for rare earth elements (REEs). Through this funding, DOE will support projects that demonstrate the commercial viability of recovering and refining REEs from unconventional feedstocks including mine tailings, e-waste, and other waste materials. These efforts will reduce America’s dependence on foreign sources, strengthen national security, and promote American energy independence.       “For too long, the United States has relied on foreign nations for the minerals and materials that power our economy,” said U.S. Secretary of Energy Chris Wright. “We have these resources here at home, but years of complacency ceded America’s mining and industrial base to other nations. Thanks to President Trump’s leadership, we are reversing that trend, rebuilding America’s ability to mine, process, and manufacture the materials essential to our energy and economic security.”  This funding opportunity stems from DOE’s Office of Critical Minerals and Energy Innovation’s Rare Earth Demonstration Facility program, which is designed to demonstrate full-scale integrated rare earth extraction and separation facilities within the United States. This NOFO follows the Department’s Notice of Intent released in August. REEs, such as Praseodymium, Neodymium, Terbium and Dysprosium, are vital components in advanced manufacturing, defense systems, and high-performance magnets used in power generation and electric motors. By investing in domestic REE recovery and processing, DOE is working to secure America’s energy independence, strengthen economic competitiveness, and ensure long-term resilience in the nation’s supply chains.  A webinar with additional information on this funding opportunity will be held at 1:00 PM ET on December 9, 2025. The webinar can be joined here.  Non-binding, non-mandatory letters of intent are requested by December 10, 2025, at 5:00 PM ET to assist the Department in planning

Read More »

Crude Ends Higher Despite Glut Fears

Oil rose as a key pipeline linking Kazakh fields to Russia’s Black Sea coast halted loading after one of its three moorings was damaged amid Ukrainian attacks in the region over the weekend, while traders assessed potential US military operations in Venezuela alongside expectations for oversupply. West Texas Intermediate rose 1.3% to settle above $59 on Monday. The Caspian Pipeline Consortium carries most of Kazakhstan’s crude exports, which have averaged 1.6 million barrels a day so far this year. The mooring was severely damaged after the explosion, a person with knowledge of the matter said. CPC said “any further operations are impossible” at the mooring, in response to questions about the damage. Ukraine hasn’t commented on the incident, although it confirmed separate attacks on an oil refinery and tankers over the weekend as it ramps up strikes on Russian oil targets amid the nearly four-year old war. The infrastructure attacks come at a time when the global oil market is moving into what is expected to be a period of significant oversupply. Trend-following commodity trading advisers were 90% short on Monday, according to data from Bridgeton Research Group. Some shorter-term focused advisers bought on Monday as prices rose. The extremely bearish lean from algorithmic traders leaves the market prone to bigger spikes on bullish developments as most of these traders are trend-following in nature and amplify price moves. Oil prices are coming off a monthly drop, with futures under pressure from the prospect of a glut next year. Still, geopolitical tensions from Russia to Venezuela — where President Trump warned airspace should be considered closed over the weekend — are adding to the bullish risks for prices. The White House will hold a meeting about next steps on Venezuela on Monday evening, CNN reported. “While the outlook for the market

Read More »

Tullow Names Ex-Trafigura Executive as Chair

Tullow Oil Plc appointed former Trafigura Group executive Roald Goethe as chairman, while half the board quit as the company struggles with a mounting debt pile. The shakeup follows a 77% slump in the shares this year, with the stock sinking to a record-low last month as Tullow said it was exploring ways to refinance looming debt maturities. Goethe, who helped to build the West Africa trading desk at Trafigura, has served on Tullow’s board since 2023. He replaces Phuthuma Nhleko as chairman, while directors Genevieve Sangudi, Martin Greenslade and Mitchell Ingram also resigned with immediate effect. “The company intends to replace key positions on the board, whilst retaining a small, focused and aligned board going forward,” Tullow said Monday in a statement. “The significant reduction in the size of the board will result in a further reduction of Tullow’s cost base.” The shares rose as much as 1.9% at the open in London. The London-based oil and gas company, which made several significant African discoveries in the late 2000s, has struggled in recent years under the weight of huge borrowings. Last month, the firm raised its year-end net debt forecast to $1.2 billion from $1.1 billion. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Read More »

Harbour Energy to Cut 100 UK Jobs

Harbour Energy Plc, one of the largest independent oil and gas firms in the UK, expects to cut another 100 jobs after the government decided to keep a windfall tax on North Sea producers.  The Labour government last week said it plans to retain the Energy Profits Levy — introduced by the previous Conservative administration in 2022 — until March 2030. That was blow to oil and gas producers, which had been pushing for faster change to the tax to unlock investments, boost production and keep jobs.  “The future structure of our offshore workforce must adapt to reflect these realities,” Scott Barr, managing director of Harbour Energy’s UK business, said in an emailed statement. British offshore operations “will continue to struggle to compete for capital within our global portfolio, while the EPL remains,” he said. Harbour Energy, which completed the acquisition of Wintershall Dea’s non-Russian assets last year, operates in nine countries, including in Norway, Germany, Argentina, Mexico and North Africa. The company has already cut about 600 positions in the UK since the EPL was introduced, when energy prices soared following Russia’s full-scale invasion of Ukraine.  Many oil and gas companies, already suffering declines in production at mature fields in the British North Sea, have been reassessing their activities after the windfall tax was extended and increased. Last year’s EPL hike to 38% brought the headline tax rate for the oil and gas sector to 78%, making Britain less attractive for investment, according to producers. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Read More »

Utilities, regulators look to accelerate pilots to achieve speed-to-innovation

Listen to the article 13 min This audio is auto-generated. Please let us know if you have feedback. Utility use of innovations to manage challenges like load growth and affordability can be streamlined with smarter pilot project designs, new U.S. Department of Energy research found. Today’s pilots are often redundant, inconclusive and lack clear pathways to scale, a June Lawrence Berkeley National Laboratory report on pilot project designs concluded.  “With safety as a top utility priority, utilities are hesitant” about new technologies or methods, but “it is critical that utilities are able to quickly test good ideas,” the LBNL report said. Some utilities have started to move quickly, faced with the pressure of rising demand, especially from data centers, which threatens to outpace new generation and storage additions. Salt River Project’s May 3 demonstration of data center load flexibility using Emerald AI software has already led to an announced scale deployment in the PJM Interconnection system. Many utilities are pursuing ways to achieve this type of speed-to-innovation. “It is more imperative now to take innovative projects and pilots to scale quickly because customer adoption, expectations, and technology are evolving at an exponential pace,” said Chanel Parson, Southern California Edison’s director of clean energy and demand response. The faster utilities scale solutions, “the faster they can keep up,” she added. Utilities are working with their regulators to find pilot project designs that speed innovation, the LBNL study found. To meet its quickly growing electric vehicle penetration, Pacific Gas and Electric’s managed charging program for 1,000 customers, launched in January, is already nearing its next phase, said Marina Donovan, vice president of global marketing for smart meter provider Itron.  “That shows the speed the utility wants to move at,” she said. Streamlined pilot design frameworks, often called “regulatory sandboxes,” can support speed-to-innovation, LBNL’s

Read More »

Cooling crisis at CME: A wakeup call for modern infrastructure governance

Organizations should reassess redundancy However, he pointed out, “the deeper concern is that CME had a secondary data center ready to take the load, yet the failover threshold was set too high, and the activation sequence remained manually gated. The decision to wait for the cooling issue to self-correct rather than trigger the backup site immediately revealed a governance model that had not evolved to keep pace with the operational tempo of modern markets.” Thermal failures, he said, “do not unfold on the timelines assumed in traditional disaster recovery playbooks. They escalate within minutes and demand automated responses that do not depend on human certainty about whether a facility will recover in time.” Matt Kimball, VP and principal analyst at Moor Insights & Strategy, said that to some degree what happened in Aurora highlights an issue that may arise on occasion: “the communications gap that can exist between IT executives and data center operators. Think of ‘rack in versus rack out’ mindsets.” Often, he said, the operational elements of that data center environment, such as cooling, power, fire hazards, physical security, and so forth, fall outside the realm of an IT executive focused on delivering IT services to the business. “And even if they don’t fall outside the realm, these elements are certainly not a primary focus,” he noted. “This was certainly true when I was living in the IT world.” Additionally, said Kimball, “this highlights the need for organizations to reassess redundancy and resilience in a new light. Again, in IT, we tend to focus on resilience and redundancy at the app, server, and workload layers. Maybe even cluster level. But as we continue to place more and more of a premium on data, and the terms ‘business critical’ or ‘mission critical’ have real relevance, we have to zoom out

Read More »

Microsoft loses two senior AI infrastructure leaders as data center pressures mount

Microsoft did not immediately respond to a request for comment. Microsoft’s constraints Analysts say the twin departures mark a significant setback for Microsoft at a critical moment in the AI data center race, with pressure mounting from both OpenAI’s model demands and Google’s infrastructure scale. “Losing some of the best professionals working on this challenge could set Microsoft back,” said Neil Shah, partner and co-founder at Counterpoint Research. “Solving the energy wall is not trivial, and there may have been friction or strategic differences that contributed to their decision to move on, especially if they saw an opportunity to make a broader impact and do so more lucratively at a company like Nvidia.” Even so, Microsoft has the depth and ecosystem strength to continue doubling down on AI data centers, said Prabhu Ram, VP for industry research at Cybermedia Research. According to Sanchit Gogia, chief analyst at Greyhound Research, the departures come at a sensitive moment because Microsoft is trying to expand its AI infrastructure faster than physical constraints allow. “The executives who have left were central to GPU cluster design, data center engineering, energy procurement, and the experimental power and cooling approaches Microsoft has been pursuing to support dense AI workloads,” Gogia said. “Their exit coincides with pressures the company has already acknowledged publicly. GPUs are arriving faster than the company can energize the facilities that will house them, and power availability has overtaken chip availability as the real bottleneck.”

Read More »

What is Edge AI? When the cloud isn’t close enough

Many edge devices can periodically send summarized or selected inference output data back to a central system for model retraining or refinement. That feedback loop helps the model improve over time while still keeping most decisions local. And to run efficiently on constrained edge hardware, the AI model is often pre-processed by techniques such as quantization (which reduces precision), pruning (which removes redundant parameters), or knowledge distillation (which trains a smaller model to mimic a larger one). These optimizations reduce the model’s memory, compute, and power demands so it can run more easily on an edge device. What technologies make edge AI possible? The concept of the “edge” always assumes that edge devices are less computationally powerful than data centers and cloud platforms. While that remains true, overall improvements in computational hardware have made today’s edge devices much more capable than those designed just a few years ago. In fact, a whole host of technological developments have come together to make edge AI a reality. Specialized hardware acceleration. Edge devices now ship with dedicated AI-accelerators (NPUs, TPUs, GPU cores) and system-on-chip units tailored for on-device inference. For example, companies like Arm have integrated AI-acceleration libraries into standard frameworks so models can run efficiently on Arm-based CPUs. Connectivity and data architecture. Edge AI often depends on durable, low-latency links (e.g., 5G, WiFi 6, LPWAN) and architectures that move compute closer to data. Merging edge nodes, gateways, and local servers means less reliance on distant clouds. And technologies like Kubernetes can provide a consistent management plane from the data center to remote locations. Deployment, orchestration, and model lifecycle tooling. Edge AI deployments must support model-update delivery, device and fleet monitoring, versioning, rollback and secure inference — especially when orchestrated across hundreds or thousands of locations. VMware, for instance, is offering traffic management

Read More »

Networks, AI, and metaversing

Our first, conservative, view says that AI’s network impact is largely confined to the data center, to connect clusters of GPU servers and the data they use as they crunch large language models. It’s all “horizontal” traffic; one TikTok challenge would generate way more traffic in the wide area. WAN costs won’t rise for you as an enterprise, and if you’re a carrier you won’t be carrying much new, so you don’t have much service revenue upside. If you don’t host AI on premises, you can pretty much dismiss its impact on your network. Contrast that with the radical metaverse view, our third view. Metaverses and AR/VR transform AI missions, and network services, from transaction processing to event processing, because the real world is a bunch of events pushing on you. They also let you visualize the way that process control models (digital twins) relate to the real world, which is critical if the processes you’re modeling involve human workers who rely on their visual sense. Could it be that the reason Meta is willing to spend on AI, is that the most credible application of AI, and the most impactful for networks, is the metaverse concept? In any event, this model of AI, by driving the users’ experiences and activities directly, demands significant edge connectivity, so you could expect it to have a major impact on network requirements. In fact, just dipping your toes into a metaverse could require a major up-front network upgrade. Networks carry traffic. Traffic is messages. More messages, more traffic, more infrastructure, more service revenue…you get the picture. Door number one, to the AI giant future, leads to nothing much in terms of messages. Door number three, metaverses and AR/VR, leads to a message, traffic, and network revolution. I’ll bet that most enterprises would doubt

Read More »

Microsoft’s Fairwater Atlanta and the Rise of the Distributed AI Supercomputer

Microsoft’s second Fairwater data center in Atlanta isn’t just “another big GPU shed.” It represents the other half of a deliberate architectural experiment: proving that two massive AI campuses, separated by roughly 700 miles, can operate as one coherent, distributed supercomputer. The Atlanta installation is the latest expression of Microsoft’s AI-first data center design: purpose-built for training and serving frontier models rather than supporting mixed cloud workloads. It links directly to the original Fairwater campus in Wisconsin, as well as to earlier generations of Azure AI supercomputers, through a dedicated AI WAN backbone that Microsoft describes as the foundation of a “planet-scale AI superfactory.” Inside a Fairwater Site: Preparing for Multi-Site Distribution Efficient multi-site training only works if each individual site behaves as a clean, well-structured unit. Microsoft’s intra-site design is deliberately simplified so that cross-site coordination has a predictable abstraction boundary—essential for treating multiple campuses as one distributed AI system. Each Fairwater installation presents itself as a single, flat, high-regularity cluster: Up to 72 NVIDIA Blackwell GPUs per rack, using GB200 NVL72 rack-scale systems. NVLink provides the ultra-low-latency, high-bandwidth scale-up fabric within the rack, while the Spectrum-X Ethernet stack handles scale-out. Each rack delivers roughly 1.8 TB/s of GPU-to-GPU bandwidth and exposes a multi-terabyte pooled memory space addressable via NVLink—critical for large-model sharding, activation checkpointing, and parallelism strategies. Racks feed into a two-tier Ethernet scale-out network offering 800 Gbps GPU-to-GPU connectivity with very low hop counts, engineered to scale to hundreds of thousands of GPUs without encountering the classic port-count and topology constraints of traditional Clos fabrics. Microsoft confirms that the fabric relies heavily on: SONiC-based switching and a broad commodity Ethernet ecosystem to avoid vendor lock-in and accelerate architectural iteration. Custom network optimizations, such as packet trimming, packet spray, high-frequency telemetry, and advanced congestion-control mechanisms, to prevent collective

Read More »

Land & Expand: Hyperscale, AI Factory, Megascale

Land & Expand is Data Center Frontier’s periodic roundup of notable North American data center development activity, tracking the newest sites, land plays, retrofits, and hyperscale campus expansions shaping the industry’s build cycle. October delivered a steady cadence of announcements, with several megascale projects advancing from concept to commitment. The month was defined by continued momentum in OpenAI and Oracle’s Stargate initiative (now spanning multiple U.S. regions) as well as major new investments from Google, Meta, DataBank, and emerging AI cloud players accelerating high-density reuse strategies. The result is a clearer picture of how the next wave of AI-first infrastructure is taking shape across the country. Google Begins $4B West Memphis Hyperscale Buildout Google formally broke ground on its $4 billion hyperscale campus in West Memphis, Arkansas, marking the company’s first data center in the state and the anchor for a new Mid-South operational hub. The project spans just over 1,000 acres, with initial site preparation and utility coordination already underway. Google and Entergy Arkansas confirmed a 600 MW solar generation partnership, structured to add dedicated renewable supply to the regional grid. As part of the launch, Google announced a $25 million Energy Impact Fund for local community affordability programs and energy-resilience improvements—an unusually early community-benefit commitment for a first-phase hyperscale project. Cooling specifics have not yet been made public. Water sourcing—whether reclaimed, potable, or hybrid seasonal mode—remains under review, as the company finalizes environmental permits. Public filings reference a large-scale onsite water treatment facility, similar to Google’s deployments in The Dalles and Council Bluffs. Local governance documents show that prior to the October announcement, West Memphis approved a 30-year PILOT via Groot LLC (Google’s land assembly entity), with early filings referencing a typical placeholder of ~50 direct jobs. At launch, officials emphasized hundreds of full-time operations roles and thousands

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »