
Pertamina Boosts 2C Resources by 34 Percent


Indonesia’s PT Pertamina Hulu Energi (PHE), PT Pertamina’s upstream unit, said it has recorded its largest exploration reserve discovery in the past fifteen years.

For 2024, Pertamina Upstream Subholding Group’s 2C contingent recoverable resources reached 652 million barrels of oil equivalent (MMboe), with 2C oil in place of 1.75 billion barrels of oil equivalent (Bboe), including reassessments of existing structures, the company said in a news release.

The 2C contingent resource discovery represents a significant increase over previous years, marking growth of 34 percent from the 2023 figure of 488 MMboe, Pertamina said.
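The growth figure is easy to sanity-check; a minimal sketch in Python, using only the figures quoted in the release:

```python
# Year-over-year growth of 2C contingent recoverable resources.
resources_2023 = 488.0  # MMboe, 2023 figure
resources_2024 = 652.0  # MMboe, 2024 figure

growth_pct = (resources_2024 - resources_2023) / resources_2023 * 100
print(f"{growth_pct:.1f}% growth")  # ~33.6%, rounded to 34 percent in the release
```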

The contingent resource discovery was primarily driven by the company’s high-impact discovery at the Tedong (TDG)-001 well, which holds 548 billion cubic feet of gas in 2C recoverable resources and 13.51 million barrels of condensate within the Pertamina EP Working Area, operated by PHE’s affiliate, PT Pertamina EP Cepu, in Region IV Zone 13, according to the release.

The drilling of the Tedong (TDG)-001 well is part of a frontier area exploration initiative across five key locations: East Wolai (EWO)-001, West Wolai (WWO)-001, Julang Emas (JLE)-001, Yaki Emas (YKE)-001, and Tedong (TDG)-001. The initiative aims to confirm the hydrocarbon potential of the Minahaki and Tomori Formation Limestone, Pertamina said.

Another significant discovery in Padang Pancuran (PPC)-1, located administratively in South Sumatra within the Jambi Merang Working Area, further contributed to the 2C contingent resources realization in the Pertamina Upstream Subholding Group last year. The PPC-1 well, drilled to a depth of 3,750 feet (1,143 meters), recorded 140.6 MMboe of 2C recoverable resources, the company stated.

PHE completed drilling 22 exploration wells in 2024. Additionally, PHE conducted a 2D seismic survey covering 769 kilometers and a 3D seismic survey spanning 4,990 square kilometers.

“This achievement is concrete evidence of our exploration team’s dedication and hard work, as well as our close collaboration with SKK Migas and the Ministry of Energy and Mineral Resources (ESDM). These efforts contribute to national oil and gas production, supporting the vision of energy self-sufficiency and national energy security,” Director of Exploration at PHE, Muharram Jaya Panguriseng, said.

The discovery “marks a significant milestone in PHE’s mission to increase national oil and gas reserves and support the government’s energy self-sufficiency program under the Asta Cita initiative,” according to the release.

Meanwhile, Pertamina New & Renewable Energy (Pertamina NRE) and PT Kilang Pertamina Internasional (KPI) have officially partnered to develop a project aiming to convert flare gas to power at the Balongan refinery in West Java.

“This initiative aligns with our vision to optimize existing energy resources while significantly reducing carbon emissions,” Pertamina NRE CEO John Anis said.

The Flare Gas to Power project aims to capture waste gas that would otherwise be flared into the atmosphere. The captured gas is then processed through a purification system and directed to a gas turbine or power generator. The resulting energy is used for refinery operations or fed into the power grid, according to a separate release.

KPI President Director Taufik Aditiyawarman stated that, through the project, KPI has the potential to reduce CO2 emissions by 80,000 tons of CO2 equivalent per year, decrease boiler gas consumption by more than 2.5 million standard cubic feet per day, and achieve fuel cost savings of over $9 million annually.

To contact the author, email [email protected]







Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Nvidia is still working with suppliers on RAM chips for Rubin

Nvidia changed its requirements for suppliers of the next generation of high-bandwidth memory, HBM4, but is close to certifying revised chips from Samsung Electronics for use in its AI systems, according to reports. Nvidia revised its specifications for memory chips for its Rubin platform in the third quarter of 2025,

Read More »

Storage shortage may cause AI delays for enterprises

Higher prices ahead

All indicators are showing a steep price increase for memory and storage in 2026. Brad Gastwirth, for example, says he met with many of the most important players in the market at CES earlier this month, and his analysis suggests there will be a 50% or more

Read More »

Santos Restarts Darwin LNG Exports

Santos Ltd and its partners have shipped the first cargo from the Darwin LNG life extension project in Australia’s Northern Territory. The inaugural volume will be delivered to the Sakai terminal in Japan on an ex-ship basis, the Adelaide-based natural gas-focused producer said in an online statement Tuesday. The life extension project, or the Barossa Gas Project, involves the development of the Barossa field as a new source for the liquefaction plant in Darwin, which began producing liquefied natural gas (LNG) in 2006 and has an LNG capacity of about 3.7 million metric tons a year, according to Santos. Darwin LNG’s previous source field, Timor-Leste’s Bayu-Undan, stopped exporting gas to the facility in late 2023 due to depletion, though Santos said in 2024 that Bayu-Undan would continue sending gas to the Northern Territory until the end of that year. In its quarterly report of July 16, 2025, Santos confirmed Bayu-Undan ceased production in May 2025. The life extension project started up approximately six months earlier than planned and within budget, said Santos, which announced a final investment decision (FID) on the Barossa Gas Project on March 30, 2021. The FID announcement pegged costs at $3.6 billion, making Barossa LNG the biggest investment in Australia’s oil and gas sector at the time, according to Santos. According to a Santos update of June 18, 2025, investment in the project reached $3.95 billion. The early startup “is an outstanding achievement for a project of this scale and complexity in the global offshore upstream sector”, said Santos managing director and chief executive Kevin Gallagher. Gallagher noted Barossa LNG had “navigated through the impacts of the COVID-19 pandemic, regulatory approvals, legal challenges and supply chain disruptions during the construction phase”. Japanese power utility JERA Co Inc, one of the partners, said separately on Tuesday about the restart of Darwin LNG, “As energy demand continues to

Read More »

Canada, India Agree to Grow Energy Trade in Relations Reset

Canada and India will pledge to expand trade in oil and gas as the countries reboot their relationship after a diplomatic chill.  Ottawa will commit to ship more crude oil, liquefied natural gas and liquefied petroleum gas to India, while New Delhi will send more refined petroleum products to Canada, following a meeting between Canadian Energy Minister Tim Hodgson and Indian Petroleum and Natural Gas Minister Hardeep Singh Puri, according to a joint statement seen by Bloomberg News.  The ministers will meet at India Energy Week in Goa on Tuesday, using the event to relaunch a “ministerial energy dialogue.” The mechanism, once the main channel for energy cooperation between the two countries, fell dormant amid an explosive dispute over the killing of a Canadian Sikh activist.  The renewed push marks one of Prime Minister Mark Carney’s major efforts to diversify Canada’s export markets at a time of escalating trade tensions with the US. It also reflects his government’s shift toward pragmatic, economy-first diplomacy with major Asian partners. Hodgson and Puri will also commit to facilitating greater reciprocal investment in each other’s energy sectors and to exploring collaboration in areas including hydrogen, biofuels, battery storage, critical minerals, electricity systems and the use of artificial intelligence in the energy industry, according to the statement. The relaunch of the dialogue signals that both governments see untapped potential – and strategic value – in tightening an energy relationship that had been left to drift.  Carney is expected to visit India in the coming weeks as part of the reset. He and Prime Minister Narendra Modi restarted talks in November toward a comprehensive economic partnership agreement. Two-way goods trade between Canada and India hit C$13.3 billion ($9.7 billion) in 2024, and Ottawa sees far more room to grow – especially in energy. India accounts for just 1 percent of Canada’s

Read More »

Energy Secretary Strengthens New York’s Grid Following Winter Storm Fern

Secretary Wright issues an emergency order to stabilize New York’s grid, save lives, and lower costs following Winter Storm Fern

WASHINGTON—The U.S. Department of Energy (DOE) today issued an emergency order to mitigate blackouts in New York and the surrounding area following Winter Storm Fern. Issued pursuant to Section 202(c) of the Federal Power Act, the order authorizes New York ISO (NYISO) to run specified resources located within the New York region, regardless of limits established by environmental permits or state law. The order will help NYISO respond to extreme temperatures and storm damage across New York and reduce costs for Americans due to the winter storm. “Winter Storm Fern continues to bring extreme cold and dangerous conditions across the country,” said U.S. Secretary of Energy Chris Wright. “Maintaining affordable, reliable, and secure power in the New York region is non-negotiable. The previous administration’s energy subtraction policies weakened the grid, leaving Americans more vulnerable during events like Winter Storm Fern. Thanks to President Trump’s leadership, we are reversing those failures and using every available tool to keep the lights on and Americans safe following this storm.” On day one, President Trump declared a national energy emergency after the Biden administration’s energy subtraction agenda left behind a grid increasingly vulnerable to blackouts. According to the North American Electric Reliability Corporation (NERC), “Winter electricity demand is rising at the fastest rate in recent years,” while the premature forced closure of reliable generation such as coal and natural gas plants leaves American families vulnerable to power outages. The NERC 2025–2026 Winter Reliability Assessment further warns that areas across the continental United States have an elevated risk of blackouts during extreme weather conditions.
Power outages cost the American people $44 billion per year, according to data from DOE’s National Laboratories. This order

Read More »

Energy Secretary Issues Emergency Orders to Deploy Backup Generation in the Mid-Atlantic and Carolinas Following Winter Storm Fern

Secretary Wright issues two emergency orders to stabilize the grid in the Mid-Atlantic and Carolinas to save lives and lower costs after Winter Storm Fern.

WASHINGTON—The U.S. Department of Energy (DOE) today issued two emergency orders authorizing the deployment of backup generation resources to mitigate blackouts in the Mid-Atlantic and Carolinas following Winter Storm Fern. Issued pursuant to Section 202(c) of the Federal Power Act, the orders authorize PJM Interconnection, LLC (PJM) and Duke Energy Carolinas, LLC and Duke Energy Progress (collectively, Duke Energy), respectively, to deploy backup generation resources at data centers and other major facilities. Today’s action follows a letter Secretary Wright sent Thursday to grid operators asking them to be prepared to use backup generation if needed to mitigate the risk of blackouts from the storm. DOE estimates more than 35 GW of unused backup generation remains available nationwide. The order will help PJM and Duke respond to extreme temperatures and storm damage across the Mid-Atlantic and Carolinas and reduce costs for Americans in the days following the storm. These actions mark the second set of emergency orders issued to PJM and Duke during Winter Storm Fern, following earlier orders to run specified resources located within the PJM and Duke regions, regardless of limits established by environmental permits or state law. “The Trump administration is committed to unleashing all available power generation needed to keep Americans safe during Winter Storm Fern,” said U.S. Energy Secretary Wright. “Unfortunately, the last administration had the nation on track to lose significant amounts of baseload power, but we are doing everything in our power to reverse those reckless decisions. The Trump administration will continue taking action to ensure that the 35 GW of untapped backup generation that exists across the country can be deployed as needed during Winter Storm Fern and

Read More »

Ukraine Says It Attacked Refinery in Southern Russia

Ukraine said it hit a small oil refinery in southern Russia, the third attack this month on its foe’s fuel-producing industry. Explosions were recorded at the territory of the Slavyansk facility after Ukrainian drones struck it overnight and hit “elements of a primary crude processing unit”, the General Staff in Kyiv said in a Telegram statement. The scale of damage to the facility, which is involved in supplying Russian military forces, is being clarified, it added. Bloomberg couldn’t independently verify the claim. Slavyansk ECO, the operator of the refinery, didn’t immediately respond to a request for comment. The refinery is in the Krasnodar region, near Ukraine. Kyiv and Moscow continue to trade strikes on energy infrastructure even as Ukrainian, Russian and US delegations held talks last week aimed at ending the Kremlin’s war on its neighbor, which is about to enter a fifth year. Ukraine has reduced the intensity of attacks on Russian refineries so far this year, with three targeted in January compared with 11 in December. Kyiv has also gone after ports and tankers handling Moscow’s oil. At the same time, Russia has intensified strikes on Ukraine’s power sector, leaving hundreds of thousands of people without heating, water and electricity amid freezing temperatures. Ukraine’s capital, Kyiv, and other cities are rushing to restore power after huge Russian airstrikes over the weekend caused widespread outages, even as peace talks were underway in the United Arab Emirates. The Slavyansk refinery processed an average 467,000 tons of crude a month in the first half of 2025, according to its financial report. That equates to almost 115,000 barrels a day based on the 7.33 barrels-per-ton conversion rate.
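The throughput conversion quoted above can be reproduced directly; a quick sketch, assuming a 30-day month (which matches the article’s rounding):

```python
# Convert the Slavyansk refinery's monthly crude throughput to barrels per day,
# using the 7.33 barrels-per-ton factor cited in the report.
tons_per_month = 467_000
barrels_per_ton = 7.33

barrels_per_day = tons_per_month * barrels_per_ton / 30  # assumes a 30-day month
print(f"{barrels_per_day:,.0f} barrels per day")  # ~114,000, i.e. "almost 115,000"
```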

Read More »

Crude Eases Despite Winter Storm Risks

Oil edged down as an improving supply outlook out of OPEC+ member Kazakhstan overshadowed fears that a winter storm which pounded swaths of the US with snow, ice and freezing temperatures will crimp production. West Texas Intermediate fell slightly to settle below $61 a barrel, after climbing 2.9% on Friday, the biggest gain in two weeks. Disruptions to Kazakh oil flows that had tightened the European crude market eased as a key Black Sea terminal that accounts for most of Kazakhstan’s exports was brought back into service. At the same time, output from the country’s giant Tengiz field is set to restart shortly. The fresh injection of supply mitigated fears of shortages as investors assess the fallout from a winter storm that gripped much of the US. Several plants, including ExxonMobil Corp.’s Baytown mega refinery, curtailed operations ahead of the freeze, while diesel rallied by the most since November on higher demand for heating. The full extent of cold-related supply shut-ins remains unclear. Meanwhile, tensions persist in the Middle East after US President Donald Trump dispatched naval assets to the region, prompting speculation he may follow through on threats to attack Iran’s regime and spurring concern over the country’s oil output. Geopolitical turmoil and short-term supply disruptions have supported crude prices amid widespread expectations that swelling output from the Americas will create a glut. Hedge funds raised their bullish bets on crude to the highest since August in the week through Jan. 20. “The constant stir in geopolitics is keeping risk premiums alive,” said Priyanka Sachdeva, a senior market analyst at brokerage Phillip Nova Pte in Singapore. “However, the broader market remains cautious with production growth from the US and other major exporters outpacing demand growth.” OPEC+ delegates, meanwhile, said they are currently expecting to stick with plans to keep

Read More »

Photonic chip vendor snags Gates investment

“Moore’s Law is slowing, but AI can’t afford to wait. Our breakthrough in photonics unlocks an entirely new dimension of scaling, by packing massive optical parallelism on a single chip,” said Patrick Bowen, CEO of Neurophos. “This physics-level shift means both efficiency and raw speed improve as we scale up, breaking free from the power walls that constrain traditional GPUs.” The new funding includes investments from Microsoft’s investment fund M12 that will help speed up delivery of Neurophos’ first integrated photonic compute system, including datacenter-ready OPU modules. Neurophos is not the only company exploring this field. Last April, Lightmatter announced the launch of photonic chips to tackle data center bottlenecks. And in 2024, IBM said its researchers were exploring optical chips and developing a prototype in this area.

Read More »

Intel wrestling with CPU supply shortage

“We have important customers in the data center side. We have important OEM customers on both data center and client and that needs to be our priority to get the limited supply we have to those customers,” he added. CEO Lip-Bu Tan added that the continuing proliferation and diversification of AI workloads is placing significant capacity constraints on traditional and new hardware infrastructure, reinforcing the growing and essential role CPUs play in the AI era. Because of this, Intel decided to simplify its server road map, focusing resources on the 16-channel Diamond Rapids product and accelerating the introduction of Coral Rapids. Intel had removed multithreading from Diamond Rapids, presumably to get rid of performance bottlenecks: with each core running two threads, the threads often competed for resources. That is why, for example, Ampere does not use threading but instead puts many more cores on each CPU. With Coral Rapids, Intel is not only reintroducing multithreading into its data center road map but also working closely with Nvidia to build a custom Xeon fully integrated with Nvidia’s NVLink technology, tightening the connection between Intel Xeon processors and Nvidia GPUs. Another aspect impacting supply has been yields on the new 18A process node. Tan said he was disappointed that the company could not fully meet market demand and that, while yields are in line with internal plans, “they’re still below where I want them to be.” Tan said yields for 18A are improving month over month, with Intel targeting a 7% to 8% improvement each month.

Read More »

Intel’s AI pivot could make lower-end PCs scarce in 2026

However, he noted, “CPUs are not being cannibalized by GPUs. Instead, they have become ‘chokepoints’ in AI infrastructure.” For instance, CPUs such as Granite Rapids are essential in GPU clusters, and for handling agentic AI workloads and orchestrating distributed inference.

How pricing might increase for enterprises

Ultimately, rapid demand for higher-end offerings resulted in foundry shortages of Intel 10/7 nodes, Bickley noted, which represent the bulk of the company’s production volume. He pointed out that it can take up to three quarters for new server wafers to move through the fab process, so Intel will be “under the gun” until at least Q2 2026, when it projects an increase in chip production. Meanwhile, manufacturing capacity for Xeon is currently sold out for 2026, with varying lead times by distributor, while custom silicon programs are seeing lead times of 6 to 8 months, with some orders rolling into 2027, Bickley said. In the data center, memory is the key bottleneck, with expected price increases of more than 65% year over year in 2026 and up to 25% for NAND Flash, he noted. Some specific products have already seen price inflation of over 1,000% since 2025, and new greenfield capacity for memory is not expected until 2027 or 2028. Moor’s Sag was a little more optimistic, forecasting that, on the client side, “memory prices will probably stabilize this year until more capacity comes online in 2027.”

How enterprises can prepare

Supplier diversification is the best solution for enterprises right now, Sag noted. While it might make things more complex, it also allows data center operators to better absorb price shocks because they can rebalance against suppliers who have either planned better or have more resilient supply chains.

Read More »

Reports of SATA’s demise are overblown, but the technology is aging fast

The SATA 1.0 interface made its debut in 2003. It was developed by a consortium consisting of Intel, Dell, and storage vendors like Seagate and Maxtor. It quickly advanced to SATA III in 2009, but there never was a SATA IV. There was just nibbling around the edges with incremental updates as momentum and emphasis shifted to PCI Express and NVMe. So is there any life to be had in the venerable SATA interface? Surprisingly, yes, say the analysts. “At a high level, yes, SATA for consumer is pretty much a dead end, although if you’re storing TB of photos and videos, it is still the least expensive option,” said Bob O’Donnell, president and chief analyst with TECHnalysis Research. Similarly for enterprise, for massive storage demands, the 20 and 30 TB SATA drives from companies like Seagate and WD are apparently still in wide use in cloud data centers for things like cold storage. “In fact, both of those companies are seeing record revenues based, in part, on the demand for these huge, high-capacity, low-cost drives,” he said. “SATA doesn’t make much sense anymore. It underperforms NVMe significantly,” said Rob Enderle, principal analyst with The Enderle Group. “It really doesn’t make much sense to continue making it given Samsung allegedly makes three to four times more margin on NVMe.” And like O’Donnell, Enderle sees continued life for SATA-based high-capacity hard drives. “There will likely be legacy makers doing SATA for some time. IT doesn’t flip technology quickly and SATA drives do wear out, so there will likely be those producing legacy SATA products for some time,” he said.

Read More »

DCN becoming the new WAN for AI-era applications

“DCN is increasingly treated as an end-to-end operating model that standardizes connectivity, security policy enforcement, and telemetry across users, the middle mile, and cloud/application edges,” Sanchez said. Dell’Oro defines DCN as platforms and services that deliver consistent connectivity, policy enforcement, and telemetry from users, across the WAN, to distributed cloud and application edges spanning branch sites, data centers and public clouds. The category is gaining relevance as hybrid architectures and AI-era traffic patterns increase the operational penalty for fragmented control planes. DCN buyers are moving beyond isolated upgrades and are prioritizing architectures that reduce operational seams across connectivity, security and telemetry so that incident response and change control can follow a single thread, according to Dell’Oro’s research. What makes DCN distinct is that it links user-to-application experience with where policy and visibility are enforced. This matters as application delivery paths become more dynamic and workloads shift between on-premises data centers, public cloud, and edge locations. The architectural requirement is eliminating handoffs between networking and security teams rather than optimizing individual network segments.

Where DCN is growing the fastest

Cloud/application edge is the fastest-growing DCN pillar. This segment deploys policy enforcement and telemetry collection points adjacent to workloads rather than backhauling traffic to centralized security stacks. “Multi-cloud remains a reality, but it is no longer the durable driver by itself,” Sanchez said. “Cloud/application edge is accelerating because enterprises are trying to make application paths predictable and secure across hybrid environments, and that requires pushing application-aware steering, policy enforcement, and unified telemetry closer to workloads.”

Read More »

Edged US Builds Waterless, High-Density AI Data Center Campuses at Scale

Edged US is targeting a narrow but increasingly valuable lane of the hyperscale AI infrastructure market: high-density compute delivered at speed, paired with a sustainability posture centered on waterless, closed-loop cooling and a portfolio-wide design PUE target of roughly 1.15. Two recent announcements illustrate the model. In Aurora, Illinois, Edged is developing a 72-MW facility purpose-built for AI training and inference, with liquid-to-chip cooling designed to support rack densities exceeding 200 kW. In Irving, Texas, a 24-MW campus expansion combines air-cooled densities above 120 kW per rack with liquid-to-chip capability reaching 400 kW. Taken together, the projects point to a consistent strategy: standardized, multi-building campuses in major markets; a vertically integrated technical stack with cooling at its core; and an operating model built around repeatable designs, modular systems, and readiness for rapidly escalating AI densities.

A Campus-First Platform Strategy

Edged US’s platform strategy is built around campus-scale expansion rather than one-off facilities. The company positions itself as a gigawatt-scale, AI-ready portfolio expanding across major U.S. metros through repeatable design targets and multi-building campuses: an emphasis that is deliberate and increasingly consequential. In Chicago/Aurora, Edged is developing a multi-building campus with an initial facility already online and a second 72-MW building under construction. Dallas/Irving follows the same playbook: the first facility opened in January 2025, with a second 24-MW building approved unanimously by the city. Taken together with developments in Atlanta, Chicago, Columbus, Dallas, Des Moines, Kansas City, and Phoenix, the footprint reflects a portfolio-first mindset rather than a collection of bespoke sites.
This focus on campus-based expansion matters because the AI factory era increasingly rewards developers that can execute three things at once: Lock down power and land at scale. Standardize delivery across markets. Operate efficiently while staying aligned with community and regulatory expectations. Edged is explicitly selling the second

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »