PTTEP Acquires Stake in Gulf of Thailand Asset from Chevron

Thailand’s PTT Exploration and Production Public Co. Ltd. (PTTEP) said it has acquired a 50 percent participating interest in Block A-18 of the Malaysia–Thailand Joint Development Area (MTJDA) for $450 million.

The sellers, Hess (Bahamas) Limited and Hess Asia Holdings Inc., are subsidiaries of Chevron following the Chevron-Hess merger.

The acquisition enhances PTTEP’s gas production volume and petroleum reserves, and increases its investment in the MTJDA beyond its existing 50 percent participating interest in Block B-17-01, the company said in a news release.

Block A-18 currently produces 600 million standard cubic feet of natural gas per day (MMscfd), which is distributed equally between Thailand and Malaysia, the company said, adding that the 300 MMscfd supplied to Thailand accounts for six percent of the country’s domestic gas demand.

PTTEP said it plans to develop additional production wells and wellhead platforms, as well as gas pipelines, to support a consistent and reliable gas supply.

The MTJDA is located in the southern part of the Gulf of Thailand. Covering an area of approximately 2,800 square miles (7,250 square kilometers), it is a key source of natural gas and condensates for Thailand and Malaysia, according to the release.

Block A-18, which includes the Cakerawala, Bumi, Suriya, Bulan, and Bulan South fields, started production in 2005, while Block B-17-01 began production in 2010. Block B-17-01 includes the Muda, Tapi, Tanjung, Amarit, Jengka, Melati, and Andalas fields, and currently produces approximately 300 MMscfd of natural gas for Thailand and Malaysia, the release said.

“PTTEP is pleased to further expand our operations in the MTJDA, which is recognized for its petroleum potential and strategic significance to Thailand’s energy security. The acquisition also contributes to the company’s growth. Apart from the existing producing fields, Block A-18 includes several discovered gas fields awaiting development to unlock their full potential. Participation in Block A-18 also fosters operational synergy with Block B-17-01, enhancing efficiency to ensure continuous and accelerated energy supply for both countries,” PTTEP CEO Montri Rawanchaikul said.

First Half Updates

In 2025, PTTEP, in partnership with Eni Algeria Exploration B.V. as the operator, was awarded the Reggane II block in Algeria. The company, which holds a 34 percent interest in the project, signed a production sharing contract (PSC) for the asset, with the contract effective upon the official announcement by the Algerian government.

Reggane II, located near the Algeria Touat Project, “presents a strategic opportunity to enhance development and resource management synergies, coupled with discovered gas and exploration potential in the area,” PTTEP said in a separate statement.

In the Middle East, PTTEP signed an agreement to extend the Exploration and Production Sharing Agreement for Block 53 through 2050. The company has also been awarded the production concession agreement for the Abu Dhabi Offshore 2 project by a state agency of the Emirate of Abu Dhabi, UAE, following a successful gas discovery. This marks a step toward a final investment decision (FID), the company said.

For the first half of the year, PTTEP reported total revenue of THB 148.53 billion ($4.43 billion). Growth was driven mainly by increased G1/61 production since March 2024, higher crude sales from the Sabah Block K project, and a higher participation interest in the Sinphuhorm project, the company said.

To contact the author, email [email protected]

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

USA and Pakistan Sign Trade Deal to Boost Oil Reserves, Market Ties

The US sealed a trade deal with Pakistan as their officials wrapped up talks in Washington, agreeing to develop oil reserves. The agreement involves a reduction of the so-called reciprocal tariffs, especially on Pakistani exports, according to a statement by Pakistan’s finance ministry on Thursday. No details on tariffs were shared by either side. The agreement will spur US investments in Pakistan’s infrastructure, besides deepening market ties between the partners, the ministry said.

US President Donald Trump said in a post on Truth Social that the two countries will “work together on developing their massive oil reserves”, adding that officials are now selecting the company that will anchor the partnership. Relations between Islamabad and Washington have been showing signs of easing after prolonged tensions, with President Trump welcoming Pakistan’s army chief, Field Marshal Asim Munir, for rare talks at the White House in June.

Pakistan, which lists the US as one of its top export destinations, had offered to boost American imports, particularly cotton and soybean. The South Asian nation sold over $5 billion worth of goods to the US as of 2024, and imported about $2.1 billion. The US has also expressed interest in sunrise sectors such as cryptocurrencies. Pakistan plans to legalize and regulate digital assets as the field gains traction in key Asian markets following Trump’s pro-crypto agenda, Bloomberg News reported.

Read More »

Rick Stockburger Named Inaugural FESI CEO

Rick Stockburger has been appointed as the inaugural chief executive officer (CEO) of the Department of Energy’s (DOE) Foundation for Energy Security and Innovation (FESI). The DOE said in a media release that this is the first-ever independent agency-related foundation. The DOE added that the foundation has been established to support its mission and help accelerate energy technology commercialization, expand private-public collaboration, and strengthen America’s energy system.

Stockburger is a decorated U.S. Army combat veteran and leader in energy innovation, known for helping accelerate energy technology startups and fostering public-private partnerships, the DOE said. He served in Kosovo and Afghanistan before moving to the energy sector, where he scaled innovative technologies from concept to market, according to the DOE. As president and CEO of BRITE Energy Innovators in Warren, Ohio, he expanded the organization’s budget and programming, generating over $250 million in economic impact for the Midwest, the DOE said.

“Today’s announcement marks a new chapter in how the Department will deliver breakthrough technologies to market”, Secretary of Energy Chris Wright said. “Rick’s proven leadership and background will help advance the Department’s efforts to move emerging technologies into real-world energy deployment – strengthening American science, innovation, and energy leadership. With him in place, FESI will be a valuable partner in expanding private-sector collaboration and delivering on President Trump’s agenda to unleash American energy and innovation”.

“Rick’s decorated service to our country while in the U.S. Army, combined with his leadership in technology entrepreneurship in the private sector, makes him an outstanding choice to be FESI’s first CEO”, Anthony Pugliese, DOE chief commercialization officer and director of the Office of Technology Commercialization, said. “Together, we will strengthen America’s ability to move breakthrough research into real-world impact”.

FESI has already made two initial investments in DOE initiatives aimed at enhancing America’s energy infrastructure

Read More »

5 utility commissions ask FERC to undo MISO’s $22B multi-value transmission portfolio

Five state utility commissions asked the Federal Energy Regulatory Commission to change the classification of a $22 billion portfolio of “multi-value” transmission projects, a move that would make them ineligible for regional cost sharing, according to a complaint filed at the agency on Wednesday.

The utility commissions from Arkansas, Louisiana, Mississippi, Montana and North Dakota contend that MISO overstated the benefits of multi-value projects in its Tranche 2.1 regional transmission portfolio, which the grid operator’s board approved in December. Unlike other types of transmission projects, multi-value projects have their costs shared across MISO’s footprint. The Tranche 2.1 portfolio contains 24 transmission projects, including some that form a 3,631-mile, 765-kV backbone. The projects are expected to go online from 2032 to 2034, according to MISO.

The state commissions contend that MISO used flawed modeling and assumptions that inflated the value of the Tranche 2.1 portfolio. MISO, for example, said the low-end benefits of three key metrics — avoided capacity costs, mitigation of reliability issues and decarbonization — used to assess Tranche 2.1 totaled $38.3 billion in 2024 dollars over 20 years, making up about 74% of benefits in the scenario, according to the complaint. However, after “correcting” MISO’s assumptions and analysis, a consultant for the utility commissions pegged the benefits of those metrics at $4.3 billion to $7.2 billion, according to the complaint. The benefits of the transmission portfolio are “significantly less than the costs,” making it ineligible for its MVP designation under MISO’s rules, the state commissions said.

MISO’s Tranche 2.1 portfolio aims to give states with clean energy goals such as Minnesota, Michigan and Illinois access to remote sources of clean energy, according to the complaint. “Classifying the Tranche 2.1 projects as MVPs allows states with ambitious clean energy goals to shift transmission costs (to deliver their remote energy) to other

Read More »

Dynegy to pay $38M to settle charges it manipulated MISO’s capacity market

Dynegy will pay $38 million to settle allegations that it manipulated the Midcontinent Independent System Operator’s 2015/16 capacity auction, driving up capacity prices in Illinois, according to an agreement filed Wednesday at the Federal Energy Regulatory Commission. The agreement stems from complaints filed by Public Citizen, former Illinois Attorney General Lisa Madigan and Southwestern Electric Cooperative over the auction results. Dynegy — now owned by Vistra — continues to dispute the allegations under the “black box” settlement.

Dynegy will pay MISO $38 million within ten days of the agreement taking effect. The grid operator will distribute $1.1 million to Southwestern; $1.3 million to the Illinois Municipal Electric Agency; $2 million to Illinois Industrial Energy Consumers; and $33.5 million to Ameren Illinois. IMEA, IIEC and Ameren Illinois will then distribute the money to their member municipalities, members or default supply customers, respectively. The settlement amounts are based on how much capacity Southwestern (65.1 MW), IMEA (78.7 MW) and Ameren Illinois (2,104.8 MW) bought in the auction from MISO’s Zone 4. IIEC is receiving a refund for capacity charges its members paid. The parties asked FERC to approve the agreement by Aug. 29.

FERC responded to the complaints over the capacity auction with two decisions. In December 2015, the commission ordered MISO to change its tariff provisions related to market power mitigation and how it calculates capacity import limits, which the agency said were no longer just and reasonable. In a 3-1 July 2019 decision, FERC found the Zone 4 auction results to be just and reasonable. Then-FERC Commissioner Richard Glick dissented from the second decision and criticized then-FERC Chairman Neil Chatterjee for unilaterally ending the enforcement office’s investigation into Dynegy’s market behavior.
However, in 2021, an appeals court agreed with Public Citizen

Read More »

Aggregations and data centers: If a resource shows up when the grid is straining, make it count.

Arushi Sharma Frank advises NVIDIA Inception startup Emerald AI, which develops software to help data centers become grid assets, as well as technology firms working with utilities to integrate grid-edge solutions.

The same planning instincts, operational conversations and system design principles that helped distributed energy earn its place on the grid must now be applied to the infrastructure demands of artificial intelligence. We’ve already lived through the work of proving that resources outside the substation fence can be real contributors to grid reliability — building controls, thermostats, batteries, rooftop solar, virtual power plant portfolios. What mattered wasn’t the technology alone, but its ability to deliver visibility, control and verifiable response when it counted.

We are entering a similar moment with AI data centers, which arrive with load profiles that are shaping up to be large and urgent, but also increasingly dispatch-aware to the grid and power efficiency-aware to the providers of software and hardware behind the fence. The architecture is different. The topology is different. But the “ask” is familiar: allow us to participate in grid stress management — not just consume — and the capital will flow to the solutions that enable flexible, grid-supportive outcomes.

With the right planning framework, flexible assets can be integrated into the grid’s living system. That vision requires us to extend the same logic we’ve spent a decade refining — how we plan for flexibility, how we model load shapes, how we co-invest in infrastructure, and how we compensate response — across all sides of the grid, not just at the edge. We’ve certainly spent years proving that distributed energy can perform, so let us spend only months achieving the same for large load users. The opportunity awaits to unlock a different grid future, in which AI systems and human communities are powered by

Read More »

Freeport LNG Outage Reverses Relief Rally Attempt in NatGas

In an EBW Analytics Group report sent to Rigzone by the EBW team today, Eli Rubin, an energy analyst at the company, outlined that “another Freeport LNG outage reverse[d a]… relief rally attempt” in natural gas on Wednesday. The report pointed out that the September natural gas contract closed at $3.405 per million British thermal units (MMBtu) yesterday. It highlighted that this represented a drop of 9.7 cents, or 3.1 percent, compared to Tuesday’s close.

“Although the September natural gas contract reached $3.186 [per MMBtu] yesterday, another Freeport LNG outage slashed Gulf Coast physical demand 1.8 billion cubic feet per day and sent the NYMEX front-month down to test the $3.00 per MMBtu psychological level,” Rubin said in the report. The EBW analyst warned in the report that, with “breaking key technical support at $3.06 per MMBtu, mild near-term weather, natural gas storage surpluses vs. the five-year average primed to achieve new heights, and ongoing LNG weakness, September appears positioned for another leg lower in search of support”.

Rubin highlighted that Henry Hub at $2.97 per MMBtu yesterday “will remain critical to monitor during near-term weather weakness in potentially helping to define near-term downside risks”. “Consensus expectations anticipate a 36-40 billion cubic foot build with this morning’s EIA [U.S. Energy Information Administration] storage report,” Rubin went on to note in the report. “Following last week’s bullish surprise, a wide range of outcomes is possible – potentially introducing additional volatility risks at the front of the curve,” he added.

Rigzone contacted Freeport LNG for comment on EBW’s report. In response, a Freeport LNG spokesperson told Rigzone, “one important thing to be noted here is that the Freeport LNG outage was caused by an external factor; an apparent power outage yesterday that affected the town of Freeport and some of the surrounding

Read More »

Data center survey: AI gains ground but trust concerns persist

Top operator concerns in the survey:

Cost issues: 76%
Forecasting future data center capacity requirements: 71%
Improving energy performance for facilities equipment: 67%
A lack of qualified staff: 67%
Supply chain disruptions: 65%
Power availability: 63%

With respect to capacity planning, there’s been a notable increase in the number of operators who describe themselves as “very concerned” about forecasting future data center capacity requirements. Andy Lawrence, Uptime’s executive director of research, said two factors are contributing to this concern: ongoing strong growth for IT demand, and the often-unpredictable demand that AI workloads are creating.

“There’s great uncertainty about … what the impact of AI is going to be, where it’s going to be located, how much of the power is going to be required, and even for things like space and cooling, how much of the infrastructure is going to be sucked up to support AI, whether it’s in a colocation, whether it’s in an enterprise or even in a hyperscale facility,” Lawrence said during a webinar sharing the survey results.

The survey found that roughly one-third of data center owners and operators currently perform some AI training or inference, with significantly more planning to do so in the future. As the number of AI-based software deployments increases, information about the capabilities and limitations of AI in the workplace is becoming available. The awareness is also revealing AI’s suitability for certain tasks. According to the report, “the data center industry is entering a period of careful adoption, testing, and validation. Data centers are slow and careful in adopting new technologies, and AI will not be an exception.”

Read More »

Micron unveils PCIe Gen6 SSD to power AI data center workloads

Competitive positioning

With the launch of the 9650 PCIe Gen6 SSD, Micron competes with the enterprise SSD offerings of Samsung and SK Hynix, the dominant players in the SSD market. In December last year, SK Hynix announced the development of the PS1012 U.2 Gen5 PCIe SSD for massive high-capacity storage for AI data centers. The PM1743 is Samsung’s PCIe Gen5 offering in the market, with 14,000 MBps sequential read, designed for high-performance enterprise workloads.

According to Faruqui, PCIe Gen6 data center SSDs are best suited for AI inference performance enhancement. However, we’re still months away from large-scale adoption as no current CPU platforms are available with PCIe 6.0 support. Only Nvidia’s Blackwell-based GPUs have native PCIe 6.0 x16 support, with interoperability tests in progress. He added that PCIe Gen6 SSDs will see very delayed adoption in the PC segment and imminent adoption in 2H 2025 in AI, data centers, high-performance computing (HPC), and enterprise storage solutions.

Micron has also introduced two additional SSDs alongside the 9650. The 6600 ION SSD delivers 122TB in an E3.S form factor and is targeted at hyperscale and enterprise data centers looking to consolidate server infrastructure and build large AI data lakes. A 245TB variant is on the roadmap. The 7600 PCIe Gen5 SSD, meanwhile, is aimed at mixed workloads that require lower latency.

Read More »

AI Deployments are Reshaping Intra-Data Center Fiber and Communications

Artificial Intelligence is fundamentally changing the way data centers are architected, with a particular focus on the demands placed on internal fiber and communications infrastructure. While much attention is paid to the fiber connections between data centers or to end-users, the real transformation is happening inside the data center itself, where AI workloads are driving unprecedented requirements for bandwidth, low latency, and scalable networking.

Network Segmentation and Specialization

Inside the modern AI data center, the once-uniform network is giving way to a carefully divided architecture that reflects the growing divergence between conventional cloud services and the voracious needs of AI. Where a single, all-purpose network once sufficed, operators now deploy two distinct fabrics, each engineered for its own unique mission.

The front-end network remains the familiar backbone for external user interactions and traditional cloud applications. Here, Ethernet still reigns, with server-to-leaf links running at 25 to 50 gigabits per second and spine connections scaling to 100 Gbps. Traffic is primarily north-south, moving data between users and the servers that power web services, storage, and enterprise applications. This is the network most people still imagine when they think of a data center: robust, versatile, and built for the demands of the internet age.

But behind this familiar façade, a new, far more specialized network has emerged, dedicated entirely to the demands of GPU-driven AI workloads. In this backend, the rules are rewritten. Port speeds soar to 400 or even 800 gigabits per second per GPU, and latency is measured in sub-microseconds. The traffic pattern shifts decisively east-west, as servers and GPUs communicate in parallel, exchanging vast datasets at blistering speeds to train and run sophisticated AI models. The design of this network is anything but conventional: fat-tree or hypercube topologies ensure that no single link becomes a bottleneck, allowing thousands of
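The scale gap between the two fabrics can be made concrete with a quick back-of-envelope calculation using the per-port figures cited above. Note the cluster size and GPUs-per-server count below are illustrative assumptions, not figures from the article:

```python
# Rough aggregate-bandwidth comparison of the front-end (north-south) and
# back-end (east-west GPU) fabrics described above. Per-port speeds are the
# high end of the cited ranges; cluster sizing is hypothetical.

GBPS_PER_GPU_BACKEND = 800     # high end of the cited 400-800 Gbps per GPU
GBPS_PER_SERVER_FRONTEND = 50  # high end of the cited 25-50 Gbps server-to-leaf


def aggregate_tbps(num_servers: int, gpus_per_server: int = 8) -> dict:
    """Aggregate fabric bandwidth, in Tbps, for a hypothetical GPU cluster."""
    backend = num_servers * gpus_per_server * GBPS_PER_GPU_BACKEND / 1000
    frontend = num_servers * GBPS_PER_SERVER_FRONTEND / 1000
    return {
        "backend_tbps": backend,
        "frontend_tbps": frontend,
        "ratio": backend / frontend,  # 8 GPUs * 800 / 50 = 128x per server
    }


print(aggregate_tbps(1000))
```

Even under these rough assumptions, the back-end fabric must carry two orders of magnitude more aggregate bandwidth than the front-end network serving the same servers, which is why it gets its own dedicated topology.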

Read More »

ABB and Applied Digital Build a Template for AI-Ready Data Centers

Toward the Future of AI Factories

The ABB–Applied Digital partnership signals a shift in the fundamentals of data center development, where electrification strategy, hyperscale design and readiness, and long-term financial structuring are no longer separate tracks but part of a unified build philosophy. As Applied Digital pushes toward REIT status, the Ellendale campus becomes not just a development milestone but a cornerstone asset: a long-term, revenue-generating, AI-optimized property underpinned by industrial-grade power architecture. The 250 MW CoreWeave lease, with the option to expand to 400 MW, establishes a robust revenue base and validates the site’s design as AI-first, not cloud-retrofitted.

At the same time, ABB is positioning itself as a leader in AI data center power architecture, setting a new benchmark for scalable, high-density infrastructure. Its HiPerGuard Medium Voltage UPS, backed by deep global manufacturing and engineering capabilities, reimagines power delivery for the AI era, bypassing the limitations of legacy low-voltage systems. More than a component provider, ABB is now architecting full-stack electrification strategies at the campus level, aiming to make this medium-voltage model the global standard for AI factories.

What’s unfolding in North Dakota is a preview of what’s coming elsewhere: AI-ready campuses that marry investment-grade real estate with next-generation power infrastructure, built for a future measured in megawatts per rack, not just racks per row. As AI continues to reshape what data centers are and how they’re built, Ellendale may prove to be one of the key locations where the new standard was set.

Read More »

Amazon’s Project Rainier Sets New Standard for AI Supercomputing at Scale

Supersized Infrastructure for the AI Era

As AWS deploys Project Rainier, it is scaling AI compute to unprecedented heights, while also laying down a decisive marker in the escalating arms race for hyperscale dominance. With custom Trainium2 silicon, proprietary interconnects, and vertically integrated data center architecture, Amazon joins a trio of tech giants, alongside Microsoft’s Project Stargate and Google’s TPUv5 clusters, who are rapidly redefining the future of AI infrastructure. But Rainier represents more than just another high-performance cluster. It arrives in a moment where the size, speed, and ambition of AI infrastructure projects have entered uncharted territory. Consider the past several weeks alone:

On June 24, AWS detailed Project Rainier, calling it “a massive, one-of-its-kind machine” and noting that “the sheer size of the project is unlike anything AWS has ever attempted.” The New York Times reports that the primary Rainier campus in Indiana could include up to 30 data center buildings.

Just two days later, Fermi America unveiled plans for the HyperGrid AI campus in Amarillo, Texas, on a sprawling 5,769-acre site with potential for 11 gigawatts of power and 18 million square feet of AI data center capacity.

And on July 1, Oracle projected $30 billion in annual revenue from a single OpenAI cloud deal, tied to the Project Stargate campus in Abilene, Texas.

As Data Center Frontier founder Rich Miller has observed, the dial on data center development has officially been turned to 11. Once an aspirational concept, the gigawatt-scale campus is now materializing, 15 months after Miller forecasted its arrival. “It’s hard to imagine data center projects getting any bigger,” he notes. “But there’s probably someone out there wondering if they can adjust the dial so it goes to 12.”

Against this backdrop, Project Rainier represents not just financial investment but architectural intent. Like Microsoft’s Stargate buildout in

Read More »

Google and CTC Global Partner to Fast-Track U.S. Power Grid Upgrades

On June 17, 2025, Google and CTC Global announced a joint initiative to accelerate the deployment of high-capacity power transmission lines using CTC’s U.S.-manufactured ACCC® advanced conductors. The collaboration seeks to relieve grid congestion by rapidly upgrading existing infrastructure, enabling greater integration of clean energy, improving system resilience, and unlocking capacity for hyperscale data centers. The effort represents a rare convergence of corporate climate commitments, utility innovation, and infrastructure modernization aligned with the public interest.

As part of the initiative, Google and CTC issued a Request for Information (RFI) with responses due by July 14. The RFI invites utilities, state energy authorities, and developers to nominate transmission line segments for potential fast-tracked upgrades. Selected projects will receive support in the form of technical assessments, financial assistance, and workforce development resources.

While advanced conductor technologies like ACCC® can significantly improve the efficiency and capacity of existing transmission corridors, technological innovation alone cannot resolve the grid’s structural challenges. Building new or upgraded transmission lines in the U.S. often requires complex permitting from multiple federal, state, and local agencies, and frequently faces legal opposition, especially from communities invoking Not-In-My-Backyard (NIMBY) objections. Today, the average timeline to construct new interstate transmission infrastructure stretches between 10 and 12 years, an untenable lag in an era when grid reliability is under increasing stress. In 2024, the Federal Energy Regulatory Commission (FERC) reported that more than 2,600 gigawatts (GW) of clean energy and storage projects were stalled in the interconnection queue, waiting for sufficient transmission capacity. The consequences affect not only industrial sectors like data centers but also residential areas vulnerable to brownouts and peak load disruptions.

What is the New Technology?

At the center of the initiative is CTC Global’s ACCC® (Aluminum Conductor Composite Core) advanced conductor, a next-generation overhead transmission technology engineered to boost grid

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).

In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion.

The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet it has been a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved.

“Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail.

Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization.

OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »