
Martin Midstream Partners, Martin Resource Management Bin Merger Plan


Martin Midstream Partners LP (MMLP) and investor Martin Resource Management Corp. (MRMC) have mutually terminated a deal under which MRMC would buy MMLP common units it did not already own.

The cancellation comes after two other investors opposed the takeover by MRMC, which owns the 100 percent general partnership interest in MMLP. The two other investors, Nut Tree Capital Management LP and Caspian Capital LP, had submitted a higher-priced counter-offer that was ultimately rebuffed.

Under the transaction agreed between MMLP and MRMC, each non-MRMC-owned common unit representing a limited partnership interest in MMLP would be converted into cash. MMLP was to survive as a wholly owned subsidiary of MRMC.

MRMC initially proposed a purchase price of $3.05 per unit. Nut Tree and Caspian, which own limited partnership interests in MMLP, responded with a counter-offer to buy the common units targeted in MRMC’s proposal for $4 per unit.

Nut Tree and Caspian later raised their proposal to $4.50 per unit and expressed willingness to increase their offer further, according to regulatory filings.

However, on October 3, MMLP announced a definitive agreement under which MRMC would acquire all MMLP common units it did not already own for $4.02 per unit, rebuffing Nut Tree and Caspian’s increased offer.

According to statements from MMLP, its purchase by a party other than MRMC would likely necessitate the purchase of its general partner, which is owned by MRMC. MMLP has said it could not make a sale transaction with another party because MRMC had no intention of selling the general partner.

MRMC owns about 15.7 percent of MMLP common units, while Nut Tree and Caspian have “economic exposure” of around 13.2 percent of MMLP common units, according to information shared with the United States Securities and Exchange Commission.

Announcing the termination of the takeover agreement with MRMC, MMLP said it would “continue to operate as a standalone publicly traded company”.

Bob Bondurant, president and chief executive of MMLP’s general partner, said, “We appreciate the feedback we have received from unitholders during our extensive outreach and engagement over the last several weeks”.

“We greatly value unitholders’ perspectives and are pleased that unitholders have confidence in the future of MMLP as a standalone company”, Bondurant added.

“We will continue to focus on executing our long-term strategy, including strengthening the balance sheet through debt reduction and improving operating results, to create value for unitholders”.

Focusing on the United States Gulf Coast, MMLP offers terminaling, processing and storage services for crude oil and petroleum products. It also provides land and marine transport for oil, chemicals and other products, distributes natural gas liquids, offers blending and packaging services for lubricants and grease, and manufactures sulfur and sulfur-based products.

Kilgore, Texas-based MRMC distributes asphalt, diesel fuel, fuel oil and naphthenic lubricants. “MRMC markets over 250 million gallons of diesel fuel and lubricants per year along the Gulf Coast and over 1.5 million barrels of naphthenic lubricants and base oils per year throughout the United States”, it says on its website.

To contact the author, email [email protected]




Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


NetOps teams struggle with AI readiness

Some 87% of respondents indicated that internet and cloud environments are creating network blind spots in many areas. Half of organizations reported a lack of adequate insight into public clouds, 44% of respondents indicated transit and peering networks created blind spots, and 43% said remote work environments lack visibility. Other


USA EIA Raises WTI Oil Price Forecasts

In its latest short-term energy outlook (STEO), which was released on November 12, the U.S. Energy Information Administration (EIA) increased its West Texas Intermediate (WTI) spot average price forecast for 2025 and 2026. According to this STEO, the EIA now sees the WTI spot price averaging $65.15 per barrel in 2025 and $51.26 per barrel in 2026. In its previous STEO, which was released in October, the EIA projected that the WTI spot price would average $65.00 per barrel in 2025 and $48.50 per barrel in 2026. The EIA’s September STEO forecast that the WTI spot price average would come in at $64.16 per barrel this year and $47.77 per barrel next year. A quarterly breakdown included in the EIA’s latest STEO projected that the WTI spot price will average $58.65 per barrel in the fourth quarter of 2025, $50.30 per barrel in the first quarter of next year, $50.68 per barrel in the second quarter, and $52.00 per barrel across the third and fourth quarters of 2026. The EIA’s October STEO saw the WTI spot price averaging $58.05 per barrel in the fourth quarter of 2025, $47.97 per barrel in the first quarter of next year, $48.33 per barrel in the second quarter, $48.68 per barrel in the third quarter, and $49.00 per barrel in the fourth quarter of 2026. In its September STEO, the EIA projected that the WTI spot price would come in at $65.14 per barrel in the third quarter of 2025, $55.41 per barrel in the fourth quarter, $45.97 per barrel in the first quarter of next year, $46.33 per barrel in the second quarter, $48.68 per barrel in the third quarter, and $50.00 per barrel in the fourth quarter of 2026. The EIA’s latest STEO showed that the WTI spot price averaged
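The string of revisions above is easier to follow side by side. Below is a minimal Python sketch, using only the annual averages quoted in the excerpt, that tabulates each STEO release and prints how much the forecast moved between releases; it is an illustration of the arithmetic, not part of the EIA report.

```python
# Annual WTI spot-price forecasts ($/bbl) quoted from the EIA STEO releases above.
steo_forecasts = {
    "September STEO": {2025: 64.16, 2026: 47.77},
    "October STEO":   {2025: 65.00, 2026: 48.50},
    "November STEO":  {2025: 65.15, 2026: 51.26},
}

releases = list(steo_forecasts)
for previous, latest in zip(releases, releases[1:]):
    for year in (2025, 2026):
        old = steo_forecasts[previous][year]
        new = steo_forecasts[latest][year]
        # Print the revision from one release to the next.
        print(f"{previous} -> {latest}, {year}: ${old:.2f} -> ${new:.2f} ({new - old:+.2f} $/bbl)")
```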


Phillips 66 to Supply SAF to DHL for Three Years

Phillips 66 has won a three-year contract to deliver over 240,000 metric tons of sustainable aviation fuel (SAF) to DHL Group. “The SAF will be produced at Phillips 66’s Rodeo Renewable Energy Complex in California, one of the world’s largest renewable fuels facilities with a production capacity of 150 million gallons per year of neat SAF (i.e. SAF that is not blended with conventional jet fuel)”, DHL said in an online statement. The bulk of the supply is for the Los Angeles International Airport, “with future intended deliveries to other West Coast airports where DHL maintains operations, such as San Francisco International Airport”, the German logistics giant said. “The agreement with Phillips 66 represents one of the largest SAF deals by a U.S. producer and for the overall air cargo sector, paving the way for future collaborations in the SAF space”, DHL said. The volume represents an avoidance of about 737,000 metric tons of lifecycle greenhouse gas emissions, according to DHL. “DHL Express has been actively securing SAF partnerships worldwide including in the Europe, America and Asia-Pacific regions since 2021, and this new agreement exemplifies its dedication to leveraging sustainable aviation fuels to address its air freight carbon footprint effectively”, DHL said. “This agreement will contribute significantly to DHL’s GoGreen Plus service, which enables customers to reduce their Scope 3 greenhouse gas emissions using SAF”. Phillips 66 vice president for aviation Ronald Sanchez said in a separate statement, “Our integrated model is a competitive advantage that enables resilience and value creation in the SAF market. Our people, capabilities and assets allow for feedstock optionality; our supply chain agility accounts for an evolving environment”. Phillips 66 said that earlier this year it had signed agreements to supply SAF to Alaska Airlines, British Airways, Qantas Airlines and United Airlines. Last year Phillips
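As a rough cross-check of the volumes quoted above, the sketch below divides the stated lifecycle emissions avoidance by the contracted SAF tonnage; the resulting ratio of roughly three tonnes of CO2e avoided per tonne of SAF is an implied back-of-the-envelope figure, not one given in the article.

```python
# Figures quoted in the DHL / Phillips 66 announcement above.
saf_volume_t = 240_000     # metric tons of SAF over the three-year contract
avoided_ghg_t = 737_000    # metric tons of lifecycle GHG emissions said to be avoided

# Implied avoidance per ton of SAF (illustrative arithmetic, not an article figure).
ratio = avoided_ghg_t / saf_volume_t
print(f"Implied avoidance: about {ratio:.1f} t CO2e per t of SAF")
```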


Golden Pass LNG Amends Train 2 and 3 Contract

Golden Pass LNG Terminal LLC has reached an agreement with Chiyoda International Corp and McDermott LLC to amend the engineering, procurement and construction (EPC) terms for the second and third trains of the Texas project owned by QatarEnergy and Exxon Mobil Corp. Yokohama-based Chiyoda, Houston-based McDermott and San Antonio-based Zachry Holdings Inc (ZHI) won the contract in 2019. However, ZHI filed for Chapter 11 bankruptcy last year, leading to a court-approved settlement that allowed ZHI to exit the contract. The EPC terms for Train 1 had been amended late last year, as announced by Chiyoda on November 25, 2024. Earlier this year Chiyoda and McDermott signed binding terms with Golden Pass for key components of Trains 2 and 3, including payment terms based on the reassessment of the allocation regarding future costs, according to online statements by Chiyoda. Later the parties agreed on more detailed terms, and the revised EPC contract for Trains 2 and 3 was signed this month, Chiyoda said in a statement this week. Chiyoda has not disclosed details about the revised contract terms for the three liquefaction trains. Its latest statement said, “We will further examine the details of the revised provisions and impacts on profitability and will make any adjustments for prompt announcement based on disclosure standards for performance forecasts when it becomes possible to calculate profit and loss”. In 2024 ZHI filed for bankruptcy before the U.S. Bankruptcy Court for the Southern District of Texas to allow it to restructure finances and exit the project. According to the text of the Chapter 11 complaint filed May 21, 2024, ZHI had to shoulder higher costs emanating from demands by Golden Pass to get the project back on track after it had gone beyond schedule and over budget due to “unexpected challenges”. On July 24, 2024, ZHI said the


India Seeks Mideast Oil Shippers amid Russia Sanctions

A surge in bookings for oil tankers to bring cargoes from the Middle East to India points to higher import flows ahead, as sanctions on major Russian producers force the South Asian importer to seek alternatives. So far this week, roughly a dozen vessels have been chartered to ship crude from countries including Saudi Arabia, Kuwait, Iraq and the United Arab Emirates and ferry it across the Arabian Sea, according to shipbroker reports. That’s a jump from the same time last month, when about four fixtures were seen. These bookings include supertankers known as Very Large Crude Carriers as well as smaller Suezmax vessels, for oil loading late November to December. Indian importers are still seeking even more tankers for the same routes, the reports show. Oil traders have been closely monitoring India’s spot and term purchases of non-Russian crudes as they try to make sense of the Asian nation’s next steps, ahead of Nov. 21, when sanctions on Rosneft PJSC and Lukoil PJSC come into effect. While these fixtures are not necessarily exhaustive – bookings can be made through private negotiations – they reflect the broader purchasing patterns of refiners and as such provide a window into an opaque market. The latest bookings are helping to push up freight rates, with daily costs of hiring an oil supertanker from the Middle East to Asia hovering near a five-year high. Five of India’s seven refiners, including Reliance Industries Ltd., have said they would no longer take delivery of Russian crude after the wind-down period ends this week. The remaining companies are expected to continue considering purchases from non-sanctioned sellers. India’s oil purchases through monthly tenders have shown a small increase in volume from usual patterns. However, the addition isn’t yet enough to make up for the possible loss of over one million


NSTA Announces ‘Giant Leap Forward for North Sea Data Sharing’

The North Sea Transition Authority (NSTA) announced a “giant leap forward for North Sea data sharing” in a statement posted on its website recently. In that statement, the NSTA revealed that a new “data portal signposting page” had been established by eight organizations “dedicated to sharing offshore information”. The NSTA described the new page as “a one-stop shop providing easy access to North Sea facts and figures”.  The UKCS Data Portals site was set up by the Offshore Energy Digital Strategy Group (DSG) and signposts to data from Admiralty Marine (AM), BGS GeoIndex, The Crown Estate (TCE), The Crown Estate Scotland (CES), European Marine Observation and Data Network (EMOD), Marine Data Exchange (MDE), Marine Environmental & Data Network (MEDIN), and the NSTA, the NSTA highlighted in the statement.  “Each organization has agreed to share the information available on their individual sites in one convenient place,” the NSTA said in the statement. “Users will be able to access and cross-refer information that will aid effective decision-making and help to ensure that the North Sea is used to its full potential and environmental rules are followed,” it added. “From wrecks on the seabed to the movements of sharks, and the locations of wells to where carbon can be stored, the new UKCS data portals site has all the info you need,” the NSTA noted in the statement.   The NSTA pointed out that the new portal follows a workshop run by the DSG in February this year, which the NSTA said “highlighted the growing need for access to reliable information”. “This need is becoming more acute as more industries look to share space in the North Sea,” the NSTA said. The NSTA noted that the site is the first iteration and added that it is expected that further improvements, including standardizing terms, will


NuEnergy Completes Drilling for ‘Early Gas Sales’ Project in Indonesia

NuEnergy Gas Ltd said it had completed drilling for the fourth and final well in its “Early Gas Sales” project under the initial development plan for the Tanjung Enim coalbed methane (CBM) production sharing contract (PSC) in Indonesia. “Gas shows were observed at surface via surface logging equipment, confirming the presence of methane across multiple seams”, the Australian company said in a stock filing. The TE-B01-003 well, drilled 451 meters (1,479.66 feet) deep, intersected five coal seams at depths ranging between 299 and 419 meters, according to NuEnergy. “NuEnergy has installed a progressive cavity pump system for the TE-B01-003 well and preparations are now underway to commence dewatering – a key step toward establishing stable gas flow and optimizing well performance”, the company said. “Gas will be gathered at the surface facility and delivered to the gas processing facility upon reaching target production levels”. It added, “Pursuant to the signed heads of agreement with PT Perusahaan Gas Negara Tbk (PGN), gas produced from the drilled wells, TE-B06-001, TE-B06-002, TE-B06-003 well and the TE-B01-003 well, will be delivered via an infield pipeline to PGN’s processing and distribution facility”. The Early Gas Sales project will sell one million standard cubic feet a day (MMscfd) to Indonesian state-owned gas distributor PGN, toward the 25-MMscfd initial plan for the Tanjung Enim license, according to NuEnergy. On September 8, it announced approval from the Energy and Mineral Resources Ministry for the one-MMscfd sale through its subsidiary Dart Energy (Tanjung Enim) Pte Ltd (DETE). “With the gas allocation approval now secured, DETE will proceed with finalizing the Gas Sale and Purchase Agreement with PGN”, NuEnergy said then. Meanwhile the bigger Tanjung Enim Plan of Development (POD) 1 was approved June 2021 “under a gross split scheme which will allow the PSC to proceed field development, surface facility


Cobalt 200: Microsoft’s next-gen Arm CPU targets lower TCO for cloud workloads

These architectural improvements underpin Cobalt 200’s claimed increase in performance, which, according to Stephen Sopko, analyst at HyperFRAME Research, will lead to a reduction in total cost of ownership (TCO) compared to its predecessor. As a result, enterprise customers can benefit from consolidating workloads onto fewer machines. “For example, a 1k-instance cluster can see up to 30-40% TCO gains,” Sopko said, adding that this also helps enterprises free up resources to allocate to other workloads or projects. Moor Insights & Strategy principal analyst Matt Kimball noted that the claimed improvements in throughput-per-watt could be beneficial for compute-intensive workloads such as AI inferencing, microservices, and large-scale data processing. Some of Microsoft’s customers are already using Cobalt 100 virtual machines (VMs) for large-scale data processing workloads, and the chips are deployed across 32 Azure data centers, the company said. With Cobalt 200, the company will directly compete with AWS’s Graviton series and Google’s recently announced Axion processors, both of which leverage Arm architecture to deliver better price-performance for cloud workloads. Microsoft and other hyperscalers have been forced to design their own chips for data centers due to the skyrocketing costs for AI and cloud infrastructure, supply constraints around GPUs, and the need for energy-efficient yet customizable architectures to optimize workloads.
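To put the quoted claim in concrete terms, here is a small sketch that applies the 30-40% TCO-gain range from the excerpt to the hypothetical 1,000-instance cluster Sopko mentions. The baseline cost per instance is an assumed, illustrative number, not a figure from the article or from Microsoft.

```python
# Assumed baseline annual cost per instance (illustrative only, not from the article).
BASELINE_COST_PER_INSTANCE = 5_000.0   # USD per instance per year
INSTANCES = 1_000                      # the "1k-instance cluster" from the quote

baseline_tco = BASELINE_COST_PER_INSTANCE * INSTANCES
for gain in (0.30, 0.40):              # the 30-40% TCO gain range quoted above
    print(f"{gain:.0%} TCO gain: roughly ${baseline_tco * gain:,.0f} saved "
          f"per year on a ${baseline_tco:,.0f} baseline")
```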


AWS boosts its long-distance cloud connections with custom DWDM transponder

By controlling the entire hardware stack, AWS can implement comprehensive security measures that would be challenging with third-party solutions, Rehder stated. “This initial long-haul deployment represents just the first implementation of the in-house technology across our extensive long-haul network. We have already extended deployment to Europe, with plans to use the AWS DWDM transponder for all new long-haul connections throughout our global infrastructure,” Rehder wrote. Cloud vendors are some of the largest optical users in the world, though not all develop their own DWDM or other optical systems, according to a variety of papers on the subject. Google develops its own DWDM, for example, but others like Microsoft Azure develop only parts and buy optical gear from third parties. Others such as IBM, Oracle and Alibaba have optical backbones but also utilize third-party equipment. “We are anticipating that the time has come to interconnect all those new AI data centers being built,” wrote Jimmy Yu, vice president at Dell’Oro Group, in a recent optical report. “We are forecasting data center interconnect to grow at twice the rate of the overall market, driven by increased spending from cloud providers. The direct purchases of equipment for DCI will encompass ZR/ZR+ optics for IPoDWDM, optical line systems for transport, and DWDM systems for high-performance, long-distance terrestrial and subsea transmission.”


Nvidia’s first exascale system is the 4th fastest supercomputer in the world

The world’s fourth exascale supercomputer has arrived, pitting Nvidia’s proprietary chip technologies against the x86 systems that have dominated supercomputing for decades. For the 66th edition of the TOP500, El Capitan holds steady at No. 1 while JUPITER Booster becomes the fourth exascale system on the list. The JUPITER Booster supercomputer, installed in Germany, uses Nvidia CPUs and GPUs and delivers a peak performance of exactly 1 exaflop, according to the November TOP500 list of supercomputers, released on Monday. The exaflop measurement is considered a major milestone in pushing computing performance to the limits. Today’s computers are typically measured in gigaflops and teraflops—and an exaflop translates to 1 billion gigaflops. Nvidia’s GPUs dominate AI servers installed in data centers as computing shifts to AI. As part of this shift, AI servers with Nvidia’s ARM-based Grace CPUs are emerging as a high-performance alternative to x86 chips. JUPITER is the fourth-fastest supercomputer in the world, behind three systems with x86 chips from AMD and Intel, according to TOP500. The top three supercomputers on the TOP500 list are in the U.S. and owned by the U.S. Department of Energy. The top two supercomputers—the 1.8-exaflop El Capitan at Lawrence Livermore National Laboratory and the 1.35-exaflop Frontier at Oak Ridge National Laboratory—use AMD CPUs and GPUs. The third-ranked 1.01-exaflop Aurora at Argonne National Laboratory uses Intel CPUs and GPUs. Intel scrapped its GPU roadmap after the release of Aurora and is now restructuring operations. The JUPITER Booster, which was assembled by France-based Eviden, has Nvidia’s GH200 superchip, which links two Nvidia Hopper GPUs with CPUs based on ARM designs. The CPU and GPU are connected via Nvidia’s proprietary NVLink interconnect, which provides bandwidth of up to 900 gigabytes per second. JUPITER first entered the Top500 list at 793 petaflops, but
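For readers less used to the units, the short sketch below converts the peak figures quoted above onto a common scale (1 exaflop = 1,000 petaflops = 1 billion gigaflops) and lists the four exascale systems named in the excerpt.

```python
# Peak performance figures (exaflops) for the systems named in the TOP500 excerpt above.
systems_exaflops = {
    "El Capitan (LLNL)": 1.80,
    "Frontier (ORNL)": 1.35,
    "Aurora (ANL)": 1.01,
    "JUPITER Booster (Germany)": 1.00,
}

for name, exaflops in sorted(systems_exaflops.items(), key=lambda item: -item[1]):
    petaflops = exaflops * 1_000             # 1 exaflop = 1,000 petaflops
    gigaflops = exaflops * 1_000_000_000     # 1 exaflop = 1 billion gigaflops
    print(f"{name}: {exaflops:.2f} EFLOPS = {petaflops:,.0f} PFLOPS = {gigaflops:,.0f} GFLOPS")
```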


Samsung’s 60% memory price hike signals higher data center costs for enterprises

Industry-wide price surge driven by AI

Samsung is not alone in raising prices. In October, TrendForce reported that Samsung and SK Hynix raised DRAM and NAND flash prices by up to 30% for Q4. Similarly, SK Hynix said during its October earnings call that its HBM, DRAM, and NAND capacity is “essentially sold out” for 2026, with the company posting record quarterly operating profit exceeding $8 billion, driven by surging AI demand. Industry analysts attributed the price increases to manufacturers redirecting production capacity. HBM production for AI accelerators consumes three times the wafer capacity of standard DRAM, according to a TrendForce report, citing remarks from Micron’s Chief Business Officer. After two years of oversupply, memory inventories have dropped to approximately eight weeks from over 30 weeks in early 2023. “The memory industry is tightening faster than expected as AI server demand for HBM, DDR5, and enterprise SSDs far outpaces supply growth,” said Manish Rawat, semiconductor analyst at TechInsights. “Even with new fab capacity coming online, much of it is dedicated to HBM, leaving conventional DRAM and NAND undersupplied. Memory is shifting from a cyclical commodity to a strategic bottleneck where suppliers can confidently enforce price discipline.” This newfound pricing power was evident in Samsung’s approach to contract negotiations. “Samsung’s delayed pricing announcement signals tough behind-the-scenes negotiations, with Samsung ultimately securing the aggressive hike it wanted,” Rawat said. “The move reflects a clear power shift toward chipmakers: inventories are normalized, supply is tight, and AI demand is unavoidable, leaving buyers with little room to negotiate.” Charlie Dai, VP and principal analyst at Forrester, said the 60% increase “signals confidence in sustained AI infrastructure growth and underscores memory’s strategic role as the bottleneck in accelerated computing.”

Servers to cost 10-25% more

For enterprises building AI infrastructure, these supply dynamics translate directly into
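A quick back-of-the-envelope helps connect the 60% memory price hike to the 10-25% server cost increase mentioned above: the overall increase is roughly memory's share of the server bill of materials multiplied by the hike. The BOM shares below are illustrative assumptions, not figures from the article.

```python
# The ~60% memory contract price increase discussed above.
memory_price_hike = 0.60

# Assumed shares of a server's bill of materials taken up by memory (illustrative only).
for memory_bom_share in (0.17, 0.25, 0.40):
    server_cost_increase = memory_bom_share * memory_price_hike
    print(f"Memory at {memory_bom_share:.0%} of server cost -> "
          f"roughly {server_cost_increase:.0%} higher server price")
```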


Arista, Palo Alto bolster AI data center security

“Based on this inspection, the NGFW creates a comprehensive, application-aware security policy. It then instructs the Arista fabric to enforce that policy at wire speed for all subsequent, similar flows,” Kotamraju wrote. “This ‘inspect-once, enforce-many’ model delivers granular zero trust security without the performance bottlenecks of hairpinning all traffic through a firewall or forcing a costly, disruptive network redesign.” The second capability is a dynamic quarantine feature that enables the Palo Alto NGFWs to identify evasive threats using Cloud-Delivered Security Services (CDSS). “These services, such as Advanced WildFire for zero-day malware and Advanced Threat Prevention for unknown exploits, leverage global threat intelligence to detect and block attacks that traditional security misses,” Kotamraju wrote. The Arista fabric can intelligently offload trusted, high-bandwidth “elephant flows” from the firewall after inspection, freeing it to focus on high-risk traffic. When a threat is detected, the NGFW signals Arista CloudVision, which programs the network switches to automatically quarantine the compromised workload at hardware line-rate, according to Kotamraju: “This immediate response halts the lateral spread of a threat without creating a performance bottleneck or requiring manual intervention.” The third feature is unified policy orchestration, where Palo Alto Networks’ management plane centralizes zone-based and microperimeter policies, and CloudVision MSS responds with the offload and enforcement of Arista switches. “This treats the entire geo-distributed network as a single logical switch, allowing workloads to be migrated freely across cloud networks and security domains,” Srikanta and Barbieri wrote. Lastly, the Arista Validated Design (AVD) data models enable network-as-a-code, integrating with CI/CD pipelines. AVDs can also be generated by Arista’s AVA (Autonomous Virtual Assist) AI agents that incorporate best practices, testing, guardrails, and generated configurations. “Our integration directly resolves this conflict by creating a clean architectural separation that decouples the network fabric from security policy. This allows the NetOps team (managing the Arista


AMD outlines ambitious plan for AI-driven data centers

“There are very beefy workloads that you must have that performance for to run the enterprise,” he said. “The Fortune 500 mainstream enterprise customers are now … adopting Epyc faster than anyone. We’ve seen a 3x adoption this year. And what that does is drives back to the on-prem enterprise adoption, so that the hybrid multi-cloud is end-to-end on Epyc.” One of the key focus areas for AMD’s Epyc strategy has been its ecosystem build-out. It has almost 180 platforms, from racks to blades to towers to edge devices, and 3,000 solutions in the market on top of those platforms. One of the areas where AMD pushes into the enterprise is what it calls industry or vertical workloads. “These are the workloads that drive the end business. So in semiconductors, that’s telco, it’s the network, and the goal there is to accelerate those workloads and either driving more throughput or drive faster time to market or faster time to results. And we almost double our competition in terms of faster time to results,” said McNamara. And it’s paying off. McNamara noted that over 60% of the Fortune 100 are using AMD, and that’s growing quarterly. “We track that very, very closely,” he said. The other question is whether AMD is gaining new customers: those deploying Epyc for the first time. “We’ve doubled that year on year.” AMD didn’t just brag, it laid out a road map for the next two years, and 2026 is going to be a very busy year. That will be the year that new CPUs, both client and server, built on the Zen 6 architecture begin to appear. On the server side, that means the Venice generation of Epyc server processors. Zen 6 processors will be built on a 2-nanometer design generated by (you guessed


Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.


John Deere unveils more autonomous farm machines to address skill labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do


2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
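To make the "LLM as a judge" idea from the excerpt concrete, here is a minimal, hypothetical sketch of the pattern: several cheaper models draft answers and a separate model picks the best one. The call_model helper and any model names you pass in are placeholders to be wired to a real provider; nothing here is Red Dragon's or any vendor's actual API.

```python
def call_model(model_name: str, prompt: str) -> str:
    """Placeholder for an LLM call; wire this to a real model provider."""
    raise NotImplementedError("connect to your model provider here")


def best_answer_by_llm_judge(task: str, drafter_models: list[str], judge_model: str) -> str:
    """LLM-as-judge pattern: cheaper models draft answers, one model judges them."""
    # Each drafter model produces a candidate answer to the task.
    drafts = {name: call_model(name, task) for name in drafter_models}

    # Build a judging prompt that lists every candidate answer.
    judge_prompt = (
        "Pick the best answer to the task below. Reply with the drafter's name only.\n"
        f"Task: {task}\n"
    )
    for name, draft in drafts.items():
        judge_prompt += f"\n[{name}]\n{draft}\n"

    verdict = call_model(judge_model, judge_prompt).strip()
    # Fall back to the first draft if the judge's reply is not a known drafter name.
    return drafts.get(verdict, next(iter(drafts.values())))
```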


OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle
