Stay Ahead, Stay ONMINE

Monkey Island LNG selects process technology from ConocoPhillips

Monkey Island LNG has selected ConocoPhillips’ Optimized Cascade Process liquefaction technology for its planned 26 million tonnes/year (tpy) natural gas liquefaction and export plant in Cameron Parish, La.

The 246-acre project site on Monkey Island in Cameron Parish is positioned with access to deepwater shipping channels and US natural gas supply.

The LNG plant is expected to use cryogenic technology to liquefy about 3.4 bcfd of natural gas, producing LNG for both US domestic offtakers and global export markets, according to Monkey Island LNG’s website.

Monkey Island LNG expects to develop up to five liquefaction trains of 5 million tpy each. The project design calls for three LNG storage tanks on site, each with a capacity of about 180,000 cu m, the website shows.
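
As a rough cross-check of the stated figures, the sketch below converts roughly 3.4 bcfd of feed gas into an approximate annual LNG output. The heating values used (about 1,030 Btu per standard cubic foot of pipeline-quality gas and roughly 51 MMBtu per tonne of LNG) are generic industry rules of thumb rather than project data, and the calculation ignores fuel gas and plant shrinkage.

```python
# Back-of-the-envelope check: ~3.4 bcfd of feed gas vs. ~26 million tpy nameplate capacity.
# Heating values are generic rules of thumb (assumptions), not Monkey Island LNG data.

FEED_BCFD = 3.4                # feed gas, billion cubic feet per day
BTU_PER_SCF = 1_030            # typical pipeline-quality gas heating value
MMBTU_PER_TONNE_LNG = 51.0     # approximate energy content of one tonne of LNG

annual_mmbtu = FEED_BCFD * 1e9 * BTU_PER_SCF * 365 / 1e6   # MMBtu per year
tonnes_per_year = annual_mmbtu / MMBTU_PER_TONNE_LNG

print(f"≈ {tonnes_per_year / 1e6:.1f} million tpy before fuel gas and shrinkage")
# ≈ 25.1 million tpy, broadly in line with the announced 26 million tpy
```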

In addition to ConocoPhillips, the privately held company has selected McDermott for engineering, procurement, and construction services on the $25-billion project. ERM is serving as environmental consultant. 

Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin, and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Oceaneering Awarded Contracts by Equinor

Oceaneering International, Inc. said its Norwegian subsidiary Oceaneering Asset Integrity AS entered into a new framework agreement with Equinor ASA for the delivery of fabric maintenance services. The contract runs through July 2027, with four additional one-year extension options, allowing for a potential total duration of six years, Oceaneering said.

Read More »

Network discovery gets a boost from Intel spinout Articul8

Technical architecture: beyond traditional monitoring. Weave’s technical foundation relies on a hybrid knowledge graph architecture that processes different data types through specialized analytical engines rather than forcing all network data through large language models (LLMs). This design choice addresses accuracy concerns inherent in applying generative AI to
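
The core design point here is routing each class of network data to a purpose-built analyzer and reserving the generative model for the cases that actually need it. The sketch below is purely illustrative of that pattern; the class and engine names are invented for the example and are not Articul8 or Weave APIs.

```python
# Illustrative only: route each data type to a specialized engine instead of an LLM.
# Class and engine names are hypothetical, not Articul8/Weave code.
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class NetworkRecord:
    kind: str          # e.g. "topology", "flow_metrics", "config_text"
    payload: Any

def topology_engine(payload) -> dict:
    # Deterministic graph analysis for structured topology data.
    return {"engine": "graph", "summary": f"parsed {len(payload)} links"}

def metrics_engine(payload) -> dict:
    # Time-series analysis for numeric telemetry.
    return {"engine": "timeseries", "summary": f"{len(payload)} samples analyzed"}

def llm_summarizer(payload) -> dict:
    # Only free-form text falls through to a (stubbed) generative model.
    return {"engine": "llm", "summary": f"summarized {len(payload)} chars of text"}

ROUTES: Dict[str, Callable[[Any], dict]] = {
    "topology": topology_engine,
    "flow_metrics": metrics_engine,
}

def analyze(record: NetworkRecord) -> dict:
    # Structured data goes to deterministic engines; only text reaches the LLM stub.
    handler = ROUTES.get(record.kind, llm_summarizer)
    return handler(record.payload)

print(analyze(NetworkRecord("topology", [("r1", "r2"), ("r2", "r3")])))
print(analyze(NetworkRecord("config_text", "hostname r1\ninterface ge-0/0/0")))
```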

Read More »

Fed primed for rate cut as hiring slumps, unemployment inches up to 4.3%

Dive Brief: Hiring fell last month and unemployment rose to 4.3%, the Labor Department said Friday in a report that primed the Federal Reserve for a reduction in the main interest rate at its next scheduled policy meeting on Sept. 16-17. Employers added a less-than-forecast 22,000 jobs in August, the Labor Department said, while noting in a revision that payrolls shrank by 13,000 jobs in June, the first such decline since December 2020. “This report green lights a September Federal Open Market Committee easing and implies further action will be required to stabilize the labor market before year end,” Pantheon Macroeconomics Chief U.S. Economist Samuel Tombs said in a client note. The data “make us more confident that the FOMC will ease” the benchmark rate by 0.75 percentage point by December and by 0.75 percentage point next year, he said.

Dive Insight: The newest data showing labor market weakness puts the two sides of the Fed’s dual mandate, maintaining stable prices and ensuring maximum employment, at loggerheads. The U.S. has expanded payrolls by just 598,000 in 2025, the slowest increase of any year since the pandemic. Meanwhile, inflation, as measured by the personal consumption expenditures price index minus food and energy prices, edged up in July to 2.9%, well above the Fed’s 2% inflation target. Labor market weakness has prompted several Fed officials during the past several weeks to focus less on reducing price pressures and to favor aiding the labor market with looser monetary policy. Two Fed governors dissented from a July 30 FOMC vote to leave the federal funds rate unchanged, saying weak hiring warranted a reduction in borrowing costs. The central bank has held the federal funds rate at a range between 4.25% and 4.5% for five consecutive policy meetings this year. Last month Fed Chair Jerome Powell,

Read More »

Alone we fail: the culture change needed to deliver the energy transition

Alex Thornton is executive director of LF Energy. Late last fall, my neighbors and I received a notice from our utility, Pacific Gas & Electric, informing us of proposed rate increases from an eye-watering $761.3 million Billing Modernization Initiative. Digging further into their filing with the California Public Utilities Commission, PG&E highlights the need to update a profoundly outdated system, featuring a potpourri of vendor and custom in-house software from the 1990s and 2000s, much of it written in a programming language (COBOL) that peaked in popularity last century and for which there are vanishingly few engineers with expertise. Additionally, the filing highlights that the billing system has been unable to keep pace with regulatory needs, forcing delays and complex workarounds. In brief, PG&E’s key digital systems are unable to satisfy business needs and therefore need an expensive overhaul, raising customer rates. As a result, my monthly electric bill is going up.

I pick on PG&E because I’m their customer and they offer a good example, not because they are an industry exception. Most utilities and regulators struggle with these very same digital modernization challenges. Billing is a core function for any business. If utilities can’t maintain their core systems, then how can they be expected to roll out and operate more advanced digital functionality such as dynamic rates, distributed energy resource orchestration, virtual power plants, demand response programs, electric vehicle charging optimization and AI, all while defending against ever-increasing cybersecurity attacks and maintaining affordability? Short answer: they can’t, not without a serious culture change to become technology companies that embrace best practices for digital innovation.

As part of their CPUC filing, PG&E notes that Southern California Edison and San Diego Gas & Electric launched similar billing system upgrades in 2016 and 2017, respectively. This key point

Read More »

SPE Launches 2026 Offshore Achievement Awards

In a statement sent to Rigzone recently, Society of Petroleum Engineers (SPE) Aberdeen announced that it had launched the 2026 Offshore Achievement Awards (OAA). The awards “celebrat[e]… excellence across the energy spectrum”, according to the statement, which highlighted that next year’s iteration marks the 39th “year of recognizing outstanding achievements that drive innovation and excellence across the offshore energy industry”. “Building on the success of previous years, the 2026 OAAs continue to reward groundbreaking technologies, exceptional company growth, and the remarkable contributions of individuals who shape the future of energy,” SPE Aberdeen said in the statement. “From emerging innovators to industry veterans, the awards celebrate those who demonstrate excellence, leadership, and vision in advancing the sector,” it added. Entries are now open to all UK registered companies operating within the oil and gas and renewables sectors, the statement noted. The ceremony will take place on March 5 in Aberdeen, the statement revealed. The 2026 award categories “span the full breadth of offshore energy achievements”, the statement said. They can be seen in full below:

Emerging Technology
Field Proven Technology
Collaboration
Sustainability
Skills Development
Offshore Workplace of Choice
Inclusivity Champion
Industry Expert
Young Professionals
Industry Returner/Transferer
Significant Contribution

“The energy industry continues to evolve at an unprecedented pace, and the OAAs evolve with it,” Graham Dallas, Chair of the Offshore Achievement Awards Committee, said in the statement. “These awards provide a vital platform to showcase the innovations, partnerships, and individual excellence that will define our industry’s future. We encourage organizations and individuals across the entire energy spectrum to share their success stories,” Dallas added. “We’re thrilled to have Bilfinger return as Principal Sponsor for the second year running. Their ongoing support reinforces our shared commitment to celebrating the innovations and individuals driving our industry forward,” he continued. Rod Agnew, Vice President

Read More »

Delta Achieves First SAF Uplift at PDX

Delta Air Lines, in cooperation with Shell plc and Portland International Airport (PDX), has taken delivery of sustainable aviation fuel (SAF) for the PDX fuel system. Delta said in a media release that this marks the first commercial-scale SAF uplift at PDX, owned and operated by the Port of Portland. Shell supplied the batch of more than 400,000 gallons of neat SAF to Zenith Terminal in Portland, where it was blended with traditional jet fuel to meet regulatory requirements, before being delivered to PDX via barge, truck, and pipeline, Delta said. The blended SAF entered the airport’s privately owned fuel supply system earlier this month. “Every SAF delivery is a powerful example of how industry collaboration can unlock markets for sustainable aviation fuel”, Charlotte Lollar, Delta’s SAF director, said. “By working with Shell and the Port of Portland to tap into existing infrastructure, we’re able to access SAF at more airports and continue to prove its viability and ultimately scale it in a way the industry needs”. “We’re excited about this step toward bringing sustainable aviation fuel to PDX. It’s a move that can help improve local air quality and cut down on greenhouse gas emissions, which is great news for our communities”, Dan Pippenger, Port of Portland Chief Aviation Officer, said. Delta noted SAF is a renewable jet fuel that reduces lifecycle emissions by over 80 percent, and is compatible with existing infrastructure and aircraft engines. Production is still nascent, expensive, and scarce, amounting to less than a week’s worth of global flights, it said. Locally, Delta said it collaborates with the Port of Portland and others to establish a SAF market. Federally, Delta advocates for incentives to boost production and affordability, including the 40B SAF Blenders Tax Credit and the 45Z Clean Fuel Production Credit, extended through 2029,

Read More »

Inaugural ContourGlobal RE Facility in US Goes Online

ContourGlobal said it had started operations at the first phase of the Black Hollow Sun (BHS) complex in Severance, Colorado, unlocking 185 megawatts peak (MWp) out of a planned 324 MWp. BHS I has now begun supplying Platte River Power Authority, while BHS II is under construction and expected to go onstream in 2026. “BHS I represents ContourGlobal’s first operating renewable plant in the United States and the initial phase of the Black Hollow Sun complex, which will also include BHS II (139 MWp)”, ContourGlobal, acquired by New York City-headquartered global investor KKR in 2022, said in a statement on its website. “When fully completed in 2026, the power complex will reach a total capacity of 324 MWp, making it Northern Colorado’s largest solar project and Platte River’s largest contracted solar asset. “Once fully operational, the complex will produce around 608 GWh of clean energy each year, which will be delivered to Platte River Power Authority, the community-owned utility serving Fort Collins, Loveland, Estes Park and Longmont”. “This is enough clean electricity to power more than 73,000 homes in Northern Colorado while avoiding almost 450,000 metric tons of carbon dioxide emissions each year”, ContourGlobal said. The household capacity was based on an estimated 8,304 kilowatt hours of annual energy consumption by a typical Colorado household. The emissions avoidance was based on the latest grid emission data and Colorado Clean Energy Plan reporting methods, according to ContourGlobal. “Both phases were designed to integrate seamlessly through shared infrastructure, optimizing land use and technical efficiency”, ContourGlobal said. “As part of its 20-year presence in the U.S. energy [market], ContourGlobal is using American-manufactured solar modules supplied by its partner Qcells, which is also providing its integrated EPC (Engineering, Procurement, Construction) solutions for this project”, it said. According to the company, it has 2.8 gigawatts (GW) of
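
The homes-powered figure follows directly from the numbers ContourGlobal cites; a quick check using the stated 608 GWh of annual output and 8,304 kWh of typical annual household consumption:

```python
# Reproduce the homes-powered estimate from the figures cited in the release.
annual_output_gwh = 608          # expected annual generation of the full complex
household_kwh_per_year = 8_304   # stated typical Colorado household consumption

homes = annual_output_gwh * 1_000_000 / household_kwh_per_year
print(f"{homes:,.0f} homes")     # ≈ 73,218, consistent with "more than 73,000 homes"
```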

Read More »

Argan Posts 94 Percent Increase in Quarterly Profit

Argan Inc., primarily involved in construction and related services in the power sector, has reported $35.28 million in net income for the second quarter of fiscal year 2026, up 93.84 percent from the same three-month period a year ago. Earnings per share assuming dilution landed at $2.50. The New York Stock Exchange-listed company declared a dividend of $0.375 per share for the quarter ended July 2025, up from $0.30 for the corresponding prior-year period. Revenue rose 4.73 percent year-on-year to $237.74 million. EBITDA increased 45.82 percent to $36.23 million, representing 15.2 percent of revenue. Income before income taxes was $35.64 million, up 46.76 percent year-over-year. “The increase in the number of current projects and contract backlog has resulted in increased project activity and the related revenues compared to the prior-year quarter”, the Arlington, Virginia-based company, which also provides industrial construction services and telecommunications solutions, said in a statement on its website. “During the second quarter of fiscal 2026, several recently awarded power plant projects were in their early stages of activity with limited revenues, while the company’s mature projects continued to generate significant activity”. Argan president and chief executive David Watson said, “Additionally, demand for our capabilities across all of our business segments remains strong, with record backlog of $2.0 billion”. “We achieved a few project milestones in the quarter, reflecting excellent execution and solid progress within our project base”, Watson added. “During the quarter, we completed our LNG project in Louisiana and achieved first fire at one of our Trumbull units, followed by first fire at the second Trumbull unit occurring in August. “Several of our renewable projects advanced meaningfully this summer, as cooperative weather allowed us to make significant progress”. “Argan remains well-positioned to benefit from the current demand environment as the industry responds to the urgent need
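
The reported ratios can be checked against the underlying figures in the release; the short sketch below reproduces the EBITDA margin and the prior-year quarter's net income implied by the stated growth rate.

```python
# Sanity-check Argan's reported ratios using only the figures in the release.
revenue = 237.74          # $ million, fiscal Q2 2026
ebitda = 36.23            # $ million
net_income = 35.28        # $ million

ebitda_margin = ebitda / revenue
prior_year_net_income = net_income / 1.9384   # implied by the stated 93.84% increase

print(f"EBITDA margin: {ebitda_margin:.1%}")                          # ≈ 15.2% of revenue
print(f"Implied prior-year net income: ${prior_year_net_income:.2f}M")  # ≈ $18.20 million
```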

Read More »

Google adds Gemini to its on-prem cloud for increased data protection

Google has announced the general availability of its Gemini artificial intelligence models on Google Distributed Cloud (GDC), making its generative AI product available in enterprise and government data centers. GDC is an on-premises implementation of Google Cloud, aimed at heavily regulated industries like medical and financial services to bring Google Cloud services within company firewalls rather than the public cloud. The launch of Gemini on GDC allows organizations with strict data residency and compliance requirements to deploy generative AI without compromising control over sensitive information. GDC uses Nvidia Hopper- and Blackwell-era GPU accelerators with automated load balancing and zero-touch updates for high availability. Security features include audit logging and access control capabilities that provide full transparency for customers. The platform also features Confidential Computing support for both CPUs (with Intel TDX) and GPUs (with Nvidia’s confidential computing) to secure sensitive data and prevent tampering or exfiltration.

Read More »

Nvidia networking roadmap: Ethernet, InfiniBand, co-packaged optics will shape data center of the future

Nvidia is baking into its Spectrum-X Ethernet platform a suite of algorithms that implement networking protocols allowing Spectrum-X switches, ConnectX-8 SuperNICs, and systems with Blackwell GPUs to connect over wider distances without requiring hardware changes. These Spectrum-XGS algorithms use real-time telemetry—tracking traffic patterns, latency, congestion levels, and inter-site distances—to adjust controls dynamically.

Ethernet and InfiniBand
Developing and building Ethernet technology is a key part of Nvidia’s roadmap. Since it first introduced Spectrum-X in 2023, the vendor has rapidly made Ethernet a core development effort. This is in addition to InfiniBand development, which is still Nvidia’s bread-and-butter connectivity offering. “InfiniBand was designed from the ground up for synchronous, high-performance computing — with features like RDMA to bypass CPU jitter, adaptive routing, and congestion control,” Shainer said. “It’s the gold standard for AI training at scale, connecting more than 270 of the world’s top supercomputers. Ethernet is catching up, but traditional Ethernet designs — built for telco, enterprise, or hyperscale cloud — aren’t optimized for AI’s unique demands,” Shainer said. Most industry analysts predict Ethernet use for AI networking in enterprise and hyperscale deployments will increase over the next year; that makes Ethernet advancements a core direction for Nvidia and any vendor looking to offer AI connectivity options to customers. “When we first initiated our coverage of AI back-end networks in late 2023, the market was dominated by InfiniBand, holding over 80% share,” wrote Sameh Boujelbene, vice president of Dell’Oro Group, in a recent report. “Despite its dominance, we have consistently predicted that Ethernet would ultimately prevail at scale. What is notable, however, is the rapid pace at which Ethernet gained ground in AI back-end networks. As the industry moves to 800 Gbps and beyond, we believe Ethernet is now firmly positioned to overtake InfiniBand in these high-performance deployments.”
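
As a purely illustrative sketch of the "telemetry in, control adjustments out" loop described above, and not Nvidia's actual Spectrum-XGS algorithms, a distance-aware feedback controller might look something like this (all names and thresholds are invented):

```python
# Illustrative only: a simple telemetry-driven congestion-control loop.
# This is NOT Nvidia's Spectrum-XGS implementation; names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Telemetry:
    latency_us: float        # observed one-way latency
    queue_depth_pct: float   # switch buffer occupancy, 0-100
    inter_site_km: float     # distance between connected sites

def adjust_injection_rate(current_rate_gbps: float, t: Telemetry) -> float:
    """Scale the sender's injection rate from live telemetry."""
    # Longer links tolerate more in-flight data, so allow a higher baseline.
    distance_factor = 1.0 + min(t.inter_site_km / 1000.0, 1.0) * 0.25
    if t.queue_depth_pct > 80 or t.latency_us > 500:
        return current_rate_gbps * 0.7          # back off under congestion
    if t.queue_depth_pct < 30 and t.latency_us < 100:
        return min(current_rate_gbps * 1.1 * distance_factor, 800.0)  # probe upward
    return current_rate_gbps                    # hold steady otherwise

rate = 400.0
for sample in [Telemetry(90, 20, 1200), Telemetry(600, 85, 1200)]:
    rate = adjust_injection_rate(rate, sample)
    print(f"new rate: {rate:.0f} Gbps")
```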

Read More »

Inside the AI-optimized data center: Why next-gen infrastructure is non-negotiable

How are AI data centers different from traditional data centers? AI data centers and traditional data centers can be physically similar, as they contain hardware, servers, networking equipment, and storage systems. The difference lies in their capabilities: traditional data centers were built to support general computing tasks, while AI data centers are specifically designed for more sophisticated, time- and resource-intensive workloads. Conventional data centers are simply not optimized for AI’s advanced tasks and the high-speed data transfer they require. Here’s a closer look at their differences:

AI-optimized vs. traditional data centers
Traditional data centers: Handle everyday computing needs such as web browsing, cloud services, email and enterprise app hosting, data storage and retrieval, and a variety of other relatively low-resource tasks. They can also support simpler AI applications, such as chatbots, that do not require intensive processing power or speed.
AI data centers: Built to compute significant volumes of data and run complex algorithms, ML, and AI tasks, including agentic AI workflows. They feature high-speed networking and low-latency interconnects for rapid scaling and data transfer to support AI apps and edge and internet of things (IoT) use cases.

Physical infrastructure
Traditional data centers: Typically composed of standard networking architectures such as CPUs suitable for handling networking, apps, and storage.
AI data centers: Feature more advanced graphics processing units (GPUs) (popularized by chip manufacturer Nvidia), tensor processing units (TPUs) (developed by Google), and other specialized accelerators and equipment.

Storage and data management
Traditional data centers: Generally store data in more static cloud storage systems, databases, data lakes, and data lakehouses.
AI data centers: Handle huge amounts of unstructured data including text, images, video, audio, and other files. They also incorporate high-performance tools including parallel file systems, multiple network servers, and NVMe solid-state drives (SSDs).

Power consumption
Traditional data centers: Require robust cooling

Read More »

From Cloud to Concrete: How Explosive Data Center Demand is Redefining Commercial Real Estate

The world will generate 181 ZB of data in 2025, an increase of 23.13% year over year, with about 2.5 quintillion bytes (2.5 exabytes, EB) created daily, according to a report from Demandsage. To put that in perspective: one exabyte is 1 quintillion bytes, or 1,000,000,000,000,000,000 bytes, so 2.5 EB per day works out to 2.5 million TB per day, or roughly 29 TB every second. It’s no wonder data centers have become so crucial for creating, consuming, and storing data — and no wonder investor interest has skyrocketed. The surging demand for secure, scalable, high-performance retail and wholesale colocation and hyperscale data centers is spurred by the relentless global expansion of cloud computing and demand for AI as data generation from businesses, governments, and consumers continues to surge. Power access, sustainable infrastructure, and land acquisition have become critical factors shaping where and how data center facilities are built. As a result, investors increasingly view these facilities not just as technology assets, but as a unique convergence of real estate, utility infrastructure, and mission-critical systems. Capitalizing on this momentum, private equity and real estate investment firms are rapidly expanding into the sector through acquisitions, joint ventures, and new funds—targeting opportunities to build and operate facilities with a focus on energy efficiency and scalability.
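
The per-second figure follows from the daily one; reproducing the arithmetic with the article's own numbers:

```python
# Convert the cited 2.5 exabytes created per day into per-day and per-second terabytes.
bytes_per_day = 2.5e18                 # 2.5 quintillion bytes (2.5 EB)
tb_per_day = bytes_per_day / 1e12      # decimal terabytes
tb_per_second = tb_per_day / 86_400    # seconds in a day

print(f"{tb_per_day / 1e6:.1f} million TB per day, about {tb_per_second:.0f} TB per second")
# 2.5 million TB per day, about 29 TB per second, matching the figures above
```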

Read More »

Ai4 2025 Navigates Rapid Change in AI Policy, Education

The pace of innovation in artificial intelligence is fundamentally reshaping the landscape of education, and the changes are happening rapidly. At the forefront of this movement stand developers, policymakers, educational practitioners, and associated experts at the recent Ai4 2025 conference (Aug. 11-13) in Las Vegas, where leading voices such as Geoffrey Hinton, “The Godfather of AI,” top executives from Google and U.S. Bank, and representatives from multiple government agencies gathered to chart the future of AI development. Importantly, educators and academic institutions played a central role, ensuring that the approach to AI in schools is informed by those closest to the classroom. Key discussions at Ai4 and recent educator symposia underscored both the promise and peril of swift technological change. Generative AI, with its lightning-fast adoption since the advent of tools like ChatGPT, is opening new possibilities for personalized learning, skills development, and operational efficiency. But participants were quick to note that acceleration brings good and bad consequences. On one hand, there’s excitement about practical classroom implementations and the potential for students to engage with cutting-edge technology. On the other, concerns about governance, ethics, safety, and the depth of genuine learning remain at the forefront. This urgency to “do this right” is echoed by teachers, unions, and developers who are united by the challenges and opportunities on the ground. Their voices highlight the need for agreement on education policy and associated regulations to keep pace with technological progress, create frameworks for ethical and responsible use, and ensure that human agency remains central in shaping the future of childhood and learning. In this rapidly evolving environment, bringing all stakeholders to the table is no longer optional; it is essential for steering AI in education toward outcomes that benefit both students and society.

Global Context: America, China, and the AI Race

Read More »

Two Lenses on One Market: JLL and CBRE Show Data Centers in a Pinch

The two dominant real estate research houses, JLL and CBRE, have released midyear snapshots of the North American data center market, and both paint the same picture in broad strokes: demand remains insatiable, vacancy has plunged to record lows, and the growth of AI and hyperscale deployments is reshaping every aspect of the business. But their lenses capture different angles of the same story: one emphasizing preleasing and capital flows, the other highlighting hyperscale requirements and regional shifts.

Vacancy Falls Through the Floor
JLL sets the stage with a stark headline: colocation vacancy is nearing 0%. The JLL Midyear 2025 North America Data Center report warns that this scarcity “is constraining economic growth and undermining national security,” underscoring the role of data centers as critical infrastructure. CBRE’s North American Data Center Trends H1 2025 numbers back this up, recording an all-time low North America vacancy rate of 1.6%, the tightest in more than a decade. Both agree that market loosening is years away — JLL projecting vacancy hovering around 2% through 2027, CBRE noting 74.3% of new capacity already spoken for. The takeaway seems clear: without preleasing, operators and tenants alike are effectively shut out of core markets.

Absorption and Preleasing Drive Growth
JLL drills down into the mechanics. With virtually all absorption tied to preleasing, the firm points to Northern Virginia (647 MW) and Dallas (575 MW) as the twin engines of growth in H1, joined by Chicago, Austin/San Antonio, and Atlanta. CBRE’s absorption math is slightly different, but the conclusion aligns: Northern Virginia again leads the nation, with 538.6 MW net absorption and a remarkable 80% surge in under-construction capacity. CBRE sharpens the view by noting that the fiercest competition is at the top end: single-tenant requirements of 10 MW or more are setting pricing records as hyperscalers

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has been a regular among the non-tech companies showing off technology at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor can be overseen by farmers using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation
AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
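
The "LLM as a judge" idea mentioned above is commonly implemented by having one or more cheaper models grade another model's output and voting on the result. The sketch below shows that pattern in generic form; the judge callables are placeholders, and no specific vendor API is assumed.

```python
# Generic LLM-as-judge sketch: several cheap "judge" models vote on an answer.
# The judge callables are stand-ins; wire in real model clients as needed.
from collections import Counter
from typing import Callable, List

JudgeFn = Callable[[str, str], str]   # (task, candidate_answer) -> "pass" | "fail"

def judge_with_majority(task: str, answer: str, judges: List[JudgeFn]) -> bool:
    """Accept the answer only if a majority of judge models approve it."""
    votes = Counter(judge(task, answer) for judge in judges)
    return votes["pass"] > len(judges) / 2

# Stub judges for demonstration; in practice each would wrap a different model.
judges = [
    lambda task, ans: "pass" if ans.strip() else "fail",
    lambda task, ans: "pass" if "http" not in ans else "fail",  # crude URL-hallucination check
    lambda task, ans: "pass",
]

print(judge_with_majority("Summarize the Q3 report", "Revenue grew 12% on higher volumes."))
```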

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends
It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »