Stay Ahead, Stay ONMINE

Monkey Island LNG Picks ConocoPhillips Tech

ConocoPhillips has been contracted to deliver its Optimized Cascade Process liquefaction technology for the 26 million tonnes per annum (MTPA) liquefied natural gas (LNG) export facility in Cameron Parish, Louisiana, being developed by Monkey Island LNG.

“After an extensive technology selection study and analysis on multiple technologies, Monkey Island LNG selected the Optimized Cascade process based on its operational flexibility, quick restart capabilities, high efficiency, and proven performance above nameplate capacity”, Greg Michaels, CEO of Monkey Island LNG, said.

“The decision marks a major milestone in advancing Monkey Island LNG’s mission to deliver TrueCost LNG, a radically transparent, cost-efficient model that eliminates hidden fees and aligns incentives across the LNG value chain”, Michaels said.

The 246-acre site in Cameron Parish, Louisiana, is located on a deepwater port along the Calcasieu Ship Channel, about 2 miles inland from the Gulf of Mexico. It offers additional marine access via the Cameron Loop on Monkey Island’s northern bank, ensuring flexibility during construction and operation, according to Monkey Island LNG.

The facility is also situated near one of North America’s most extensive natural gas transportation networks, which is directly connected to Henry Hub and the abundant Haynesville Shale gas basin, Monkey Island LNG noted.

The facility is designed to have five liquefaction trains that together can liquefy approximately 3.4 billion cubic feet per day of natural gas. Each train has a production capacity of approximately 5 MTPA. The facility will also have three LNG storage tanks, each with the capacity to hold 180,000 cubic meters (6.4 million cubic feet).
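
The headline figures hang together arithmetically: five trains at roughly 5 MTPA each give the roughly 26 MTPA nameplate, which corresponds to about 3.4 billion cubic feet per day of feed gas. A minimal sketch of that conversion, assuming a rule-of-thumb factor of about 48.7 Bcf of gas per million tonnes of LNG (an assumption for illustration, not a figure from the companies):

```python
# Rough cross-check of the capacity figures above. The ~48.7 Bcf of gas per
# million tonnes of LNG conversion factor is a common industry rule of thumb
# and an assumption here, not a number from Monkey Island LNG.

TRAINS = 5
MTPA_PER_TRAIN = 5.0               # million tonnes per annum, per train
BCF_PER_MILLION_TONNES = 48.7      # assumed conversion factor

total_mtpa = TRAINS * MTPA_PER_TRAIN                       # ~25 MTPA vs. the ~26 MTPA nameplate
total_bcf_per_day = total_mtpa * BCF_PER_MILLION_TONNES / 365

print(f"Nameplate: ~{total_mtpa:.0f} MTPA")
print(f"Feed gas: ~{total_bcf_per_day:.1f} Bcf/d")         # ~3.3 Bcf/d, close to the 3.4 Bcf/d cited
```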

Monkey Island LNG also said it has picked McDermott International as its engineering, procurement, and construction partner for the project.

To contact the author, email [email protected]



Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin, and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Cisco’s Splunk embeds agentic AI into security and observability products

AI-powered observability enhancements: Cisco also announced it has updated Splunk Observability to use Cisco AgenticOps, which deploys AI agents to automate telemetry collection, detect issues, identify root causes, and apply fixes. The agentic AI updates help enterprise customers automate incident detection, root-cause analysis, and routine fixes. “We are making sure

Read More »

Broadcom touts AI-native VMware, but gains aren’t revolutionary

“We have the relationships,” Umesh Mahajan, Broadcom’s general manager for application networking and security, told Network World. A large organization can’t simply stop using VMware, he says. “These workloads can’t disappear overnight. So, we will continue to have those relationships.” In addition, VMware’s technology is proprietary, complicated, and not something

Read More »

Cisco launches AI-driven data fabric powered by Splunk

At this week’s Splunk .conf25 event in Boston, Cisco unveiled a new data architecture that’s powered by the Splunk platform and designed to help enterprises glean AI-driven insights from machine-generated telemetry, such as metrics, events, logs and traces. The new Cisco Data Fabric integrates business and machine data for AI

Read More »

Afentra Signs Heads of Terms for Angola Asset License

Afentra plc said it has signed heads of terms with Angola’s National Agency of Petroleum, Gas and Biofuels (ANPG) for the risk service contract (RSC) for offshore Block 3/24, located adjacent to its existing Block 3/05 and 3/05A interests in Angola. The formal award of the license is expected in the coming months following the completion of the government approval process, Afentra said in a news release. Block 3/24 contains five oil and gas discoveries, all located in shallow water, with several exploration prospects identified within the acreage on existing 3D seismic, according to the release. Its proximity to Block 3/05 offers short-cycle, low-cost development potential, Afentra said. The granting of the license will increase Afentra’s gross offshore acreage position to 810 square kilometers from 265 square kilometers, the company said. Under the agreement, Afentra will become the operator with a 40 percent interest in the block, alongside Maurel & Prom Angola S.A.S. with a 40 percent interest and Sonangol E&P with 20 percent. There will be an initial five-year period to review the development potential of the existing discoveries and the exploration prospectivity, followed by a 25-year production period that will be awarded when a discovery is developed, according to the release. Block 3/24 covers 210 square miles (545 square kilometers) and is adjacent to Afentra’s existing producing oil fields and undeveloped discoveries in Blocks 3/05 and 3/05A, the company said. The block adds a further five discoveries: Palanca North East, Quissama, Goulongo, Cefo and Kuma. The discoveries are all located in the same Pinda reservoir as the existing oil fields in Blocks 3/05 and 3/05A, Afentra said. The block also contains the previously developed Canuku field cluster, which has produced up to 12,000 barrels of oil per day. The block is estimated to include over 130 million barrels of
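
The acreage figures in the release are internally consistent; a quick check (a sketch for illustration, using the standard square-kilometer-to-square-mile conversion):

```python
# Consistency check of the acreage figures quoted above.
KM2_PER_SQ_MILE = 2.58999              # standard conversion factor

existing_km2 = 265                     # Afentra's current gross offshore acreage
block_3_24_km2 = 545                   # Block 3/24 area

print(existing_km2 + block_3_24_km2)                 # 810 km2, matching the stated new total
print(round(block_3_24_km2 / KM2_PER_SQ_MILE))       # ~210 square miles, matching the release
```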

Read More »

Malaysia Tightens Security at Petronas LNG Plants

Malaysian energy giant Petroliam Nasional Bhd. said it was working closely with authorities after the government tightened security at one of the world’s largest liquefied natural gas facilities in Sarawak state. A Petronas employee at the company’s headquarters received text messages that threatened to burn the LNG facilities from a phone number registered in Indonesia, Bernama reported, citing Deputy Prime Minister Fadillah Yusof. The matter is under police investigation, he added. The oil and gas company said there had been no impact to its operations or disruptions to its supplies, adding that the safety and wellbeing of its employees and contractors remained its highest priority. Petronas’ complex in Bintulu, on the Sarawak coast, spans 276 hectares (682 acres) and has a production capacity of nearly 30 million tons per year, according to its website. The National Security Council said on Monday it received information on a security threat against the LNG plants.

Read More »

13 state governors join coalition to promote EVs

Dive Brief: Hawaiʻi and Wisconsin joined the Affordable Clean Cars Coalition last week, increasing membership to 13 state governors.  Launched in May, the coalition aims to promote more affordable electric vehicles, support U.S. automotive manufacturers and preserve states’ authority under the Clean Air Act. The coalition was formed by the U.S. Climate Alliance, a bipartisan group of 24 governors. States will address their own challenges and opportunities while working together to achieve the coalition’s collective goals, the Climate Alliance said in a press release. Dive Insight: New electric vehicle sales were strong in the past two months as buyers rushed to take advantage of federal incentives before they expire Sept. 30. However, the share of EVs compared to total automotive sales declined in the second quarter both year-over-year and compared to the first quarter, according to CleanTechnica.  “Sales of EVs will likely fall dramatically when tax credits expire,” said Cox Automotive Senior Economist Charlie Chesbrough in an August 27 statement. A further blow to EVs came in June when Congress and the Trump administration ended California’s authority to set its own stronger vehicle emission standards. A 1990 federal law allows 17 additional states and the District of Columbia to follow California’s vehicle emissions regulations. The Alliance for Automotive Innovation, representing many of the industry’s automakers and suppliers, supported the administration’s actions. “An aggressive regulatory push toward battery electric vehicles – ahead of consumer demand and without market readiness – will reduce U.S. vehicle production and auto jobs versus a more balanced approach that prioritizes and preserves vehicle choice,” said the group’s president and CEO, John Bozzella, in an April 29 blog post. The coalition’s 11 initial governors said in a May 23 statement that “The federal government and Congress are putting polluters over people and creating needless chaos for consumers

Read More »

Smarter data for a smarter grid: AI-powered LiDAR is transforming utility infrastructure

James Conlin is a director of product for Sharper Shape. As power grids grow older and climate threats intensify, electric utilities face urgent pressure to modernize. Meeting today’s expectations for resilience, safety, and efficiency depends not just on upgrading physical infrastructure, but on having the right data — accurate, timely, and scalable insights into assets across vast and varied terrain. One technology is rapidly changing how utilities manage their infrastructure: LiDAR (Light Detection and Ranging). LiDAR captures millions to billions of precise, high-resolution 3D data points — forming what’s known as a point cloud. These point clouds create detailed digital models of utility networks and their surrounding environments, mapping everything from power lines and substations to terrain and vegetation. This level of visibility is critical for planning, maintenance, risk mitigation, and emergency response. But collecting LiDAR data is only the beginning. The real value comes from turning that data into something useful. That’s where classification comes in. Raw LiDAR point clouds are essentially unstructured spatial data. Each point marks a location in space but offers no context on its own. Is it part of a wire, a tree, or the ground? Without classification, there’s no way to know. Classification assigns meaning to these points by labeling them according to what they represent, transforming raw data into actionable information. For electric utilities, this process is essential. It enables vegetation management by identifying growth that’s encroaching on power lines before it becomes a hazard. It supports asset inspection by helping monitor conditions such as wire sag, pole tilt, or equipment degradation. It ensures compliance and safety by verifying that infrastructure meets required regulatory clearances. It aids in disaster modeling by identifying potential risk zones for wildfires, floods, or storms. And it guides system upgrades by informing the design of new infrastructure or
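
As a concrete illustration of the classification-to-action step described above, here is a minimal sketch (not Sharper Shape’s pipeline; the class codes follow the common ASPRS LAS convention, and the clearance threshold is an assumed example value) that flags vegetation points encroaching on conductor points in an already-classified point cloud:

```python
import numpy as np
from scipy.spatial import cKDTree

# Assumed ASPRS LAS classification codes: 5 = high vegetation, 14 = wire (conductor).
VEGETATION, CONDUCTOR = 5, 14
CLEARANCE_M = 3.0   # example clearance threshold, not a regulatory value

def flag_encroaching_vegetation(points_xyz: np.ndarray, class_codes: np.ndarray) -> np.ndarray:
    """Return vegetation points lying within CLEARANCE_M of any conductor point.

    points_xyz  : (N, 3) array of classified LiDAR point coordinates, in meters
    class_codes : (N,) array of per-point classification labels
    """
    veg = points_xyz[class_codes == VEGETATION]
    wires = points_xyz[class_codes == CONDUCTOR]
    if len(veg) == 0 or len(wires) == 0:
        return np.empty((0, 3))
    # Nearest-neighbor distance from each vegetation point to the wire points.
    distances, _ = cKDTree(wires).query(veg, k=1)
    return veg[distances < CLEARANCE_M]

# Toy example: two vegetation points, one of which sits about a meter under a conductor.
pts = np.array([[0.0, 0.0, 10.0],    # conductor
                [0.1, 0.0, 9.0],     # vegetation, too close
                [50.0, 0.0, 5.0]])   # vegetation, far away
codes = np.array([CONDUCTOR, VEGETATION, VEGETATION])
print(flag_encroaching_vegetation(pts, codes))   # -> [[ 0.1  0.   9. ]]
```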

Read More »

ENGIE Partners with Prometheus for Texas Data Centers

ENGIE North America (ENGIE) has signed an agreement with Prometheus Hyperscale, a sustainable hyperscale data center developer, to co-locate data centers at select renewable and battery storage energy facilities along the Texas I-35 corridor. Under the exclusive agreement, Prometheus will deploy its high-efficiency, liquid-cooled data center infrastructure alongside ENGIE’s renewable and battery storage assets. The first sites equipped with high-performance, AI-ready data center compute capacity are expected to go live in 2026, with more locations planned from 2027, ENGIE said in a media release. “ENGIE is focused on delivering solutions to meet the growing demand for power across the U.S., with a strategic focus on enabling data center expansion. By leveraging our robust portfolio of wind, solar, and battery storage assets – combined with our commercial and industrial supply capabilities and deep trading expertise – we’re providing integrated energy solutions that support scalable, resilient, and sustainable infrastructure”, David Carroll, Chief Renewables Officer and SVP of ENGIE North America, said. “Prometheus is committed to developing sustainable, next-generation digital infrastructure for AI”, Bernard Looney, Chairman of Prometheus Hyperscale and former CEO of BP, said. “We cannot do this alone – ENGIE’s existing assets and expertise as a major player in the global energy transition make them a perfect partner as we work to build data centers that meet market needs today and tomorrow”. ENGIE added that Prometheus will work with Conduit to meet market needs quickly. Conduit is an on-site power generation provider for near-term bridging and back-up solutions. To contact the author, email [email protected]

Read More »

Will Technical Resistance for Oct NatGas Contract Hold?

Will technical resistance for the October natural gas contract hold? That was the question Eli Rubin, an energy analyst at EBW Analytics Group, asked in an EBW report sent to Rigzone by the EBW team on Tuesday, which highlighted that the October natural gas contract closed at $3.090 per million British thermal units (MMBtu) on Monday. That figure marked a 4.2¢, or 1.4 percent, rise from Friday’s close, the report outlined. “The NYMEX front-month briefly tested as high as $3.198 yesterday – pushing through technical resistance at $3.13 – before falling 10.8¢ into the close,” Rubin noted in the report. “Another technical retest of $3.20 per million British thermal units may be coming this morning, aided by a modest upturn in CDDs [cooling degree days] and decline in production readings,” he added. However, Rubin warned in the report that the near-term fundamental picture “hardly warrants extensive optimism”. “Mid to late September warmth is modestly supportive, but also partially offset by stalling heating demand,” he added. “Tropics have been quiet, for now. Storage is rising faster than at a typical seasonal pace. Pipeline maintenance appears a key support of Henry Hub spot pricing,” he continued. Rubin went on to state in the report that the medium- to long-term structural outlook is robust. “Natural gas has surpassed multiple technical resistance levels over the past two weeks to add to momentum,” he said. “With storage likely to surpass 3,650 billion cubic feet by October 2nd – and chances for a mild October – the durability of the recent rally is uncertain despite a bullish long-term outlook for NYMEX gas,” he added. In a separate EBW report sent to Rigzone by the EBW team on Monday, Rubin noted that “weekend weather and LNG demand gains may align with bullish technicals to pose renewed
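
The price move quoted above checks out arithmetically; a quick sketch (the Friday close is implied from the figures in the report rather than quoted directly):

```python
# Sanity check on the reported move in the October NYMEX natural gas contract.
monday_close = 3.090        # $/MMBtu close on Monday, per the EBW report
rise = 0.042                # the reported 4.2-cent gain

implied_friday_close = monday_close - rise          # ~$3.048/MMBtu (derived, not quoted)
pct_change = rise / implied_friday_close * 100

print(f"Implied Friday close: ${implied_friday_close:.3f}/MMBtu")
print(f"Change: {pct_change:.1f}%")                 # ~1.4%, matching the report
```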

Read More »

Cadence adds Nvidia to digital twin tool for data center design

Even though its software covers 750 vendors, Cadence is promoting the Nvidia angle considerably, and understandably since Nvidia has so much momentum. Several months ago, it released blueprints for optimal data center designs, and now it has visualization software to use the designs. Knoth stressed support for the DGX SuperPod, a massive piece of equipment with 10 or more racks of processing power and all the interconnection that goes inside of it. “This is a huge leg up for anyone who’s looking to either retrofit an existing data center with new processing power or building out a new one from scratch,” he said. As data centers move from megawatts to gigawatts, complexity increases at a considerable rate. The shift to liquid cooling adds even more complexity to calculating power usage, said Knoth. “Because all these things, when you start going into from the megawatt to the gigawatt scale, there are tremendous challenges, and that addition of liquid cooling has huge ramifications on the facility design. This is exactly where a physics-based digital twin come into play,” he said. “The old strategies of building a large shell and then putting compute inside it is not going to cut it, and so you need some new technology to actually make these things work,” he added. The Nvidia systems in the Cadence Reality Digital Twin Platform are available now upon request and will be included in the next software release later this year.

Read More »

Nvidia rolls out new GPUs for AI inferencing, large workloads

Inference is often considered to be a single step in the AI process, but it’s two workloads, according to Shar Narasimhan, director of product in Nvidia’s Data Center group. They are the context or prefill phase and the decode phase. Each of these two phases has different requirements of the underlying AI infrastructure. The prefill phase is compute-intensive, whereas the decode phase is memory-intensive, but up to now, the GPU has been asked to do both even though it really does only one task well. The Rubin CPX has been engineered to improve memory performance, Narasimhan said. So, the Rubin CPX is purpose-built for both phases, offering processing power as well as high throughput and efficiency. “It will dramatically increase the productivity and performance of AI factories,” said Narasimhan. It achieves this through massive token generation. Tokens equal work units in AI, particularly generative AI, so the more tokens generated, the more revenue generated. Nvidia is also announcing a new Vera Rubin NVL144 CPX rack, offering 7.5 times the performance of an NVL72, the current top-of-the-line system. Narasimhan said the NVL144 CPX enables AI service providers to dramatically increase their profitability by delivering $5 billion of revenue for every $100 million invested in infrastructure. Rubin CPX is offered in multiple configurations, including the Vera Rubin NVL144 CPX, which can be combined with the Quantum-X800 InfiniBand scale-out compute fabric or the Spectrum-X Ethernet networking platform with Nvidia Spectrum-XGS Ethernet technology and Nvidia ConnectX-9 SuperNICs. Nvidia Rubin CPX is expected to be available at the end of 2026.
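
To make the prefill/decode distinction concrete, here is a minimal toy sketch in Python (an illustration of the general pattern only, not Nvidia’s implementation; the dimensions are arbitrary). Prefill is one large, compute-heavy pass over the whole prompt, while decode emits one token at a time and spends most of its time re-reading a key/value cache that grows with every step:

```python
import numpy as np

# Toy illustration of why the two inference phases stress hardware differently:
# prefill processes the whole prompt in one large, compute-heavy pass, while
# decode generates one token at a time and mainly re-reads a growing KV cache.

d_model, prompt_len, new_tokens = 512, 1024, 32
rng = np.random.default_rng(0)
W_qkv = rng.standard_normal((d_model, 3 * d_model))

# --- Prefill: one big matrix multiply over all prompt tokens (compute-bound) ---
prompt = rng.standard_normal((prompt_len, d_model))
qkv = prompt @ W_qkv                      # (1024, 1536): large, FLOP-heavy op
kv_cache = [qkv[:, d_model:]]             # keep K and V for reuse during decode

# --- Decode: one token per step, dominated by reading the KV cache (memory-bound) ---
token = rng.standard_normal((1, d_model))
for _ in range(new_tokens):
    qkv_t = token @ W_qkv                 # tiny matmul for a single token
    kv_cache.append(qkv_t[:, d_model:])   # cache grows every step
    cache = np.vstack(kv_cache)           # each step touches the whole cache again
    # (attention over `cache` would go here; bytes moved grow with sequence length)
    token = rng.standard_normal((1, d_model))  # stand-in for the next sampled token

print("prefill FLOPs dominate; decode traffic grows with cache rows:", cache.shape[0])
```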

Read More »

Google adds Gemini to its on-prem cloud for increased data protection

Google has announced the general availability of its Gemini artificial intelligence models on Google Distributed Cloud (GDC), making its generative AI product available on enterprise and government data centers. GDC is an on-premises implementation of Google Cloud, aimed at heavily regulated industries like medical and financial services to bring Google Cloud services within company firewalls rather than the public cloud. The launch of Gemini on GDC allows organizations with strict data residency and compliance requirements to deploy generative AI without compromising control over sensitive information. GDC uses Nvidia Hopper- and Blackwell-era GPU accelerators with automated load balancing and zero-touch updates for high availability. Security features include audit logging and access control capabilities that provide full transparency for customers. The platform also features Confidential Computing support for both CPUs (with Intel TDX) and GPUs (with Nvidia’s confidential computing) to secure sensitive data and prevent tampering or exfiltration.

Read More »

Nvidia networking roadmap: Ethernet, InfiniBand, co-packaged optics will shape data center of the future

Nvidia is baking into its Spectrum-X Ethernet platform a suite of algorithms that can implement networking protocols to allow Spectrum-X switches, ConnectX-8 SuperNICs, and systems with Blackwell GPUs to connect over wider distances without requiring hardware changes. These Spectrum-XGS algorithms use real-time telemetry—tracking traffic patterns, latency, congestion levels, and inter-site distances—to adjust controls dynamically. Ethernet and InfiniBand: Developing and building Ethernet technology is a key part of Nvidia’s roadmap. Since it first introduced Spectrum-X in 2023, the vendor has rapidly made Ethernet a core development effort. This is in addition to InfiniBand development, which is still Nvidia’s bread-and-butter connectivity offering. “InfiniBand was designed from the ground up for synchronous, high-performance computing — with features like RDMA to bypass CPU jitter, adaptive routing, and congestion control,” Shainer said. “It’s the gold standard for AI training at scale, connecting more than 270 of the world’s top supercomputers. Ethernet is catching up, but traditional Ethernet designs — built for telco, enterprise, or hyperscale cloud — aren’t optimized for AI’s unique demands,” Shainer said. Most industry analysts predict Ethernet deployment for AI networking in enterprise and hyperscale deployments will increase in the next year; that makes Ethernet advancements a core direction for Nvidia and any vendor looking to offer AI connectivity options to customers. “When we first initiated our coverage of AI back-end Networks in late 2023, the market was dominated by InfiniBand, holding over 80% share,” wrote Sameh Boujelbene, vice president of Dell’Oro Group, in a recent report. “Despite its dominance, we have consistently predicted that Ethernet would ultimately prevail at scale. What is notable, however, is the rapid pace at which Ethernet gained ground in AI back-end networks. As the industry moves to 800 Gbps and beyond, we believe Ethernet is now firmly positioned to overtake InfiniBand in these high-performance deployments.”
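
The telemetry-driven adjustment described above can be pictured as a simple control loop. The sketch below is a generic illustration of that pattern only, not Nvidia’s Spectrum-XGS algorithms; the field names, scaling rule, and thresholds are invented for the example:

```python
from dataclasses import dataclass

# Generic sketch of telemetry-driven congestion control for cross-site AI traffic.
# This illustrates the pattern described above, NOT Nvidia's Spectrum-XGS
# algorithms; the field names and thresholds are invented for the example.

@dataclass
class LinkTelemetry:
    rtt_ms: float          # measured round-trip time, includes inter-site distance
    queue_depth: float     # fraction of switch buffer occupied (0.0 - 1.0)
    loss_rate: float       # observed packet loss fraction

def adjust_send_window(window_kb: float, t: LinkTelemetry,
                       base_rtt_ms: float = 0.05) -> float:
    """Scale the sender's window with distance, back off on congestion signals."""
    # Longer paths need a larger in-flight window to keep the pipe full
    # (bandwidth-delay product grows with RTT).
    distance_scale = max(1.0, t.rtt_ms / base_rtt_ms)
    target = window_kb * distance_scale
    # Congestion signals override: shrink when queues build or loss appears.
    if t.loss_rate > 0.001 or t.queue_depth > 0.8:
        target *= 0.5
    elif t.queue_depth > 0.5:
        target *= 0.9
    return target

# Example: a long-haul inter-site link with mild queue build-up.
print(adjust_send_window(64.0, LinkTelemetry(rtt_ms=5.0, queue_depth=0.6, loss_rate=0.0)))
```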

Read More »

Inside the AI-optimized data center: Why next-gen infrastructure is non-negotiable

How are AI data centers different from traditional data centers? AI data centers and traditional data centers can be physically similar, as they contain hardware, servers, networking equipment, and storage systems. The difference lies in their capabilities: traditional data centers were built to support general computing tasks, while AI data centers are specifically designed for more sophisticated, time- and resource-intensive workloads. Conventional data centers are simply not optimized for AI’s advanced tasks and the high-speed data transfer they require. Here’s a closer look at their differences. AI-optimized vs. traditional data centers: Traditional data centers handle everyday computing needs such as web browsing, cloud services, email and enterprise app hosting, data storage and retrieval, and a variety of other relatively low-resource tasks; they can also support simpler AI applications, such as chatbots, that do not require intensive processing power or speed. AI data centers are built to compute significant volumes of data and run complex algorithms, ML, and AI tasks, including agentic AI workflows; they feature high-speed networking and low-latency interconnects for rapid scaling and data transfer to support AI apps as well as edge and internet of things (IoT) use cases. Physical infrastructure: Traditional data centers are typically composed of standard networking architectures built around CPUs suitable for handling networking, apps, and storage. AI data centers feature more advanced graphics processing units (GPUs), popularized by chip manufacturer Nvidia, tensor processing units (TPUs), developed by Google, and other specialized accelerators and equipment. Storage and data management: Traditional data centers generally store data in more static cloud storage systems, databases, data lakes, and data lakehouses. AI data centers handle huge amounts of unstructured data including text, images, video, audio, and other files; they also incorporate high-performance tools including parallel file systems, multiple network servers, and NVMe solid state drives (SSDs). Power consumption: Traditional data centers require robust cooling

Read More »

From Cloud to Concrete: How Explosive Data Center Demand is Redefining Commercial Real Estate

The world will generate 181 ZB of data in 2025, an increase of 23.13% year over year, with 2.5 quintillion bytes (a quintillion bytes is an exabyte, or EB) created daily, according to a report from Demandsage. To put that in perspective: one exabyte is equal to 1 quintillion bytes, which is 1,000,000,000,000,000,000 bytes. That’s 29 TB every second, or 2.5 million TB per day. It’s no wonder data centers have become so crucial for creating, consuming, and storing data — and no wonder investor interest has skyrocketed. The surging demand for secure, scalable, high-performance retail and wholesale colocation and hyperscale data centers is spurred by the relentless, global expansion of cloud computing and demand for AI as data generation from businesses, governments, and consumers continues to surge. Power access, sustainable infrastructure, and land acquisition have become critical factors shaping where and how data center facilities are built. As a result, investors increasingly view these facilities not just as technology assets, but as a unique convergence of real estate, utility infrastructure, and mission-critical systems. Capitalizing on this momentum, private equity and real estate investment firms are rapidly expanding into the sector through acquisitions, joint ventures, and new funds—targeting opportunities to build and operate facilities with a focus on energy efficiency and scalability.
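
Those per-day and per-second figures are consistent with one another; a quick check (a sketch assuming decimal SI units):

```python
# Quick check of the daily-data figures above (decimal SI units assumed:
# 1 EB = 1e18 bytes, 1 TB = 1e12 bytes).
SECONDS_PER_DAY = 86_400

daily_bytes = 2.5e18                          # 2.5 quintillion bytes (2.5 EB) per day
daily_tb = daily_bytes / 1e12                 # terabytes per day
per_second_tb = daily_tb / SECONDS_PER_DAY

print(f"{daily_tb:,.0f} TB per day")          # 2,500,000 TB
print(f"~{per_second_tb:.0f} TB per second")  # ~29 TB/s, matching the figure cited
```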

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote $200 billion between them to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation. AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »