
Baker Hughes, Petrobras Tie Up to Solve Flexible Pipes Corrosion Cracking


Energy technology major Baker Hughes Co. has launched a joint technology program with Petróleo Brasileiro S.A. (Petrobras) to solve stress corrosion cracking due to carbon dioxide (SCC-CO2) in flexible pipe systems.

The pre-commercial agreement includes development and testing, along with an option to purchase the next-generation flexible pipes that will offer a prolonged service life of 30 years in high-CO2 conditions, Baker Hughes said in a media release. The partnership will mainly be executed at Baker Hughes’ Energy Technology Innovation Center in Rio de Janeiro and the adjacent manufacturing facility for flexible pipe systems.

“Baker Hughes has led the way in addressing SCC-CO2, and we will bring that expertise and experience to bear in developing the definitive solution to this critical industry challenge”, Amerino Gatti, executive vice president for Oilfield Services and Equipment at Baker Hughes, said. “By deploying flexible pipe systems that last for decades, Petrobras can more efficiently unlock the vital natural resources that power the region, while also safely returning CO2 deep underground”.

SCC-CO2 was discovered in 2016 and can impact flexible pipes in pre-salt fields, which contain high levels of naturally occurring CO2. When water enters a pipe’s annulus area, it can lead to corrosion of the steel reinforcement layers, compromising structural integrity and shortening the system’s lifespan, Baker Hughes said. This challenge is especially pronounced in Brazil’s pre-salt fields, where Petrobras is reinjecting CO2 from its production processes into wells to decrease flaring and improve oil recovery, it said.

Until now, operators in high-CO2 environments have relied on solutions that mitigate the impact of SCC-CO2 while limiting the service life of risers and flowlines, Baker Hughes said. Its flexible pipe systems and advanced monitoring technologies have proven effective at minimizing this impact, and the company is a major supplier of flexible pipe systems to Petrobras, it said.

To contact the author, email [email protected]





Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Keysight network packet brokers gain AI-powered features

The technology has matured considerably since then. Singh said that over the last five years, most of Keysight’s NPB customers have been global Fortune 500 organizations with large network visibility practices, meaning they deploy a lot of packet brokers with capabilities ranging anywhere from one-gigabit networking at the edge,


Adding, managing and deleting groups on Linux

$ sudo groupadd -g 1111 techs

In this case, a specific group ID (1111) is being assigned. Omit the -g option to use the next available group ID (e.g., sudo groupadd techs). Once a group is added, you will find it in the /etc/group file:

$ grep techs /etc/group
techs:x:1111:

Adding
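The steps above can be sketched end to end. The group name techs and GID 1111 are the article’s own examples; the privileged commands are shown as comments because they require root, while the /etc/group field layout (name:password-field:GID:member-list) is standard:

```shell
# Group-management sketch. Commands that modify the system need root,
# so they are left as illustrative comments:
#   sudo groupadd -g 1111 techs   # create group with explicit GID 1111
#   sudo groupadd techs           # omit -g to take the next free GID
#   sudo groupdel techs           # remove the group again

# A resulting /etc/group entry has four colon-separated fields;
# the GID is field 3 and can be extracted with cut:
entry='techs:x:1111:'
gid=$(printf '%s' "$entry" | cut -d: -f3)
echo "GID of techs: $gid"
```

On a test system, running the commented commands with root privileges and then `grep '^techs:' /etc/group` confirms the new entry.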


MISO proposes framework to speed generation interconnection

Dive Brief: The Midcontinent Independent System Operator on Monday asked federal regulators to approve an Expedited Resource Addition Study process, or ERAS, to provide a framework for the accelerated study of generation projects “that can address urgent resource adequacy and reliability needs in the near term.”

MISO asked the Federal Energy Regulatory Commission to approve the ERAS proposal to be effective May 17. The grid operator is on pace for near-term capacity shortfalls, should resource retirements continue as planned, it said.

MISO proposed that projects entering the ERAS process, as opposed to MISO’s standard Generator Interconnection Queue, be studied serially each quarter and granted an Expedited Generator Interconnection Agreement within 90 days. Renewable energy stakeholders, however, warn the ERAS proposal “adds chaos to an already complex process.”

Dive Insight: Recent surveys and forecasts demonstrate the urgency with which MISO needs to “address significant resource adequacy needs in its footprint that are compounded by the addition of unexpected large spot loads,” the grid operator told FERC.

NERC’s 2024 Long-Term Reliability Assessment projected MISO will experience a 4.7 GW shortfall by 2028 if the current expected generator retirements occur, the grid operator said. And last year the grid operator and the Organization of MISO States published a report warning of possible capacity shortfalls beginning this summer.

The ERAS proposal “is MISO’s answer to addressing these resource adequacy and reliability needs in the near-term,” it said in its proposal. “ERAS is a unique process which recognizes that the responsibility for providing grid reliability and resource adequacy in the MISO region is shared by Load Serving Entities … the states, and MISO.”

According to MISO’s application, as of March 13 its generator interconnection queue contained 1,603 active interconnection requests. “This considerable backlog of applications is spread over all five of MISO’s study regions and includes queue cycles going


Federal judge blocks EPA’s $14B GGRF funding freeze

A federal judge issued an order Tuesday blocking the U.S. Environmental Protection Agency’s freeze order on Greenhouse Gas Reduction Fund grants, finding that the EPA did not provide sufficient evidence of waste, fraud or abuse.

“Based on the record before the court, and under the relevant statutes and various agreements, it does not appear that EPA Defendants took the legally required steps necessary to terminate these grants, such that its actions were arbitrary and capricious,” said U.S. District Court for the District of Columbia Judge Tanya Chutkan in a Tuesday memorandum opinion.

The case was brought before Chutkan on March 8 by the Climate United Fund, the recipient of a $6.97 billion Greenhouse Gas Reduction Fund grant which was frozen Feb. 18 after the EPA issued a notice of termination regarding the $20 billion GGRF.

The Climate United Fund said in a Monday release that the group, along with two other National Clean Investment Fund awardees, had been granted a temporary restraining order “halting the [EPA’s] termination of the grant agreements and preventing Citibank from transferring funds out of grantee bank accounts.” “Climate United will continue its legal process to fully restore its program,” the group said.

The other two NCIF awardees, the Coalition for Green Capital and Power Forward Communities, had $5 billion and $2 billion frozen, respectively, PBS reported.

In a March 2 letter to Acting Inspector General Nicole Murley, Acting Deputy EPA Administrator Chad McIntosh said the EPA “launched certain oversight and accountability measures” to investigate GGRF disbursement for “financial mismanagement, conflicts of interest, and oversight failures.” But Chutkan said that when questioned at a March 12 hearing, “EPA Defendants proffered no evidence to support their basis for the sudden terminations, or that they followed the proper procedures.” “In the termination letters, EPA Defendants vaguely reference ‘multiple ongoing investigations’ into


EIA Fuel Update Shows USA Gasoline, Diesel Prices Declining

The U.S. regular gasoline price and the U.S. on-highway diesel fuel price are both in a declining trend, the U.S. Energy Information Administration’s (EIA) latest gasoline and diesel fuel update showed.

This update, which was released this week, put the average U.S. regular gasoline price at $3.078 per gallon on March 3, $3.069 per gallon on March 10, and $3.058 per gallon on March 17. It put the U.S. on-highway diesel fuel price at $3.635 per gallon on March 3, $3.582 per gallon on March 10, and $3.549 per gallon on March 17.

Of the five Petroleum Administration for Defense District (PADD) regions highlighted in the EIA’s latest fuel update, the West Coast was shown to have the highest U.S. regular gasoline price as of March 17, at $4.061 per gallon. The Gulf Coast was shown to have the lowest U.S. regular gasoline price as of March 17, at $2.629 per gallon. In the update, the West Coast was also shown to have the highest U.S. on-highway diesel fuel price as of March 17, at $4.203 per gallon. The Gulf Coast was shown to have the lowest U.S. on-highway diesel fuel price as of March 17, at $3.245 per gallon.

A glossary section of the EIA site notes that the 50 U.S. states and the District of Columbia are divided into five districts, with PADD 1 further split into three subdistricts. PADDs 6 and 7 encompass U.S. territories, the site adds.

According to the AAA Fuel Prices website, the average U.S. regular gasoline price is $3.121 per gallon, as of March 20. Yesterday’s average was $3.102 per gallon, the week ago average was $3.079 per gallon, the month ago average was $3.165 per gallon, and the year ago average was $3.515 per gallon, the site showed. The average U.S. diesel price
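The declining trend can be checked from the figures quoted above. The prices are the EIA numbers from the update; awk here is used purely as a calculator over the March 3 and March 17 averages:

```shell
# Averages from the EIA update (USD per gallon), March 3 / March 10 / March 17:
awk 'BEGIN {
  split("3.078 3.069 3.058", gas, " ")   # regular gasoline
  split("3.635 3.582 3.549", dsl, " ")   # on-highway diesel
  printf "gasoline change, Mar 3 to Mar 17: %+.3f USD/gal\n", gas[3] - gas[1]
  printf "diesel change,   Mar 3 to Mar 17: %+.3f USD/gal\n", dsl[3] - dsl[1]
}'
```

Both differences come out negative, matching the update’s declining trend, with diesel falling faster than gasoline over the two weeks.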


Inverness jobs growth on back of pumped hydro projects

Power generation firm Excitation & Engineering Services (EES) is expanding its operations to the Scottish Highlands amid a boom in energy investment in the region.

EES said its new Inverness base will strengthen its ability to support its customers in the power sector as investment in hydroelectric and long duration energy storage (LDES) projects “continues to grow”. This includes the 1.3 GW Coire Glas project, set to be the first large-scale pumped storage scheme developed in the UK in 40 years. SSE Renewables is developing the £1.5 billion project in the Great Glen near Loch Lochy, around 50 miles from Inverness. When complete, the Coire Glas scheme will double the UK’s LDES capacity.

With nine other pumped hydro schemes in development in Scotland, EES has launched a recruitment campaign for two engineers with local expertise to capitalise on opportunities in the region, “with potential for further expansion”.

Inverness base

EES director Ryan Kavanagh said establishing an Inverness base is a “major step towards enhancing our support for the region’s power generation industry”. “With more investment flowing into renewable energy, it’s crucial that we can offer specialised, responsive support locally,” he said.

Aerial view of Loch Lochy, where the Coire Glas scheme will be built. (Supplied by SSE)

“This office will help us serve our customers, improve collaboration with plant operators and support the maintenance and improvement of Scotland’s electricity supply.” EES founder and director Douglas Cope said the growing sector is a “great opportunity for engineers to develop their careers”. Cope founded the Tamworth-based firm in 2011 alongside a group of electrical engineers from firms including RWE and Alstom. “Scotland has a wealth of talent and we want to contribute to the region’s growth while fostering local expertise,” Cope said.

Scotland pumped hydro boom

Pumped storage projects and other


Nigerian Oil Pipeline Sabotage Threatens Crude Output Revival

Nigeria’s push to revive oil production and encourage investment has been put at risk by sabotage at the heart of its crude pipeline system. Better security has been key to a recovery in the nation’s output, which rose 40% over the past few years after slumping to little more than half its historic peak. In January, Africa’s biggest producer even breached its once-distant OPEC quota.

The vandalism on a segment of the Trans-Niger Pipeline — which handles about 15% of the nation’s exports — is a setback for a government that had already taken measures to increase security in the area. President Bola Tinubu responded by imposing a state of emergency in Rivers State on Tuesday, citing an 18-month political standoff between local officials who he said failed to stop acts of sabotage by militants.

“This is a blow to the Tinubu government’s recent successes on oil output, gains driven in part by improved security measures,” said Clementine Wallop, director for sub-Saharan Africa at political-risk consultant Horizon Engage. “It is also a very difficult investment signal during a period where the government seemed to be turning a corner on energy.”

Renaissance Africa Energy, a local consortium that only last week took control of assets including the TNP that it bought from Shell Plc, said it has no plans to issue a force majeure over exports of Bonny Light crude. Two tankers are waiting to load from the Bonny terminal, according to ship tracking data compiled by Bloomberg.

In 2022, when Nigeria nearly dipped below a million barrels a day, security on the TNP had deteriorated to such an extent that the pipeline system had been illegally tapped in about 150 places. That meant producers only received a small fraction of the volumes they pumped through. Tightening security on oil pipelines


CP2 LNG Gets Conditional Approval for Non-FTA Export

The United States Department of Energy (DOE) has granted a conditional permit for non-FTA exportation to CP2 LNG, a project of Venture Global Inc. under construction in Cameron Parish, Louisiana.

The project had already received authorization for the FTA portion of its request to export the equivalent of about 1.45 trillion cubic feet a year of natural gas, in a DOE order of April 22, 2022. A final permit for the non-FTA portion has been withheld pending a DOE review of permitting considerations concerning greenhouse gas emissions, environmental impact, energy prices and domestic gas supply, according to a department order Wednesday granting the conditional permit.

While the Trump administration ended ex-President Joe Biden’s pause on pending decisions on LNG exports to countries with no free-trade agreement (FTA) with the U.S., the DOE under Trump indicated it would not junk a study published by the previous government on permitting considerations. In a January 21, 2025, statement the DOE said it was extending the deadline for the comment period on the results of that study from February 18, 2025, to March 20, 2025. “DOE expects to issue a final order to CP2 LNG in the coming months”, the department said in an online statement Wednesday.

“We are grateful for the Trump Administration’s return to regular order and regulatory certainty that will allow us to further expand U.S. LNG exports, which have consistently been found to be in the public interest across multiple Administrations”, Venture Global chief executive Mike Sabel said in a company statement Wednesday. “This will enable us to provide our allies around the world with American LNG in just a few years and for decades to come”.

Arlington, Virginia-based Venture Global said, “To date, the initial phase of CP2 LNG has been sold through 20-year sales and purchase agreements with ExxonMobil,


PEAK:AIO adds power, density to AI storage server

There is also the fact that many people working with AI are not IT professionals, such as professors, biochemists, scientists, doctors, and clinicians, and they don’t have a traditional enterprise IT department or a data center. “It’s run by people that wouldn’t really know, nor want to know, what storage is,” he said.

While the new AI Data Server is a Dell design, PEAK:AIO has worked with Lenovo, Supermicro, and HPE as well as Dell over the past four years, offering to convert their off-the-shelf storage servers into hyper-fast, low-cost, AI-specific storage servers that work with all the Nvidia protocols, like NVLink, along with NFS and NVMe over Fabric. It also greatly increased storage capacity by going with 61TB drives from Solidigm; SSDs from the major server vendors typically maxed out at 15TB, according to the vendor.

PEAK:AIO competes with VAST, WekaIO, NetApp, Pure Storage and many others in the growing AI workload storage arena. PEAK:AIO’s AI Data Server is available now.


SoftBank to buy Ampere for $6.5B, fueling Arm-based server market competition

SoftBank’s announcement suggests Ampere will collaborate with other SBG companies, potentially creating a powerful ecosystem of Arm-based computing solutions. This collaboration could extend to SoftBank’s numerous portfolio companies, including Korean/Japanese web giant LY Corp, ByteDance (TikTok’s parent company), and various AI startups. If SoftBank successfully steers its portfolio companies toward Ampere processors, it could accelerate the shift away from x86 architecture in data centers worldwide. Questions remain about Arm’s server strategy The acquisition, however, raises questions about how SoftBank will balance its investments in both Arm and Ampere, given their potentially competing server CPU strategies. Arm’s recent move to design and sell its own server processors to Meta signaled a major strategic shift that already put it in direct competition with its own customers, including Qualcomm and Nvidia. “In technology licensing where an entity is both provider and competitor, boundaries are typically well-defined without special preferences beyond potential first-mover advantages,” Kawoosa explained. “Arm will likely continue making independent licensing decisions that serve its broader interests rather than favoring Ampere, as the company can’t risk alienating its established high-volume customers.” Industry analysts speculate that SoftBank might position Arm to focus on custom designs for hyperscale customers while allowing Ampere to dominate the market for more standardized server processors. Alternatively, the two companies could be merged or realigned to present a unified strategy against incumbents Intel and AMD. “While Arm currently dominates processor architecture, particularly for energy-efficient designs, the landscape isn’t static,” Kawoosa added. “The semiconductor industry is approaching a potential inflection point, and we may witness fundamental disruptions in the next 3-5 years — similar to how OpenAI transformed the AI landscape. 
SoftBank appears to be maximizing its Arm investments while preparing for this coming paradigm shift in processor architecture.”


Nvidia, xAI and two energy giants join genAI infrastructure initiative

The new AIP members will “further strengthen the partnership’s technology leadership as the platform seeks to invest in new and expanded AI infrastructure. Nvidia will also continue in its role as a technical advisor to AIP, leveraging its expertise in accelerated computing and AI factories to inform the deployment of next-generation AI data center infrastructure,” the group’s statement said. “Additionally, GE Vernova and NextEra Energy have agreed to collaborate with AIP to accelerate the scaling of critical and diverse energy solutions for AI data centers. GE Vernova will also work with AIP and its partners on supply chain planning and in delivering innovative and high efficiency energy solutions.”

The group claimed, without offering any specifics, that it “has attracted significant capital and partner interest since its inception in September 2024, highlighting the growing demand for AI-ready data centers and power solutions.” The statement said the group will try to raise “$30 billion in capital from investors, asset owners, and corporations, which in turn will mobilize up to $100 billion in total investment potential when including debt financing.”

Forrester’s Nguyen also noted that the influence of two of the new members — xAI, owned by Elon Musk, along with Nvidia — could easily help with fundraising. “With his connections, he does not make small quiet moves,” Nguyen said of Musk. “As for Nvidia, they are the face of AI. Everything they do attracts attention.”

Info-Tech’s Bickley said that the astronomical sums involved in genAI investments are mind-boggling. And yet even more investment is needed — a lot more.


IBM broadens access to Nvidia technology for enterprise AI

The IBM Storage Scale platform will support CAS and now will respond to queries using the extracted and augmented data, speeding up the communications between GPUs and storage using Nvidia BlueField-3 DPUs and Spectrum-X networking, IBM stated. The multimodal document data extraction workflow will also support Nvidia NeMo Retriever microservices. CAS will be embedded in the next update of IBM Fusion, which is planned for the second quarter of this year. Fusion simplifies the deployment and management of AI applications and works with Storage Scale, which will handle high-performance storage support for AI workloads, according to IBM.

IBM Cloud instances with Nvidia GPUs

In addition to the software news, IBM said its cloud customers can now use Nvidia H200 instances in the IBM Cloud environment. With increased memory bandwidth (1.4x higher than its predecessor) and capacity, the H200 Tensor Core can handle larger datasets, accelerating the training of large AI models and executing complex simulations, with high energy efficiency and low total cost of ownership, according to IBM. In addition, customers can use the power of the H200 to process large volumes of data in real time, enabling more accurate predictive analytics and data-driven decision-making, IBM stated.

IBM Consulting capabilities with Nvidia

Lastly, IBM Consulting is adding Nvidia Blueprints to its recently introduced AI Integration Service, which offers customers support for developing, building and running AI environments. Nvidia Blueprints offer a suite of pre-validated, optimized, and documented reference architectures designed to simplify and accelerate the deployment of complex AI and data center infrastructure, according to Nvidia. The IBM AI Integration Service already supports a number of third-party systems, including Oracle, Salesforce, SAP and ServiceNow environments.


Nvidia’s silicon photonics switches bring better power efficiency to AI data centers

Nvidia typically uses partnerships where appropriate, and the new switch design was done in collaboration with multiple vendors across different aspects, including creating the lasers, packaging, and other elements of the silicon photonics. Hundreds of patents were also included. Nvidia will license the innovations it created to its partners and customers, with the goal of scaling this model.

Nvidia’s partner ecosystem includes TSMC, which provides advanced chip fabrication and 3D chip stacking to integrate silicon photonics into Nvidia’s hardware. Coherent, Eoptolink, Fabrinet, and Innolight are involved in the development, manufacturing, and supply of the transceivers. Additional partners include Browave, Coherent, Corning Incorporated, Fabrinet, Foxconn, Lumentum, SENKO, SPIL, Sumitomo Electric Industries, and TFC Communication.

AI has transformed the way data centers are being designed. During his keynote at GTC, CEO Jensen Huang talked about the data center being the “new unit of compute,” which refers to the entire data center having to act like one massive server. That has driven compute from being primarily CPU-based to being GPU-centric. Now the network needs to evolve to ensure data is being fed to the GPUs at a speed they can process. The new co-packaged switches remove external parts, which have historically added a small amount of overhead to networking. Pre-AI this was negligible, but with AI, any slowness in the network leads to dollars being wasted.


Critical vulnerability in AMI MegaRAC BMC allows server takeover

“In disruptive or destructive attacks, attackers can leverage the often heterogeneous environments in data centers to potentially send malicious commands to every other BMC on the same management segment, forcing all devices to continually reboot in a way that victim operators cannot stop,” the Eclypsium researchers said. “In extreme scenarios, the net impact could be indefinite, unrecoverable downtime until and unless devices are re-provisioned.”

BMC vulnerabilities and misconfigurations, including hardcoded credentials, have been of interest to attackers for over a decade. In 2022, security researchers found a malicious implant dubbed iLOBleed that was likely developed by an APT group and was being deployed through vulnerabilities in HPE iLO (HPE’s Integrated Lights-Out) BMC. In 2018, a ransomware group called JungleSec used default credentials for IPMI interfaces to compromise Linux servers. And back in 2016, Intel’s Active Management Technology (AMT) Serial-over-LAN (SOL) feature, which is part of Intel’s Management Engine (Intel ME), was exploited by an APT group as a covert communication channel to transfer files.

OEM, server manufacturers in control of patching

AMI released an advisory and patches to its OEM partners, but affected users must wait for their server manufacturers to integrate them and release firmware updates. In addition to this vulnerability, AMI also patched a flaw tracked as CVE-2024-54084 that may lead to arbitrary code execution in its AptioV UEFI implementation. HPE and Lenovo have already released updates for their products that integrate AMI’s patch for CVE-2024-54085.


Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote, between them, $200 billion to capex in 2025, up from $110 billion in 2023.

Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads.

Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.


John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do


2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved.

“Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to


OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.

What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle
