
LLOG Progresses Salamanca FPU Project in GOM


LLOG Exploration Company, L.L.C. has advanced its Salamanca project in the Gulf of Mexico (GOM), which involves the conversion of a former GOM production facility into a floating production unit (FPU).

The privately owned exploration and production company anticipates the final outfitting of the FPU to be completed in early 2025, in time for the project’s production target of mid-2025.

The Salamanca FPU will have a capacity of 60,000 barrels per day (bpd) of oil and 40 million cubic feet per day (MMcfpd) of natural gas. The approach “significantly minimizes environmental impact by reusing existing infrastructure and reduces time, ultimately enhancing economic returns,” LLOG said in a news release.

The hull was refurbished at Seatrium in Brownsville, Texas, and delivered to Kiewit’s yard in Ingleside in October 2024. In early November 2024, the new topside equipment and deck were successfully rejoined to the hull, LLOG stated.

Further, LLOG said that all of the initial wells to support the Salamanca FPU have been successfully drilled and cased, including discovery wells drilled at Castile and Leon, with additional successful development wells drilled in 2023 and 2024.

“The final well finished drilling in September 2024 at the Leon Development (Keathley Canyon 686 #4), with better-than-expected results, encountering greater than 1,000 feet of high quality oil-bearing sands,” LLOG said. The facility will be located in Keathley Canyon 689 in approximately 6,400 feet of water.

LLOG COO Eric Zimmermann said, “LLOG has a long history of developing prolific projects in the GOM safely, efficiently and economically. We are pleased to be progressing another world class project and to have reached several important milestones while also optimizing financial flexibility through securing financing for the Salamanca project. The unique aspect of the Salamanca facility is that the FPU is the first refurbishment of a GOM facility that was in production and is being brought into commerce as a producing asset again. By modifying a previously built production unit rather than constructing a new facility, we are able to significantly reduce the time to bring these discoveries online.”

“Also, the project has a significantly positive environmental impact as it reuses an existing unit compared with abandonment of the unit, while also accomplishing approximately a 70% reduction in emissions impact compared to the construction of a new unit. As a Louisiana-based company, the other aspect of the project that brings us pride is that the major construction for this project has been undertaken in shipyards and construction yards in Texas and Louisiana versus occurring internationally. Our ongoing success and achievements in delivering complex deepwater projects reflect the dedication and expertise of our outstanding team,” he added.

LLOG entered the Leon field as its operator in 2019 through an agreement with Repsol. LLOG is the operator of the Salamanca FPU, as well as the Leon and Castile discoveries, with Repsol and O.G. Oil & Gas as non-operating working interest owners.

In November 2024, Karoon Energy Ltd and LLOG confirmed a hydrocarbon discovery in the Who Dat South exploration well in the GOM. Drilling in Who Dat South started in September 2024.

Karoon said in an earlier news release that the exploration well intersected several hydrocarbon-bearing sandstone intervals through the targeted Miocene zones, over a gross interval between 5,000 meters measured depth (MD) and a final total depth (TD) of 7,014 meters MD.

To contact the author, email [email protected]






Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Five big takeaways from Nvidia GTC

Liquid cooling here to stay: Liquid-cooled switches will become a necessity, not a choice, according to Sameh Boujelbene, vice president with the Dell’Oro Group. “After liquid cooling racks and servers, switches are next. NVIDIA’s latest 51.2 T SpectrumX switches offer both liquid-cooled and air-cooled options. However, all future

Read More »

20 powerful women shaping the networking industry

Women are severely underrepresented in top leadership roles across the business world. Only 10.4% of the Fortune 500 companies have women CEOs. In an AP survey of S&P 500 companies, only 25 of 341 CEOs were women. That disparity extends into the technology sector. The Women in Tech organization reports

Read More »

Nvidia wants to be a one-stop enterprise technology shop

“Nvidia has evolved from a gaming chip company to an AI supercomputer company with a deep and wide software stack that covers over a dozen vertical apps, super hi-speed electro-optical inter-processor communications, and a killer processor that uses the latest HBM4 high-speed high-density memory. The company also announced GPUs would

Read More »

California Gov. Newsom uses judicial streamlining provision to advance 600 MW of solar, storage

California Gov. Gavin Newsom, D, announced Wednesday that he has certified the Cornucopia Hybrid Solar Project using the California Environmental Quality Act’s judicial streamlining provision, speeding up the construction of 300 MW of solar along with 300 MW of battery storage. Once a project is certified, courts must decide on CEQA challenges to it within 270 days to the extent feasible, while still allowing those challenges to be heard, said a release from Newsom’s office.

The BayWa r.e. Americas project will be located in Fresno County, California, and is expected to power around 300,000 homes in the area. Newsom’s office said the project’s combined 300 MW of generation and 300 MW of battery capacity will allow the solar farm to dispatch electricity at times of peak demand, “including evening and nighttime hours when renewable generation is limited.” The project’s developers also plan for it to be agrivoltaic, with sheep “grazing alongside solar panels to help manage vegetation,” said the governor’s office.

A project’s selection for judicial streamlining certification can reduce lawsuit-related delays from three to five years to around 270 days, said the governor’s office. This process was authorized by a 2021 state law allowing the governor to make those certifications, and a 2023 state law expanding the list of eligible projects to include certain green infrastructure projects. Only 24 projects have been certified under the law so far.

The certification came a few days after California regulators approved new maintenance and operation standards for battery storage resources, including a requirement for facility owners to develop emergency response and emergency action plans, following a January fire at Vistra Energy’s Moss Landing battery facility in California.

“The project aligns with California efforts focused on proactively addressing safety for battery storage systems through comprehensive state-level collaborations and regulatory updates,” said Newsom’s office.

Read More »

UAlbany decarbonization project to cut fossil fuel consumption 16%

The State University of New York at Albany is beginning a $30 million decarbonization project that will enable it to shut down its gas-fired boilers during the summer months, it announced earlier this month.

The project will install a high-efficiency electric centrifugal chiller and a heat recovery chiller, both connected to a new geothermal well field consisting of between 90 and 135 wells in a campus parking lot. The chillers will replace two gas-fired absorption chillers in the campus’ 1960s-era central power plant, the university says. In addition, the project will modify domestic hot water systems in over 25 buildings and install new low-temperature hot water piping in the campus athletic facilities.

The geothermal heat recovery chiller will be able to meet all of the campus cooling, heating and domestic hot water loads during the summer months, the university says. The move is projected to reduce the university’s annual fossil fuel consumption by 16%, according to Indu, the energy officer at the University at Albany.

The pipes throughout the campus that deliver hot water can create technical problems when transitioning to geothermal or other non-combustion heating technologies because of their small diameter, Indu said. With those pipes, “You cannot make steam or high-temperature hot water without burning some sort of fossil fuel,” Indu said. This limitation is why the university will only be able to turn off its gas-fired boilers during the summer months. “As the heating load starts to increase, our pipe sizes are too small to heat the campus with just 180-degree [Fahrenheit] water,” she said.

Next steps: To overcome the limitations of existing pipes, the university is evaluating the construction of a satellite energy hub: a fully electrified geothermal heat recovery plant on the other side of campus, closer to where major loads like residential buildings and

Read More »

Building performance standards set to proliferate, evolve in 2025

A shift in federal government priorities has created uncertainty over federal funding for energy efficiency and sustainability initiatives, but state and local governments are continuing to develop building performance policies.

While large cities have led the way on implementing building performance standards and benchmarking policies in the past few years, smaller cities — those with populations under 100,000 — could continue to push these policies forward in 2025, according to James Burton, manager of policy engagement and tracking at the Institute for Market Transformation. This is in spite of staffing and funding challenges that could more substantially impact them, Burton said.

Even with limited resources, municipalities across the country are working to implement environmental policies, ranging from building performance standards to energy benchmarking to energy codes, Burton said in a blog post. These policies provide a way for them to meet goals to reduce energy use, lower utility bills and advance climate action, he said. IMT works with cities and municipalities to develop and implement building performance standards that align with the goals of the National Building Performance Standards Coalition, a group of state and local governments that in January reaffirmed their commitment to implementing BPS.

Evanston, Illinois, is an example of a smaller municipality pushing ahead on climate action. It became the first government to pass a building performance standard in 2025 with its Healthy Buildings Ordinance. The HBO, which will cover commercial and multifamily buildings over 20,000 square feet, condominiums over 50,000 square feet and municipal buildings over 10,000 square feet, requires these buildings to achieve zero on-site emissions and 100% renewable electricity procurement by 2050, Burton said.

The HBO follows many of IMT’s best practice recommendations, such as setting multiple metrics to encourage efficiency alongside emissions reductions and setting up a community accountability board to ensure equity in the

Read More »

DOE withdraws, postpones multiple appliance energy efficiency rules

The U.S. Department of Energy on Monday announced it would withdraw four appliance efficiency standards and officially postpone the effective dates for three other rules, continuing the Trump administration’s efforts to dismantle the agency’s appliance efficiency program. While several of DOE’s actions were previously announced or are relatively minor, the agency’s decision to withdraw a rule related to electric motors is “uncharted territory,” Appliance Standards Awareness Project Executive Director Andrew deLaski said.

DOE has “officially withdrawn four conservation standards, including standards on electric motors, ceiling fans, dehumidifiers, and external power supplies,” the agency said in a statement. “This continued commitment to the American people will slash unnecessary red tape and regulations that raise prices, reduce consumer choice, and frustrate the American people.” DOE also announced it has officially postponed the effective dates for three home appliance rules, including those covering test procedures for central air conditioners and heat pumps, efficiency standards for walk-in coolers and freezers, and standards for gas instantaneous water heaters.

“By removing burdensome regulations put in place by the Biden administration, we are returning freedom of choice to the American people, ensuring consumers can choose the home appliances that work best for their lives and budgets,” Secretary of Energy Chris Wright said in a statement. “This power should not belong to the federal government.”

DOE first said in February that it planned to postpone the implementation of several appliance energy efficiency standards finalized by the Biden administration. The natural gas sector hailed the announcement as a win for consumer choice, while efficiency advocates warn the decision could add billions to utility bills.

While delaying or not finalizing rule updates begun under the previous administration isn’t particularly noteworthy, deLaski said DOE’s decision on electric motors is different. That rule was signed by a DOE official and put out to the

Read More »

New England could connect 9.6 GW of offshore wind without new infrastructure: report

If sited in the correct locations, around eight 1,200 MW offshore wind farms could be connected to the New England grid and operate simultaneously at full power without first constructing new transmission infrastructure and without significant curtailment, said a Friday report from ISO New England.

ISO-NE’s analysis found that up to 38% of the existing major coastal substations in New England that it studied “may be electrically suitable for a 1,200 MW offshore wind interconnection without constructing any new transmission infrastructure and without upgrading any existing transmission infrastructure to address thermal concerns.” Up to 86% of the substations studied may be suitable for connection without new infrastructure, but some of them would require upgrades, said the regional transmission organization. The analysis also found that “a much smaller subset of these substations may be able to accommodate a 2,000 MW wind farm without any new transmission infrastructure.”

However, ISO-NE said, the report is “based solely on N-1 DC thermal steady-state analysis, which helps provide high-level information about system constraints,” and neither the initial study nor this offshore wind-specific analysis included “the more detailed analyses” of a full interconnection study. “Though full interconnection studies are required, high-level results suggest that significant amounts of offshore wind may be able to connect to the region without upgrades,” the ISO said in the report’s conclusion. “However, achieving these totals will depend on careful planning and coordination between states and stakeholders.”

The analysis indicates that relocating some offshore wind points of interconnection further south, from Maine to the Boston area, could lead to “significant cost savings,” said the report. Even if those points of interconnection are moved, though, “upgrades are still necessary on the North-South interfaces to accommodate the combination of load growth from electrification and significant increase in generation build out in northern New England,” the report

Read More »

Colonial Pipeline Responds to Protest Against Pipeline Changes

Colonial Pipeline Co. defended a proposal for operational changes on its fuel network after objections from oil majors Exxon Mobil Corp. and Chevron Corp. and commodities trader Trafigura. Colonial, which operates the largest gasoline pipeline in the US, said the changes “will enhance pipeline integrity and reliability and create more capacity for shippers” in a Monday filing with the US Federal Energy Regulatory Commission.

According to the filing, the proposed changes would mitigate risks associated with “pressure cycling,” which occurs when changes in internal pressure lead to stress in the pipe wall. By transporting fewer products on the pipeline route, among other changes, “Colonial will experience fewer segment slowdowns and shutdowns (and the associated restarts) that more frequently arise when transporting multiple products in the same cycle,” the company said in the filing.

Trafigura, Exxon Mobil, Chevron and several other refiners previously filed motions asking the regulator to block Colonial’s proposed changes. Among the potential changes are halting the transport of volatile grade five gasoline on the maxed-out pipeline and boosting capacity by several thousand barrels a day. Shippers on the system, which transports about 2.5 million barrels of fuel a day from the refinery belt of Texas and Louisiana to demand centers on the East Coast, say the changes will contribute to operational hurdles and higher costs.

Large swaths of the East Coast, where several refineries have shuttered in recent years, depend on Colonial’s pipeline to meet fuel demand, giving it an outsized effect on the domestic fuel market. If approved, the changes would likely take effect in September.

Bloomberg

Read More »

PEAK:AIO adds power, density to AI storage server

There is also the fact that many people working with AI are not IT professionals, such as professors, biochemists, scientists, doctors, and clinicians, who don’t have a traditional enterprise IT department or a data center. “It’s run by people that wouldn’t really know, nor want to know, what storage is,” he said.

While the new AI Data Server is a Dell design, PEAK:AIO has worked with Lenovo, Supermicro, and HPE as well as Dell over the past four years, offering to convert their off-the-shelf storage servers into hyper-fast, inexpensive, AI-specific storage servers that work with all the Nvidia protocols, like NVLink, along with NFS and NVMe over Fabric. It also greatly increased storage capacity by going with 61TB drives from Solidigm; SSDs from the major server vendors typically maxed out at 15TB, according to the vendor.

PEAK:AIO competes with VAST, WekaIO, NetApp, Pure Storage and many others in the growing AI workload storage arena. PEAK:AIO’s AI Data Server is available now.

Read More »

SoftBank to buy Ampere for $6.5B, fueling Arm-based server market competition

SoftBank’s announcement suggests Ampere will collaborate with other SBG companies, potentially creating a powerful ecosystem of Arm-based computing solutions. This collaboration could extend to SoftBank’s numerous portfolio companies, including Korean/Japanese web giant LY Corp, ByteDance (TikTok’s parent company), and various AI startups. If SoftBank successfully steers its portfolio companies toward Ampere processors, it could accelerate the shift away from x86 architecture in data centers worldwide.

Questions remain about Arm’s server strategy

The acquisition, however, raises questions about how SoftBank will balance its investments in both Arm and Ampere, given their potentially competing server CPU strategies. Arm’s recent move to design and sell its own server processors to Meta signaled a major strategic shift that already put it in direct competition with its own customers, including Qualcomm and Nvidia.

“In technology licensing where an entity is both provider and competitor, boundaries are typically well-defined without special preferences beyond potential first-mover advantages,” Kawoosa explained. “Arm will likely continue making independent licensing decisions that serve its broader interests rather than favoring Ampere, as the company can’t risk alienating its established high-volume customers.”

Industry analysts speculate that SoftBank might position Arm to focus on custom designs for hyperscale customers while allowing Ampere to dominate the market for more standardized server processors. Alternatively, the two companies could be merged or realigned to present a unified strategy against incumbents Intel and AMD.

“While Arm currently dominates processor architecture, particularly for energy-efficient designs, the landscape isn’t static,” Kawoosa added. “The semiconductor industry is approaching a potential inflection point, and we may witness fundamental disruptions in the next 3-5 years — similar to how OpenAI transformed the AI landscape. SoftBank appears to be maximizing its Arm investments while preparing for this coming paradigm shift in processor architecture.”

Read More »

Nvidia, xAI and two energy giants join genAI infrastructure initiative

The new AIP members will “further strengthen the partnership’s technology leadership as the platform seeks to invest in new and expanded AI infrastructure. Nvidia will also continue in its role as a technical advisor to AIP, leveraging its expertise in accelerated computing and AI factories to inform the deployment of next-generation AI data center infrastructure,” the group’s statement said. “Additionally, GE Vernova and NextEra Energy have agreed to collaborate with AIP to accelerate the scaling of critical and diverse energy solutions for AI data centers. GE Vernova will also work with AIP and its partners on supply chain planning and in delivering innovative and high efficiency energy solutions.”

The group claimed, without offering any specifics, that it “has attracted significant capital and partner interest since its inception in September 2024, highlighting the growing demand for AI-ready data centers and power solutions.” The statement said the group will try to raise “$30 billion in capital from investors, asset owners, and corporations, which in turn will mobilize up to $100 billion in total investment potential when including debt financing.”

Forrester’s Nguyen also noted that the influence of two of the new members — xAI, owned by Elon Musk, along with Nvidia — could easily help with fundraising. Of Musk, Nguyen said: “With his connections, he does not make small quiet moves. As for Nvidia, they are the face of AI. Everything they do attracts attention.” Info-Tech’s Bickley said that the astronomical dollars involved in genAI investments are mind-boggling. And yet even more investment is needed — a lot more.

Read More »

IBM broadens access to Nvidia technology for enterprise AI

The IBM Storage Scale platform will support CAS and now will respond to queries using the extracted and augmented data, speeding up the communications between GPUs and storage using Nvidia BlueField-3 DPUs and Spectrum-X networking, IBM stated. The multimodal document data extraction workflow will also support Nvidia NeMo Retriever microservices. CAS will be embedded in the next update of IBM Fusion, which is planned for the second quarter of this year. Fusion simplifies the deployment and management of AI applications and works with Storage Scale, which will handle high-performance storage support for AI workloads, according to IBM.

IBM Cloud instances with Nvidia GPUs

In addition to the software news, IBM said its cloud customers can now use Nvidia H200 instances in the IBM Cloud environment. With increased memory bandwidth (1.4x higher than its predecessor) and capacity, the H200 Tensor Core can handle larger datasets, accelerating the training of large AI models and executing complex simulations, with high energy efficiency and low total cost of ownership, according to IBM. In addition, customers can use the power of the H200 to process large volumes of data in real time, enabling more accurate predictive analytics and data-driven decision-making, IBM stated.

IBM Consulting capabilities with Nvidia

Lastly, IBM Consulting is adding Nvidia Blueprint to its recently introduced AI Integration Service, which offers customers support for developing, building and running AI environments. Nvidia Blueprints offer a suite of pre-validated, optimized, and documented reference architectures designed to simplify and accelerate the deployment of complex AI and data center infrastructure, according to Nvidia. The IBM AI Integration service already supports a number of third-party systems, including Oracle, Salesforce, SAP and ServiceNow environments.

Read More »

Nvidia’s silicon photonics switches bring better power efficiency to AI data centers

Nvidia typically uses partnerships where appropriate, and the new switch design was done in collaboration with multiple vendors across different aspects, including creating the lasers, packaging, and other elements of the silicon photonics. Hundreds of patents were also included. Nvidia will license the innovations it created to its partners and customers, with the goal of scaling this model.

Nvidia’s partner ecosystem includes TSMC, which provides advanced chip fabrication and 3D chip stacking to integrate silicon photonics into Nvidia’s hardware. Coherent, Eoptolink, Fabrinet, and Innolight are involved in the development, manufacturing, and supply of the transceivers. Additional partners include Browave, Coherent, Corning Incorporated, Fabrinet, Foxconn, Lumentum, SENKO, SPIL, Sumitomo Electric Industries, and TFC Communication.

AI has transformed the way data centers are being designed. During his keynote at GTC, CEO Jensen Huang talked about the data center being the “new unit of compute,” which refers to the entire data center having to act like one massive server. That has driven compute from being primarily CPU-based to being GPU-centric. Now the network needs to evolve to ensure data is being fed to the GPUs at a speed they can process it. The new co-packaged switches remove external parts, which have historically added a small amount of overhead to networking. Pre-AI this was negligible, but with AI, any slowness in the network leads to dollars being wasted.

Read More »

Critical vulnerability in AMI MegaRAC BMC allows server takeover

“In disruptive or destructive attacks, attackers can leverage the often heterogeneous environments in data centers to potentially send malicious commands to every other BMC on the same management segment, forcing all devices to continually reboot in a way that victim operators cannot stop,” the Eclypsium researchers said. “In extreme scenarios, the net impact could be indefinite, unrecoverable downtime until and unless devices are re-provisioned.”

BMC vulnerabilities and misconfigurations, including hardcoded credentials, have been of interest to attackers for over a decade. In 2022, security researchers found a malicious implant dubbed iLOBleed that was likely developed by an APT group and was being deployed through vulnerabilities in HPE iLO (HPE’s Integrated Lights-Out) BMC. In 2018, a ransomware group called JungleSec used default credentials for IPMI interfaces to compromise Linux servers. And back in 2016, Intel’s Active Management Technology (AMT) Serial-over-LAN (SOL) feature, which is part of Intel’s Management Engine (Intel ME), was exploited by an APT group as a covert communication channel to transfer files.

OEM, server manufacturers in control of patching

AMI released an advisory and patches to its OEM partners, but affected users must wait for their server manufacturers to integrate them and release firmware updates. In addition to this vulnerability, AMI also patched a flaw tracked as CVE-2024-54084 that may lead to arbitrary code execution in its AptioV UEFI implementation. HPE and Lenovo have already released updates for their products that integrate AMI’s patch for CVE-2024-54085.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).

In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion.

The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved.

“Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization.

OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »