Stay Ahead, Stay ONMINE

Verizon, Nvidia team up for enterprise AI networking

The platform will support multitenancy, allowing it to cater to various use cases or customers. It can also be deployed on a customer’s premises, either through a permanent private on-site network or portable private network connectivity. Additionally, the platform will be capable of scaling on demand to meet the specific application needs of the customer.

“We’re leveraging our network’s unique strengths including private networks and Verizon’s global industry leadership in private MEC, combined with Nvidia’s AI compute capabilities to enable real-time AI applications that require security, ultra-low latency and high bandwidth,” Srini Kalapala, senior vice president of technology and product development at Verizon, said in a statement.

The software stack is being built to handle compute-intensive apps, including generative AI large language models and vision language models, video streaming, broadcast management, computer vision, augmented/virtual/extended reality (AR/VR/XR), autonomous mobile robot/automated guided vehicle (AMR/AGV), and IoT.

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, Bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Nvidia is still working with suppliers on RAM chips for Rubin

Nvidia changed its requirements for suppliers of the next generation of high-bandwidth memory, HBM4, but is close to certifying revised chips from Samsung Electronics for use in its AI systems, according to reports. Nvidia revised its specifications for memory chips for its Rubin platform in the third quarter of 2025,

Read More »

Storage shortage may cause AI delays for enterprises

Higher prices ahead

All indicators are showing a steep price increase for memory and storage in 2026. Brad Gastwirth, for example, says he met with many of the most important players in the market at CES earlier this month, and his analysis suggests there will be a 50% or more

Read More »

Rust 1.93 updates bundled musl library to boost networking

The Rust team has unveiled Rust 1.93, the latest version of the programming language designed to create fast and safe system-level software. This release improves operations involving the DNS resolver for the musl implementation of the C standard library. Linux binaries are expected to be more reliable for networking as

Read More »

IEA upgrades forecast for 2026 oil demand growth

Global oil demand growth is projected to average 930,000 b/d in 2026, up from 850,000 b/d in 2025, the International Energy Agency (IEA) said in its January 2026 Oil Market Monthly Report, reflecting a normalization of economic conditions after last year’s tariff disruptions and oil prices trending lower than a year ago. This contrasts with the agency’s earlier projections of 830,000 b/d for 2025 and 860,000 b/d for 2026. The recovery in petrochemical feedstock demand will be partially offset by a continued slowdown in gasoline demand growth. All of the growth in 2026 will again come from non-OECD countries, IEA said in the report.

Global oil supply fell by 350,000 b/d month-on-month in December to 107.4 million b/d, 1.6 million b/d below the record high reached in September. Production declines in Kazakhstan and some Middle Eastern OPEC producers were partially offset by a strong rebound in Russian output. Global oil supply is now projected to grow by 2.5 million b/d this year to 108.7 million b/d, following a 3 million b/d increase in 2025. Non-OPEC+ countries contributed 1.8 million b/d of the growth in 2025 and 1.3 million b/d of the growth in 2026.

“The current global surplus has been underpinned by a robust growth in oil supply since the start of 2025, with non-OPEC+ producers accounting for close to 60% of the 3 million b/d total increase. Saudi Arabia has led the rise in OPEC+ supply following the unwinding of production cuts, while the Americas quintet of the US, Canada, Brazil, Guyana, and Argentina has dominated non-OPEC+ increases. Barring any significant sustained disruptions to output – and if OPEC+ stays the course with its current production policy and activity in the US shale patch avoids major downshifts – global oil supplies could increase by a further 2.5 million b/d

Read More »
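The supply figures in the IEA item above can be sanity-checked with a few lines of arithmetic. This sketch only restates numbers quoted in the article; the 60% share and the implied 2025 supply level are derived here, not stated exactly by the IEA:

```python
# Sanity-check the IEA Oil Market Report figures quoted above.
total_growth_2025 = 3.0        # million b/d, 2025 global supply increase
non_opec_plus_2025 = 1.8       # million b/d contributed by non-OPEC+ producers

# "non-OPEC+ producers accounting for close to 60% of the 3 million b/d total increase"
share = non_opec_plus_2025 / total_growth_2025
print(f"non-OPEC+ share of 2025 growth: {share:.0%}")

# Supply is projected to grow 2.5 million b/d in 2026 to 108.7 million b/d,
# implying a 2025 average of about 106.2 million b/d.
supply_2026 = 108.7            # million b/d
implied_2025 = supply_2026 - 2.5
print(f"implied 2025 supply: {implied_2025:.1f} million b/d")
```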

Hamm suspends Bakken drilling; Continental reallocates capital to Argentina’s Neuquén basin

Continental Resources has suspended new drilling activity in North Dakota’s Bakken shale, citing inadequate economics under current oil price conditions. The decision was disclosed by founder and controlling shareholder Harold Hamm during an investor call hosted by Bloomberg, where Hamm said prevailing prices no longer support incremental drilling in the play. After more than 30 years of continuous activity in the Bakken, margins have compressed to levels below Continental’s return thresholds, Hamm said. Elevated service costs and weaker crude pricing have pushed breakeven requirements above current market levels, he continued.

The company characterized the move as a tactical pause rather than a permanent exit, indicating that drilling could resume if pricing improves. The Bakken was central to the early expansion of horizontal drilling and hydraulic fracturing in the US. However, as the play matures, productivity gains have moderated and capital efficiency has come under pressure.

Argentina

Meanwhile, Continental is advancing its first large-scale expansion outside the US, targeting Vaca Muerta in Argentina’s Neuquén basin. On Jan. 5, 2026, Continental finalized an agreement with Pan American Energy (PAE), acquiring a 20% non-operated interest in four shale blocks. PAE Group chief executive Marcos Bulgheroni said Continental’s participation adds technical expertise focused on efficiency and risk reduction. Continental chief executive officer Doug Lawler said the company views Vaca Muerta as one of the most competitive shale plays globally. That deal follows Continental’s acquisition of a 90% operated interest in Los Toldos II Oeste from Pluspetrol, establishing the company in Argentina as both an operator and a non-operating partner.

Read More »

Venture Global gets arbitrator’s nod in Repsol dispute

An arbitrator has ruled in favor of Venture Global Inc. in the liquefied natural gas company’s dispute with Repsol LNG Holding SA over its decision to delay the commercial operations date for its 10-million tonne/year Calcasieu Pass LNG plant.

In a filing with the US Securities and Exchange Commission, executives of Virginia-based Venture Global said the International Chamber of Commerce’s arbitration body has found that the company acted as a “reasonable and prudent operator” and hadn’t breached the terms of its contract with Spain’s Repsol. The companies’ tussle centers on Venture Global’s decision earlier this decade to—citing problems with some Calcasieu Pass equipment as well as the plant’s power infrastructure—hold off on declaring COD but still export some cargoes during the commissioning phase at spot rates in early 2022.

“The company is pleased that another arbitral tribunal has ruled in [Venture Global Calcasieu Pass LLC]’s favor in the proceeding with Repsol,” Venture Global’s SEC filing reads. “Multiple proceedings have now affirmed what the company has stated from the outset: VGCP has fully honored the clear and mutually agreed-upon terms of its long-term contracts without exception.”

Repsol and other big energy names—Shell, bp and China Petroleum & Chemical Corp. among them—had claimed Venture Global was profiteering from high spot rates rather than meeting its contracts with them and took their cases to the International Chamber of Commerce, claiming damages that could have topped $7 billion. The ICC’s arbitrators also have sided with Venture Global in Shell’s case but last October ruled in favor of bp, whose executives’ claim of damages of at least $1 billion is expected to be adjudicated this year. Venture Global—whose leaders declared COD for Calcasieu Pass last April—nearly 4 months ago settled its dispute with China Petroleum & Chemical Corp. Shares of Venture Global (Ticker: VG) were

Read More »

EIA: US crude inventories up 3.6 million bbl

US crude oil inventories for the week ended Jan. 16, excluding the Strategic Petroleum Reserve, increased by 3.6 million bbl from the previous week, according to data from the US Energy Information Administration. The report was released a day later than usual due to the federal holiday Jan. 19. At 426.0 million bbl, US crude oil inventories are about 2% below the 5-year average for this time of year, the EIA report indicated.

EIA said total motor gasoline inventories increased by 6.0 million bbl from last week and are about 5% above the 5-year average for this time of year. Finished gasoline inventories and blending components inventories both increased last week. Distillate fuel inventories increased by 3.3 million bbl last week and are about 1% below the 5-year average for this time of year. Propane-propylene inventories decreased by 2.1 million bbl from last week and are 39% above the 5-year average for this time of year, EIA said.

US crude oil refinery inputs averaged 16.6 million b/d for the week ended Jan. 16, which was 354,000 b/d less than the previous week’s average. Refineries operated at 93.3% of capacity. Gasoline production decreased, averaging 8.8 million b/d. Distillate fuel production decreased by 210,000 b/d, averaging 5.1 million b/d.

US crude oil imports averaged 6.4 million b/d, down by 645,000 b/d from the previous week. Over the last 4 weeks, crude oil imports averaged 6.2 million b/d, 5.3% less than the same 4-week period last year. Total motor gasoline imports averaged 412,000 b/d. Distillate fuel imports averaged 215,000 b/d.

Read More »

Aramco Starts 1st Bond Sale of The Year

Aramco is launching its first bond sale of the year, following two debt issuances last year as the world’s largest oil company aims to increase borrowing levels and support investment and dividend payments. The government-owned oil producer plans to sell dollar-denominated bonds on international markets, according to a statement to the Saudi stock exchange. Aramco is marketing debt with maturities ranging from three to 30 years, said a person with knowledge of the matter who asked not to be identified.

Aramco is a key contributor to Saudi state finances, with large dividend payments supplementing royalties linked to crude sales. As oil prices have dipped and OPEC+ policy limited Saudi production, cash flows lagged payouts before a rebound in the third quarter. Aramco’s $17 billion in debt sales over the last two years helped support payouts. Saudi Arabia’s budget remains heavily dependent on oil revenue as the kingdom pursues an ambitious modernization drive. Crude prices remain well below levels needed to balance the state budget, forcing the government to project spending shortfalls for the coming years.

Initial pricing thoughts range from about 100 basis points over US Treasuries for the three-year tranche to about 165 basis points for the longer maturity. The market expects the bond sale will raise about $2 billion. Aramco has turned to debt to augment its cash flow and plans to invest more than $50 billion this year in oil and natural gas production, while maintaining its high base dividend of $21 billion.

In November, the company reported a surprise jump in third-quarter profit as rising production outweighed lower crude prices. Earnings are set to slip for the full year, estimates compiled by Bloomberg show. While Brent crude has risen this year amid geopolitical tensions, including US attacks on or threats of action against fellow OPEC producers Venezuela and

Read More »

EIA Sees Glut Widening in 2026

World petroleum and other liquid fuels production will outweigh consumption by 2.83 million barrels per day in 2026. That’s according to the U.S. Energy Information Administration’s (EIA) January Short-Term Energy Outlook (STEO), which projected that global petroleum and other liquid fuels production and consumption will average 107.65 million barrels per day and 104.82 million barrels per day, respectively, this year.

A quarterly breakdown included in the EIA’s latest STEO projected that production will average 106.93 million barrels per day in the first quarter of 2026, 107.52 million barrels per day in the second quarter, 107.92 million barrels per day in the third quarter, and 108.24 million barrels per day in the fourth quarter. Another quarterly breakdown in the EIA’s January STEO forecast that consumption will come in at 103.36 million barrels per day in the first quarter of this year, 104.86 million barrels per day in the second quarter, 105.66 million barrels per day in the third quarter, and 105.38 million barrels per day in the fourth quarter.

The EIA’s latest STEO showed that world petroleum and other liquid fuels production outweighed consumption by 2.59 million barrels per day in 2025. In this STEO, the EIA highlighted that output averaged 103.67 million barrels per day in the first quarter of 2025, 105.21 million barrels per day in the second quarter, 107.88 million barrels per day in the third quarter, and 108.30 million barrels per day in the fourth quarter. The STEO showed that consumption came in at 101.96 million barrels per day in the first quarter of last year, 103.68 million barrels per day in the second quarter, 104.55 million barrels per day in the third quarter, and 104.52 million barrels per day in the fourth quarter.

Looking ahead to 2027 in its January STEO, the EIA

Read More »
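As a quick consistency check, the annual STEO averages quoted above line up with the mean of the quarterly projections (a minimal sketch; all figures in million barrels per day, taken from the item above):

```python
# Average the quarterly EIA STEO projections quoted above and compare
# them with the reported annual figures (million barrels per day).
def avg(quarters):
    return sum(quarters) / len(quarters)

prod_2026 = avg([106.93, 107.52, 107.92, 108.24])   # reported annual: 107.65
cons_2026 = avg([103.36, 104.86, 105.66, 105.38])   # reported annual: 104.82
prod_2025 = avg([103.67, 105.21, 107.88, 108.30])
cons_2025 = avg([101.96, 103.68, 104.55, 104.52])

# Surpluses implied by the quarterly data, vs. the reported 2.83 and 2.59.
surplus_2026 = prod_2026 - cons_2026   # matches 2.83 within rounding
surplus_2025 = prod_2025 - cons_2025   # matches 2.59 within rounding
```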

Photonic chip vendor snags Gates investment

“Moore’s Law is slowing, but AI can’t afford to wait. Our breakthrough in photonics unlocks an entirely new dimension of scaling, by packing massive optical parallelism on a single chip,” said Patrick Bowen, CEO of Neurophos. “This physics-level shift means both efficiency and raw speed improve as we scale up, breaking free from the power walls that constrain traditional GPUs.” The new funding includes investments from Microsoft’s investment fund M12 that will help speed up delivery of Neurophos’ first integrated photonic compute system, including datacenter-ready OPU modules. Neurophos is not the only company exploring this field. Last April, Lightmatter announced the launch of photonic chips to tackle data center bottlenecks. And in 2024, IBM said its researchers were exploring optical chips and developing a prototype in this area.

Read More »

Intel wrestling with CPU supply shortage

“We have important customers in the data center side. We have important OEM customers on both data center and client and that needs to be our priority to get the limited supply we have to those customers,” he added. CEO Lip-Bu Tan added that the continuing proliferation and diversification of AI workloads is placing significant capacity constraints on traditional and new hardware infrastructure, reinforcing the growing and essential role CPUs play in the AI era. Because of this, Intel decided to simplify its server road map, focusing resources on the 16-channel Diamond Rapids product and accelerating the introduction of Coral Rapids.

Intel had removed multithreading from Diamond Rapids, presumably to get rid of performance bottlenecks: with each core running two threads, the threads often competed for resources. That’s why, for example, Ampere does not use threading but instead applies many more cores per CPU. With Coral Rapids, Intel is not only reintroducing multithreading into its data center road map but also working closely with Nvidia to build a custom Xeon fully integrated with Nvidia’s NVLink technology, creating a tighter connection between Intel Xeon processors and Nvidia GPUs.

Another aspect impacting supply has been yields of the new 18A process node. Tan said he was disappointed that the company could not fully meet the demand of the markets, and that while yields are in line with internal plans, “they’re still below where I want them to be.” Tan said yields for 18A are improving month-over-month and Intel is targeting a 7% to 8% improvement each month.

Read More »

Intel’s AI pivot could make lower-end PCs scarce in 2026

However, he noted, “CPUs are not being cannibalized by GPUs. Instead, they have become ‘chokepoints’ in AI infrastructure.” For instance, CPUs such as Granite Rapids are essential in GPU clusters, and for handling agentic AI workloads and orchestrating distributed inference.

How pricing might increase for enterprises

Ultimately, rapid demand for higher-end offerings resulted in foundry shortages of Intel 10/7 nodes, Bickley noted, which represent the bulk of the company’s production volume. He pointed out that it can take up to three quarters for new server wafers to move through the fab process, so Intel will be “under the gun” until at least Q2 2026, when it projects an increase in chip production. Meanwhile, manufacturing capacity for Xeon is currently sold out for 2026, with varying lead times by distributor, while custom silicon programs are seeing lead times of 6 to 8 months, with some orders rolling into 2027, Bickley said. In the data center, memory is the key bottleneck, with expected price increases of more than 65% year over year in 2026 and up to 25% for NAND flash, he noted. Some specific products have already seen price inflation of over 1,000% since 2025, and new greenfield capacity for memory is not expected until 2027 or 2028. Moor Insights’ Sag was a little more optimistic, forecasting that, on the client side, “memory prices will probably stabilize this year until more capacity comes online in 2027.”

How enterprises can prepare

Supplier diversification is the best solution for enterprises right now, Sag noted. While it might make things more complex, it also allows data center operators to better absorb price shocks because they can rebalance against suppliers who have either planned better or have more resilient supply chains.
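For readers budgeting against these forecasts, a percentage increase converts to a price multiplier as 1 + pct/100. This small sketch just applies that conversion to the figures quoted above:

```python
# Convert the forecast percentage increases above into price multipliers.
def multiplier(pct_increase: float) -> float:
    """A +X% increase means the new price is (1 + X/100) times the old one."""
    return 1 + pct_increase / 100

memory = multiplier(65)     # >65% YoY for memory -> roughly 1.65x
nand = multiplier(25)       # up to 25% for NAND  -> roughly 1.25x
worst = multiplier(1000)    # a 1,000% rise means 11x the old price
```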

Read More »

Reports of SATA’s demise are overblown, but the technology is aging fast

The SATA 1.0 interface made its debut in 2003. It was developed by a consortium consisting of Intel, Dell, and storage vendors like Seagate and Maxtor. It quickly advanced to SATA III in 2009, but there never was a SATA IV. There was just nibbling around the edges with incremental updates as momentum and emphasis shifted to PCI Express and NVMe. So is there any life to be had in the venerable SATA interface? Surprisingly, yes, say the analysts.

“At a high level, yes, SATA for consumer is pretty much a dead end, although if you’re storing TB of photos and videos, it is still the least expensive option,” said Bob O’Donnell, president and chief analyst with TECHnalysis Research. Similarly, for enterprises with massive storage demands, the 20 and 30 TB SATA drives from companies like Seagate and WD are apparently still in wide use in cloud data centers for things like cold storage. “In fact, both of those companies are seeing record revenues based, in part, on the demand for these huge, high-capacity low-cost drives,” he said.

“SATA doesn’t make much sense anymore. It underperforms NVMe significantly,” said Rob Enderle, principal analyst with The Enderle Group. “It really doesn’t make much sense to continue making it given Samsung allegedly makes three to four times more margin on NVMe.” And like O’Donnell, Enderle sees continued life for SATA-based high-capacity hard drives. “There will likely be legacy makers doing SATA for some time. IT doesn’t flip technology quickly and SATA drives do wear out, so there will likely be those producing legacy SATA products for some time,” he said.

Read More »

DCN becoming the new WAN for AI-era applications

“DCN is increasingly treated as an end-to-end operating model that standardizes connectivity, security policy enforcement, and telemetry across users, the middle mile, and cloud/application edges,” Sanchez said. Dell’Oro defines DCN as platforms and services that deliver consistent connectivity, policy enforcement, and telemetry from users, across the WAN, to distributed cloud and application edges spanning branch sites, data centers and public clouds. The category is gaining relevance as hybrid architectures and AI-era traffic patterns increase the operational penalty for fragmented control planes.

DCN buyers are moving beyond isolated upgrades and are prioritizing architectures that reduce operational seams across connectivity, security and telemetry so that incident response and change control can follow a single thread, according to Dell’Oro’s research. What makes DCN distinct is that it links user-to-application experience with where policy and visibility are enforced. This matters as application delivery paths become more dynamic and workloads shift between on-premises data centers, public cloud, and edge locations. The architectural requirement is eliminating handoffs between networking and security teams rather than optimizing individual network segments.

Where DCN is growing the fastest

Cloud/application edge is the fastest-growing DCN pillar. This segment deploys policy enforcement and telemetry collection points adjacent to workloads rather than backhauling traffic to centralized security stacks. “Multi-cloud remains a reality, but it is no longer the durable driver by itself,” Sanchez said. “Cloud/application edge is accelerating because enterprises are trying to make application paths predictable and secure across hybrid environments, and that requires pushing application-aware steering, policy enforcement, and unified telemetry closer to workloads.”

Read More »

Edged US Builds Waterless, High-Density AI Data Center Campuses at Scale

Edged US is targeting a narrow but increasingly valuable lane of the hyperscale AI infrastructure market: high-density compute delivered at speed, paired with a sustainability posture centered on waterless, closed-loop cooling and a portfolio-wide design PUE target of roughly 1.15. Two recent announcements illustrate the model. In Aurora, Illinois, Edged is developing a 72-MW facility purpose-built for AI training and inference, with liquid-to-chip cooling designed to support rack densities exceeding 200 kW. In Irving, Texas, a 24-MW campus expansion combines air-cooled densities above 120 kW per rack with liquid-to-chip capability reaching 400 kW. Taken together, the projects point to a consistent strategy: standardized, multi-building campuses in major markets; a vertically integrated technical stack with cooling at its core; and an operating model built around repeatable designs, modular systems, and readiness for rapidly escalating AI densities.

A Campus-First Platform Strategy

Edged US’s platform strategy is built around campus-scale expansion rather than one-off facilities. The company positions itself as a gigawatt-scale, AI-ready portfolio expanding across major U.S. metros through repeatable design targets and multi-building campuses: an emphasis that is deliberate and increasingly consequential. In Chicago/Aurora, Edged is developing a multi-building campus with an initial facility already online and a second 72-MW building under construction. Dallas/Irving follows the same playbook: the first facility opened in January 2025, with a second 24-MW building approved unanimously by the city. Taken together with developments in Atlanta, Chicago, Columbus, Dallas, Des Moines, Kansas City, and Phoenix, the footprint reflects a portfolio-first mindset rather than a collection of bespoke sites.
This focus on campus-based expansion matters because the AI factory era increasingly rewards developers that can execute three things at once:

1. Lock down power and land at scale.
2. Standardize delivery across markets.
3. Operate efficiently while staying aligned with community and regulatory expectations.

Edged is explicitly selling the second

Read More »
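PUE (power usage effectiveness) is total facility power divided by IT equipment power, so the design PUE of roughly 1.15 cited above bounds the non-IT overhead. The sketch below is an illustration only: it assumes, for the sake of the example, that a 72-MW figure refers to IT load rather than total facility power.

```python
# PUE = total facility power / IT equipment power.
# Assumption for illustration: the 72 MW figure is treated as IT load.
design_pue = 1.15
it_load_mw = 72.0

total_mw = it_load_mw * design_pue       # total facility draw at design PUE
overhead_mw = total_mw - it_load_mw      # cooling, power conversion, etc.
overhead_fraction = overhead_mw / total_mw  # share of facility power not reaching IT
```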

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote $200 billion between them to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »
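The "LLM as a judge" pattern mentioned above can be sketched as a majority vote across several cheap judge models. In this hypothetical sketch the judges are stubbed out as simple heuristic functions; in a real system each would be a call to a different model and a parse of its verdict:

```python
from collections import Counter

# Hypothetical stand-ins for cheap judge models; in practice each would
# send the candidate answer to a different LLM and parse its pass/fail verdict.
def judge_length(answer: str) -> str:
    return "fail" if len(answer) > 4000 else "pass"

def judge_nonempty(answer: str) -> str:
    return "fail" if not answer.strip() else "pass"

def judge_placeholder(answer: str) -> str:
    return "fail" if "TODO" in answer else "pass"

def majority_verdict(answer: str,
                     judges=(judge_length, judge_nonempty, judge_placeholder)) -> str:
    """Each judge votes pass/fail; the majority verdict wins."""
    votes = Counter(judge(answer) for judge in judges)
    return votes.most_common(1)[0][0]

print(majority_verdict("The capital of France is Paris."))  # pass
print(majority_verdict("TODO " * 1000))                     # fail
```

Using three or more judges, as the item above suggests, means no single model's quirks decide the outcome.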

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »