Stay Ahead, Stay ONMINE

Ethernet roadmap: AI drives high-speed, efficient Ethernet networks

“While the IEEE P802.3dj project is working toward defining 200G per lane for Ethernet by late 2026, the industry is (loudly) asking for 400G per lane yesterday, if not sooner,” Jones wrote in a recent Ethernet Alliance blog.

In a post about Ethernet’s AI evolution, John D’Ambrosia wrote about the development of 400 Gb/s signaling: “The IEEE P802.3dj project is defining the underlying 200Gb/s PAM4 signaling technologies in support of chip-to-chip, chip-to-module, backplane, copper cable, and single-mode fiber technologies to facilitate the numerous specifications for 200GbE, 400GbE, 800GbE, and 1.6TbE. These efforts are expected to be completed in the second half of 2026 so AI applications will have some near-term solutions to leverage. However, the staggering growth rates of computational power require the industry to start looking beyond 200 Gb/sec based signaling now for the networks of the future.”
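The arithmetic connecting per-lane signaling rates to aggregate Ethernet speeds is simple to sketch. A minimal illustration (the helper below is hypothetical, for exposition only; real PHY definitions also contend with FEC overhead and module pinouts):

```python
# How many electrical/optical lanes are needed to reach each Ethernet rate
# at a given per-lane signaling speed. Illustrative only.

def lanes_needed(ethernet_gbps: int, per_lane_gbps: int) -> int:
    """Return the lane count needed to reach an aggregate rate (ceiling division)."""
    return -(-ethernet_gbps // per_lane_gbps)

for rate in (200, 400, 800, 1600):  # 200GbE .. 1.6TbE
    print(f"{rate}G Ethernet: {lanes_needed(rate, 200)} lanes at 200G/lane, "
          f"{lanes_needed(rate, 400)} lanes at 400G/lane")
```

This is why 400G/lane matters for AI fabrics: 1.6TbE needs eight 200G lanes but only four 400G lanes, halving the interconnect count per port.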

“One of the outcomes of [the TEF] event was the realization that the development of 400Gb/sec signaling would be an industry-wide problem. It wasn’t solely an application, network, component, or interconnect problem,” stated D’Ambrosia, who is a distinguished engineer with the Datacom Standards Research team at Futurewei Technologies, a U.S. subsidiary of Huawei, and the chair of the IEEE P802.3dj 200Gb/sec, 400Gb/sec, 800Gb/sec and 1.6Tb/sec Task Force. “Overcoming the challenges to support 400 Gb/s signaling will likely require all the tools available for each of the various layers and components.”

The IEEE in January began an “802.3 Ethernet Interconnect for AI” assessment, a multivendor effort to evaluate key requirements for Ethernet in AI networks, such as:

  • What are the interconnect requirements for the different AI networks?
  • What are the performance requirements of these interconnects?
  • What are the priorities for the development of these interconnects?
  • What tradeoffs can be made between latency and resilience/reach/power?

“We are actively trying to figure out and understand which set of problems to solve here,” Jones said.

Ethernet vs. InfiniBand

There’s also the trend of moving AI networks toward Ethernet rather than the current connectivity stalwart, InfiniBand.

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin, and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Ubuntu namespace vulnerability should be addressed quickly: Expert

Thus, “there is little impact of not ‘patching’ the vulnerability,” he said. “Organizations using centralized configuration tools like Ansible may deploy these changes with regularly scheduled maintenance or reboot windows.”

Features supposed to improve security

Ironically, last October Ubuntu introduced AppArmor-based features to improve security by reducing the attack surface

Read More »

Google Cloud partners with mLogica to offer mainframe modernization

Beyond the partnership with mLogica, Google Cloud also offers a variety of other mainframe migration tools, including Radis and G4, which can be employed to modernize specific applications. Enterprises can also use a combination of migration tools to modernize their mainframe applications. Some of these tools include the Gemini-powered

Read More »

Macquarie Strategists Forecast USA Crude Inventory Rise

In an oil and gas report sent to Rigzone late Monday by the Macquarie team, Macquarie strategists revealed that they are forecasting that U.S. crude inventories will be up 4.2 million barrels for the week ending March 28. “This follows a 3.3 million barrel draw for the week ending March 21 and compares to our initial expectation for a larger crude build this week,” the strategists said in the report. “For this week’s crude balance, from refineries, we model crude runs down meaningfully (-0.4 million barrels per day) following a strong print last week,” they added. “Among net imports, we model a moderate increase, with exports (-1.0 million barrels per day) and imports (-0.7 million barrels per day) much lower on a nominal basis,” they continued. The strategists warned in the report that timing of cargoes remains a source of potential volatility in this week’s crude balance. “From implied domestic supply (prod.+adj.+transfers), we look for a bounce (+0.3 million barrels per day) this week,” they said in the report. “Rounding out the picture, we anticipate another small increase in SPR [Strategic Petroleum Reserve] stocks (+0.3 MM BBL) this week,” they added. The strategists also noted in the report that, “among products”, they “look for draws in gasoline (-0.9 million barrels) and distillate (-4.1 million barrels), with jet stocks effectively flat”. “We model implied demand for these three products at ~14.4 million barrels per day for the week ending March 28,” they said. In its latest weekly petroleum status report at the time of writing, which was released on March 26 and included data for the week ending March 21, the U.S. Energy Information Administration (EIA) highlighted that U.S. commercial crude oil inventories, excluding those in the SPR, decreased by 3.3 million barrels from the week ending March 14 to the
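The weekly balance the strategists walk through rests on a simple stocks identity: inventory change ≈ (domestic supply + net imports − refinery runs) × 7 days. A minimal sketch of how the components combine (the flow figures in the example are invented for illustration, not the report's numbers):

```python
# Rough weekly U.S. crude stock-change identity, in million barrels.
# Inputs are daily flows in million barrels per day (MMbpd); output is the
# implied inventory build (+) or draw (-) over a seven-day week.

DAYS_PER_WEEK = 7

def stock_change_mb(supply_mbpd: float, net_imports_mbpd: float,
                    runs_mbpd: float) -> float:
    """Implied weekly inventory change from daily supply/demand flows."""
    return (supply_mbpd + net_imports_mbpd - runs_mbpd) * DAYS_PER_WEEK

# Hypothetical week: 13.0 MMbpd supply, 2.0 MMbpd net imports, 15.5 MMbpd runs
print(stock_change_mb(13.0, 2.0, 15.5))  # negative => a draw
```

Lower refinery runs or higher net imports push the result toward a build, which is the direction of each adjustment the strategists describe.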

Read More »

NEO Energy seeks contractors for Donan, Balloch and Lochranza decommissioning

NEO Energy has released five tenders seeking contractors to help decommission its Donan, Balloch and Lochranza fields, along with the Global Producer III floating production, storage and offloading (FPSO) vessel. According to data from the North Sea Transition Authority’s (NSTA’s) Pathfinder database, the decommissioning campaign is expected to start in the second quarter of 2026 at the earliest, when work to disconnect the subsea infrastructure is expected to commence. This will also see the FPSO unmoored and towed to an unspecified location. By 2027, NEO plans to begin recovering the subsea infrastructure, followed by plugging and abandoning a total of 19 wells in 2028. To help with this, NEO Energy is looking for a contractor to perform P&A activities on the wells. The tender is expected to take place on 31 December 2025 and has a value of over £25 million. The company also announced four additional tenders, each with a value of less than £25m, covering recycling the FPSO; flushing, isolating and disconnecting the subsea infrastructure from the FPSO; disconnecting the moorings and towing the FPSO; and bulk seabed clearance. NEO Energy recently announced plans to merge its North Sea operations with Repsol Resources UK’s. The deal will see Repsol retain $1.8 billion (£1.4bn) in decommissioning liabilities related to its legacy assets, which NEO said will enhance the cash flows of the merged business. NEO said it expects to complete the deal during the third quarter of 2025, subject to regulatory approvals. Canadian Natural Resources Ltd (CNRL) has issued two tenders to assist with decommissioning its Ninian field in the Northern North Sea, located east of Shetland. The decommissioning scope consists of three areas, covering the Ninian South Platform, Ninian Central Platform and the Ninian subsea infrastructure, which includes the Strathspey, Lyell, Columba

Read More »

Eni, Saipem Extend Biorefining Collaboration

Eni SpA and Saipem SpA have extended a deal to collaborate on building biorefineries and converting traditional refineries. The agreement, first signed in 2023, combines Eni’s technological expertise with Saipem’s expertise in the design and construction of such plants. Italian state-backed integrated energy company Eni holds a 21.19 percent stake in energy engineering company Saipem. “The agreement concerns, in particular, the construction of new biorefineries, the conversion of traditional refineries into biorefineries and, generally, the development of new initiatives by Eni in the field of industrial transformation”, Eni said in an online statement. “Through this agreement, Eni, in line with its goal of decarbonizing processes and products, intends to further develop its biorefining capacity through the development of new initiatives to produce biofuels both for aviation (SAF, Sustainable Aviation Fuel) and for land and sea mobility (HVO, Hydrotreated Vegetable Oil). “At the same time, Saipem further strengthens its distinctive expertise in biorefining and decarbonization”. Under the agreement, Eni recently awarded Saipem a contract for engineering, procurement services and the purchase of critical equipment for the upgrade of a biorefinery in Porto Marghera. The project will increase the plant’s capacity from 400,000 metric tons per year to 600,000 metric tons per year. The upgrade will also enable the facility to produce SAF from 2027. In November 2024 Eni also picked Saipem for the conversion of the Livorno refinery into a biorefinery, as part of their biorefining collaboration. In both projects Saipem also carried out preparatory engineering activities such as feasibility studies and front-end engineering design. The two contracts are valued at about EUR 320 million ($345.4 million), according to Eni. Eni, through subsidiary Enilive, has a biorefining production capacity of 1.65 million metric tons per annum (MMtpa).
Eni aims to raise this to over 5 MMtpa by 2030 as part of its efforts

Read More »

Prairie Closes $603MM DJ Basin Acquisition from Bayswater

Prairie Operating Co. said it has closed its $602.75 million acquisition of certain Denver-Julesburg Basin (DJ Basin) assets from Bayswater Exploration and Production and its affiliated entities, which strengthens its position “as a leading operator” in the basin. The acquisition boosts Prairie’s production by approximately 25,700 net barrels of oil equivalent per day (boepd), consisting of 69 percent liquids, the company said in a news release. It also adds 24,000 net acres to the company’s approximately 600 highly economic drilling locations and roughly 10 years of drilling inventory. The assets contribute 77.9 million barrels of oil equivalent (MMboe) in proved reserves with an estimated PV-10 value of $1.1 billion, Prairie said. With the expansion, Prairie said it anticipates a substantial uplift in its 2025 production, revenue, and adjusted EBITDA. Prairie said the transaction was funded through a combination of proceeds from a new issuance of series F convertible preferred stock to a single institutional investor, a common stock public offering, a draw on the company’s newly expanded $1 billion credit facility, and a direct issuance of common stock to Bayswater. Following the closing, Prairie has approximately 35.4 million shares of common stock outstanding. “This acquisition is a pivotal moment for Prairie, significantly expanding our operational footprint in the DJ Basin,” Prairie Chairman and CEO Edward Kovalik said. “By integrating these high-quality assets, we are materially enhancing our production profile, strengthening our financial position, and creating meaningful value for our shareholders. Prairie remains singularly focused on executing our strategic vision to become a premier high-growth, low-cost oil producer”. Prairie President Gary Hanna said, “The addition of the Bayswater Assets further establishes Prairie as a leading operator in the DJ Basin. 
These assets are a strong complement to our existing portfolio, and we remain focused on maximizing operational efficiencies, optimizing production, and

Read More »

Ukraine Receives Critical Energy Equipment from Norway

Norway has delivered critical energy equipment for Ukraine via the United Nations Development Program (UNDP) to help ensure uninterrupted energy supply amid the war. As part of the aid, state-owned oil and gas company Naftogaz Group received gas-fired generator sets with a combined capacity of 150 megawatts. These will provide backup power and heating for critical infrastructure and residential areas, Naftogaz said in an online statement. “This support would strengthen the energy security of two major Ukrainian cities and provide electricity and heat to over 500,000 residents of the Dnipropetrovsk region during the next heating season”, Naftogaz said. Its subsidiary JSC Ukrgasvydobuvannya also received equipment to enhance natural gas production. Meanwhile state-owned electricity transmission system operator NPC Ukrenergo received two 330-kilovolt autotransformers with a capacity of 200 megavolt-amperes. “This support from Norway is a vital lifeline, enabling us to strengthen our energy infrastructure and build resilience against future disruptions”, said Ukrainian Energy Minister German Galushchenko. Naftogaz said, “Norway and UNDP are working closely to restore Ukraine’s power system, combining Norway’s financial support with UNDP’s operational expertise”. Support under the collaboration includes the provision of generators and solar power plants, as well as power and heat for schools, hospitals and other critical facilities, according to Naftogaz. “The initiative has already benefited millions of Ukrainians by strengthening essential services like healthcare and education, modernizing the energy sector, and enabling businesses to operate with fewer interruptions”, the UNDP said separately. It said reconstruction for Ukraine’s energy sector needed an estimated $67.78 billion as of December 2024. “The regions with the largest estimated needs are Zaporizhzhia, Kharkiv, Dnipropetrovsk, Donetsk, Odesa, and Sumy oblast”, the UNDP said. 
“The attacks on the energy system have caused civilian suffering and general economic attrition. Immediate power outages have affected around 1.5 million people, disrupting heating, water supply and sanitation, public

Read More »

ECITB commits to £2m investment in Aberdeen, Humber, Teesside and more

The Engineering Construction Industry Training Board (ECITB) has announced a further £2 million investment in ‘skills hubs’ across the UK over the next two years. It is directing funds towards “industrial cluster hot spots” such as the north-east of Scotland, Teesside and the Humber, South Wales and the Solent. This follows on from a previous £1m investment through the trade body’s Regional Skills Hub Funding initiative to increase training provider capacity and grow new entrant numbers into the engineering and construction industry (ECI) as it contends with skills shortages. ECITB’s cash has already been directed to the Humber region, Teesside, the north-east of Scotland and the wider UK, the organisation explained, and it plans to announce further projects to receive backing “shortly”. Andrew Hockey, CEO of ECITB, commented: “This extra investment will help further address skills shortages by enhancing training and assessment infrastructure and capabilities at both colleges and independent training providers located in Britain’s industrial heartlands that will directly increase the flow of trained workers into the industry.” This comes soon after an ECITB report, which found the oil and gas workforce is older than other sectors, and it is unlikely that young people will fill the gap left by retirees. The ECITB recently worked with industry partners as part of the Net Zero Teesside cluster project, which received £478,000 in funding last month. The funding will contribute to an immersive pipefitting, welding, mechanical and project-based training rig and includes enhanced pipefitting facilities. The joint venture between BP and Equinor recently faced criticism from MPs who claimed government investment of £21.7bn in “unproven technologies” was “risky”. One of the first businesses to secure funds from the initial £1m

Read More »

Talent gap complicates cost-conscious cloud planning

The top strategy so far is what one enterprise calls the “Cloud Team.” You assemble all your people with cloud skills, and your own best software architect, and have the team examine current and proposed cloud applications, looking for a high-level approach that meets business goals. In this process, the team tries to avoid implementation specifics, focusing instead on the notion that a hybrid application has an agile cloud side and a governance-and-sovereignty data center side, and what has to be done is push functionality into the right place. The Cloud Team supporters say that an experienced application architect can deal with the cloud in abstract, without detailed knowledge of cloud tools and costs. For example, the architect can assess the value of using an event-driven versus transactional model without fixating on how either could be done. The idea is to first come up with approaches. Then, developers could work with cloud providers to map each approach to an implementation, and assess the costs, benefits, and risks. Ok, I lied about this being the top strategy—sort of, at least. It’s the only strategy that’s making much sense. The enterprises all start their cloud-reassessment journey on a different tack, but they agree it doesn’t work. The knee-jerk approach to cloud costs is to attack the implementation, not the design. What cloud features did you pick? Could you find ones that cost less? Could you perhaps shed all the special features and just host containers or VMs with no web services at all? Enterprises that try this, meaning almost all of them, report that they save less than 15% on cloud costs, a rate of savings that means roughly a five-year payback on the costs of making the application changes…if they can make them at all. Enterprises used to build all of
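The payback arithmetic behind that observation is easy to make concrete. A minimal sketch with assumed figures (the $1M annual bill and $750K rework cost below are invented for illustration):

```python
# Payback period for re-engineering an application to cut its cloud spend.
# Inputs: current annual cloud bill, fractional savings from the rework,
# and the one-time cost of making the application changes.

def payback_years(annual_cloud_cost: float, savings_rate: float,
                  rework_cost: float) -> float:
    """Years until cumulative savings cover the one-time rework cost."""
    annual_savings = annual_cloud_cost * savings_rate
    return rework_cost / annual_savings

# A $1M/yr cloud bill, 15% savings, and $750K of rework: about five years
print(round(payback_years(1_000_000, 0.15, 750_000), 2))
```

At savings capped near 15%, even a modest rework bill stretches payback past most planning horizons, which is why the implementation-first approach disappoints.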

Read More »

Lightmatter launches photonic chips to eliminate GPU idle time in AI data centers

“Silicon photonics can transform HPC, data centers, and networking by providing greater scalability, better energy efficiency, and seamless integration with existing semiconductor manufacturing and packaging technologies,” Jagadeesan added. “Lightmatter’s recent announcement of the Passage L200 co-packaged optics and M1000 reference platform demonstrates an important step toward addressing the interconnect bandwidth and latency between accelerators in AI data centers.” The market timing appears strategic, as enterprises worldwide face increasing computational demands from AI workloads while simultaneously confronting the physical limitations of traditional semiconductor scaling. Silicon photonics offers a potential path forward as conventional approaches reach their limits. Practical applications For enterprise IT leaders, Lightmatter’s technology could impact several key areas of infrastructure planning. AI development teams could see significantly reduced training times for complex models, enabling faster iteration and deployment of AI solutions. Real-time AI applications could benefit from lower latency between processing units, improving responsiveness for time-sensitive operations. Data centers could potentially achieve higher computational density with fewer networking bottlenecks, allowing more efficient use of physical space and resources. Infrastructure costs might be optimized by more efficient utilization of expensive GPU resources, as processors spend less time waiting for data and more time computing. These benefits would be particularly valuable for financial services, healthcare, research institutions, and technology companies working with large-scale AI deployments. Organizations that rely on real-time analysis of large datasets or require rapid training and deployment of complex AI models stand to gain the most from the technology. 
“Silicon photonics will be a key technology for interconnects across accelerators, racks, and data center fabrics,” Jagadeesan pointed out. “Chiplets and advanced packaging will coexist and dominate intra-package communication. The key aspect is integration; that is, companies who have the potential to combine photonics, chiplets, and packaging in a more efficient way will gain competitive advantage.”

Read More »

Silicon Motion rolls SSD kit to bolster AI workload performance

The kit utilizes the PCIe Dual Ported enterprise-grade SM8366 controller with support for PCIe Gen 5 x4, NVMe 2.0, and OCP 2.5 data center specifications. The 128TB SSD RDK also supports NVMe 2.0 Flexible Data Placement (FDP), a feature that allows advanced data management and improves SSD write efficiency and endurance. “Silicon Motion’s MonTitan SSD RDK offers a comprehensive solution for our customers, enabling them to rapidly develop and deploy enterprise-class SSDs tailored for AI data center and edge server applications,” said Alex Chou, senior vice president of the enterprise storage & display interface solution business at Silicon Motion. Silicon Motion doesn’t make drives; rather, it makes reference design kits in different form factors that its customers use to build their own products. Its kits come in E1.S, E3.S, and U.2 form factors. The E1.S and U.2 forms mirror the M.2, which looks like a stick of gum and installs on the motherboard. There are PCI Express enclosures that hold four to six of those drives, plug into one card slot, and appear to the system as a single drive.
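The endurance benefit of FDP comes down to write amplification: when the host's placement hints let data with similar lifetimes share erase blocks, the drive performs fewer garbage-collection rewrites. A toy illustration of the metric (all figures invented; this is not Silicon Motion's model):

```python
# Write amplification factor (WAF) = total NAND writes / host writes.
# Garbage collection rewrites still-valid data when reclaiming blocks;
# grouping data by lifetime (as FDP hints allow) shrinks that rewrite tax.

def waf(host_writes_gb: float, gc_rewrites_gb: float) -> float:
    """Physical writes per logical write; 1.0 is the ideal minimum."""
    return (host_writes_gb + gc_rewrites_gb) / host_writes_gb

mixed_placement = waf(1000, 1500)  # hot and cold data mixed in blocks
fdp_placement = waf(1000, 200)     # lifetimes grouped via placement hints
print(mixed_placement, fdp_placement)  # 2.5 1.2
```

A lower WAF means each host write consumes less of the flash's finite program/erase budget, which is the endurance gain the excerpt refers to.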

Read More »

Executive Roundtable: Cooling Imperatives for Managing High-Density AI Workloads

Michael Lahoud, Stream Data Centers: For the past two years, Stream Data Centers has been developing a modular, configurable air and liquid cooling system that can handle the highest densities in both mediums. Based on our collaboration with customers, we see a future that still requires both cooling mediums, but with the flexibility to deploy either type as the IT stack destined for that space demands. With this necessity as a backdrop, we saw a need to develop a scalable mix-and-match front-end thermal solution that gives us the ability to late-bind the equipment we need to meet our customers’ changing cooling needs. It’s well understood that liquid far outperforms air in its ability to transport heat, but further to this, with the right IT configuration, cooling fluid temperatures can also be raised, and this affords operators the ability to use economization for a greater number of hours a year. These key properties can help reduce the energy needed for the mechanical part of a data center’s operations substantially. It should also be noted that as servers are redesigned for liquid cooling and the onboard server fans get removed or reduced in quantity, more of the critical power delivered to the server is being used for compute. This means that liquid cooling also drives an improvement in overall compute productivity despite not being noted in facility PUE metrics. Counter to air cooling, liquid cooling certainly has some added management challenges related to fluid cleanliness, concurrent maintainability and resiliency/redundancy, but once those are accounted for, the clusters become stable, efficient and more sustainable with improved overall productivity.
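The "liquid far outperforms air" point follows from the basic heat-transport relation Q = mass flow × specific heat × temperature rise. A quick sketch with rough textbook properties for air and water (figures approximate, for illustration only):

```python
# Heat carried per unit of volumetric flow for air vs. water at the same
# temperature rise, using approximate textbook properties at room conditions.

def heat_kw(flow_m3_s: float, density_kg_m3: float,
            cp_kj_per_kg_k: float, delta_t_k: float) -> float:
    """Q = (volumetric flow * density) * specific heat * temperature rise, in kW."""
    return flow_m3_s * density_kg_m3 * cp_kj_per_kg_k * delta_t_k

air = heat_kw(1.0, 1.2, 1.005, 10)      # ~12 kW per m^3/s of air, 10 K rise
water = heat_kw(1.0, 1000.0, 4.18, 10)  # ~41,800 kW per m^3/s of water
print(water / air)  # water carries thousands of times more heat per unit volume
```

Water's far higher density and specific heat are what make direct liquid cooling viable at rack densities where moving enough air becomes impractical.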

Read More »

Airtel connects India with 100Tbps submarine cable

“Businesses are becoming increasingly global and digital-first, with industries such as financial services, data centers, and social media platforms relying heavily on real-time, uninterrupted data flow,” Sinha added. The 2Africa Pearls submarine cable system spans 45,000 kilometers, involving a consortium of global telecommunications leaders including Bayobab, China Mobile International, Meta, Orange, Telecom Egypt, Vodafone Group, and WIOCC. Alcatel Submarine Networks is responsible for the cable’s manufacturing and installation, the statement added. This cable system is part of a broader global effort to enhance international digital connectivity. Unlike traditional telecommunications infrastructure, the 2Africa Pearls project represents a collaborative approach to solving complex global communication challenges. “The 100 Tbps capacity of the 2Africa Pearls cable significantly surpasses most existing submarine cable systems, positioning India as a key hub for high-speed connectivity between Africa, Europe, and Asia,” said Prabhu Ram, VP for Industry Research Group at CyberMedia Research. According to Sinha, Airtel’s infrastructure now spans “over 400,000 route kilometers across 34+ cables, connecting 50 countries across five continents. This expansive infrastructure ensures businesses and individuals stay seamlessly connected, wherever they are.” Gogia further emphasizes the broader implications, noting, “What also stands out is the partnership behind this — Airtel working with Meta and center3 signals a broader shift. India is no longer just a consumer of global connectivity. We’re finally shaping the routes, not just using them.”

Read More »

Former Arista COO launches NextHop AI for customized networking infrastructure

Sadana argued that unlike traditional networking, where an IT person can just plug a cable into a port and it works, AI networking requires intricate, custom solutions. The core challenge is creating highly optimized, efficient networking infrastructure that can support massive AI compute clusters with minimal inefficiencies.

How NextHop is looking to change the game for hyperscale networking

NextHop AI is working directly alongside its hyperscaler customers to develop and build customized networking solutions. “We are here to build the most efficient AI networking solutions that are out there,” Sadana said. More specifically, Sadana said that NextHop is looking to help hyperscalers in several ways, including:

  • Compressing product development cycles: “Companies that are doing things on their own can compress their product development cycle by six to 12 months when they partner with us,” he said.
  • Exploring multiple technological alternatives: Sadana noted that hyperscalers building on their own will often only be able to explore one or two alternative approaches. With NextHop, Sadana said his company will enable them to explore four to six different alternatives.
  • Achieving incremental efficiency gains: At the massive cloud scale that hyperscalers operate, even an incremental one percent improvement can have an outsized impact.

“You have to make AI clusters as efficient as possible for the world to use all the AI applications at the right cost structure, at the right economics, for this to be successful,” Sadana said. “So we are participating by making that infrastructure layer a lot more efficient for cloud customers, or the hyperscalers, which, in turn, of course, gives the benefits to all of these software companies trying to run AI applications in these cloud companies.”

Technical innovations: beyond traditional networking

In terms of what the company is actually building now, NextHop is developing specialized network switches

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »