
Network resiliency climbs in importance for businesses


Managed services now the norm

Most organizations no longer manage networks on their own: 72% supplement in-house teams with third-party providers, according to C1’s report. It finds businesses are shifting to managed services, which is consistent with other industry research. A report from KPMG found that 73% of organizations have implemented managed services in some areas of their business.

The main factors for organizations when choosing vendors are availability of managed services and industry experience. But there are other considerations as well, such as contract flexibility, advisory support, and reputation. (C1 offers a broad range of managed services that span the scope of enterprise infrastructure.)

Security and privacy in conflict with modernization

Despite strong investment plans, organizations continue to face obstacles. According to survey respondents, the most common barriers to investing in networks are data security and privacy concerns (45%), rapidly changing technology landscape (39%), and challenges tied to regulatory requirements and legacy complexity (both 37%). Incompatibility with existing infrastructure and a lack of organizational priority were also named as barriers to network investments.

Once modernization projects are under way, organizations face a new set of challenges. Forty-five percent of IT and business leaders said it’s difficult to balance modernization with day-to-day operations. The same share of respondents pointed to challenges with security and compliance requirements and with maintaining compatibility with existing systems. Downtime or service disruptions (42%) and the increased complexity of network architectures (37%) rounded out the list. A smaller number reported high implementation costs and limited in-house expertise.

The main benefit of building resilience into networks is protecting critical data, according to 47% of respondents. Minimizing downtime (42%) and boosting network performance (40%) follow close behind. Some also noted benefits beyond the technical side, including stronger customer trust and confidence that the business can keep running even if an outage happens.

Final thoughts

Historically, most business leaders didn’t put a lot of thought into the network. It was viewed as the “plumbing” of the company, with many considering it a commodity. That has changed: digital technologies such as cloud, mobility, IoT, and AI are all network-centric, and without the network, businesses can’t operate.


Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.



Australian provider outage leaves emergency callers unable to connect

Further recommendations included the requirement that network operators establish a Triple Zero custodian, with responsibility for the efficient functioning of the Triple Zero ecosystem, including monitoring its end-to-end performance. In addition, providers had to conduct six-monthly end-to-end testing of all aspects of the ecosystem within and across networks, which must

Read More »

9 Linux certifications to boost your career

Price: $369
Exam format: 90 minutes, 90 questions (multiple-choice and performance-based)
Prerequisites: None (12 months Linux experience recommended)
Focus: System management, security, troubleshooting, automation across distributions
Salary range: $79,000-$105,000
Best for: Career changers, IT generalists
Recertification: Certification expires 3 years after it is granted and requires 50 continuing education credits

Read More »

Nvidia reportedly acquires Enfabrica CEO and chip technology license

Another Enfabrica technology that’s of interest to Nvidia, according to Forrester principal analyst Charlie Dai, is the Elastic Memory Fabric System (EMFASYS), which became generally available in July. EMFASYS provides AI servers with flexible access to memory bandwidth and capacity through a standalone device that connects over standard network ports. The combination

Read More »

Carbon markets are incomplete without nuclear

Guido Núñez-Mujica is director of data science at the Anthropocene Institute. As the world doubles down on net-zero targets, carbon markets have become a cornerstone of our global climate response. But their credibility hinges on one critical question: Are we truly valuing all forms of low-carbon energy? Right now, the answer is an emphatic no. And one glaring omission proves it: nuclear power. Despite being the second-largest source of low-carbon electricity on the planet, nuclear energy is still largely excluded from voluntary carbon markets. Major standards like Verra and Gold Standard currently do not allow nuclear projects to generate carbon credits. This omission doesn’t just weaken the integrity of the system: it distorts the entire market. Nuclear power prevents 430 million metric tons of CO2 emissions each year. That’s more than the annual emissions of Japan. Finland reduced its coal use by 70% after its latest nuclear reactor went online. Nuclear plants operate at high capacity, providing reliable baseload power that complements variable renewables like wind and solar. And yet, the enormous climate value of that contribution goes unrewarded — financially and symbolically. To put it in perspective: while wind and solar generation often fluctuate with the weather and time of day, nuclear plants run more than 90% of the time. This makes them a crucial anchor in a clean energy grid — one that allows intermittent renewables to expand without sacrificing reliability or stability, and that can replace fossil fuels 24/7. This has serious implications. Around the world, aging nuclear reactors are being decommissioned without adequate replacements. In many cases, they are replaced by fossil fuels. When Germany shut down its nuclear fleet, coal use spiked. In California, emissions rose after San Onofre closed. In New York City, shutting down Indian Point led to such an increase in emissions that the Texas grid is cleaner now

Read More »

BP Halts Plans to Build Biofuels Plant in the Netherlands

BP Plc is to forgo building a biofuels plant in the Netherlands as the UK energy giant continues to pare back its downstream portfolio, following a strategic pivot earlier this year back to its core oil and gas business. The decision, confirmed by a company spokesperson on Monday, marks the second Dutch biofuels plant not to move ahead this year, after Shell Plc shelved its facility under construction in favor of shedding low-carbon businesses to boost profitability. BP hadn’t begun construction of its site in Rotterdam, and previously stopped plans to build biofuels plants at its Kwinana facility in Australia. BP has also paused work at plants in Germany and the US, leaving its Spanish site of Castellon as the possible long-term option for development. Reuters first reported BP scrapping plans to build the Dutch plant. BP has previously said it needs a 15% return on its biofuels investments. The London-based company plans to invest $1.5 billion to $2 billion a year through 2027 on energy transition businesses, down from earlier plans to spend more than $5 billion annually on the transition.

Read More »

Strategists Expect USA Crude and Product Draws This Week

In an oil and gas report sent to Rigzone late Monday by the Macquarie team, strategists at Macquarie, including Walt Chancellor, revealed that they expect U.S. crude and product draws this week. “We are forecasting U.S. crude inventories down 3.3 million barrels for the week ending September 19,” the strategists said in the report. “This follows a 9.3 million barrel draw in the prior week, with the crude balance realizing tighter than our expectations,” the strategists added. “For this week’s balance, from refineries, we model another reduction in crude runs (-0.2 million barrels per day). Among net imports, we model a large increase, with exports modestly lower (-0.3 million barrels per day) and imports higher (+0.6 million barrels per day) on a nominal basis,” they continued. The strategists warned in the report that the timing of cargoes remains a source of potential volatility in this week’s crude balance. “From implied domestic supply (prod.+adj. +transfers), we look for a reduction (-0.2 million barrels per day) on a nominal basis this week. Rounding out the picture, we anticipate a smaller increase (+0.3 million barrels) in SPR [U.S. Strategic Petroleum Reserve] stocks this week,” the strategists said in the report. The strategists went on to note in the report that, “among products”, they “look for modest draws across the board (gasoline/distillate/jet -1.3/-0.6/-0.3 million barrels)”. “We model implied demand for these three products at ~14.4 million barrels per day for the week ending September 19,” the strategists added in the report. In its latest weekly petroleum status report at the time of writing, the U.S. Energy Information Administration (EIA) highlighted that U.S. commercial crude oil inventories, excluding those in the SPR, decreased by 9.3 million barrels from the week ending September 5 to the week ending September 12. That EIA report was released
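To make the arithmetic behind those week-over-week deltas easier to follow, here is a rough, illustrative sketch of how they roll up into a headline draw figure. The balance identity used (stock change is roughly supply plus net imports minus refinery runs, accumulated over seven days) is a simplification rather than Macquarie's actual model, the deltas are simply the ones quoted above, and the result only approximates the 3.3-million-barrel forecast because SPR transfers and rounding are ignored.

```python
# Illustrative sketch of weekly U.S. crude balance arithmetic, using the
# week-over-week deltas quoted above. This is a simplification of the weekly
# balance (stock change ~= supply + net imports - refinery runs, over 7 days),
# not Macquarie's actual model.

DAYS = 7

# Week-over-week changes, in million barrels per day
delta_runs = -0.2      # refinery crude runs modeled lower
delta_exports = -0.3   # crude exports modestly lower
delta_imports = +0.6   # crude imports higher
delta_supply = -0.2    # implied domestic supply (prod. + adj. + transfers) lower

# Lower exports and higher imports both raise net imports
delta_net_imports = delta_imports - delta_exports          # +0.9 mb/d

# How much looser this week's balance is than last week's, in million barrels
weekly_swing = (delta_supply + delta_net_imports - delta_runs) * DAYS

prior_week_draw = 9.3                                       # million barrels
implied_draw = prior_week_draw - weekly_swing               # ~3.0 million barrels

print(f"Week-over-week loosening of the balance: {weekly_swing:.1f} million barrels")
print(f"Implied draw this week: {implied_draw:.1f} million barrels (forecast: 3.3)")
```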

Read More »

Revolution Wind to resume construction after judge grants injunction

Dive Brief: Construction work will resume “as soon as possible” on the 700-MW Revolution Wind project offshore Rhode Island, joint owner Ørsted said Monday after a judge granted the project a preliminary injunction against the Trump administration’s August stop work order. “Revolution Wind has demonstrated likelihood of success on the merits of its underlying claims,” U.S. District Court for the District of Columbia Judge Royce Lamberth said in his Monday ruling. “It is likely to suffer irreparable harm in the absence of an injunction … maintaining the status quo by granting the injunction is in the public interest.” Ørsted said the ruling will allow the project to “restart impacted activities while the underlying lawsuit challenging the stop-work order progresses. Revolution Wind will continue to seek to work collaboratively with the U.S. Administration and other stakeholders toward a prompt resolution.” Dive Insight: The Bureau of Ocean Energy Management is enjoined from imposing the stop work order until the court decides otherwise, Lamberth ruled. Reuters reported Monday that Lamberth said during a hearing that if Revolution Wind “cannot meet benchmark deadlines, the entire project could collapse … There is no doubt in my mind of irreparable harm to the plaintiffs.” Revolution Wind, a 50/50 joint venture between Ørsted and Global Infrastructure Partners’ Skyborn Renewables, is fully permitted with 80% of its construction done. The project is slated for completion in 2026, at which point it will deliver power to Connecticut and Rhode Island. The attorneys general for Connecticut and Rhode Island also filed suit against the Trump administration earlier this month, requesting an injunction on the basis that the federal government “arbitrarily reversed course and issued a Stop Work Order without explanation … despite the States’ and others’ deep reliance interests.” BOEM said in its stop work

Read More »

New DOE Initiative Seeks to Hasten US Grid Projects

The United States Department of Energy (DOE) has announced a program to accelerate large-scale power transmission and generation projects. “The Speed to Power initiative will help ensure the United States has the power needed to win the global artificial intelligence race while continuing to meet growing demand for affordable, reliable and secure energy”, DOE said in a statement on its website. To kickstart the initiative, DOE issued a Request for Information (RFI) for projects that the agency could prioritize for siting and permitting support, technical assistance or funding. “DOE is interested in identifying geographic areas or zones where targeted federal investment in transmission, generation or grid infrastructure could unlock or accelerate large-scale economic activity tied to electric load growth”, stated the RFI, published online. “These may include regions experiencing substantial near-term demand from data centers, manufacturing or other large load users, as well as areas with untapped development potential constrained by inadequate grid infrastructure”. The RFI said, “In addition, DOE is requesting stakeholder input on how to best utilize its funding programs and authorities to rapidly expand energy generation and transmission grid capacity to meet electricity demand growth across the country in a reliable and affordable manner”. Responses are due November 21. According to a DOE report from July 2025, 104 gigawatts (GW) of firm capacity are set to be retired in the U.S. by 2030. While the DOE’s analysis projects 209 GW of new capacity by the end of the decade, only 22 GW of these would come from firm baseload sources. “Even assuming no retirements, the model found increased risk of outages in 2030 by a factor of 34”, the report said, blaming in part a shift to “intermittent” sources. “[T]he average year co-incident peak load is projected to grow from a current average peak of 774 GW to
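As a quick, back-of-the-envelope reading of the DOE figures quoted above, the sketch below simply nets the projected additions against the planned retirements; it uses only the numbers cited in the excerpt and is an illustration, not part of the DOE's analysis.

```python
# Back-of-the-envelope arithmetic on the DOE figures cited above. Only the
# numbers quoted in the excerpt are used; this is not the DOE's own model.

retiring_firm_gw = 104   # firm capacity slated to retire by 2030
new_total_gw = 209       # all projected capacity additions by 2030
new_firm_gw = 22         # the firm baseload share of those additions

net_total_change = new_total_gw - retiring_firm_gw   # +105 GW of nameplate capacity
net_firm_change = new_firm_gw - retiring_firm_gw     # -82 GW of firm capacity

print(f"Net change in total capacity by 2030: {net_total_change:+d} GW")
print(f"Net change in firm baseload capacity: {net_firm_change:+d} GW")
```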

Read More »

Hess Midstream Expects Lower Activity in Guidance Update

Hess Midstream LP has revised its financial and operational guidance for 2025, 2026 and 2027. This update is based on an anticipated reduction in Bakken rig activity by Chevron, which will decrease from four to three drilling rigs starting in the fourth quarter of 2025, Hess Midstream said in a media release. Hess Midstream said it continues to expect long-term growth in gas throughput volumes in the Bakken through at least 2027, while oil throughput volumes are now expected to plateau in 2026 as a result of lower planned rig activity. The company said it expects throughput volumes to generally stay above already-established minimum volume commitments. The company earlier projected, for 2025, gas gathering of 475-485 million cubic feet per day (MMcfd), crude oil gathering of 120,000-130,000 barrels per day (bpd), gas processing of 455-465 MMcfd, crude terminal volumes of 130,000-140,000 bpd, and water gathering of 125,000-135,000 bpd. In its latest update, Hess Midstream said its full-year gas throughput guidance has been shifted based on adverse weather conditions and maintenance in the third quarter and lower expected third-party volumes in the fourth quarter. In 2025, full-year gas gathering volumes are now anticipated to average between 455 and 465 MMcfd, and gas processing volumes are now expected to average between 440 and 450 MMcfd. For the year ending December 31, 2025, the company earlier provided unaudited financial guidance with a net income projection of $685-735 million, and Adjusted EBITDA of $1,235-1,285 million. Capital expenditures are expected to be approximately $300 million, with adjusted free cash flow forecast to be between $725 million and $775 million. Hess Midstream said that it now expects significantly lower capital spending in 2026 and 2027 based on the suspension of early engineering activities on the Capa gas plant and removal of the project from its forward

Read More »

Nvidia and OpenAI open $100B, 10 GW data center alliance

A Nvidia spokesperson said that this deal is separate from Project Stargate, the $500 billion data center project announced earlier this year featuring OpenAI, Oracle, and SoftBank. It launched with much hoopla but has since struggled to gain any traction. OpenAI is already an exclusive AI partner for Microsoft, offering ChatGPT through the Bing search engine and Microsoft Office 365. Microsoft promised in January to invest $80 billion in AI data centers. However, that deal seems to be unraveling. OpenAI has partnered with Oracle to offer its services through Oracle Cloud Infrastructure, while Microsoft has added Anthropic’s Claude generative AI service alongside ChatGPT. OpenAI’s next-generation data centers will use Nvidia’s Vera Rubin platform, which went into production in August and is expected to begin shipping late next year. They are expected to be capable of performing FP4 inference at 3.6 exaflops and FP8 training at 1.2 exaflops.

Read More »

Community Watch: Data Center Pushback – Q3 2025

As the pace of data center construction accelerates, so too does the wave of local resistance. While multi-billion-dollar investment announcements often draw national or even global attention, the disputes that arise around individual projects typically play out at the local or regional level — and receive far less visibility. With this recurring feature, Data Center Frontier will highlight community opposition efforts that are shaping, delaying, or in some cases halting data center development.

Tarboro, North Carolina: Energy Storage Solutions Project

At first glance, the proposal seemed like a win for Tarboro: a $6.2 billion hyperscale data center on a 50-acre site already zoned for heavy industrial use. But after more than five hours of deliberation, the town council voted 6–1 against granting a special use permit for the project. North Carolina’s unusual quasi-judicial process limited how the council could reach its decision. Because the permit required a courtroom-style proceeding, members were allowed to weigh only factual evidence and expert testimony, not personal opinions or community objections. Developer Daniel Schaffer has since stated he will take the next step of appealing the decision to the Edgecombe Superior Court.

Menomonie, Wisconsin: Mystery Data Center Raises Alarm

When the town of Menomonie annexed more than 300 acres of farmland, residents quickly grew uneasy about the project’s true purpose. Official information was limited to a vague reference to a “potential data center,” accompanied by a FAQ article on the town’s website. According to Fox Business News, city officials were told only that the project involved a U.S. company and one of the five major tech firms. In a community of just over 16,000 people, opposition has gained significant traction. A Facebook group called “Save Our City. Stop the Menomonie Data Center” now counts more than 8,000 members. With no clear tenant identified and only

Read More »

Who wins/loses with the Intel-Nvidia union?

In announcing the deal, Jensen Huang emphasized its client aspect, saying future Intel chips would have Nvidia GPUs baked into them instead of Intel’s own GPU technology. But there will be an impact on the server business as well. There are two things the analysts all agree on. The first is that AMD is the big loser in this deal. It had the advantage of a combined CPU and GPU offering that Intel and Nvidia didn’t have individually. That was apparent in supercomputers like Frontier and El Capitan, which are all-AMD designs of CPUs and GPUs working in tandem. Now the two companies are joined at the hip and will have a competitive offering in due time. The second area of agreement is that the future of Jaguar Shores, Intel’s AI accelerator based on its GPU technology, and of the Gaudi AI accelerator is uncertain. “Nvidia already has solutions here and it doesn’t make sense for Intel to work on a redundant product that needs to be marketed over an established one,” said Nguyen. A significant consequence of this deal is that Intel is adopting Nvidia’s proprietary NVLink high-speed interconnect protocol. “This means that Intel has essentially determined its ability to compete head-to-head with Nvidia in the current large scale AI marketplace, despite its best efforts, have mostly failed,” wrote Jack Gold of J. Gold Associates in a research note. Gold notes that Nvidia already uses a few Xeon data center chips to power its largest systems, and the x86 chips provide most of the controls and pre-processing that its large-scale GPU racks require. By accelerating the performance of the Xeon, the GPU benefits as well. That leaves a question mark hanging over Nvidia’s Arm CPUs, which are likely to continue for “niche areas,” Gold wrote. “But with this announcement, it now

Read More »

Executive Roundtable: The Integration Imperative

Mukul Girotra, Ecolab: The AI infrastructure revolution is forcing a complete rethinking of how thermal, water, and power systems interact. It’s breaking down decades of siloed engineering approaches that are now proving inadequate given the increased rack demands. Traditionally, data centers were designed with separate teams managing power, cooling, and IT equipment. AI scale requires these systems to operate holistically, with real-time coordination between power management, thermal control, and workload orchestration. Here’s how Ecolab is addressing integration:

We extend our digitally enabled approach from site to chip, spanning cooling water, direct-to-chip systems, and adiabatic units, driving cleanliness, performance, and optimized water and energy use across all layers of cooling infrastructure.

Through collaborations like the one with Digital Realty, our AI-driven water conservation solution is expected to drive up to 15% water savings, significantly reducing demand on local water systems.

Leveraging the ECOLAB3D™ platform, we provide proactive analytics and real-time data to optimize water and power use at the asset, site and enterprise levels, creating real operational efficiency and turning cooling management into a strategic advantage.

We provide thermal, hydro and chemistry expertise that considers power constraints, IT equipment requirements, and day-to-day facility operational realities. This approach prevents the sub-optimization that can occur when these systems are designed in isolation.

Crucially, we view cooling through the lens of the water-energy nexus: choices at the rack or chiller level affect both Power Usage Effectiveness (PUE) and Water Usage Effectiveness (WUE) of a data center, so our recommendations balance energy, water, and lifecycle considerations to deliver reliable performance and operational efficiency.

The companies that will succeed in AI infrastructure deployment are those that abandon legacy siloed approaches and embrace integrated thermal management as a core competitive capability.
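Because the point turns on how rack- and chiller-level choices move PUE and WUE together, a minimal sketch of the two metrics may help. It uses the standard Green Grid definitions; the facility totals are made-up numbers for illustration, not Ecolab or Digital Realty data.

```python
# Minimal sketch of the two efficiency metrics referenced above, using the
# standard Green Grid definitions. All facility totals are hypothetical.

def pue(total_facility_energy_kwh: float, it_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_energy_kwh / it_energy_kwh

def wue(site_water_liters: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: site water use (liters) / IT equipment energy (kWh)."""
    return site_water_liters / it_energy_kwh

# Hypothetical annual totals for one site
it_energy = 50_000_000        # kWh delivered to IT equipment
facility_energy = 65_000_000  # kWh consumed by the whole facility
water_use = 90_000_000        # liters of water consumed on site

print(f"PUE: {pue(facility_energy, it_energy):.2f}")        # 1.30
print(f"WUE: {wue(water_use, it_energy):.2f} L/kWh")        # 1.80
```

A change that cuts water use but adds fan or chiller energy would improve WUE while worsening PUE, which is the trade-off the water-energy nexus framing is meant to surface.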

Read More »

Executive Roundtable: CapEx vs. OpEx in the AI Era – Balancing the Rush to Build with Long-Term Efficiency

Becky Wacker, Trane: Focusing on post-initial construction CapEx expenditures, finding a balance between capital expenditure (CapEx) and operational expenditure (OpEx) is crucial for efficient capital deployment for data center operators. This balance can be influenced by ownership strategy, cash position, budget planning duration, sustainability goals, and contract commitments and durations with end users. At Trane, we focus on understanding these key characteristics of operations and tailor our ongoing support to best meet the unique business objectives and needs of our customers. We address these challenges through three major approaches:

1. Smart Services Solutions: Our smart services solutions improve system efficiency through AI-driven tools and a large fleet of truck-based service providers. By keeping system components operating at peak efficiency, preventing unanticipated failures, and balancing the critical needs of both digital monitoring and well-trained technicians, we maintain critical systems. This approach reduces OpEx through efficient operation and minimizes unplanned CapEx expenditures. Consequently, this enables improved budgeting and the ability to invest in additional data centers or other business ventures.

2. Sustainable and Flexible System Design: As a global climate innovator, Trane designs our products and collaborates with engineers and owners to integrate these products into highly efficient system solutions. We apply this approach not only in the initial design of the data center but also in planning for future flexibility as demand increases or components require replacement. This proactive strategy reduces ongoing utility bills, minimizes CapEx for upgrades, and helps meet sustainability goals. By focusing on both immediate and long-term efficiency, Trane ensures that data center operators can maintain optimal performance while adhering to environmental standards.

3. Flexible Financial Solutions: Trane’s Energy Services solutions have a 25+ year history of providing Energy Performance Contracting solutions. These can be leveraged to provide upgrades and energy optimization to cooling, power, water, and

Read More »

OpenAI and Oracle’s $300B Stargate Deal: Building AI’s National-Scale Infrastructure

Oracle’s ‘Astonishing’ Quarter Stuns Wall Street, Targeting Cloud Growth and Global Data Center Expansion

Oracle’s FY Q1 2026 earnings report on September 9, along with its massive cloud backlog, stunned Wall Street. The market reacted positively to the huge growth in infrastructure revenue and remaining performance obligations (RPO), a measure of future revenue from customer contracts that points to significant growth potential and Oracle’s increasing role in AI technology, even as earnings and revenue missed estimates. After the earnings announcement, Oracle stock soared more than 36%, marking its biggest daily gain since December 1992 and adding more than $250 billion in market value to the company. The stock surge came even though the software giant reported lower-than-expected earnings and revenue. Executives reported the company’s RPO jumped about 360% in the quarter to $455 billion, signaling strong demand for its cloud services and infrastructure. As a result, Oracle CEO Safra Catz projects that its GPU-heavy Oracle Cloud Infrastructure (OCI) business will grow 77% to $18 billion in its current fiscal year (2026) and soar to $144 billion in 2030. The earnings announcement also briefly made Oracle’s co-founder, chairman and CTO Larry Ellison the richest person in the world, with shares of Oracle surging as much as 43%. By the end of the trading day, his wealth had increased nearly $90 billion to $383 billion, just shy of Tesla CEO Elon Musk’s $384 billion fortune. Also on the earnings call, Ellison announced that in October, at the Oracle AI World event, the company will introduce the Oracle AI Database OCI, which lets customers use the Large Language Model (LLM) of their choice, including Google’s Gemini, OpenAI’s ChatGPT and xAI’s Grok, directly on top of the Oracle Database to easily access and analyze all existing database data.

Capital Expenditure Strategy

These astonishing numbers are due
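For context on the trajectory Catz describes, the quick check below computes the compound annual growth rate implied by going from $18 billion in fiscal 2026 to $144 billion in 2030; treating that as a four-year span is an assumption made for illustration.

```python
# Quick check of the growth implied by the OCI projections cited above:
# $18 billion in fiscal 2026 rising to $144 billion in fiscal 2030.

start_revenue = 18.0     # $ billion, fiscal 2026 projection
end_revenue = 144.0      # $ billion, fiscal 2030 projection
years = 2030 - 2026      # assumed four-year gap between the two figures

cagr = (end_revenue / start_revenue) ** (1 / years) - 1
print(f"Implied OCI compound annual growth rate: {cagr:.0%}")  # ~68% per year
```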

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet as a non-tech company it has been a regular at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.) John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »