
OpenAI’s strategic gambit: The Agents SDK and why it changes everything for enterprise AI




OpenAI reshaped the enterprise AI landscape Tuesday with the release of its comprehensive agent-building platform – a package combining a revamped Responses API, powerful built-in tools and an open-source Agents SDK.

While this announcement might have been overshadowed by other AI headlines — Google’s unveiling of the impressive open-source Gemma 3 model, and the emergence of Manus, a Chinese startup whose autonomous agent platform astonished observers — it is a significant move that enterprises should take note of: It consolidates a previously fragmented, complex API ecosystem into a unified, production-ready framework.

For enterprise AI teams, the implications are potentially profound: Projects that previously demanded multiple frameworks, specialized vector databases and complex orchestration logic can now be achieved through a single, standardized platform. But perhaps most revealing is OpenAI’s implicit acknowledgment that solving AI agent reliability issues requires outside expertise. This shift comes amid growing evidence that external developers are finding innovative solutions to agent reliability — something that the shocking Manus release also clearly demonstrated.

This strategic concession represents a critical turning point: OpenAI recognizes that even with its vast resources, the path to truly reliable agents requires opening up to outside developers who can discover innovative solutions and workarounds that OpenAI’s internal teams might miss.

A unified approach to agent development

At its core, the announcement represents OpenAI’s comprehensive strategy to provide a complete, production-ready stack for building AI agents. The release brings several key capabilities into a unified framework (a minimal code sketch follows the list):

  1. The Responses API builds on the Chat Completions API but adds seamless integration for tool use, with improved interface design for creating agents;
  2. Built-in tools include web search, file search and computer use (the technology behind OpenAI’s Operator feature);
  3. An open-source Agents SDK for orchestrating single-agent and multi-agent workflows with handoffs.
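
For teams evaluating the new surface, here is a minimal sketch of how the revamped Responses API and a built-in tool fit together. It assumes the official openai Python package and the `web_search_preview` tool type OpenAI documented at launch; treat exact parameter names as subject to change.

```python
# pip install openai  -- assumes OPENAI_API_KEY is set in the environment
from openai import OpenAI

client = OpenAI()

# One Responses API call; the model decides whether to invoke the
# built-in web search tool while composing its answer.
response = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],
    input="Summarize this week's enterprise AI platform announcements.",
)

# output_text aggregates the model's text output; web sources appear as
# citation annotations on the individual output items.
print(response.output_text)
```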

What makes this announcement transformative is how it addresses the fragmentation that has plagued enterprise AI development. Companies that decide to standardize on OpenAI’s API format and open SDK will no longer need to cobble together different frameworks, manage complex prompt engineering or struggle with unreliable agents.

“The word ‘reliable’ is so key,” Sam Witteveen, co-founder of Red Dragon, an independent developer of AI agents, said in a recent conversation with me on a video podcast deep dive on the release. “We’ve talked about it many times…most agents are just not reliable. And so OpenAI is looking at like, ‘Okay, how do we bring this sort of reliability in?’”

After the announcement, Jeff Weinstein, product lead at payments company Stripe, took to X to say that Stripe had already demonstrated a practical application of OpenAI’s new Agents SDK by releasing a toolkit that lets developers integrate Stripe’s financial services into agentic workflows. The integration enables AI agents that automate payments to contractors, checking files to determine who still needs to be paid, as well as handling billing and other transactions.
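
Stripe’s toolkit wraps its own payments APIs, but the underlying pattern is the Agents SDK’s function tools. The sketch below uses a hypothetical `pay_contractor` helper, not Stripe’s actual toolkit interface, to show roughly how a payment action can be exposed to an agent.

```python
# pip install openai-agents
from agents import Agent, Runner, function_tool


@function_tool
def pay_contractor(contractor_id: str, amount_usd: float) -> str:
    """Hypothetical stand-in: a real integration would call Stripe's API
    or Stripe's agent toolkit here."""
    return f"Queued payment of ${amount_usd:.2f} to contractor {contractor_id}"


billing_agent = Agent(
    name="Billing agent",
    instructions="Check which contractors are owed money and pay them.",
    tools=[pay_contractor],
)

result = Runner.run_sync(billing_agent, "Pay contractor c_123 the $500 they are owed.")
print(result.final_output)
```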

Strategic implications for OpenAI and the market

This release reveals a significant shift in OpenAI’s strategy. Having established a lead with foundation models, the company is now consolidating its position in the agent ecosystem through several calculated moves:

1. Opening up to external innovation

OpenAI acknowledges that even its extensive resources aren’t enough to outpace community innovation. The launch of tools and an open-source SDK suggests a major strategic concession.

The timing of the release coincided with the emergence of Manus, which impressed the AI community with a very capable autonomous agent platform built on existing models such as Anthropic’s Claude and Alibaba’s Qwen, essentially showing that clever integration and prompt engineering could achieve reliability that even the major AI labs were struggling to reach.

“Maybe even OpenAI are not the best at making Operator,” Witteveen noted, referring to the web-browsing tool that OpenAI shipped in late January, but which we found had bugs and was inferior to competitor Proxy. “Maybe the Chinese startup has some nice hacks in their prompt, or in whatever, that they’re able to use these sort of open-source tools.”

The lesson is clear: OpenAI needs the community’s innovation to improve reliability. No team, however strong, whether at OpenAI, Anthropic or Google, can try out as many things as the open-source community can.

2. Securing the enterprise market through API standardization

OpenAI’s API format has emerged as the de facto standard for large language model (LLM) interfaces, supported by multiple vendors including Google’s Gemini and Meta’s Llama. Changes to OpenAI’s API therefore carry weight well beyond its own platform: many third-party players are likely to fall in line and support them as well.

By controlling the API standard while making it more extensible, OpenAI looks set to create a powerful network effect. Enterprise customers can adopt the Agents SDK knowing it works with multiple models, but OpenAI maintains its position at the center of the ecosystem.

3. Consolidating the RAG pipeline

The file search tool challenges vector database companies like Pinecone, Chroma, Weaviate and others. OpenAI now offers a complete retrieval-augmented generation (RAG) tool out of the box. The question now is what happens to the long list of RAG and agent orchestration vendors that raised large rounds to chase the enterprise AI opportunity, if much of that functionality is available through a single standard provider like OpenAI.

In other words, enterprises may consider consolidating multiple vendor relationships into a single API provider, OpenAI. Companies can upload whatever documents they want to use with OpenAI’s leading foundation models and search them all within the API. While enterprises may encounter limitations compared to dedicated RAG databases like Pinecone, OpenAI’s built-in file and web search tools offer clear citations and URLs — which is critical for enterprises prioritizing transparency and accuracy.

This citation capability is key for enterprise environments where transparency and verification are essential – allowing users to trace exactly where information comes from and validate its accuracy against the original documents.
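
As a rough sketch of what that consolidation looks like in code, the flow below uploads a document into an OpenAI-hosted vector store and then queries it through the built-in file search tool. The calls follow OpenAI’s launch documentation for the Responses API; the file name is a placeholder, and in practice you would wait for indexing to finish before querying.

```python
from openai import OpenAI

client = OpenAI()

# 1. Upload a document and attach it to a hosted vector store.
uploaded = client.files.create(file=open("vendor_agreement.pdf", "rb"), purpose="assistants")
vector_store = client.vector_stores.create(name="contracts")
client.vector_stores.files.create(vector_store_id=vector_store.id, file_id=uploaded.id)

# 2. Ask a question grounded in the uploaded document via the file_search tool.
#    (In a real workflow, poll until the file has finished indexing first.)
response = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "file_search", "vector_store_ids": [vector_store.id]}],
    input="What are the payment terms in the vendor agreement?",
)

# The answer carries file citations as annotations on its output items,
# which is what makes the retrieval auditable.
print(response.output_text)
```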

The enterprise decision-making calculus

For enterprise decision-makers, this announcement offers opportunities to streamline AI agent development but also requires careful assessment of potential vendor lock-in and integration with existing systems.

1. The reliability imperative

Enterprise adoption of AI agents has been slowed by reliability concerns. OpenAI’s computer use tool, for example, achieves 87% on the WebVoyager benchmark for browser-based tasks but only 38.1% on OSWorld for operating system tasks.

Even OpenAI acknowledges this limitation in its announcement, saying that human oversight is recommended. However, by providing the tools and observability features to track and debug agent performance, enterprises can now more confidently deploy agents with appropriate guardrails.

2. The lock-in question

While adopting OpenAI’s agent ecosystem offers immediate advantages, it raises concerns about vendor lock-in. As Ashpreet Bedi, founder of AgnoAGI, pointed out after the announcement: “The Responses API is intentionally designed to prevent developers from switching providers by changing the base_url.”

However, OpenAI has made a significant concession by allowing its Agents SDK to work with models from other providers. The SDK supports outside models, provided they offer a Chat Completions-style API endpoint. This multi-model approach provides enterprises with some flexibility while still keeping OpenAI at the center.
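
In practice, that looks roughly like pointing an OpenAI-compatible client at another provider’s Chat Completions endpoint and handing it to an agent. The base URL, API key and model name below are placeholders, and the `OpenAIChatCompletionsModel` wrapper reflects the Agents SDK’s documented multi-provider pattern rather than a specific vendor recommendation.

```python
from openai import AsyncOpenAI
from agents import Agent, OpenAIChatCompletionsModel, Runner

# Any provider exposing a Chat Completions-style endpoint can be plugged in;
# the URL, key and model name here are placeholders.
external_client = AsyncOpenAI(
    base_url="https://api.example-provider.com/v1",
    api_key="EXTERNAL_PROVIDER_KEY",
)

agent = Agent(
    name="Research agent",
    instructions="Answer questions concisely.",
    model=OpenAIChatCompletionsModel(model="provider-model-name", openai_client=external_client),
)

result = Runner.run_sync(agent, "What changed in the latest release?")
print(result.final_output)
```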

3. The competitive advantage of the full stack

The comprehensive nature of the release — from tools to API to SDK — creates a compelling advantage for OpenAI compared to competitors like Anthropic or Google, which have taken more piecemeal approaches to agent development.

This is where Google, in particular, has dropped the ball. It has tried multiple approaches within its current cloud offerings but hasn’t reached the point where a user can simply upload PDFs and use Google Gemini for RAG.

Impact on the agent ecosystem

This announcement significantly reshapes the landscape for companies building in the agent space. Players like LangChain and CrewAI, which have built frameworks for agent development, now face direct competition from OpenAI’s Agents SDK. Unlike OpenAI, these companies don’t have a huge, growing foundation LLM business to support their frameworks. This dynamic could accelerate consolidation in the agent framework space, with developers facing strong incentives to gravitate toward OpenAI’s production-ready solution.

Meanwhile, OpenAI monetizes developer usage, charging $0.03 per call for GPT-4o and $0.025 per call for GPT-4o-mini for web searches, with prices rising to $0.05 per call for high-context searches — making it competitively priced.

By providing built-in orchestration through the Agents SDK, OpenAI enters direct competition with platforms focused on agent coordination. The SDK’s support for multi-agent workflows with handoffs, guardrails and tracing creates a complete solution for enterprise needs.
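
The handoff pattern at the center of that orchestration is simple to express. Here is a minimal sketch, with guardrail wiring omitted: a triage agent routes each request to a specialist, and the SDK’s built-in tracing records the run for later debugging.

```python
from agents import Agent, Runner

billing_agent = Agent(
    name="Billing agent",
    instructions="Handle billing and invoice questions.",
)
support_agent = Agent(
    name="Support agent",
    instructions="Handle product support questions.",
)

# The triage agent can hand the conversation off to either specialist;
# each run is traced automatically for inspection in OpenAI's dashboard.
triage_agent = Agent(
    name="Triage agent",
    instructions="Route each request to the right specialist agent.",
    handoffs=[billing_agent, support_agent],
)

result = Runner.run_sync(triage_agent, "My last invoice seems to double-count usage.")
print(result.final_output)
```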

Is production readiness just around the corner?

It’s too early to tell how well the new solutions work. Developers are only now starting to use the Agents SDK in production. Despite the comprehensive nature of the release, questions remain because OpenAI’s previous attempts at agent frameworks, like the experimental Swarm and the Assistants API, didn’t fully meet enterprise needs.

For the open-source offering, it’s also unclear whether OpenAI will accept pull requests and code submitted by outside contributors.

The planned deprecation of the Assistants API by mid-2026, however, signals OpenAI’s confidence in the new approach. Unlike the Assistants API, which never gained wide adoption, the new Responses API and Agents SDK appear more thoughtfully designed around developer feedback.

A true strategic pivot

While OpenAI has long been at the forefront of foundation model development, this announcement represents a strategic pivot: the company could become the central platform for agent development and deployment.

By providing a complete stack from tools to orchestration, OpenAI is positioning itself to capture the enterprise value created atop its models. At the same time, the open-source approach with Agents SDK acknowledges that even OpenAI cannot innovate quickly enough in isolation.

For enterprise decision-makers, the message is clear: OpenAI is going all-in on agents as the next frontier of AI development. Whether building custom agents in-house or working with partners, enterprises now have a more cohesive, production-ready path forward — albeit one that places OpenAI at the center of their AI strategy.

The AI wars have entered a new phase. What began as a race to build the most powerful foundation models has evolved into a battle for who will control the agent ecosystem — and with this comprehensive release, OpenAI has just made its most decisive move yet to have all roads to enterprise AI agents run through its platform.

Check out this video for a deeper dive conversation between me and developer Sam Witteveen about what the OpenAI release means for the enterprise:
