Microsoft announces over 50 AI tools to build the ‘agentic web’ at Build 2025

Microsoft launched a comprehensive strategy to position itself at the center of what it calls the “open agentic web” at its annual Build conference this morning, introducing dozens of AI tools and platforms designed to help developers create autonomous systems that can make decisions and complete tasks with limited human intervention.

The Redmond, Wash.-based technology giant introduced more than 50 announcements spanning its entire product portfolio, from GitHub and Azure to Windows and Microsoft 365, all focused on advancing AI agent technologies that can work independently or collaboratively to solve complex business problems.

“We’ve entered the era of AI agents,” said Frank Shaw, Microsoft’s Chief Communications Officer, in a blog post coinciding with the Build announcements. “Thanks to groundbreaking advancements in reasoning and memory, AI models are now more capable and efficient, and we’re seeing how AI systems can help us all solve problems in new ways.”

How AI agents transform software development through autonomous capabilities

The concept of the “agentic web” moves far beyond today’s AI assistants. While current AI tools mainly respond to human questions and commands, agents actively initiate tasks, make decisions independently, coordinate with other AI systems, and complete complex workflows with minimal human supervision. This marks a fundamental shift in how AI systems operate and interact with both users and other technologies.

Kevin Scott, Microsoft’s CTO, described this shift during a press conference as fundamentally changing how humans interact with technology: “Reasoning will continue to improve. We’re going to see great progress there. But there are a handful of new things that have to start happening pretty quickly in order for agents to be the recipients of more complicated work.”

One critical missing element, according to Scott, is memory: “One of the things that is quite conspicuously missing right now in agents is memory.” To address this, Microsoft is introducing several memory-related technologies, including structured RAG (Retrieval-Augmented Generation), which helps AI systems more precisely recall information from large volumes of data.
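The retrieval idea behind structured RAG can be illustrated with a toy sketch: store chunks of information, pull back the most relevant ones for a query, and ground the model's answer in them. The scoring below is simple word overlap purely for illustration; production systems use vector embeddings and structured indexes, and nothing here reflects Microsoft's actual implementation.

```python
# Toy retrieval-augmented generation (RAG) sketch: retrieve the
# stored chunks most relevant to a query, then hand them to a model
# as grounding context. Word-overlap scoring is for illustration
# only; real systems use embeddings and structured indexes.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks sharing the most words with the query."""
    q = tokenize(query)
    scored = sorted(chunks, key=lambda c: len(q & tokenize(c)), reverse=True)
    return scored[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Ground the model's answer in the retrieved context."""
    context = "\n".join(f"- {c}" for c in retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

memory = [
    "The Q3 budget review meeting is scheduled for Friday.",
    "Parking passes are issued by facilities on request.",
    "Q3 revenue grew 12 percent over Q2.",
]
print(build_prompt("When is the Q3 budget review?", memory))
```

The same shape scales up: swap the overlap score for embedding similarity and the list for an indexed store, and the agent gains precise recall over large volumes of data.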

“You will likely have a personal agent and a work agent, and the work agent is going to have a whole bunch of your employer’s information that belongs to both you and your employer,” explained Steven Bathiche, CVP and technical fellow at Microsoft, during a presentation about agents.

Bathiche emphasized that this contextual awareness is crucial for creating agents that “understand you well, contextualize where you are and what you want to do, and ultimately understand you so that you can click fewer buttons at the end of the day.” This shift from purely reactive AI to systems with persistent memory represents one of the most profound aspects of the agentic revolution.

GitHub evolves from code completion to autonomous developer experience

Microsoft is placing GitHub, its popular developer platform, at the forefront of its agentic strategy with the introduction of the GitHub Copilot coding agent, which goes beyond suggesting code snippets to autonomously solving programming tasks.

The new GitHub Copilot coding agent can now operate as a member of software development teams, autonomously refactoring code, improving test coverage, fixing defects, and even implementing new features. For complex tasks, GitHub Copilot can collaborate with other agents across all stages of the software lifecycle.

Microsoft is also open-sourcing GitHub Copilot Chat in Visual Studio Code, allowing the developer community to contribute to its evolution. This reflects Microsoft’s dual approach of leading AI innovation while embracing open-source principles.

“Over the next few months, the AI-powered capabilities from the GitHub Copilot extensions will be part of the VS Code open-source repository, the same open-source repository that drives the most popular software development tool,” the company explained in its announcement, emphasizing its commitment to transparency and community-driven innovation.

Multi-agent systems enable complex business workflows and process automation

For businesses looking to deploy AI agents, Microsoft unveiled significant updates to its Azure AI Foundry, a platform for developing and managing AI applications and agents.

Ray Smith, VP of AI Agents at Microsoft, highlighted the importance of multi-agent systems in an exclusive interview with VentureBeat: “Multi-agent invocation, debugging and drilling down into those multiple agents is key, and that extends beyond just Copilot Studio to what’s coming with Azure AI Foundry agents. Our customers have consistently emphasized that this multi-agent capability is essential for their needs.”

Smith explained why splitting tasks across multiple agents is crucial: “It’s very hard to create a reliable process that you squeeze into one agent. Breaking it up into parts improves maintainability and makes building solutions easier, but it also significantly enhances reliability as well.”

The Azure AI Foundry Agent Service, now generally available, allows developers to build enterprise-grade AI agents with support for multi-agent workflows and open protocols like Agent2Agent (A2A) and Model Context Protocol (MCP). This enables organizations to orchestrate multiple specialized agents to handle complex tasks.
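The decomposition Smith describes — splitting one unwieldy process into narrow specialists coordinated by an orchestrator — can be sketched in a few lines. All names below are hypothetical; this is plain Python for illustration, not the Azure AI Foundry SDK.

```python
# Conceptual multi-agent decomposition: an orchestrator routes each
# step of a workflow to a narrow specialist agent instead of
# squeezing the whole process into one. Hypothetical names only;
# not the Azure AI Foundry SDK.

from typing import Callable

class Agent:
    def __init__(self, name: str, handle: Callable[[str], str]):
        self.name = name
        self.handle = handle

class Orchestrator:
    """Routes tasks to specialist agents and chains their outputs."""
    def __init__(self):
        self.agents: dict[str, Agent] = {}

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def run(self, pipeline: list[str], payload: str) -> str:
        for step in pipeline:
            payload = self.agents[step].handle(payload)
        return payload

orch = Orchestrator()
orch.register(Agent("extract", lambda t: t.split(":")[1].strip()))
orch.register(Agent("classify", lambda t: "refund" if "refund" in t else "other"))
orch.register(Agent("respond", lambda t: f"Routing ticket to the {t} queue"))

result = orch.run(["extract", "classify", "respond"],
                  "ticket: Customer requests a refund for order 1142")
print(result)  # Routing ticket to the refund queue
```

Because each agent owns one well-scoped step, a failure can be debugged in isolation — the maintainability and reliability gain Smith points to. Protocols like A2A and MCP play the role of the orchestrator's routing layer when the agents live on different platforms.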

Local AI capabilities expand as processing power shifts to client devices

While cloud-based AI has dominated headlines, Microsoft is making a significant push toward local, on-device AI with several announcements targeting developers who want to deploy AI directly on user devices.

Windows AI Foundry, an evolution of Windows Copilot Runtime, provides a unified platform for local AI development on Windows. It includes Windows ML, a built-in AI inferencing runtime, and tools for preparing and optimizing models for on-device deployment.

“Foundry Local will make it easy to run AI models, tools and agents directly on-device, whether Windows 11 or MacOS,” the company announced. “Leveraging ONNX Runtime, Foundry Local is designed for situations where users can save on internet data usage, prioritize privacy and reduce costs.”
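The on-device pattern Foundry Local targets is essentially local-first inference: prefer a model running on the user's machine (keeping data private and avoiding network costs), and fall back to a cloud endpoint only when no local model is available. The sketch below uses stubs in place of real runtimes; the function names are illustrative assumptions, not the Foundry Local API.

```python
# Local-first inference sketch: run on-device when a local model is
# available (privacy, no data egress, lower cost), otherwise fall
# back to a cloud endpoint. Stubs stand in for real runtimes; names
# are hypothetical, not the Foundry Local API.

def local_model_available() -> bool:
    # Real code would probe an on-device runtime such as an ONNX
    # Runtime session; stubbed here for illustration.
    return True

def run_local(prompt: str) -> str:
    return f"[local] summary of: {prompt}"

def run_cloud(prompt: str) -> str:
    return f"[cloud] summary of: {prompt}"

def infer(prompt: str) -> str:
    """Route to on-device inference when possible."""
    if local_model_available():
        return run_local(prompt)  # data never leaves the device
    return run_cloud(prompt)

print(infer("quarterly report draft"))
```

The fallback branch is what makes the hybrid cloud-edge story work: the same application code runs whether the model sits on a Windows 11 laptop, a Mac, or an Azure endpoint.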

Steven Bathiche explained during a presentation how client-side AI has advanced remarkably fast: “We’re super busy trying to essentially predict and stay ahead. Most of our predictions come true within three or four months, which is kind of crazy, because I’m used to predicting a year or two years out, and then feeling good about that timeline. Now it’s like we’re stressed all the time, but it’s all fun.”

Security and identity management address enterprise AI governance challenges

As agent usage proliferates across organizations, Microsoft is addressing the critical need for security, governance, and compliance with several new capabilities designed to prevent what it calls “agent sprawl.”

With Microsoft Entra Agent ID, now in preview, “agents that developers create in Microsoft Copilot Studio or Azure AI Foundry are automatically assigned unique identities in an Entra directory, helping enterprises securely manage agents right from the start and avoid ‘agent sprawl’ that could lead to blind spots,” according to the announcement.

Microsoft is also integrating its Purview data security and compliance controls with its AI platforms, allowing developers to build AI solutions with enterprise-grade security and compliance features. This includes Data Loss Prevention controls for Microsoft 365 Copilot agents and new capabilities for detecting sensitive data in AI interactions.
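A Data Loss Prevention check of the kind applied to AI interactions can be sketched as a pattern scan that blocks a message before it reaches a model. The patterns and names below are examples only, not Microsoft Purview's implementation or policy set.

```python
# Illustrative data-loss-prevention (DLP) check: scan text for
# sensitive patterns before it reaches an AI model. Example
# patterns only; not Microsoft Purview's implementation.

import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def scan(text: str) -> list[str]:
    """Return the labels of any sensitive patterns found."""
    return [label for label, pat in SENSITIVE_PATTERNS.items()
            if pat.search(text)]

def gate(text: str) -> str:
    """Block the message if anything sensitive is detected."""
    findings = scan(text)
    if findings:
        return f"BLOCKED: message contains {', '.join(findings)}"
    return text

print(gate("My SSN is 123-45-6789"))  # BLOCKED: message contains ssn
```

Enterprise DLP goes far beyond regexes — classifiers, labels, and policy engines — but the gate-before-the-model shape is the same.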

Ray Smith advised IT teams managing security: “Building solutions from the ground up gives you total flexibility, but then you have to add in a lot of the controls around these frameworks yourself. The beauty of Copilot Studio is we’re giving you a managed infrastructure framework with lifecycle management and many governance and observability capabilities built in.”

Scientific discovery platform demonstrates how AI agents transform R&D timelines

Perhaps one of the most ambitious applications of AI agents announced at Build is Microsoft Discovery, a platform designed to accelerate scientific research and development across industries from pharmaceuticals to materials science.

Jason Zander, the CVP of Advanced Communications & Technologies at Microsoft, described in an exclusive interview with VentureBeat how this platform was used to discover a non-PFAS immersion coolant for data centers in just 200 hours — a process that traditionally takes years.

“In our area, our data centers are huge for us because we’re a hyperscaler,” Zander said. “Using this framework, we were able to screen 367,000 potential candidates in just 200 hours. We then took this to a partner who helped synthesize the results.”

Zander elaborated on how this represents a dramatic acceleration of traditional R&D timelines: “The meta point is, all those things took, in some cases, years or even a decade to create. Now they’ve been banned due to regulatory constraints. And the real business question companies need to answer is: you need to replace these products because you have offerings that are now banned…and it took you years to create your existing products. How do you compress that development timeline going forward?”

Industry standards create ecosystem for interoperable agents across platforms

Central to Microsoft’s vision is the advancement of open standards that enable agent interoperability across different platforms and services, with the Model Context Protocol (MCP) playing a particularly important role.

The company announced that it has joined the MCP Steering Committee and introduced two new contributions to the MCP ecosystem: an updated authorization specification and a design for an MCP server registry service.

Jay Parikh, who leads Microsoft’s Core AI team, emphasized the importance of openness and interoperability: “Inside Microsoft, this is all about learning faster. Speed is essential because the world is changing so rapidly with new technologies, applications, and competitors emerging constantly.”

Microsoft also introduced NLWeb, a new open project that “can play a similar role to HTML for the agentic web,” allowing websites to provide conversational interfaces for users with the model of their choice and their own data.

Microsoft’s agent strategy positions it at center of next computing paradigm

The breadth and depth of Microsoft’s announcements at Build 2025 underscore the company’s all-in approach to AI agents as the next major computing paradigm.

“The last time that I was as excited about being a software developer or a technologist as I am now was in the 90s,” Kevin Scott said during the press conference. “One of the reasons why is I had this kid-in-a-candy-store feeling with building blocks that even someone like me could fully understand. I could grasp how each of these individual pieces worked and how they composed together, and I could just go play.”

Industry analysts note that Microsoft’s approach — combining cloud and edge AI, open standards with proprietary technologies, and developer tools with business applications — positions the company as a central player in the emerging agentic ecosystem.

For enterprise customers, the immediate impact may be most visible in increased automation of complex workflows, more intelligent responses to business events, and the ability to build custom agents that incorporate domain-specific knowledge and processes.

As we transition from a web of information to a web of agents, Microsoft’s strategy mirrors its earlier approach to cloud computing — providing comprehensive tools, platforms, and infrastructure while simultaneously advancing open standards.

The question now isn’t whether AI agents will transform business operations, but how quickly organizations can adapt to a world where machines don’t just respond to commands, but anticipate needs, make decisions, and fundamentally reshape how work gets done.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Nvidia introduces ‘ridesharing for AI’ with DGX Cloud Lepton

The platform is currently in early access but already CoreWeave, Crusoe, Firmus, Foxconn, GMI Cloud, Lambda, Nscale, SoftBank, and Yotta have agreed to make “tens of thousands of GPUs” available for customers. Developers can utilize GPU compute capacity in specific regions for both on-demand and long-term computing, supporting strategic and

Read More »

Nvidia opens NVLink to competitive processors

Until now, NVLink has been limited to Nvidia GPUs and CPUs, but with NVLink Fusion, non-Nvidia semi-custom accelerators will be able to use it. Nvidia says there will be two configurations for NVLink Fusion: for connecting custom CPUs to Nvidia GPUs and for connecting Nvidia’s Grace and future CPUs to

Read More »

How AI changes your multicloud network architecture

As enterprises find ever more use cases for generative AI (genAI) and agentic AI, their ability to achieve optimal business outcomes from these use cases will depend on the strength of their hybrid multicloud networks. Typically, these workloads demand higher-bandwidth, low-latency connectivity for centralized application delivery (LLM development), and AI

Read More »

Republican budget squeezes out of House committee, but deeper IRA cuts could come

Republican holdouts on the House Budget Committee allowed their party’s massive budget bill to advance Sunday night after negotiating deeper cuts to the Inflation Reduction Act, but they continue to call for even more cuts to clean energy incentives. The bill had been blocked from passing Friday night by Reps. Chip Roy, R-Texas; Ralph Norman, R-S.C.; Andrew Clyde, R-Ga.; and Josh Brecheen, R-Okla. Rep. Lloyd Smucker, R-Pa., also voted no; he said on X that he “fully support[s] the One Big Beautiful Bill” and his vote was “a procedural requirement to preserve the committee’s opportunity to reconsider the motion to advance OBBB.” Brecheen said in a Friday X post that he felt the House “cannot allow wind and solar tax credits, in current form, to continue in the ‘One Big Beautiful Bill.’ As it is currently written, Green New Scam subsidy phaseouts are delayed until 2029 — with some of these subsidies lasting until 2041!” After a weekend of negotiation, Republican lawmakers struck a deal for the bill to make deeper cuts to items including the IRA’s clean energy incentives, resulting in the legislation passing out of committee 17-16 late Sunday night. However, Roy said in a Sunday post on X that while the new bill “reduces the availability of future subsidies under the green new scam,” it “does not yet meet the moment — leaving almost half of the green new scam subsidies continuing.” The new shape of the bill is not fully known, but the legislation was advanced to the House Rules Committee, which will take it up early Wednesday morning. Breechen said after the Friday vote that he was grateful to President Trump “for leading the charge to end these Green New Scam giveaways” and thanked Speaker of the House Mike Johnson, R-La., Majority Leader Steve Scalise, R-La., and House Budget Committee

Read More »

Is your electric bill too high? Thank LNG exports.

Lt. Gen. Russel L. Honoré (Ret.) is a former commanding officer of the U.S. First Army. He is currently head of The Green Army, an organization dedicated to finding solutions to pollution. Just months after declaring a false “energy emergency,” the administration is moving to sell more American gas overseas, including to our competitors. It’s not only a disaster for the climate and our national security, but it will push American’s electricity bills through the roof. Energy prices are already skyrocketing. Electricity providers and their representatives are blaming regulators. Some elected officials, understandably under fire from their constituents, point the finger at greedy corporations. Meanwhile, apologists for fossil fuel companies are writing trendy think pieces putting the blame for high prices at the feet of green energy providers. The evidence for those claims are even thinner than the paper they’re printed on. There are plenty of factors at play that can explain rising energy costs. Some, like the huge demands placed on the grid by power thirsty data centers, crypto mining operations and AI are already widely known. But the role of LNG exports is not receiving nearly enough public scrutiny, especially since gas prices all but set electricity prices. While it’s been billed as “clean, American energy,” or “liquefied natural gas,” the product we’re talking about is a fossil fuel. It’s mostly methane, the greenhouse gas that traps 80 times more heat in the atmosphere than does carbon dioxide. Its liquefied form, which is pumped into massive supertanker ships and sold overseas, comes at an enormous cost, requiring massive outlays of energy to chill the fuel into a liquid form. It also harms our climate along every step of its journey as it leaks into the atmosphere from the well head, through the pipeline, to liquefaction, shipping and eventually

Read More »

Aberdeen’s Centurion makes 21st acquisition of Manchester-based Aerial

Dyce-headquartered Centurion Group has acquired Manchester-based Aerial Platforms Ltd (APL), a rental provider of powered access lifting equipment for working-at-height. Headquartered in Leigh, with additional hubs in Newcastle and Carlisle, APL is an owner managed business that delivers safety-driven scissor lifts, boom lifts and telehandlers across the construction, infrastructure and retail sectors. Through its new purchase, Centurion has gained three new operating locations in England and access to major industrial customers in the lifting equipment space. The company said that APL will benefit from its financial scale and additional backing to invest in new rental assets, infrastructure and capabilities – supporting accelerated growth across the UK. Centurion Group revealed this year that has prepared a $100-million (£81m) acquisition fund to help drive the next stage of its growth. Prior to its purchase of APL, Centurion had bought 20 businesses around the world since it was formed in 2017. The company has previously said that acquisitions are central to its strategy to grow its market share in renewables, minerals, infrastructure, environmental, defence and government. Former chief financial officer Euan Leask took over the company starting in February after Houston-based CEO Fernando Assing retired. Leask said APL “has a strong reputation across the UK in the powered access market, making it a great addition to our UK & Europe operations. “The acquisition not only expands our footprint across England, but also strengthens our position in the construction, infrastructure and retail markets, and grows the range of lifting services we can offer our customers.” Backed by private equity firm SCF Partners, Centurion has grown organically and by acquisition from around $200m in revenue when it was formed by the merger of SCF and ATR to approximately $500m at the end of 2024. Centurion added that it has an active pipeline of acquisition opportunities

Read More »

Electric utilities must disclose PJM votes under new Maryland law

Dive Brief: Maryland public electric utilities must disclose how they vote at PJM Interconnection stakeholder meetings under a law signed May 13 by Democratic Gov. Wes Moore. On or before February 1 each year, electric companies covered by the bill, or their local affiliates, must file a comprehensive report with the Maryland Public Service Commission that include both public and nonpublic votes on matters before PJM. The first filing deadline comes next year. Similar bills are under consideration in Delaware, Pennsylvania and Illinois, all of which are partially or wholly within PJM territory. PJM is responsible for North America’s largest transmission grid. Dive Insight: Maryland HB 121 passed both houses of the state legislature with overwhelming bipartisan support.  A similar transparency requirement appeared last year in broader utility legislation that would have required all Maryland utilities to seek PJM membership and refrain from ratebasing lobbying expenses.  The 2024 bill did not make it to Moore’s desk, but the prohibition on lobbying-expense ratebasing appeared in a comprehensive package this year that also requires separate rate structures for large-load customers, imposes new restrictions on gas infrastructure investments and authorizes solicitations for nearly 5 GW of dispatchable generation and energy storage resources. The 2025 bill now awaits the governor’s signature. HB 121 is a “common-sense bill” that prevents utilities from operating “in the dark” while “Maryland families struggle with soaring electric bills,” sponsor Lorig Charkoudian, a Democratic state lawmaker who represents parts of Maryland’s Washington, D.C. suburbs, said in a May 14 op-ed in the Baltimore Sun. “Until now, there was no requirement that utilities tell the public — or even state regulators — how they’re voting on transmission policies or market rules that can add hundreds of millions of dollars to our monthly bills,” Charkoudian said. 
A PJM representative indicated the new

Read More »

South Bow Sees Sequential Rise in Earnings

South Bow Corp. has reported rising figures for the first quarter of 2025, including a record throughput. The company said in its quarterly report that its net income was $88 million, up from $55 million for the previous quarter, but well below the $112 million reported for the corresponding quarter a year prior. The company recorded normalized earnings before interest, taxes, depreciation and amortization (EBITDA) of $266 million. A decline in demand for uncommitted capacity on South Bow’s pipeline systems led to an 8 percent reduction in normalized EBITDA compared to the fourth quarter of 2024. South Bow said its throughput in the first quarter reached 613,000 barrels per day (bbl/d) on the Keystone pipeline, with a System Operating Factor (SOF) of 98 percent, and approximately 726,000 bbl/d on the U.S. Gulf Coast segment of the Keystone pipeline system. In April, the company activated emergency response protocols due to an oil release at MP-171 of the Keystone pipeline near Fort Ransom, North Dakota. The company was able to restart the pipeline within days. South Bow’s Q1 revenue was $498 million, $10 million above that of the fourth quarter of 2024. However, the figure was below the $544 million reported for the corresponding quarter a year prior. In the Western Canadian Sedimentary Basin, pipeline capacity for crude oil remains greater than the supply, South Bow said. Looking ahead, the uncommitted capacity demand on South Bow’s Keystone Pipeline is anticipated to stay low in the short term. Moreover, swiftly evolving global trade policies and tariffs have created economic and geopolitical instability, resulting in considerable fluctuations in commodity prices and pricing differentials, the company said. To contact the author, email [email protected] WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments

Read More »

North America Adds Rigs for First Time in Months

North America added five rigs week on week, according to Baker Hughes’ latest North America rotary rig count, which was released on May 16. Although the U.S. dropped a total of two rigs week on week, Canada added a total of seven rigs during the same period, taking the total North America rig count up to 697, comprising 576 rigs from the U.S. and 121 from Canada, the count outlined. Of the total U.S. rig count of 576, 563 rigs are categorized as land rigs, 11 are categorized as offshore rigs, and two are categorized as inland water rigs. The total U.S. rig count is made up of 473 oil rigs, 100 gas rigs, and three miscellaneous rigs, according to Baker Hughes’ count, which revealed that the U.S. total comprises 520 horizontal rigs, 41 directional rigs, and 15 vertical rigs. Week on week, the U.S. land and inland water rig counts each dropped by one, and the country’s offshore rig count remained unchanged, the count highlighted. The U.S. oil and gas rig counts each decreased by one week on week, and its miscellaneous rig count remained unchanged, the count showed. Baker Hughes revealed that the U.S. horizontal rig count dropped by two week on week, while its directional and vertical rig counts remained unchanged during the period. A major state variances subcategory included in the rig count showed that, week on week, New Mexico and Texas each dropped two rigs, and Wyoming and Ohio each added one rig. A major basin variances subcategory included in Baker Hughes’ rig count showed that, week on week, the Permian basin dropped three rigs and the Utica basin added one rig. Canada’s total rig count of 121 is made up of 74 oil rigs and 47 gas rigs, Baker Hughes pointed out. The country’s

Read More »

Tariff uncertainty weighs on networking vendors

“Our guide assumes current tariffs and exemptions remain in place through the quarter. These include the following: China at 30%, partially offset by an exemption for semiconductors and certain electronic components; Mexico and Canada at 25% for the components and products that are not eligible for the current exemptions,” Cisco CFO Scott Herron told Wall Street analysts in the company’s quarterly earnings report on May 14. At this time, Cisco expects little impact from tariffs on steel and aluminum and retaliatory tariffs, Herron said. “We’ll continue to leverage our world-class supply chain team to help mitigate the impact,” he said, adding that “the flexibility and agility we have built into our operations over the last few years, the size and scale of our supply chain, provides us some unique advantages as we support our customers globally.” “Once the tariff scenario stabilizes, there [are] steps that we can take to mitigate it, as you’ve seen us do with China from the first Trump administration. And only after that would we consider price [increases],” Herron said. Similarly, Extreme Networks noted the changing tariff conditions during its earnings call on April 30. “The tariff situation is very dynamic, I think, as everybody knows and can appreciate, and it’s kind of hard to call. Yes, there was concern initially given the magnitude of tariffs,” said Extreme Networks CEO Ed Meyercord on the earnings call. “The larger question is, will all of the changes globally in trade and tariff policy have an impact on demand? And that’s hard to call at this point. And we’re going to hold as far as providing guidance or judgment on that until we have finality come July.” Financial news Meanwhile, AI is fueling high expectations and influencing investments in enterprise campus and data center environments.

Read More »

Liquid cooling becoming essential as AI servers proliferate

“Facility water loops sometimes have good water quality, sometimes bad,” says My Troung, CTO at ZutaCore, a liquid cooling company. “Sometimes you have organics you don’t want to have inside the technical loop.” So there’s one set of pipes that goes around the data center, collecting the heat from the server racks, and another set of smaller pipes that lives inside individual racks or servers. “That inner loop is some sort of technical fluid, and the two loops exchange heat across a heat exchanger,” says Troung. The most common approach today, he says, is to use a single-phase liquid — one that stays in liquid form and never evaporates into a gas — such as water or propylene glycol. But it’s not the most efficient option. Evaporation is a great way to dissipate heat. That’s what our bodies do when we sweat. When water goes from a liquid to a gas it’s called a phase change, and it uses up energy and makes everything around it slightly cooler. Of course, few servers run hot enough to boil water — but they can boil other liquids. “Two phase is the most efficient cooling technology,” says Xianming (Simon) Dai, a professor at University of Texas at Dallas. And it might be here sooner than you think. In a keynote address in March at Nvidia GTC, Nvidia CEO Jensen Huang unveiled the Rubin Ultra NVL576, due in the second half of 2027 — with 600 kilowatts per rack. “With the 600 kilowatt racks that Nvidia is announcing, the industry will have to shift very soon from single-phase approaches to two-phase,” says ZutaCore’s Troung. Another highly-efficient cooling approach is immersion cooling. According to a Castrol survey released in March, 90% of 600 data center industry leaders say that they are considering switching to immersion

Read More »

Cisco taps OpenAI’s Codex for AI-driven network coding

“If you want to ask Codex a question about your codebase, click “Ask”. Each task is processed independently in a separate, isolated environment preloaded with your codebase. Codex can read and edit files, as well as run commands including test harnesses, linters, and type checkers. Task completion typically takes between 1 and 30 minutes, depending on complexity, and you can monitor Codex’s progress in real time,” according to OpenAI. “Once Codex completes a task, it commits its changes in its environment. Codex provides verifiable evidence of its actions through citations of terminal logs and test outputs, allowing you to trace each step taken during task completion,” OpenAI wrote. “You can then review the results, request further revisions, open a GitHub pull request, or directly integrate the changes into your local environment. In the product, you can configure the Codex environment to match your real development environment as closely as possible.” OpenAI is releasing Codex as a research preview: “We prioritized security and transparency when designing Codex so users can verify its outputs – a safeguard that grows increasingly more important as AI models handle more complex coding tasks independently and safety considerations evolve. Users can check Codex’s work through citations, terminal logs and test results,” OpenAI wrote.  Internally, technical teams at OpenAI have started using Codex. “It is most often used by OpenAI engineers to offload repetitive, well-scoped tasks, like refactoring, renaming, and writing tests, that would otherwise break focus. It’s equally useful for scaffolding new features, wiring components, fixing bugs, and drafting documentation,” OpenAI stated. Cisco’s view of agentic AI Patel stated that Codex is part of the developing AI agent world, where Cisco envisions billions of AI agents will work together to transform and redefine the architectural assumptions the industry has relied on. Agents will communicate within and

Read More »

US companies are helping Saudi Arabia to build an AI powerhouse

AMD announced a five-year, $10 billion collaboration with Humain to deploy up to 500 megawatts of AI compute in Saudi Arabia and the US, aiming to deploy “multi-exaflop capacity by early 2026.” AWS, too, is expanding its data centers in Saudi Arabia to bolster Humain’s cloud infrastructure. Saudi Arabia has abundant oil and gas to power those data centers, and is growing its renewable energy resources with the goal of supplying 50% of the country’s power by 2030. “Commercial electricity rates, nearly 50% lower than in the US, offer potential cost savings for AI model training, though high local hosting costs due to land, talent, and infrastructure limit total savings,” said Eric Samuel, Associate Director at IDC. Located near Middle Eastern population centers and fiber optic cables to Asia, these data centers will offer enterprises low-latency cloud computing for real-time AI applications. Late is great There’s an advantage to being a relative latecomer to the technology industry, said Eric Samuel, associate director, research at IDC. “Saudi Arabia’s greenfield tech landscape offers a unique opportunity for rapid, ground-up AI integration, unburdened by legacy systems,” he said.

Read More »

AMD, Nvidia partner with Saudi startup to build multi-billion dollar AI service centers

Humain will deploy the Nvidia Omniverse platform as a multi-tenant system to accelerate the new era of physical AI and robotics through simulation, optimization and operation of physical environments by new human-AI-led solutions. The AMD announcement did not specify the number of chips involved, but the deal is valued at $10 billion. AMD and Humain plan to develop a comprehensive AI infrastructure through a network of AMD-based AI data centers that will extend from Saudi Arabia to the US and support a wide range of AI workloads across corporate, start-up, and government markets. Think of it as AWS, but offering only AI as a service.

AMD will provide its AI compute portfolio (Epyc, Instinct, and FPGA networking) and the AMD ROCm open software ecosystem, while Humain will manage the delivery of the hyperscale data center, sustainable power systems, and global fiber interconnects. The partners expect to activate a multi-exaflop network by early 2026, supported by next-generation AI silicon, modular data center zones, and a software platform stack focused on developer enablement, open standards, and interoperability.

Amazon Web Services also got a piece of the action, announcing a more than $5 billion investment to build an “AI zone” in the Kingdom. The zone is the first of its kind and will bring together multiple capabilities, including dedicated AWS AI infrastructure and servers, UltraCluster networks for faster AI training and inference, AWS services like SageMaker and Bedrock, and AI application services such as Amazon Q. Like the AMD project, the zone will be available in 2026.

Humain only emerged this month, so little is known about it. But given that it is backed by Crown Prince Salman and has the full weight of the Kingdom’s Public Investment Fund (PIF), which ranks among the world’s largest and

Read More »

Check Point CISO: Network segregation can prevent blackouts, disruptions

Fischbein agrees 100% with his colleague’s analysis and adds that education and training can help prevent such incidents from occurring. “Simulating such a blackout is impossible, it has never been done,” he acknowledges, but he is committed to strengthening personal and team training and risk awareness.

Increased defense and cybersecurity budgets

In 2025, industry watchers expect there will be an increase in the public budget allocated to defense. In Spain, one-third of the budget will be allocated to increasing cybersecurity. But for Fischbein, training teams is much more important than the budget. “The challenge is to distribute the budget in a way that can be managed,” he notes, and to leverage intuitive and easy-to-use platforms, so that organizations don’t have to invest all the money in training. “When you have information, management, users, devices, mobiles, data centers, clouds, cameras, printers… the security challenge is very complex. You have to look for a security platform that makes things easier, faster, and simpler,” he says. “Today there are excellent tools that can stop all kinds of attacks.” “Since 2010, there have been cybersecurity systems, also from Check Point, that help prevent this type of incident from happening, but I’m not sure that [Spain’s electricity blackout] was a cyberattack.”

Leading the way in email security

According to Gartner’s Magic Quadrant, Check Point is the leader in email security platforms. Today email is still responsible for 88% of all malicious file distributions. Attacks that, as Fischbein explains, enter through phishing, spam, SMS, or QR codes. “There are two challenges: to stop the threats and not to disturb, because if the security tool is a nuisance it causes more harm than good. It is very important that the solution does not annoy [users],” he stresses. “As almost all attacks enter via e-mail, it is

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).

In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads.

Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles.

This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd).

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved.

“Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail.

Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »
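The LLM-as-a-judge idea mentioned above — using several cheap models to evaluate an output rather than trusting any single one — can be sketched in a few lines. The judge functions here are hypothetical stand-ins; in practice each would wrap a call to a different model API.

```python
from collections import Counter
from typing import Callable

def judge_answer(
    question: str,
    answer: str,
    judges: list[Callable[[str, str], str]],
) -> str:
    """Majority-vote verdict from several judge models.
    Each judge returns 'pass' or 'fail'; using three or more
    uncorrelated judges reduces the impact of any one model's bias."""
    votes = Counter(judge(question, answer) for judge in judges)
    verdict, _ = votes.most_common(1)[0]
    return verdict

# Hypothetical judges for illustration only — real ones would be LLM calls.
strict = lambda q, a: "pass" if len(a) > 10 else "fail"
lenient = lambda q, a: "pass"
keyword = lambda q, a: "pass" if "because" in a else "fail"
```

As model prices fall, the marginal cost of a second or third judge shrinks, which is what makes this kind of ensemble evaluation practical at scale.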

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.

What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »
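The core idea behind reward-driven red teaming — score a candidate attack on both whether it works and how different it is from attacks already found, so a generator doesn’t collapse onto one exploit — can be sketched simply. This is a toy approximation of the concept, not OpenAI’s framework: the word-overlap novelty measure and both function names are illustrative assumptions (real systems would likely use embedding distance and a learned success classifier).

```python
def novelty(candidate: str, past_attacks: list[str]) -> float:
    """Crude novelty score: fraction of the candidate's words not seen
    in any earlier attack. A stand-in for embedding-based distance."""
    seen = {w for attack in past_attacks for w in attack.split()}
    words = candidate.split()
    if not words:
        return 0.0
    return sum(1 for w in words if w not in seen) / len(words)

def red_team_reward(
    candidate: str,
    past_attacks: list[str],
    attack_succeeded: bool,
    diversity_weight: float = 0.5,
) -> float:
    """Reward an attack generator for prompts that both succeed and
    differ from what it has already found."""
    success = 1.0 if attack_succeeded else 0.0
    return success + diversity_weight * novelty(candidate, past_attacks)
```

A reinforcement-learning loop would then train the generator to maximize this reward, pushing it toward a broad spectrum of distinct attacks rather than many variants of one.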