
Microsoft announces over 50 AI tools to build the ‘agentic web’ at Build 2025

Microsoft launched a comprehensive strategy to position itself at the center of what it calls the “open agentic web” at its annual Build conference this morning, introducing dozens of AI tools and platforms designed to help developers create autonomous systems that can make decisions and complete tasks with limited human intervention.

The Redmond, Wash.-based technology giant made more than 50 announcements spanning its entire product portfolio, from GitHub and Azure to Windows and Microsoft 365, all focused on advancing AI agent technologies that can work independently or collaboratively to solve complex business problems.

“We’ve entered the era of AI agents,” said Frank Shaw, Microsoft’s Chief Communications Officer, in a blog post coinciding with the Build announcements. “Thanks to groundbreaking advancements in reasoning and memory, AI models are now more capable and efficient, and we’re seeing how AI systems can help us all solve problems in new ways.”

How AI agents transform software development through autonomous capabilities

The concept of the “agentic web” moves far beyond today’s AI assistants. While current AI tools mainly respond to human questions and commands, agents actively initiate tasks, make decisions independently, coordinate with other AI systems, and complete complex workflows with minimal human supervision. This marks a fundamental shift in how AI systems operate and interact with both users and other technologies.

Kevin Scott, Microsoft’s CTO, described this shift during a press conference as fundamentally changing how humans interact with technology: “Reasoning will continue to improve. We’re going to see great progress there. But there are a handful of new things that have to start happening pretty quickly in order for agents to be the recipients of more complicated work.”

One critical missing element, according to Scott, is memory: “One of the things that is quite conspicuously missing right now in agents is memory.” To address this, Microsoft is introducing several memory-related technologies, including structured RAG (Retrieval-Augmented Generation), which helps AI systems more precisely recall information from large volumes of data.
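Microsoft has not published how its structured-RAG memory is implemented, but the idea Scott describes can be sketched in a few lines: memories are stored as structured records and retrieved by scoring them against a query, so an agent recalls the precise fact it needs rather than re-reading everything. All class and field names below are invented for illustration.

```python
# Illustrative sketch of structured agent memory (hypothetical API, not
# Microsoft's): records are stored with a topic and tags, and recall()
# ranks them by overlap with the query before returning the best matches.
from dataclasses import dataclass, field


@dataclass
class MemoryRecord:
    topic: str
    content: str
    tags: set = field(default_factory=set)


class AgentMemory:
    def __init__(self):
        self.records = []

    def remember(self, topic, content, tags=()):
        self.records.append(MemoryRecord(topic, content, set(tags)))

    def recall(self, query, k=1):
        # Score each record by word overlap between the query and the
        # record's topic/tags, then return the top-k contents.
        words = set(query.lower().split())

        def score(r):
            return len(words & ({r.topic.lower()} | {t.lower() for t in r.tags}))

        ranked = sorted(self.records, key=score, reverse=True)
        return [r.content for r in ranked[:k]]


memory = AgentMemory()
memory.remember("deployment", "Service X deploys from the release branch.", tags=["ci"])
memory.remember("oncall", "Escalate paging issues to the infra channel.", tags=["paging"])
print(memory.recall("how does deployment work?")[0])
```

Real systems replace the word-overlap scoring with vector embeddings and add the structure (topics, tags, provenance) that makes recall precise at scale, which is the gap structured RAG is meant to close.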

“You will likely have a personal agent and a work agent, and the work agent is going to have a whole bunch of your employer’s information that belongs to both you and your employer,” explained Steven Bathiche, CVP and technical fellow at Microsoft, during a presentation about agents.

Bathiche emphasized that this contextual awareness is crucial for creating agents that “understand you well, contextualize where you are and what you want to do, and ultimately understand you so that you can click fewer buttons at the end of the day.” This shift from purely reactive AI to systems with persistent memory represents one of the most profound aspects of the agentic revolution.

GitHub evolves from code completion to autonomous developer experience

Microsoft is placing GitHub, its popular developer platform, at the forefront of its agentic strategy with the introduction of the GitHub Copilot coding agent, which goes beyond suggesting code snippets to autonomously solving programming tasks.

The new GitHub Copilot coding agent can now operate as a member of software development teams, autonomously refactoring code, improving test coverage, fixing defects, and even implementing new features. For complex tasks, GitHub Copilot can collaborate with other agents across all stages of the software lifecycle.

Microsoft is also open-sourcing GitHub Copilot Chat in Visual Studio Code, allowing the developer community to contribute to its evolution. This reflects Microsoft’s dual approach of leading AI innovation while embracing open-source principles.

“Over the next few months, the AI-powered capabilities from the GitHub Copilot extensions will be part of the VS Code open-source repository, the same open-source repository that drives the most popular software development tool,” the company explained in its announcement, emphasizing its commitment to transparency and community-driven innovation.

Multi-agent systems enable complex business workflows and process automation

For businesses looking to deploy AI agents, Microsoft unveiled significant updates to its Azure AI Foundry, a platform for developing and managing AI applications and agents.

Ray Smith, VP of AI Agents at Microsoft, highlighted the importance of multi-agent systems in an exclusive interview with VentureBeat: “Multi-agent invocation, debugging and drilling down into those multiple agents is key, and that extends beyond just Copilot Studio to what’s coming with Azure AI Foundry agents. Our customers have consistently emphasized that this multi-agent capability is essential for their needs.”

Smith explained why splitting tasks across multiple agents is crucial: “It’s very hard to create a reliable process that you squeeze into one agent. Breaking it up into parts improves maintainability and makes building solutions easier, but it also significantly enhances reliability as well.”

The Azure AI Foundry Agent Service, now generally available, allows developers to build enterprise-grade AI agents with support for multi-agent workflows and open protocols like Agent2Agent (A2A) and Model Context Protocol (MCP). This enables organizations to orchestrate multiple specialized agents to handle complex tasks.
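The pattern Smith describes, breaking one unreliable monolithic agent into several small, specialized ones behind an orchestrator, can be sketched without any particular vendor SDK. The names below are hypothetical and do not reflect the Azure AI Foundry Agent Service API; the point is only the routing structure.

```python
# Minimal sketch of multi-agent orchestration: an orchestrator routes each
# step of a workflow to whichever specialized agent declares it can handle
# that kind of step. Names and behaviors are invented for illustration.
class Agent:
    def __init__(self, name, handles, run):
        self.name = name        # agent identifier
        self.handles = handles  # set of step kinds this agent accepts
        self.run = run          # callable doing the agent's actual work


def orchestrate(task_steps, agents):
    results = []
    for step_kind, payload in task_steps:
        # Pick the first agent that declares it can handle this step kind.
        agent = next(a for a in agents if step_kind in a.handles)
        results.append((agent.name, agent.run(payload)))
    return results


agents = [
    Agent("extractor", {"extract"}, lambda p: p.upper()),
    Agent("summarizer", {"summarize"}, lambda p: p[:11] + "..."),
]
workflow = [
    ("extract", "quarterly revenue figures"),
    ("summarize", "long report body with many sections"),
]
for name, out in orchestrate(workflow, agents):
    print(name, "->", out)
```

Splitting work this way is what makes debugging and "drilling down into those multiple agents" tractable: each agent can be tested, replaced, and observed on its own.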

Local AI capabilities expand as processing power shifts to client devices

While cloud-based AI has dominated headlines, Microsoft is making a significant push toward local, on-device AI with several announcements targeting developers who want to deploy AI directly on user devices.

Windows AI Foundry, an evolution of Windows Copilot Runtime, provides a unified platform for local AI development on Windows. It includes Windows ML, a built-in AI inferencing runtime, and tools for preparing and optimizing models for on-device deployment.

“Foundry Local will make it easy to run AI models, tools and agents directly on-device, whether Windows 11 or macOS,” the company announced. “Leveraging ONNX Runtime, Foundry Local is designed for situations where users can save on internet data usage, prioritize privacy and reduce costs.”

Steven Bathiche explained during a presentation how client-side AI has advanced remarkably fast: “We’re super busy trying to essentially predict and stay ahead. Most of our predictions come true within three or four months, which is kind of crazy, because I’m used to predicting a year or two years out, and then feeling good about that timeline. Now it’s like we’re stressed all the time, but it’s all fun.”

Security and identity management address enterprise AI governance challenges

As agent usage proliferates across organizations, Microsoft is addressing the critical need for security, governance, and compliance with several new capabilities designed to prevent what it calls “agent sprawl.”

With Microsoft Entra Agent ID, now in preview, “agents that developers create in Microsoft Copilot Studio or Azure AI Foundry are automatically assigned unique identities in an Entra directory, helping enterprises securely manage agents right from the start and avoid ‘agent sprawl’ that could lead to blind spots,” according to the announcement.

Microsoft is also integrating its Purview data security and compliance controls with its AI platforms, allowing developers to build AI solutions with enterprise-grade security and compliance features. This includes Data Loss Prevention controls for Microsoft 365 Copilot agents and new capabilities for detecting sensitive data in AI interactions.

Ray Smith advised IT teams managing security: “Building solutions from the ground up gives you total flexibility, but then you have to add in a lot of the controls around these frameworks yourself. The beauty of Copilot Studio is we’re giving you a managed infrastructure framework with lifecycle management and many governance and observability capabilities built in.”

Scientific discovery platform demonstrates how AI agents transform R&D timelines

Perhaps one of the most ambitious applications of AI agents announced at Build is Microsoft Discovery, a platform designed to accelerate scientific research and development across industries from pharmaceuticals to materials science.

Jason Zander, the CVP of Advanced Communications & Technologies at Microsoft, described in an exclusive interview with VentureBeat how this platform was used to discover a non-PFAS immersion coolant for data centers in just 200 hours — a process that traditionally takes years.

“In our area, our data centers are huge for us because we’re a hyperscaler,” Zander said. “Using this framework, we were able to screen 367,000 potential candidates in just 200 hours. We then took this to a partner who helped synthesize the results.”

Zander elaborated on how this represents a dramatic acceleration of traditional R&D timelines: “The meta point is, all those things took, in some cases, years or even a decade to create. Now they’ve been banned due to regulatory constraints. And the real business question companies need to answer is: you need to replace these products because you have offerings that are now banned…and it took you years to create your existing products. How do you compress that development timeline going forward?”

Industry standards create ecosystem for interoperable agents across platforms

Central to Microsoft’s vision is the advancement of open standards that enable agent interoperability across different platforms and services, with the Model Context Protocol (MCP) playing a particularly important role.

The company announced that it has joined the MCP Steering Committee and introduced two new contributions to the MCP ecosystem: an updated authorization specification and a design for an MCP server registry service.
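A server registry matters because it is what lets agents discover, at runtime, which MCP servers expose the tool they need. The registry schema and URLs below are invented for illustration and do not reflect Microsoft's proposed design.

```python
# Illustrative sketch of capability discovery against an MCP-style server
# registry. The entries and "mcp://" URLs are hypothetical.
REGISTRY = [
    {"name": "weather-server", "tools": ["get_forecast"], "url": "mcp://weather.example"},
    {"name": "repo-server", "tools": ["search_code", "open_pr"], "url": "mcp://repo.example"},
]


def find_servers(tool_name):
    """Return registry entries that expose the requested tool."""
    return [s for s in REGISTRY if tool_name in s["tools"]]


print(find_servers("open_pr")[0]["name"])  # repo-server
```

With an authorization spec layered on top, a client could go from "I need a tool that opens pull requests" to a vetted, authenticated connection without hard-coding server addresses.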

Jay Parikh, who leads Microsoft’s Core AI team, emphasized the importance of openness and interoperability: “Inside Microsoft, this is all about learning faster. Speed is essential because the world is changing so rapidly with new technologies, applications, and competitors emerging constantly.”

Microsoft also introduced NLWeb, a new open project that “can play a similar role to HTML for the agentic web,” allowing websites to provide conversational interfaces for users with the model of their choice and their own data.

Microsoft’s agent strategy positions it at center of next computing paradigm

The breadth and depth of Microsoft’s announcements at Build 2025 underscore the company’s all-in approach to AI agents as the next major computing paradigm.

“The last time that I was as excited about being a software developer or a technologist as I am now was in the 90s,” Kevin Scott said during the press conference. “One of the reasons why is I had this kid-in-a-candy-store feeling with building blocks that even someone like me could fully understand. I could grasp how each of these individual pieces worked and how they composed together, and I could just go play.”

Industry analysts note that Microsoft’s approach — combining cloud and edge AI, open standards with proprietary technologies, and developer tools with business applications — positions the company as a central player in the emerging agentic ecosystem.

For enterprise customers, the immediate impact may be most visible in increased automation of complex workflows, more intelligent responses to business events, and the ability to build custom agents that incorporate domain-specific knowledge and processes.

As we transition from a web of information to a web of agents, Microsoft’s strategy mirrors its earlier approach to cloud computing — providing comprehensive tools, platforms, and infrastructure while simultaneously advancing open standards.

The question now isn’t whether AI agents will transform business operations, but how quickly organizations can adapt to a world where machines don’t just respond to commands, but anticipate needs, make decisions, and fundamentally reshape how work gets done.

Nvidia introduces ‘ridesharing for AI’ with DGX Cloud Lepton

The platform is currently in early access but already CoreWeave, Crusoe, Firmus, Foxconn, GMI Cloud, Lambda, Nscale, SoftBank, and Yotta have agreed to make “tens of thousands of GPUs” available for customers. Developers can utilize GPU compute capacity in specific regions for both on-demand and long-term computing, supporting strategic and

Read More »

Nvidia opens NVLink to competitive processors

Until now, NVLink has been limited to Nvidia GPUs and CPUs, but with NVLink Fusion, non-Nvidia semi-custom accelerators will be able to use it. Nvidia says there will be two configurations for NVLink Fusion: for connecting custom CPUs to Nvidia GPUs and for connecting Nvidia’s Grace and future CPUs to

Read More »

How AI changes your multicloud network architecture

As enterprises find ever more use cases for generative AI (genAI) and agentic AI, their ability to achieve optimal business outcomes from these use cases will depend on the strength of their hybrid multicloud networks. Typically, these workloads demand higher-bandwidth, low-latency connectivity for centralized application delivery (LLM development), and AI

Read More »

Newsom proposes steep cuts to California grid reliability programs

Dive Brief: California Gov. Gavin Newsom’s 2025-2026 revised budget, unveiled last week, would cut about $423 million from grid reliability programs as the state faces a $12 billion shortfall. The programs are designed to shore up the state’s energy resources by providing on-call emergency supply or load reduction resources during extreme weather events such as heatwaves or other grid emergencies. Advocates have criticized the proposed cuts. Legislators have until June 15 to make changes and finalize the budget. Dive Insight: Newsom’s office did not immediately respond to a request for comment. A spokesman for the governor pointed to a recent release from the state’s energy commission that says the state is in good shape to meet demand this summer “while remaining vigilant about ongoing risks.” In a press conference last week, Newsom blamed the budget shortfall on economic uncertainty from President Donald Trump’s on-again, off-again tariffs, stock market volatility and a reduction in global tourism. The California Solar & Storage Association has come out against cuts targeting the Demand Side Grid Support and Distributed Electricity Backup Assets programs, which state lawmakers created in 2022 to try to improve reliability and avoid blackouts during emergencies. In the January draft of the 2025-26 budget, DSGS was slated to get $75 million, plus about $18 million in backfill funding for 2023-24, while DEBA was to get $200 million in 2025-26 and another $180 million the following year. Both programs are funded primarily from California’s Greenhouse Gas Reduction Fund, which is funded by its cap and trade program. The new proposal reduces to zero all greenhouse gas reduction funds for these programs proposed for 2025-2026 and future years pending an agreement on cap and trade, which the governor wants to rename “cap and invest.” That would essentially zero out funds for DSGS. The revised budget proposes giving $50

Read More »

DOE Finalizes 2024 LNG Export Study, Paving Way for Stronger American Energy Exports

WASHINGTON— The U.S. Department of Energy (DOE) today released its Response to Comments on the 2024 LNG Export Study, marking a critical step toward returning to regular order on liquefied natural gas (LNG) exports. With this action, DOE has completed the final hurdles left over from the Biden administration’s reckless pause on LNG export permits, paving the way for the Trump Administration to fully unleash American LNG exports. “President Trump was given a mandate to unleash American energy dominance, and that includes U.S. LNG exports,” U.S. Energy Secretary Chris Wright said. “The facts are clear: expanding America’s LNG exports is good for Americans and good for the world. Today, the Department of Energy is following the facts, closing the door on the Biden administration’s failed policies, and putting America’s energy future on stronger footing.” “The 2024 Study confirms what our nation always knew—LNG supports our economy, strengthens our allies, and enhances national security. Biden’s opposition defied reason and reality and hurt American progress. We are pleased to issue the Response to Comments on the 2024 LNG Export Study, which will allow DOE to close out this chapter and fully return to regular order on LNG exports,” said Tala Goudarzi, Principal Deputy Assistant Secretary of the Office of Fossil Energy and Carbon Management. The 2024 LNG Study was released at the end of the Biden administration in December 2024 and had a public comment period through March 20th of this year. Based on the record evidence from the 2024 LNG Export Study and the public comments received, DOE makes several key findings, including: the United States has a robust natural gas supply that is sufficient to meet growing levels of exports while minimizing impacts to domestic prices; growing LNG exports increases our gross domestic product and expands jobs while improving our trade balance; and increasing

Read More »

Oil Ends Higher After Volatile Day

Oil ended the session slightly higher following a volatile day of trading as investors look for clues about Russia-Ukraine truce talks and a potential nuclear deal with Iran. Brent futures traded higher to settle near $65.50. West Texas Intermediate also rose. US President Donald Trump said that Ukraine and Russia would begin talks “immediately” on ending their war after a phone call with Russian President Vladimir Putin on Monday. “I suppose with expectations set so low, any progress toward a ceasefire is being seen as a modest positive for negotiations — and slightly bearish for crude,”  said Rebecca Babin, a senior energy trader at CIBC Private Wealth Group. “That said, I’d still argue that the potential market impact is far more significant when it comes to Iran.” Uncertainty over a deal with Iran also added to volatility in oil markets. Tehran won’t abandon its pursuit of civilian nuclear energy under any circumstances, President Masoud Pezeshkian said in remarks on state television. His comments come as the rhetoric between Iranian and US officials intensified in recent days. Investors are closely watching developments as an easing of sanctions on either Iran or Russia could potentially add even more barrels to a global market that is facing oversupply this year. Tighter sanctions, on the other hand, could boost prices. Oil had earlier declined with other risk assets after Moody’s Ratings stripped the US government of its top credit rating. The downgrade, which trailed other major agencies, risks reinforcing Wall Street’s growing worries over the US sovereign bond market and a slowing economy, which in turn clouded the outlook for oil consumption. Still, the physical market offered some bullish signs as buying interest returned in the key North Sea market on Monday, with six shipments changing hands. There was also a raft of unanswered

Read More »

India Readies Gas Power Fleet to Prevent Summer Blackouts

India has invoked emergency measures that require gas-fired power stations to produce more in order to ensure uninterrupted electricity supply during the summer season, as soaring temperatures boost demand. The nation’s grid controller will assess the periods of high demand when the gas-based plants will be required to generate electricity, according to a power ministry order seen by Bloomberg. The units will get a two-week notice, giving them enough time to arrange fuel, the order said. India’s gas-fired power facilities, with a combined capacity of more than 20 gigawatts, operate at about a fifth of their full potential as they struggle to source the fuel at rates they can afford in a price-sensitive electricity market. The nation’s peak power demand reached a record high of 250 gigawatts last summer and the ministry expects this to hit a new peak this year. The power produced under the emergency guidelines — effective from May 26 until June 30 — will have assured offtake and be covered under a payment security mechanism, according to the order. The government invoked section 11 of the electricity law that allows it to force any power station to operate as directed in extraordinary circumstances, such as a natural calamity, a threat to national security or any public order. “In public interest, it is imperative to ensure optimal utilization of all available generation resources to meet the growing power requirements,” the order said. “Despite leveraging all available resources, occasional power shortages continue to be observed in certain regions during non-solar periods.” The power ministry didn’t immediately respond to an emailed request for comment. Electricity demand in the country has outpaced generation growth, resulting in shortages when consumption surges. Over the past decade, the expansion in capacity has been led by solar power, which now accounts for more than

Read More »

Republican budget squeezes out of House committee, but deeper IRA cuts could come

Republican holdouts on the House Budget Committee allowed their party’s massive budget bill to advance Sunday night after negotiating deeper cuts to the Inflation Reduction Act, but they continue to call for even more cuts to clean energy incentives. The bill had been blocked from passing Friday night by Reps. Chip Roy, R-Texas; Ralph Norman, R-S.C.; Andrew Clyde, R-Ga.; and Josh Brecheen, R-Okla. Rep. Lloyd Smucker, R-Pa., also voted no; he said on X that he “fully support[s] the One Big Beautiful Bill” and his vote was “a procedural requirement to preserve the committee’s opportunity to reconsider the motion to advance OBBB.” Brecheen said in a Friday X post that he felt the House “cannot allow wind and solar tax credits, in current form, to continue in the ‘One Big Beautiful Bill.’ As it is currently written, Green New Scam subsidy phaseouts are delayed until 2029 — with some of these subsidies lasting until 2041!” After a weekend of negotiation, Republican lawmakers struck a deal for the bill to make deeper cuts to items including the IRA’s clean energy incentives, resulting in the legislation passing out of committee 17-16 late Sunday night. However, Roy said in a Sunday post on X that while the new bill “reduces the availability of future subsidies under the green new scam,” it “does not yet meet the moment — leaving almost half of the green new scam subsidies continuing.” The new shape of the bill is not fully known, but the legislation was advanced to the House Rules Committee, which will take it up early Wednesday morning. Breechen said after the Friday vote that he was grateful to President Trump “for leading the charge to end these Green New Scam giveaways” and thanked Speaker of the House Mike Johnson, R-La., Majority Leader Steve Scalise, R-La., and House Budget Committee

Read More »

Is your electric bill too high? Thank LNG exports.

Lt. Gen. Russel L. Honoré (Ret.) is a former commanding officer of the U.S. First Army. He is currently head of The Green Army, an organization dedicated to finding solutions to pollution. Just months after declaring a false “energy emergency,” the administration is moving to sell more American gas overseas, including to our competitors. It’s not only a disaster for the climate and our national security, but it will push American’s electricity bills through the roof. Energy prices are already skyrocketing. Electricity providers and their representatives are blaming regulators. Some elected officials, understandably under fire from their constituents, point the finger at greedy corporations. Meanwhile, apologists for fossil fuel companies are writing trendy think pieces putting the blame for high prices at the feet of green energy providers. The evidence for those claims are even thinner than the paper they’re printed on. There are plenty of factors at play that can explain rising energy costs. Some, like the huge demands placed on the grid by power thirsty data centers, crypto mining operations and AI are already widely known. But the role of LNG exports is not receiving nearly enough public scrutiny, especially since gas prices all but set electricity prices. While it’s been billed as “clean, American energy,” or “liquefied natural gas,” the product we’re talking about is a fossil fuel. It’s mostly methane, the greenhouse gas that traps 80 times more heat in the atmosphere than does carbon dioxide. Its liquefied form, which is pumped into massive supertanker ships and sold overseas, comes at an enormous cost, requiring massive outlays of energy to chill the fuel into a liquid form. It also harms our climate along every step of its journey as it leaks into the atmosphere from the well head, through the pipeline, to liquefaction, shipping and eventually

Read More »

Tariff uncertainty weighs on networking vendors

“Our guide assumes current tariffs and exemptions remain in place through the quarter. These include the following: China at 30%, partially offset by an exemption for semiconductors and certain electronic components; Mexico and Canada at 25% for the components and products that are not eligible for the current exemptions,” Cisco CFO Scott Herron told Wall Street analysts in the company’s quarterly earnings report on May 14. At this time, Cisco expects little impact from tariffs on steel and aluminum and retaliatory tariffs, Herron said. “We’ll continue to leverage our world-class supply chain team to help mitigate the impact,” he said, adding that “the flexibility and agility we have built into our operations over the last few years, the size and scale of our supply chain, provides us some unique advantages as we support our customers globally.” “Once the tariff scenario stabilizes, there [are] steps that we can take to mitigate it, as you’ve seen us do with China from the first Trump administration. And only after that would we consider price [increases],” Herron said. Similarly, Extreme Networks noted the changing tariff conditions during its earnings call on April 30. “The tariff situation is very dynamic, I think, as everybody knows and can appreciate, and it’s kind of hard to call. Yes, there was concern initially given the magnitude of tariffs,” said Extreme Networks CEO Ed Meyercord on the earnings call. “The larger question is, will all of the changes globally in trade and tariff policy have an impact on demand? And that’s hard to call at this point. And we’re going to hold as far as providing guidance or judgment on that until we have finality come July.” Financial news Meanwhile, AI is fueling high expectations and influencing investments in enterprise campus and data center environments.

Read More »

Liquid cooling becoming essential as AI servers proliferate

“Facility water loops sometimes have good water quality, sometimes bad,” says My Troung, CTO at ZutaCore, a liquid cooling company. “Sometimes you have organics you don’t want to have inside the technical loop.” So there’s one set of pipes that goes around the data center, collecting the heat from the server racks, and another set of smaller pipes that lives inside individual racks or servers. “That inner loop is some sort of technical fluid, and the two loops exchange heat across a heat exchanger,” says Troung. The most common approach today, he says, is to use a single-phase liquid — one that stays in liquid form and never evaporates into a gas — such as water or propylene glycol. But it’s not the most efficient option. Evaporation is a great way to dissipate heat. That’s what our bodies do when we sweat. When water goes from a liquid to a gas it’s called a phase change, and it uses up energy and makes everything around it slightly cooler. Of course, few servers run hot enough to boil water — but they can boil other liquids. “Two phase is the most efficient cooling technology,” says Xianming (Simon) Dai, a professor at University of Texas at Dallas. And it might be here sooner than you think. In a keynote address in March at Nvidia GTC, Nvidia CEO Jensen Huang unveiled the Rubin Ultra NVL576, due in the second half of 2027 — with 600 kilowatts per rack. “With the 600 kilowatt racks that Nvidia is announcing, the industry will have to shift very soon from single-phase approaches to two-phase,” says ZutaCore’s Troung. Another highly-efficient cooling approach is immersion cooling. According to a Castrol survey released in March, 90% of 600 data center industry leaders say that they are considering switching to immersion

Read More »

Cisco taps OpenAI’s Codex for AI-driven network coding

“If you want to ask Codex a question about your codebase, click “Ask”. Each task is processed independently in a separate, isolated environment preloaded with your codebase. Codex can read and edit files, as well as run commands including test harnesses, linters, and type checkers. Task completion typically takes between 1 and 30 minutes, depending on complexity, and you can monitor Codex’s progress in real time,” according to OpenAI. “Once Codex completes a task, it commits its changes in its environment. Codex provides verifiable evidence of its actions through citations of terminal logs and test outputs, allowing you to trace each step taken during task completion,” OpenAI wrote. “You can then review the results, request further revisions, open a GitHub pull request, or directly integrate the changes into your local environment. In the product, you can configure the Codex environment to match your real development environment as closely as possible.” OpenAI is releasing Codex as a research preview: “We prioritized security and transparency when designing Codex so users can verify its outputs – a safeguard that grows increasingly more important as AI models handle more complex coding tasks independently and safety considerations evolve. Users can check Codex’s work through citations, terminal logs and test results,” OpenAI wrote.  Internally, technical teams at OpenAI have started using Codex. “It is most often used by OpenAI engineers to offload repetitive, well-scoped tasks, like refactoring, renaming, and writing tests, that would otherwise break focus. It’s equally useful for scaffolding new features, wiring components, fixing bugs, and drafting documentation,” OpenAI stated. Cisco’s view of agentic AI Patel stated that Codex is part of the developing AI agent world, where Cisco envisions billions of AI agents will work together to transform and redefine the architectural assumptions the industry has relied on. Agents will communicate within and

Read More »

US companies are helping Saudi Arabia to build an AI powerhouse

AMD announced a five-year, $10 billion collaboration with Humain to deploy up to 500 megawatts of AI compute in Saudi Arabia and the US, aiming to deploy “multi-exaflop capacity by early 2026.” AWS, too, is expanding its data centers in Saudi Arabia to bolster Humain’s cloud infrastructure. Saudi Arabia has abundant oil and gas to power those data centers, and is growing its renewable energy resources with the goal of supplying 50% of the country’s power by 2030. “Commercial electricity rates, nearly 50% lower than in the US, offer potential cost savings for AI model training, though high local hosting costs due to land, talent, and infrastructure limit total savings,” said Eric Samuel, Associate Director at IDC. Located near Middle Eastern population centers and fiber optic cables to Asia, these data centers will offer enterprises low-latency cloud computing for real-time AI applications. Late is great There’s an advantage to being a relative latecomer to the technology industry, said Eric Samuel, associate director, research at IDC. “Saudi Arabia’s greenfield tech landscape offers a unique opportunity for rapid, ground-up AI integration, unburdened by legacy systems,” he said.

Read More »

AMD, Nvidia partner with Saudi startup to build multi-billion dollar AI service centers

Humain will deploy the Nvidia Omniverse platform as a multi-tenant system to accelerate the new era of physical AI and robotics through simulation, optimization, and operation of physical environments by new human-AI-led solutions. The AMD announcement did not specify the number of chips involved, but the deal is valued at $10 billion.

AMD and Humain plan to develop a comprehensive AI infrastructure through a network of AMD-based AI data centers extending from Saudi Arabia to the US and supporting a wide range of AI workloads across corporate, startup, and government markets. Think of it as AWS, but offering only AI as a service. AMD will provide its AI compute portfolio (Epyc, Instinct, and FPGA networking) and the AMD ROCm open software ecosystem, while Humain will manage delivery of the hyperscale data centers, sustainable power systems, and global fiber interconnects. The partners expect to activate a multi-exaflop network by early 2026, supported by next-generation AI silicon, modular data center zones, and a software platform stack focused on developer enablement, open standards, and interoperability.

Amazon Web Services also got a piece of the action, announcing a more than $5 billion investment to build an “AI zone” in the Kingdom. The zone is the first of its kind and will bring together multiple capabilities, including dedicated AWS AI infrastructure and servers, UltraCluster networks for faster AI training and inference, AWS services like SageMaker and Bedrock, and AI application services such as Amazon Q. Like the AMD project, the zone will be available in 2026.

Humain only emerged this month, so little is known about it. But given that it is backed by Crown Prince Salman and has the full weight of the Kingdom’s Public Investment Fund (PIF), which ranks among the world’s largest and

Read More »

Check Point CISO: Network segregation can prevent blackouts, disruptions

Fischbein agrees 100% with his colleague’s analysis and adds that education and training can help prevent such incidents from occurring. “Simulating such a blackout is impossible; it has never been done,” he acknowledges, but he is committed to strengthening personal and team training and risk awareness.

Increased defense and cybersecurity budgets

In 2025, industry watchers expect an increase in the public budget allocated to defense. In Spain, one-third of that budget will go to strengthening cybersecurity. But for Fischbein, training teams is much more important than the budget. “The challenge is to distribute the budget in a way that can be managed,” he notes, and to leverage intuitive and easy-to-use platforms, so that organizations don’t have to invest all the money in training. “When you have information, management, users, devices, mobiles, data centers, clouds, cameras, printers… the security challenge is very complex. You have to look for a security platform that makes things easier, faster, and simpler,” he says. “Today there are excellent tools that can stop all kinds of attacks.”

“Since 2010, there have been cybersecurity systems, also from Check Point, that help prevent this type of incident from happening, but I’m not sure that [Spain’s electricity blackout] was a cyberattack.”

Leading the way in email security

According to Gartner’s Magic Quadrant, Check Point is the leader in email security platforms. Today email is still responsible for 88% of all malicious file distributions. These attacks, as Fischbein explains, enter through phishing, spam, SMS, or QR codes. “There are two challenges: to stop the threats and not to disturb, because if the security tool is a nuisance it causes more harm than good. It is very important that the solution does not annoy [users],” he stresses. “As almost all attacks enter via e-mail, it is

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).

In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion.

The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction, and commercial landscaping.

The Moline, Illinois-based company has been in business for 187 years, yet it has become a regular non-tech exhibitor at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. That makes it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize in their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as the frontier large language models themselves improved.

“Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, a lot of the ones that we built at the start of the year worked way better at the end of the year, just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail.

Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
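The LLM-as-judge idea mentioned above can be illustrated with a toy majority vote across several judges. This is a minimal sketch under stated assumptions: the judge functions below are placeholders for real model calls, and every name in it is hypothetical rather than any provider's API.

```python
from collections import Counter

def judge_with_models(candidate: str, judges: list) -> str:
    """Ask several (cheaper) judge models to grade a candidate answer
    and take the majority verdict. Each judge is any callable that
    returns 'pass' or 'fail', standing in for a real LLM call."""
    verdicts = [judge(candidate) for judge in judges]
    winner, _ = Counter(verdicts).most_common(1)[0]
    return winner

# Toy judges standing in for three different models with
# different grading criteria.
strict = lambda ans: "pass" if len(ans) > 10 else "fail"
lenient = lambda ans: "pass"
moderate = lambda ans: "pass" if "because" in ans else "fail"

verdict = judge_with_models("It works because the cache is warm.",
                            [strict, lenient, moderate])
print(verdict)  # → pass
```

Using an odd number of judges avoids ties, which is one reason "three or more models" is the natural starting point as per-call costs fall.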

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability, and safety of AI models in these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases, and controls that prompt-based testing couldn’t find.

What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »