
Microsoft just launched an AI that discovered a new chemical in 200 hours instead of years

Microsoft launched a new enterprise platform that harnesses artificial intelligence to dramatically accelerate scientific research and development, potentially compressing years of laboratory work into weeks or even days.

The platform, called Microsoft Discovery, leverages specialized AI agents and high-performance computing to help scientists and engineers tackle complex research challenges without requiring them to write code, the company announced Monday at its annual Build developer conference.

“What we’re doing is really taking a look at how we can apply advancements in agentic AI and compute work, and then on to quantum computing, and apply it in the really important space, which is science,” said Jason Zander, Corporate Vice President of Strategic Missions and Technologies at Microsoft, in an exclusive interview with VentureBeat.

The system has already demonstrated its potential in Microsoft’s own research, where it helped discover a novel coolant for immersion cooling of data centers in approximately 200 hours — a process that traditionally would have taken months or years.

“In 200 hours with this framework, we were able to go through and screen 367,000 potential candidates that we came up with,” Zander explained. “We actually took it to a partner, and they actually synthesized it.”
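To make the scale of that screening concrete, here is a minimal sketch of what a high-throughput virtual screening loop might look like. Microsoft has not published its actual pipeline; the property names, acceptance thresholds, and the `Candidate` structure below are assumptions for illustration, with simulation or ML surrogate models standing behind the predicted values in a real system.

```python
# Hypothetical sketch of a high-throughput coolant screening loop.
# Property values would come from simulations or ML surrogates in practice.
from dataclasses import dataclass


@dataclass
class Candidate:
    smiles: str                 # molecular structure as a SMILES string
    boiling_point_c: float      # predicted boiling point, Celsius
    dielectric_strength: float  # predicted dielectric strength, kV/mm
    contains_pfas: bool         # predicted presence of C-F "forever chemical" motifs


def passes_coolant_screen(c: Candidate) -> bool:
    """Toy acceptance criteria for an immersion-cooling fluid (illustrative thresholds)."""
    return (
        not c.contains_pfas
        and 40.0 <= c.boiling_point_c <= 70.0    # plausible two-phase cooling range
        and c.dielectric_strength >= 10.0        # must not conduct across live electronics
    )


def screen(candidates: list[Candidate]) -> list[Candidate]:
    """Keep survivors and rank them; a real pipeline would hand these to synthesis partners."""
    survivors = [c for c in candidates if passes_coolant_screen(c)]
    return sorted(survivors, key=lambda c: c.dielectric_strength, reverse=True)


if __name__ == "__main__":
    pool = [
        Candidate("CCO", 78.4, 8.0, False),               # rejected: boiling point too high
        Candidate("fluorinated-ether", 56.0, 14.0, True),  # rejected: PFAS
        Candidate("hydrocarbon-ester", 62.0, 12.5, False), # passes the toy screen
    ]
    for hit in screen(pool):
        print(hit.smiles)
```

Run against 367,000 generated candidates, even a filter this simple illustrates why automated screening beats manual trial-and-error: the expensive steps (simulation, synthesis) are reserved for the shortlist.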

How Microsoft is putting supercomputing power in the hands of everyday scientists

Microsoft Discovery represents a significant step toward democratizing advanced scientific tools, allowing researchers to interact with supercomputers and complex simulations using natural language rather than requiring specialized programming skills.

“It’s about empowering scientists to transform the entire discovery process with agentic AI,” Zander emphasized. “My PhD is in biology. I’m not a computer scientist, but if you can unlock that power of a supercomputer just by allowing me to prompt it, that’s very powerful.”

The platform addresses a key challenge in scientific research: the disconnect between domain expertise and computational skills. Traditionally, scientists would need to learn programming to leverage advanced computing tools, creating a bottleneck in the research process.

This democratization could prove particularly valuable for smaller research institutions that lack the resources to hire computational specialists to augment their scientific teams. By allowing domain experts to directly query complex simulations and run experiments through natural language, Microsoft is effectively lowering the barrier to entry for cutting-edge research techniques.

“As a scientist, I’m a biologist. I don’t know how to write computer code. I don’t want to spend all my time going into an editor and writing scripts and stuff to ask a supercomputer to do something,” Zander said. “I just wanted, like, this is what I want in plain English or plain language, and go do it.”

Inside Microsoft Discovery: AI ‘postdocs’ that can screen hundreds of thousands of experiments

Microsoft Discovery operates through what Zander described as a team of AI “postdocs” — specialized agents that can perform different aspects of the scientific process, from literature review to computational simulations.

“These postdoc agents do that work,” Zander explained. “It’s like having a team of folks that just got their PhD. They’re like residents in medicine — you’re in the hospital, but you’re still finishing.”

The platform combines two key components: foundational models that handle planning and specialized models trained for particular scientific domains like physics, chemistry, and biology. What makes this approach unique is how it blends general AI capabilities with deeply specialized scientific knowledge.

“The core process, you’ll find two parts of this,” Zander said. “One is we’re using foundational models for doing the planning. The other piece is, on the AI side, a set of models that are designed specifically for particular domains of science, that includes physics, chemistry, biology.”

According to a company statement, Microsoft Discovery is built on a “graph-based knowledge engine” that constructs nuanced relationships between proprietary data and external scientific research. This allows it to understand conflicting theories and diverse experimental results across disciplines, while maintaining transparency by tracking sources and reasoning processes.
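As a rough illustration of that idea, the sketch below models papers, internal experiments, and claims as nodes, with edges that record stance and provenance. The schema, node names, and use of the networkx library are assumptions for this example, not details from Microsoft's announcement.

```python
# Illustrative sketch of a graph-based knowledge engine: publications, internal
# experiments, and claims share one graph, and edges carry provenance so
# conflicting findings stay traceable. Schema is assumed, not Microsoft's.
import networkx as nx

kg = nx.MultiDiGraph()

# External literature and proprietary results live in the same graph.
kg.add_node("paper:doi-10.1000/xyz", kind="publication", source="external")
kg.add_node("exp:coolant-batch-42", kind="experiment", source="internal")
kg.add_node("claim:fluid-A-dielectric-ok", kind="claim")

# Edges record who asserted what, so the engine can surface disagreement.
kg.add_edge("paper:doi-10.1000/xyz", "claim:fluid-A-dielectric-ok",
            relation="supports", confidence=0.8)
kg.add_edge("exp:coolant-batch-42", "claim:fluid-A-dielectric-ok",
            relation="contradicts", confidence=0.6)


def evidence_for(claim: str):
    """Return every source touching a claim with its stance, preserving provenance."""
    return [
        (src, data["relation"], data["confidence"])
        for src, _, data in kg.in_edges(claim, data=True)
    ]


print(evidence_for("claim:fluid-A-dielectric-ok"))
```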

At the center of the user experience is a Copilot interface that orchestrates these specialized agents based on researcher prompts, identifying which agents to leverage and setting up end-to-end workflows. This interface essentially acts as the central hub where human scientists can guide their virtual research team.
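A minimal sketch of that planner-plus-specialists pattern is shown below, assuming a keyword-based stand-in for the planning model. The agent names, routing logic, and return values are purely illustrative; in the real platform, foundational models would do the planning and domain-trained models would do the work.

```python
# Minimal sketch of the orchestration pattern described above: a planner maps a
# natural-language research prompt onto a workflow of specialized "postdoc"
# agents. Agent names and the keyword-based planner are illustrative only.
from typing import Callable


def literature_agent(task: str) -> str:
    return f"[literature] surveyed prior work for: {task}"


def chemistry_agent(task: str) -> str:
    return f"[chemistry] ran property simulations for: {task}"


def physics_agent(task: str) -> str:
    return f"[physics] modeled heat transfer for: {task}"


AGENTS: dict[str, Callable[[str], str]] = {
    "literature": literature_agent,
    "chemistry": chemistry_agent,
    "physics": physics_agent,
}


def plan(prompt: str) -> list[str]:
    """Stand-in for the foundational planning model: choose agents for the prompt."""
    steps = ["literature"]  # every workflow starts with a literature review
    if "coolant" in prompt or "molecule" in prompt:
        steps += ["chemistry", "physics"]
    return steps


def run_workflow(prompt: str) -> list[str]:
    """Execute the planned agents end to end, as the Copilot interface would."""
    return [AGENTS[name](prompt) for name in plan(prompt)]


for line in run_workflow("find a PFAS-free immersion coolant"):
    print(line)
```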

From months to hours: How Microsoft used its own AI to solve a critical data center cooling challenge

To demonstrate the platform’s capabilities, Microsoft used Microsoft Discovery to address a pressing challenge in data center technology: finding alternatives to coolants containing PFAS, so-called “forever chemicals” that are increasingly facing regulatory restrictions.

Current data center cooling methods often rely on harmful chemicals that are becoming untenable as global regulations push to ban these substances. Microsoft researchers used the platform to screen hundreds of thousands of potential alternatives.

“We did prototypes on this. Actually, when I owned Azure, I did a prototype eight years ago, and it works super well, actually,” Zander said. “It’s actually like 60 to 90% more efficient than just air cooling. The big problem is that coolant material that’s on market has PFAS in it.”

After identifying promising candidates, Microsoft synthesized the coolant and demonstrated it cooling a GPU running a video game. While this specific application remains experimental, it illustrates how Microsoft Discovery can compress development timelines for companies facing regulatory challenges.

The implications extend far beyond Microsoft’s own data centers. Any industry facing similar regulatory pressure to replace established chemicals or materials could potentially use this approach to accelerate their R&D cycles dramatically. What once would have been multi-year development processes might now be completed in a matter of months.

Daniel Pope, founder of Submer, a company focused on sustainable data centers, was quoted in the press release saying: “The speed and depth of molecular screening achieved by Microsoft Discovery would’ve been impossible with traditional methods. What once took years of lab work and trial-and-error, Microsoft Discovery can accomplish in just weeks, and with greater confidence.”

Pharma, beauty, and chips: The major companies already lining up to use Microsoft’s new scientific AI

Microsoft is building an ecosystem of partners across diverse industries to implement the platform, indicating its broad applicability beyond the company’s internal research needs.

Pharmaceutical giant GSK is exploring the platform for its potential to transform medicinal chemistry. The company stated an intent to partner with Microsoft to advance “GSK’s generative platforms for parallel prediction and testing, creating new medicines with greater speed and precision.”

In the consumer space, Estée Lauder plans to harness Microsoft Discovery to accelerate product development in skincare, makeup, and fragrance. “The Microsoft Discovery platform will help us to unleash the power of our data to drive fast, agile, breakthrough innovation and high-quality, personalized products that will delight our consumers,” said Kosmas Kretsos, PhD, MBA, Vice President of R&D and Innovation Technology at Estée Lauder Companies.

Microsoft is also expanding its partnership with Nvidia to integrate Nvidia’s ALCHEMI and BioNeMo NIM microservices with Microsoft Discovery, enabling faster breakthroughs in materials and life sciences. This partnership will allow researchers to leverage state-of-the-art inference capabilities for candidate identification, property mapping, and synthetic data generation.

“AI is dramatically accelerating the pace of scientific discovery,” said Dion Harris, senior director of accelerated data center solutions at Nvidia. “By integrating Nvidia ALCHEMI and BioNeMo NIM microservices into Azure Discovery, we’re giving scientists the ability to move from data to discovery with unprecedented speed, scale, and efficiency.”
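For a sense of how a researcher-facing tool might consume such a hosted inference microservice, here is a hedged sketch of a property-prediction call. The endpoint path, payload fields, and response shape are assumptions for illustration, not the documented ALCHEMI or BioNeMo NIM API.

```python
# Hypothetical client for a hosted property-prediction microservice.
# Endpoint, request body, and response fields are assumed for illustration.
import requests

NIM_URL = "http://localhost:8000/v1/predict"  # placeholder; a deployed service would expose its own route


def predict_properties(smiles: str) -> dict:
    """Send a molecule to the microservice and return its predicted properties."""
    resp = requests.post(NIM_URL, json={"smiles": smiles}, timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # e.g. {"boiling_point_c": ..., "dielectric_strength": ...} in this toy schema
    print(predict_properties("CCO"))
```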

In the semiconductor space, Microsoft plans to integrate Synopsys’ industry solutions to accelerate chip design and development. Sassine Ghazi, President and CEO of Synopsys, described semiconductor engineering as “among the most complex, consequential and high-stakes scientific endeavors of our time,” making it “an extremely compelling use case for artificial intelligence.”

System integrators Accenture and Capgemini will help customers implement and scale Microsoft Discovery deployments, bridging the gap between Microsoft’s technology and industry-specific applications.

Microsoft’s quantum strategy: Why Discovery is just the beginning of a scientific computing revolution

Microsoft Discovery also represents a stepping stone toward the company’s broader quantum computing ambitions. Zander explained that while the platform currently uses conventional high-performance computing, it’s designed with future quantum capabilities in mind.

“Science is a hero scenario for a quantum computer,” Zander said. “If you ask yourself, what can a quantum computer do? It’s extremely good at exploring complicated problem spaces that classic computers just aren’t able to do.”

Microsoft recently announced advancements in quantum computing with its Majorana 1 chip, which the company claims could potentially fit a million qubits “in the palm of your hand” — compared to competing approaches that might require “a football field worth of equipment.”

“General generative chemistry — we think the hero scenario for high-scale quantum computers is actually chemistry,” Zander explained. “Because what it can do is take a small amount of data and explore a space that would take millions of years for a classic, even the largest supercomputer, to do.”

This connection between today’s AI-driven discovery platform and tomorrow’s quantum computers reveals Microsoft’s long-term strategy: building the software infrastructure and user experience today that will eventually harness the revolutionary capabilities of quantum computing when the hardware matures.

Zander envisions a future where quantum computers design their own successors: “One of the first things that I want to do when I get the quantum computer that does that kind of work is I’m going to go give it my material stack for my chip. I’m going to basically say, ‘Okay, go simulate that sucker. Tell me how I build a new, a better, new version of you.’”

Guarding against misuse: The ethical guardrails Microsoft built into its scientific platform

With the powerful capabilities Microsoft Discovery offers, questions about potential misuse naturally arise. Zander emphasized that the platform incorporates Microsoft’s responsible AI framework.

“We have the responsible AI program, and it’s been around, actually I think we were one of the first companies to actually put that kind of framework into place,” Zander said. “Discovery absolutely is following all responsible AI guidelines.”

These safeguards include ethical use guidelines and content moderation similar to those implemented in consumer AI systems, but tailored for scientific applications. The company appears to be taking a proactive approach to identifying potential misuse scenarios.

“We already look for particular types of algorithms that could be harmful and try and flag those in content moderation style,” Zander explained. “Again, the analogy would be very similar to what a consumer kind of bot would do.”
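A toy illustration of that "content moderation style" screening is sketched below: research prompts are checked against a policy before any agent runs. The pattern list and pass/block policy are invented for this example; a production system would rely on trained classifiers and human review rather than regular expressions.

```python
# Toy illustration of content-moderation-style screening of research prompts.
# Patterns and policy are invented for illustration only.
import re

BLOCKED_PATTERNS = [
    r"\bchemical weapon\b",
    r"\bnerve agent\b",
    r"\bexplosive precursor\b",
]


def moderate(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason). Real systems would use classifiers, not regexes."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, prompt, flags=re.IGNORECASE):
            return False, f"blocked: matched policy pattern {pattern!r}"
    return True, "allowed"


print(moderate("screen candidate coolants for immersion cooling"))
print(moderate("design an explosive precursor"))
```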

This focus on responsible innovation reflects the dual-use nature of powerful scientific tools — the same platform that could accelerate lifesaving drug discovery could potentially be misused in other contexts. Microsoft’s approach attempts to balance innovation with appropriate safeguards, though the effectiveness of these measures will only become clear as the platform is adopted more widely.

The bigger picture: How Microsoft’s AI platform could reshape the pace of human innovation

Microsoft’s entry into scientific AI comes at a time when the field of accelerated discovery is heating up. The ability to compress research timelines could have profound implications for addressing urgent global challenges, from drug discovery to climate change solutions.

What differentiates Microsoft’s approach is its focus on accessibility for non-computational scientists and its integration with the company’s existing cloud infrastructure and future quantum ambitions. By allowing domain experts to directly leverage advanced computing without intermediaries, Microsoft could potentially remove a significant bottleneck in scientific progress.

“The big efficiencies are coming from places where, instead of me cramming additional domain knowledge, in this case, a scientist having learned to code, we’re basically saying, ‘Actually, we’ll let the agentic AI do that, you can do what you do, which is use your PhD and get forward progress,’” Zander explained.

This democratization of advanced computational methods could lead to a fundamental shift in how scientific research is conducted globally. Smaller labs and institutions in regions with less computational infrastructure might suddenly gain access to capabilities previously available only to elite research institutions.

However, the success of Microsoft Discovery will ultimately depend on how effectively it integrates into complex existing research workflows and whether its AI agents can truly understand the nuances of specialized scientific domains. The scientific community is notoriously rigorous and skeptical of new methodologies – Microsoft will need to demonstrate consistent, reproducible results to gain widespread adoption.

The platform enters private preview today, with pricing details yet to be announced. Microsoft indicates that smaller research labs will be able to access the platform through Azure, with costs structured similarly to other cloud services.

“At the end of the day, our goal, from a business perspective, is that it’s all about enabling that core platform, as opposed to you having to stand up,” Zander said. “It’ll just basically ride on top of the cloud and make it much easier for people to do.”

Accelerating the future: When AI meets scientific method

As Microsoft builds out its ambitious scientific AI platform, it positions itself at a unique juncture in the history of both computing and scientific discovery. The scientific method – a process refined over centuries – is now being augmented by some of the most advanced artificial intelligence ever created.

Microsoft Discovery represents a bet that the next era of scientific breakthroughs won’t come from either brilliant human minds or powerful AI systems working in isolation, but from their collaboration – where AI handles the computational heavy lifting while human scientists provide the creativity, intuition, and critical thinking that machines still lack.

“If you think about chemistry, materials sciences, materials actually impact about 98% of the world,” Zander noted. “Everything, the desks, the displays we’re using, the clothing that we’re wearing. It’s all materials.”

The implications of accelerating discovery in these domains extend far beyond Microsoft’s business interests or even the tech industry. If successful, platforms like Microsoft Discovery could fundamentally alter the pace at which humanity can innovate in response to existential challenges – from climate change to pandemic prevention.

The question now isn’t whether AI will transform scientific research, but how quickly and how deeply. As Zander put it: “We need to start working faster.” In a world facing increasingly complex challenges, Microsoft is betting that the combination of human scientific expertise and agentic AI might be exactly the acceleration we need.
