
Nvidia unveils GeForce RTX 50 Series graphics cards with big performance gains




Nvidia launched its much-awaited GeForce RTX 50 Series graphics processing units (GPUs), based on its Blackwell RTX technology.

Jensen Huang, CEO of Nvidia, disclosed the news during his opening keynote speech at CES 2025, the big tech trade show in Las Vegas this week.

“Blackwell, the engine of AI, has arrived for PC gamers, developers and creatives,” said Huang. “Fusing AI-driven neural rendering and ray tracing, Blackwell is the most significant computer graphics innovation since we introduced programmable shading 25 years ago.”

The new RTX Blackwell Neural Rendering Architecture comes with about 92 billion transistors. It has 125 shader teraflops of performance, 380 RT teraflops, 4,000 AI TOPS, 1.8 terabytes per second of memory bandwidth, G7 memory (from Micron) and an AI-management processor. The top SKU delivers more than 3,352 trillion AI operations per second (TOPS) of computing power.
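The bandwidth figure follows from bus width and per-pin data rate. The sketch below is illustrative arithmetic only; the 512-bit bus and 28 Gbps GDDR7 data rate are assumptions for the top SKU, not figures stated in this article.

```python
# Illustrative memory-bandwidth arithmetic. The bus width and per-pin data
# rate below are assumed values, not figures quoted in the article.
bus_width_bits = 512        # assumed memory-bus width for the top SKU
data_rate_gbit_s = 28       # assumed GDDR7 per-pin data rate, Gbit/s
bandwidth_gb_s = bus_width_bits * data_rate_gbit_s / 8   # GB/s
print(f"~{bandwidth_gb_s:.0f} GB/s, about {bandwidth_gb_s / 1000:.1f} TB/s")   # ~1792 GB/s ≈ 1.8 TB/s
```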

“The programmable shader is also able to carry neural networks,” Huang said.

A neural face rendering.

Among the new technologies in this generation are RTX Neural Shaders, DLSS 4, RTX Neural Face rendering to create more realistic human faces, RTX Mega Geometry for rendering environments, and Reflex 2.

DLSS 4 can now generate multiple frames at once thanks to advanced AI technology, which makes for much better frame rates.

Nvidia showed one scene rendered at 27 frames per second with DLSS turned off, with 71 milliseconds of PC latency. DLSS 2, with its super resolution tech, handles that scene at 71 FPS and 34 milliseconds of PC latency. DLSS 3.5 manages 140 FPS and 33 milliseconds. But DLSS 4 comes in at a whopping 247 FPS and 34 milliseconds, more than eight times the performance of rendering without AI-based predictive processing.
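For readers who want to check the arithmetic, here is a minimal Python sketch that derives the speedup factors from the frame rates quoted above; 247 FPS versus 27 FPS works out to roughly 9x, consistent with the "more than eight times" claim.

```python
# Speedup arithmetic from the demo-scene figures quoted above.
scene = {
    "DLSS off": {"fps": 27,  "latency_ms": 71},
    "DLSS 2":   {"fps": 71,  "latency_ms": 34},
    "DLSS 3.5": {"fps": 140, "latency_ms": 33},
    "DLSS 4":   {"fps": 247, "latency_ms": 34},
}
baseline_fps = scene["DLSS off"]["fps"]
for mode, stats in scene.items():
    speedup = stats["fps"] / baseline_fps
    print(f"{mode:9s}: {stats['fps']:3d} FPS ({speedup:.1f}x), {stats['latency_ms']} ms PC latency")
# DLSS 4: 247 / 27 ≈ 9.1x the frame rate of rendering without AI assistance.
```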

Nvidia’s SKUs include the GeForce RTX 50 Series desktop family. At the top of the line is the GeForce RTX 5090, with 3,404 AI TOPS and 32GB of G7 memory for $1,999. The GeForce RTX 5080 offers 1,800 AI TOPS and 16GB of G7 memory for $999. The GeForce RTX 5070 Ti has 1,406 AI TOPS and 16GB of G7 memory for $749, and the GeForce RTX 5070, which Nvidia says delivers the performance of a 4090, has 1,117 AI TOPS and 12GB of G7 memory for $549.

Nvidia also said the GeForce RTX 50 Series will come to laptops with twice the efficiency, delivering more performance at half the power of the previous generation. Blackwell Max-Q provides up to 40% more battery life and support for generative AI models twice as large, in laptops as thin as 14.9 millimeters.

As far as pricing goes, the laptops will come as follows: the RTX 5090 at 1,824 AI TOPS and 24GB for $2,899; the RTX 5080 at 1,334 AI TOPS and 16GB for $2,199; the RTX 5070 Ti at 992 AI TOPS and 12GB for $1,599; and the RTX 5070 at 798 AI TOPS and 8GB for $1,299.

Those are steep prices, but they represent the high end of value in GPUs for gaming.

Nvidia unveiled its GeForce RTX 50 Series graphics chips.

Justin Walker, senior director of GeForce products, said in a press briefing that Nvidia’s GeForce graphics card brand just celebrated its 25th anniversary. GeForce was the hit product that helped cement the company’s dominance in the ultra-competitive GPU market, and it gave Nvidia a springboard from graphics into AI processing, a big reason the company is now the most valuable in the world, with a market capitalization of $3.65 trillion.

Now, it turns out, Walker said, AI can be used to help accelerate the performance of GPUs.

“The great thing about that is that while we are now an AI company, as well as gaming, our gaming side still benefits tremendously from the fact that we are doing AI,” Walker said.

And that’s the root of one of the announcements: Nvidia took the wraps off DLSS 4, which uses AI to predict the next pixel that needs to be drawn and then preemptively renders it based on that prediction. The AI TOPS rating (trillions of AI operations per second, a measure of AI performance) will be up to 4,000.

The new 50 Series architecture has 1.8 terabytes per second of memory bandwidth, and it taps the same Blackwell architecture that is the foundation of Nvidia’s latest AI processors.

The new GPU also has neural rendering technologies such as neural shaders.

“This is probably the biggest thing to happen in graphics since programmable shaders. We are actually going to be embedding small neural networks within the shaders themselves, and these neural networks can do certain things much more effectively and efficiently than traditional shaders,” Walker said.

The tech will let Nvidia compress textures by a factor of eight to make better use of GPU memory.
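As a rough illustration of what an 8x compression ratio means for video memory, the sketch below works through a single texture; the 4K-by-4K RGBA8 format is an assumed example, not one cited in the article.

```python
# Back-of-the-envelope texture-memory arithmetic (assumed texture format).
width, height, bytes_per_texel = 4096, 4096, 4      # assumed 4K x 4K RGBA8 texture
compression_ratio = 8                                # the 8x figure cited above
uncompressed_mb = width * height * bytes_per_texel / (1024 ** 2)
compressed_mb = uncompressed_mb / compression_ratio
print(f"{uncompressed_mb:.0f} MB uncompressed -> {compressed_mb:.0f} MB at {compression_ratio}x")
# 64 MB -> 8 MB per texture, which is how the tech stretches a fixed VRAM budget.
```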

The Reflex 2 tech reduces the latency between when a gamer makes a movement and when it shows up on the screen by updating rendered frames based on the latest input, making gameplay up to 75% more responsive.

The 5090 is likely to ship in January, with the rest of the lineup arriving in the March time frame; the company will name the partners shipping with the technology later. A number of games, like Cyberpunk 2077, can run in 4K resolution at over 200 frames per second.

Walker said the company will have a list of games that take advantage of the various features.

Nvidia DLSS 4 Boosts Performance by Up to 8 times

Nvidia’s DLSS 4 AI tech is paying off.

DLSS 4 debuts Multi Frame Generation to boost frame rates by using AI to generate up to three frames per rendered frame. It works in unison with the suite of DLSS technologies to increase performance by up to 8x over traditional rendering, while maintaining responsiveness with Nvidia Reflex technology.
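The frame accounting behind that claim is simple: with up to three AI-generated frames inserted per rendered frame, the displayed frame rate can reach four times the natively rendered rate, and the remaining gains come from the rest of the DLSS stack (super resolution and ray reconstruction). A minimal sketch of that accounting:

```python
# Frame accounting for Multi Frame Generation: displayed FPS vs. natively rendered FPS.
def displayed_fps(rendered_fps: float, generated_per_rendered: int = 3) -> float:
    """Each natively rendered frame is followed by up to `generated_per_rendered`
    AI-generated frames, so the display rate is (1 + N) times the render rate."""
    return rendered_fps * (1 + generated_per_rendered)

print(displayed_fps(60))   # 240.0: a 60 FPS render cadence displayed at 240 FPS
```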

DLSS 4 also introduces the graphics industry’s first real-time application of the transformer model architecture. Transformer-based DLSS Ray Reconstruction and Super Resolution models use 2x more parameters and 4x more compute to provide greater stability, reduced ghosting, higher details and enhanced anti-aliasing in game scenes. DLSS 4 will be supported on GeForce RTX 50 Series GPUs in over 75 games and applications on the day of launch.

Nvidia Reflex 2 introduces Frame Warp, an innovative technique to reduce latency in games by updating a rendered frame based on the latest mouse input just before it is sent to the display. Reflex 2 can reduce latency by up to 75%. This gives gamers a competitive edge in multiplayer games and makes single-player titles more responsive.
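Conceptually, Frame Warp is a late-reprojection step: just before a frame is sent to the display, it is adjusted for the input that arrived after it was rendered. The sketch below is a crude, translation-only illustration of that idea, not Nvidia’s implementation; the real technique also has to fill in the disoccluded regions the shift exposes.

```python
import numpy as np

def late_warp(frame: np.ndarray, mouse_dx_px: int, mouse_dy_px: int) -> np.ndarray:
    """Crude illustration of late reprojection: shift the already-rendered frame
    by the mouse motion accumulated since it was rendered. np.roll wraps pixels
    around the edges, whereas a real implementation would fill those regions."""
    return np.roll(frame, shift=(-mouse_dy_px, -mouse_dx_px), axis=(0, 1))

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)     # stand-in for a rendered frame
warped = late_warp(frame, mouse_dx_px=6, mouse_dy_px=-2)
print(warped.shape)                                    # (1080, 1920, 3)
```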

Blackwell Brings AI to Shaders

DLSS 4

Twenty-five years ago, Nvidia introduced GeForce 3 and programmable shaders, which set the stage for two decades of graphics innovation, from pixel shading to compute shading to real-time ray tracing. Alongside the GeForce RTX 50 Series GPUs, Nvidia is introducing RTX Neural Shaders, which bring small AI networks into programmable shaders, unlocking film-quality materials, lighting and more in real-time games.

Rendering game characters is one of the most challenging tasks in real-time graphics, as people are prone to notice the smallest errors or artifacts in digital humans. RTX Neural Faces takes a simple rasterized face and 3D pose data as input, and uses generative AI to render a temporally stable, high-quality digital face in real time.

RTX Neural Faces is complemented by new RTX technologies for ray-traced hair and skin. Along with the new RTX Mega Geometry, which enables up to 100 times more ray-traced triangles in a scene, these advancements are poised to deliver a massive leap in realism for game characters and environments.

The power of neural rendering, DLSS 4 and the new DLSS transformer model is showcased on GeForce RTX 50 Series GPUs with Zorah, a groundbreaking new technology demo from Nvidia.

Autonomous Game Characters

Nvidia 5070 has the performance of a 4090.

GeForce RTX 50 Series GPUs bring industry-leading AI TOPS to power autonomous game characters in parallel with game rendering.

Nvidia is introducing a suite of new Nvidia ACE technologies that enable game characters to perceive, plan and act like human players. ACE-powered autonomous characters are being integrated into Krafton’s PUBG: Battlegrounds and InZOI, the publisher’s upcoming life simulation game, as well as Wemade Next’s MIR5.

In PUBG, companions powered by NVIDIA ACE plan and execute strategic actions, dynamically working with human players to ensure survival. InZOI features Smart Zoi characters that autonomously adjust behaviors based on life goals and in-game events. In MIR5, large language model (LLM)-driven raid bosses adapt tactics based on player behavior, creating more dynamic, challenging encounters.

AI Foundation Models for RTX AI PCs

Nvidia’s RTX Blackwell

Showcasing how RTX enthusiasts and developers can use NVIDIA NIM microservices to build AI agents and assistants, NVIDIA will release a pipeline of NIM microservices and AI Blueprints for RTX AI PCs from top model developers such as Black Forest Labs, Meta, Mistral and Stability AI.

Use cases span LLMs, vision language models, image generation, speech, embedding models for retrieval-augmented generation, PDF extraction and computer vision. The NIM microservices include all the necessary components for running AI on PCs and are optimized for deployment across all NVIDIA GPUs.
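As a hedged sketch of what calling a locally hosted LLM NIM microservice might look like, assuming the service exposes an OpenAI-compatible chat endpoint on the local machine; the URL, port and model name below are placeholder assumptions, not details confirmed by this article.

```python
import requests

# Hypothetical local NIM endpoint; URL, port and model identifier are placeholder
# assumptions for illustration, not values taken from the article.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama-3.1-8b-instruct",   # placeholder model identifier
    "messages": [{"role": "user", "content": "Summarize the attached meeting notes in two sentences."}],
    "max_tokens": 128,
}

resp = requests.post(NIM_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```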

To demonstrate how enthusiasts and developers can use NIM to build AI agents and assistants, NVIDIA today previewed Project R2X, a vision-enabled PC avatar that can put information at a user’s fingertips, assist with desktop apps and video conference calls, read and summarize documents, and more.

Jensen Huang, CEO of Nvidia.

The GeForce RTX 50 Series GPUs supercharge creative workflows. RTX 50 Series GPUs are the first consumer GPUs to support FP4 precision, boosting AI image generation performance for models such as FLUX by 2x and enabling generative AI models to run locally in a smaller memory footprint, compared with previous-generation hardware.
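The memory savings from lower precision are roughly proportional to bits per weight. The sketch below illustrates that scaling for an image model of roughly FLUX’s size; the 12-billion-parameter count is an assumption for illustration, not a figure from the article.

```python
# Illustrative weight-memory arithmetic: footprint scales with bits per parameter.
# The 12B parameter count is an assumed, FLUX-class figure, not stated in the article.
params = 12e9
for name, bits in [("FP16", 16), ("FP8", 8), ("FP4", 4)]:
    weight_gb = params * bits / 8 / 1e9
    print(f"{name}: ~{weight_gb:.0f} GB of weights")
# FP16: ~24 GB, FP8: ~12 GB, FP4: ~6 GB (weights only, before activations and overhead)
```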

The NVIDIA Broadcast app gains two AI-powered beta features for livestreamers: Studio Voice, which upgrades microphone audio, and Virtual Key Light, which relights faces for polished streams. Streamlabs is introducing the Intelligent Streaming Assistant, powered by NVIDIA ACE and Inworld AI, which acts as a cohost, producer and technical assistant to enhance livestreams.

The Nvidia Founders Editions of the GeForce RTX 5090, RTX 5080 and RTX 5070 GPUs will be available directly from nvidia.com and select retailers worldwide.

Stock-clocked and factory-overclocked models will be available from top add-in card providers such as ASUS, Colorful, Gainward, GALAX, GIGABYTE, INNO3D, KFA2, MSI, Palit, PNY and ZOTAC, and in desktops from system builders including Falcon Northwest, Inniarc, MAINGEAR, Mifcom, ORIGIN PC, PC Specialist and Scan Computers.

Laptops with GeForce RTX 5090, RTX 5080 and RTX 5070 Ti Laptop GPUs will be available starting in March, and RTX 5070 Laptop GPUs will be available starting in April from the world’s top manufacturers, including Acer, ASUS, Dell, GIGABYTE, HP, Lenovo, MECHREVO, MSI and Razer.


Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Cloudflare problems hit websites around the world

Ominously, 31 minutes before Cloudflare acknowledged the problems with its global network, it had also reported problems with its support portal. “Our support portal provider is currently experiencing issues, and as such customers might encounter errors viewing or responding to support cases. Responses on customer inquiries are not affected, and

Read More »

Azure blocks record 15 Tbps DDoS attack as IoT botnets gain new firepower

Varkey added that modern DDoS attacks increasingly resemble hit-and-run incidents, striking suddenly, lasting only minutes, and disappearing before defenses fully engage. He said their speed and intensity require always-on protection and preemptive resilience rather than reactive mitigation. The attack shows how millions of consumer devices have effectively become strategic weapons capable of

Read More »

Atlantic LNG Freight Rates at Highest in Nearly 2 Years

The cost of transporting liquefied natural gas across the Atlantic Ocean surged to the highest in almost two years, as expanding exports from North America boosted demand for tankers. The spot rate to hire an LNG vessel for delivery from the US to Europe jumped 19 percent to $98,250 per day on Monday, the highest since January 2024, according to Spark Commodities, which tracks shipping prices. Costs to hire a tanker in the Pacific Ocean also jumped 15 percent to the highest in over a year, the data show. This is a stark turnaround for the market, which had languished at rock-bottom prices for most of the year amid a glut of available ships. Output from North America has increased steadily as new projects ramp up, requiring more vessels to deliver the fuel to customers in Europe and Asia. The 30-day moving average for LNG exports from North America has climbed nearly 40 percent year-to-date, according to ship-tracking data compiled by Bloomberg.  Higher freight rates threaten to widen the spread between Asian and European gas prices, as it will be more expensive to send US shipments to the Pacific. A company booked a vessel for December in the Atlantic for about $100,000 per day, traders said. Likewise, when freight rates were lower, companies sent some vessels to Asia, further exacerbating a shortage of ships in the Atlantic, they added. Still, the surge in charter rates is likely to have peaked and has “limited potential to run much higher,” according to Han Wei, a BloombergNEF analyst. “On the LNG tanker supply side, we’ll continue to see strong new build deliveries, which should keep spot charter rates in check,” he said. What do you think? We’d love to hear from you, join the conversation on the Rigzone Energy Network. The Rigzone Energy Network is a new social

Read More »

Insights: What’s next for Permian basin electrification?

This Insights episode of the Oil & Gas Journal ReEnterprised podcast examines the rapidly growing power demands in the Permian basin region and the implications for operators, utilities, and adjacent industries. OGJ Editor-in-Chief Chris Smith interviews Will Kernan, Power Solutions Strategy Manager for Caterpillar Oil & Gas, on why electricity demand has surged by multiple gigawatts since 2021 and why traditional reliance on the grid is no longer sufficient to ensure timely project development and stable operations. Kernan outlines how accelerating electricity demand from both oil and gas operations and new industrial entrants—particularly data centers—has strained transmission capacity, driving greater interest in on-site natural-gas-fired generation and microgrid models. The episode closes with a look at major grid-expansion proposals under consideration in Texas, their long lead-times, and how distributed generation, waste-gas utilization, and field-scale microgrids will shape a more flexible and resilient power ecosystem for the Permian in the years ahead. Highlights: 1:50 – Permian electricity demand surging: up ~4 GW since 2021 to 7.5 GW total—driven by upstream electrification, compression, midstream growth, and residential/commercial load. 3:13 – Grid is no longer the “easy button.” Utility interconnection timelines of 3–5+ years can’t

Read More »

Venture Global CEO: CP2 capacity could grow to 30 million tpy

The CP2 LNG plant Venture Global Inc. is building out in Cameron Parish, La., will be able to supply 30 million tonnes/year (tpy) versus its currently permitted capacity of 28 million tpy, Mike Sabel, the company’s chief executive officer and executive co-chairman, said Nov. 10. Speaking after Virginia-based Venture Global reported its third-quarter results as well as the signing of a 1-million tpy supply agreement with Spain’s Naturgy, Sabel said teams have been applying learnings from the company’s Calcasieu Pass and Plaquemines plants. That includes from the tens of thousands of data points those plants are generating every minute. “We have a dedicated team of data scientists and process engineers and AI programmers that have been incorporating that data into our current operations, but also into design changes as we’ve learned some very surprising interactions of different parts of the facilities […] that we expect will carry over into CP2,” Sabel said. “We’ll have to go back and get the export authorization moved from 28 up to 30 but we think CP2 will be doing even better than Plaquemines, which is doing the best that any project has ever done.” As of Oct. 31, eight of the 26 planned liquefaction trains at CP2—which is forecast to cost a total of $29 billion—had been completed. Sabel said more than 3,500 construction workers are active at the site, which spans 700 acres. The Venture Global team this summer took final investment decision on the project and during the third quarter won final authorization from the US Department of Energy to export LNG to non-free trade agreement nations. During the 3 months that ended Sept. 30, Venture Global exported 100 LNG cargos, up from 89 in the spring and 31 in third-quarter 2024. That translated into net income of $429 million on more than $3.3 billion in

Read More »

TotalEnergies signs exploration license as operator of block offshore Guyana

TotalEnergies has become operator of a new oil and gas exploration license offshore Guyana. Following signing of a production sharing contract for Block S4 with Guyana’s Ministry of Natural Resources, TotalEnergies will hold 40% operated interest in the shallow-water block, alongside partners QatarEnergy (35%) and Petronas (25%). The signing follows the block’s 2023 award in the Guyana 2022 Licensing Round. Block S4 covers 1,788 sq km and lies about 50-100 km from shore. The initial work program consists of a 2,000 sq km 3D seismic acquisition.

Read More »

Equinor drills dry well in North Sea Snorre area

Equinor Energy AS has plugged a North Sea well drilled in the Snorre area. Wildcat well 34/6-9 S (Avbitertang) was drilled by the COSL Innovator drilling rig in production license 554, 35 km northeast of Snorre field and 125 km west of Florø. It was drilled to respective measured and vertical depths of 4,042 and 4,001 m subsea and was terminated in the Burton formation in the Lower Jurassic. Water depth at the site is 387 m. The well is the ninth wildcat well drilled within the license acreage and is the third exploration well drilled in this area this year. Like wells 34/8-20 S (Narvi Nord) and 34/6-8 S (Garantiana NV), the well was dry, the Norwegian Offshore Directorate said in a release Nov. 11. Geological information: The objective of the well was to prove petroleum in Lower Jurassic reservoir rocks in the Cook formation, which it encountered in a total of about 106 m, 39 m of which had moderate to good reservoir quality. Data was collected, including pressure points in the Cook formation. Equinor is operator of the license with 40% interest. Partners are Aker BP ASA (30%) and Var Energi ASA (30%).

Read More »

Hollub says Occidental ready for ‘harvesting’ phase after OxyChem deal

Occidental Petroleum Corp., Houston, will emphasize using existing infrastructure to get more from its reserves while building its unconventional enhanced oil recovery work in coming years. The greater focus on US assets—with the Permian basin at the core—comes after a 2-year span in which Vicki Hollub, president and chief executive officer, and her team spent $12 billion to buy CrownRock LP and announced a deal to sell the company’s OxyChem petrochemicals subsidiary to Berkshire Hathaway Inc. for $9.7 billion and use $6.5 billion of that amount to pay down debt.  Asked on a conference call Nov. 11 by Melius Research analyst James West if she is ready for “a quieter period, maybe a harvesting-type of a period,” Hollub chuckled and said, “Absolutely.” “I’m thankful to be at this point, finally,” she added. “This is where we wanted to be and this is where we needed to be. We’ve done everything that we set out to do with respect to being mostly a US company and with very high-quality, high-margin assets and assets that can sustain over the long term.” In the 3 months that ended Sept. 30, Occidental produced nearly 1.47 MMboe/d globally, which was an increase of almost 5% from the second quarter and up 4% from the same period in 2024. US oil production was 634,000 b/d—which was also up 4% year over year—while total output was 1.23 MMboe/d.  Production growth from US assets came predominantly from the Permian basin, where oil production rose to 422,000 b/d and total output rose to a record 800,000 boe/d. Helping drive the company’s results in the Permian was a 14% improvement from a year ago in shale well costs.

Read More »

Nvidia’s first exascale system is the 4th fastest supercomputer in the world

The world’s fourth exascale supercomputer has arrived, pitting Nvidia’s proprietary chip technologies against the x86 systems that have dominated supercomputing for decades. For the 66th edition of the TOP500, El Capitan holds steady at No. 1 while JUPITER Booster becomes the fourth exascale system on the list. The JUPITER Booster supercomputer, installed in Germany, uses Nvidia CPUs and GPUs and delivers a peak performance of exactly 1 exaflop, according to the November TOP500 list of supercomputers, released on Monday. The exaflop measurement is considered a major milestone in pushing computing performance to the limits. Today’s computers are typically measured in gigaflops and teraflops—and an exaflop translates to 1 billion gigaflops. Nvidia’s GPUs dominate AI servers installed in data centers as computing shifts to AI. As part of this shift, AI servers with Nvidia’s ARM-based Grace CPUs are emerging as a high-performance alternative to x86 chips. JUPITER is the fourth-fastest supercomputer in the world, behind three systems with x86 chips from AMD and Intel, according to TOP500. The top three supercomputers on the TOP500 list are in the U.S. and owned by the U.S. Department of Energy. The top two supercomputers—the 1.8-exaflop El Capitan at Lawrence Livermore National Laboratory and the 1.35-exaflop Frontier at Oak Ridge National Laboratory—use AMD CPUs and GPUs. The third-ranked 1.01-exaflop Aurora at Argonne National Laboratory uses Intel CPUs and GPUs. Intel scrapped its GPU roadmap after the release of Aurora and is now restructuring operations. The JUPITER Booster, which was assembled by France-based Eviden, has Nvidia’s GH200 superchip, which links two Nvidia Hopper GPUs with CPUs based on ARM designs. The CPU and GPU are connected via Nvidia’s proprietary NVLink chip-to-chip interconnect, which provides bandwidth of up to 900 gigabytes per second. JUPITER first entered the Top500 list at 793 petaflops, but

Read More »

Samsung’s 60% memory price hike signals higher data center costs for enterprises

Industry-wide price surge driven by AI: Samsung is not alone in raising prices. In October, TrendForce reported that Samsung and SK Hynix raised DRAM and NAND flash prices by up to 30% for Q4. Similarly, SK Hynix said during its October earnings call that its HBM, DRAM, and NAND capacity is “essentially sold out” for 2026, with the company posting record quarterly operating profit exceeding $8 billion, driven by surging AI demand. Industry analysts attributed the price increases to manufacturers redirecting production capacity. HBM production for AI accelerators consumes three times the wafer capacity of standard DRAM, according to a TrendForce report, citing remarks from Micron’s Chief Business Officer. After two years of oversupply, memory inventories have dropped to approximately eight weeks from over 30 weeks in early 2023. “The memory industry is tightening faster than expected as AI server demand for HBM, DDR5, and enterprise SSDs far outpaces supply growth,” said Manish Rawat, semiconductor analyst at TechInsights. “Even with new fab capacity coming online, much of it is dedicated to HBM, leaving conventional DRAM and NAND undersupplied. Memory is shifting from a cyclical commodity to a strategic bottleneck where suppliers can confidently enforce price discipline.” This newfound pricing power was evident in Samsung’s approach to contract negotiations. “Samsung’s delayed pricing announcement signals tough behind-the-scenes negotiations, with Samsung ultimately securing the aggressive hike it wanted,” Rawat said. “The move reflects a clear power shift toward chipmakers: inventories are normalized, supply is tight, and AI demand is unavoidable, leaving buyers with little room to negotiate.” Charlie Dai, VP and principal analyst at Forrester, said the 60% increase “signals confidence in sustained AI infrastructure growth and underscores memory’s strategic role as the bottleneck in accelerated computing.” Servers to cost 10-25% more: For enterprises building AI infrastructure, these supply dynamics translate directly into

Read More »

Arista, Palo Alto bolster AI data center security

“Based on this inspection, the NGFW creates a comprehensive, application-aware security policy. It then instructs the Arista fabric to enforce that policy at wire speed for all subsequent, similar flows,” Kotamraju wrote. “This ‘inspect-once, enforce-many’ model delivers granular zero trust security without the performance bottlenecks of hairpinning all traffic through a firewall or forcing a costly, disruptive network redesign.” The second capability is a dynamic quarantine feature that enables the Palo Alto NGFWs to identify evasive threats using Cloud-Delivered Security Services (CDSS). “These services, such as Advanced WildFire for zero-day malware and Advanced Threat Prevention for unknown exploits, leverage global threat intelligence to detect and block attacks that traditional security misses,” Kotamraju wrote. The Arista fabric can intelligently offload trusted, high-bandwidth “elephant flows” from the firewall after inspection, freeing it to focus on high-risk traffic. When a threat is detected, the NGFW signals Arista CloudVision, which programs the network switches to automatically quarantine the compromised workload at hardware line-rate, according to Kotamraju: “This immediate response halts the lateral spread of a threat without creating a performance bottleneck or requiring manual intervention.” The third feature is unified policy orchestration, where Palo Alto Networks’ management plane centralizes zone-based and microperimeter policies, and CloudVision MSS responds with the offload and enforcement of Arista switches. “This treats the entire geo-distributed network as a single logical switch, allowing workloads to be migrated freely across cloud networks and security domains,” Srikanta and Barbieri wrote. Lastly, the Arista Validated Design (AVD) data models enable network-as-a-code, integrating with CI/CD pipelines. AVDs can also be generated by Arista’s AVA (Autonomous Virtual Assist) AI agents that incorporate best practices, testing, guardrails, and generated configurations. “Our integration directly resolves this conflict by creating a clean architectural separation that decouples the network fabric from security policy. This allows the NetOps team (managing the Arista

Read More »

AMD outlines ambitious plan for AI-driven data centers

“There are very beefy workloads that you must have that performance for to run the enterprise,” he said. “The Fortune 500 mainstream enterprise customers are now … adopting Epyc faster than anyone. We’ve seen a 3x adoption this year. And what that does is drives back to the on-prem enterprise adoption, so that the hybrid multi-cloud is end-to-end on Epyc.” One of the key focus areas for AMD’s Epyc strategy has been its ecosystem build-out. It has almost 180 platforms, from racks to blades to towers to edge devices, and 3,000 solutions in the market on top of those platforms. One of the areas where AMD pushes into the enterprise is what it calls industry or vertical workloads. “These are the workloads that drive the end business. So in semiconductors, that’s telco, it’s the network, and the goal there is to accelerate those workloads and either driving more throughput or drive faster time to market or faster time to results. And we almost double our competition in terms of faster time to results,” said McNamara. And it’s paying off. McNamara noted that over 60% of the Fortune 100 are using AMD, and that’s growing quarterly. “We track that very, very closely,” he said. The other question is whether they are getting new customer acquisitions: customers using Epyc for the first time. “We’ve doubled that year on year.” AMD didn’t just brag, it laid out a road map for the next two years, and 2026 is going to be a very busy year. That will be the year that new CPUs, both client and server, built on the Zen 6 architecture begin to appear. On the server side, that means the Venice generation of Epyc server processors. Zen 6 processors will be built on 2 nanometer design generated by (you guessed

Read More »

Building the Regional Edge: DartPoints CEO Scott Willis on High-Density AI Workloads in Non-Tier-One Markets

When DartPoints CEO Scott Willis took the stage on “the Distributed Edge” panel at the 2025 Data Center Frontier Trends Summit, his message resonated across a room full of developers, operators, and hyperscale strategists: the future of AI infrastructure will be built far beyond the nation’s tier-one metros. On the latest episode of the Data Center Frontier Show, Willis expands on that thesis, mapping out how DartPoints has positioned itself for a moment when digital infrastructure inevitably becomes more distributed, and why that moment has now arrived. DartPoints’ strategy centers on what Willis calls the “regional edge”—markets in the Midwest, Southeast, and South Central regions that sit outside traditional cloud hubs but are increasingly essential to the evolving AI economy. These are not tower-edge micro-nodes, nor hyperscale mega-campuses. Instead, they are regional data centers designed to serve enterprises with colocation, cloud, hybrid cloud, multi-tenant cloud, DRaaS, and backup workloads, while increasingly accommodating the AI-driven use cases shaping the next phase of digital infrastructure. As inference expands and latency-sensitive applications proliferate, Willis sees the industry’s momentum bending toward the very markets DartPoints has spent years cultivating. Interconnection as Foundation for Regional AI Growth: A key part of the company’s differentiation is its interconnection strategy. Every DartPoints facility is built to operate as a deeply interconnected environment, drawing in all available carriers within a market and stitching sites together through a regional fiber fabric. Willis describes fiber as the “nervous system” of the modern data center, and for DartPoints that means creating an interconnection model robust enough to support a mix of enterprise cloud, multi-site disaster recovery, and emerging AI inference workloads. The company is already hosting latency-sensitive deployments in select facilities—particularly inference AI and specialized healthcare applications—and Willis expects such deployments to expand significantly as regional AI architectures become more widely

Read More »

Key takeaways from Cisco Partner Summit

Brian Ortbals, senior vice president from World Wide Technology, which is one of Cisco’s biggest and most important partners, stated: “Cisco engaged partners early in the process and took our feedback along the way. We believe now is the right time for these changes as it will enable us to capitalize on the changes in the market.” The reality is, the more successful its more-than-half-a-million partners are, the more successful Cisco will be. Platform approach is coming together: When Jeetu Patel took the reins as chief product officer, one of his goals was to make the Cisco portfolio a “force multiplier.” Patel has stated repeatedly that, historically, Cisco acted more as a technology holding company with good products in networking, security, collaboration, data center and other areas. In this case, product breadth was not an advantage, as everything must be sold as “best of breed,” which is a tough ask of the salesforce and partner community. Since then, there have been many examples of the coming together of the portfolio to create products that leverage the breadth of the platform. The latest is the Unified Edge appliance, an all-in-one solution that brings together compute, networking, storage and security. Cisco has been aggressive with AI products in the data center, and Cisco Unified Edge complements that work with a device designed to bring AI to edge locations. This is ideally suited for retail, manufacturing, healthcare, factories and other industries where it’s more cost-effective and performant to run AI where the data lives.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation. AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends: It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »