
Nvidia launches AI-first DGX Personal Computing Systems


Nvidia announced that Taiwan’s leading system manufacturers are set to build Nvidia DGX Spark and DGX Station systems.

Growing partnerships with Acer, Gigabyte and MSI will extend the availability of DGX Spark and DGX Station personal AI supercomputers — empowering a global ecosystem of developers, data scientists and researchers with unprecedented performance and efficiency.

Enterprises, software providers, government agencies, startups and research institutions need robust systems that can deliver the performance and capabilities of an AI server in a desktop form factor without compromising on data size, proprietary model privacy or the ability to scale.

The rise of agentic AI systems capable of autonomous decision-making and task execution amplifies these demands. Powered by the Nvidia Grace Blackwell platform, DGX Spark and DGX Station will enable developers to prototype, fine-tune and run inference on models from the desktop to the data center.

“AI has revolutionized every layer of the computing stack — from silicon to software,” said Jensen Huang, CEO of Nvidia, in a keynote talk at Computex 2025 in Taiwan. “Direct descendants of the DGX-1 system that ignited the AI revolution, DGX Spark and DGX Station are created from the ground up to power the next generation of AI research and development.”

DGX Spark fuels innovation

DGX Spark is equipped with the Nvidia GB10 Grace Blackwell Superchip and fifth-generation Tensor Cores. It delivers up to 1 petaflop of AI compute and 128GB of unified memory, and enables seamless exporting of models to Nvidia DGX Cloud or any accelerated cloud or data center infrastructure.

Delivering powerful performance and capabilities in a compact package, DGX Spark lets developers, researchers, data scientists and students push the boundaries of generative AI and accelerate workloads across industries.

DGX Station advances AI innovation

Built for the most demanding AI workloads, DGX Station features the Nvidia GB300 Grace Blackwell Ultra Desktop Superchip, which offers up to 20 petaflops of AI performance and 784GB of unified system memory. The system also includes the Nvidia ConnectX-8 SuperNIC, supporting networking speeds of up to 800Gb/s for high-speed connectivity and multi-station scaling.

DGX Station can serve as an individual desktop for one user running advanced AI models using local data, or as an on-demand, centralized compute node for multiple users. The system supports Nvidia Multi-Instance GPU technology to partition into as many as seven instances — each with its own high-bandwidth memory, cache and compute cores — serving as a personal cloud for data science and AI development teams.
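
Nvidia has not published a desktop workflow for claiming one of those partitions, but as a rough, hypothetical sketch of how a per-user process is typically pinned to a single MIG slice (the UUID below is a placeholder, and the pattern assumes the standard CUDA_VISIBLE_DEVICES mechanism applies to DGX Station as it does to other MIG-capable GPUs):

```python
import os

# Hypothetical MIG instance UUID; on a real system you would list the available
# partitions first (for example with `nvidia-smi -L`) and paste one of them here.
MIG_DEVICE = "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# Restrict this process to a single MIG slice; must happen before CUDA initializes.
os.environ["CUDA_VISIBLE_DEVICES"] = MIG_DEVICE

import torch  # imported after setting the variable so the restriction takes effect

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"Visible device: {props.name}")
    print(f"Memory available to this slice: {props.total_memory / 1e9:.1f} GB")
else:
    print("No CUDA device visible -- check the MIG UUID and driver configuration.")
```

Each user or team process pinned this way sees only its own slice's memory and compute, which is what lets a single DGX Station behave like a small shared cloud.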

To give developers a familiar user experience, DGX Spark and DGX Station mirror the software architecture that powers industrial-strength AI factories. Both systems use the Nvidia DGX operating system, preconfigured with the latest Nvidia AI software stack, and include access to Nvidia NIM microservices and Nvidia Blueprints.

Developers can use common tools, such as PyTorch, Jupyter and Ollama, to prototype, fine-tune and perform inference on DGX Spark and seamlessly deploy to DGX Cloud or any accelerated data center or cloud infrastructure.
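
As a loose illustration of that desktop-to-cloud loop, the sketch below uses plain PyTorch with a toy model and synthetic data (none of it Nvidia-specific): a short local fine-tuning step followed by a checkpoint export that a DGX Cloud or data center job could pick up.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy stand-in for a model being prototyped locally; a real workflow would
# load an existing checkpoint here instead.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for local, private training data.
x = torch.randn(32, 512, device=device)
y = torch.randint(0, 10, (32,), device=device)

# One fine-tuning step on the desktop...
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"step loss: {loss.item():.4f}")

# ...then export a checkpoint that a larger training or serving job in
# DGX Cloud or any other accelerated environment could pick up.
torch.save(model.state_dict(), "checkpoint.pt")
```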

Dell Technologies is among the first global system builders to develop DGX Spark and DGX Station — helping address the rising enterprise demand for powerful, localized AI computing solutions.

“There’s a clear shift among consumers and enterprises to prioritize systems that can handle the next generation of intelligent workloads,” said Michael Dell, chairman and CEO of Dell Technologies, in a statement. “The interest in Nvidia DGX Spark and Nvidia DGX Station signals a new era of desktop computing, unlocking the full potential of local AI performance. Our portfolio is designed to meet these needs. Dell Pro Max with GB10 and Dell Pro Max with Nvidia GB300 give organizations the infrastructure to integrate and tackle large AI workloads.”

HP Inc. is bolstering the future of AI computing by offering these new solutions that enable businesses to unlock the full potential of AI performance.

“Through our collaboration with Nvidia, we are delivering a new set of AI-powered devices and experiences to further advance HP’s future-of-work ambitions to enable business growth and professional fulfillment,” said Enrique Lores, president and CEO of HP, in a statement. “With the HP ZGX, we are redefining desktop computing — bringing data-center-class AI performance to developers and researchers to iterate and simulate faster, unlocking new opportunities.”

Expanded availability and partner ecosystem

DGX Spark will be available from Acer, ASUS, Dell Technologies, Gigabyte, HP, Lenovo and MSI, as well as global channel partners, starting in July. Reservations for DGX Spark are now open on nvidia.com and through Nvidia partners.

DGX Station is expected to be available from ASUS, Dell Technologies, Gigabyte, HP and MSI later this year.


Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


How AI changes your multicloud network architecture

As enterprises find ever more use cases for generative AI (genAI) and agentic AI, their ability to achieve optimal business outcomes from these use cases will depend on the strength of their hybrid multicloud networks. Typically, these workloads demand higher-bandwidth, low-latency connectivity for centralized application delivery (LLM development), and AI

Read More »

6 trends that will shape the future of the cloud: Gartner

For this reason, Gartner recommends identifying specific use cases and planning the applications and data distributed across the organization that could benefit from a cross-cloud deployment model. This allows workloads to operate collaboratively across different cloud platforms, as well as different on-premises and co-location facilities. 4. Industry solutions According to

Read More »

New England Patriots kick off network upgrade

The longer-term roadmap with NWN includes a refresh of the stadium’s 1,800 Extreme Networks Wi-Fi 6 access points to either Wi-Fi 6E or 7, a refresh of the network’s 80 Cisco physical and virtual firewalls, followed by a network consolidation project. On top of all that, the Kraft Group is

Read More »

Aberdeen’s Centurion makes 21st acquisition of Manchester-based Aerial

Dyce-headquartered Centurion Group has acquired Manchester-based Aerial Platforms Ltd (APL), a rental provider of powered access lifting equipment for working-at-height. Headquartered in Leigh, with additional hubs in Newcastle and Carlisle, APL is an owner-managed business that delivers safety-driven scissor lifts, boom lifts and telehandlers across the construction, infrastructure and retail sectors. Through its new purchase, Centurion has gained three new operating locations in England and access to major industrial customers in the lifting equipment space. The company said that APL will benefit from its financial scale and additional backing to invest in new rental assets, infrastructure and capabilities – supporting accelerated growth across the UK. Centurion Group revealed this year that it has prepared a $100-million (£81m) acquisition fund to help drive the next stage of its growth. Prior to its purchase of APL, Centurion had bought 20 businesses around the world since it was formed in 2017. The company has previously said that acquisitions are central to its strategy to grow its market share in renewables, minerals, infrastructure, environmental, defence and government. Former chief financial officer Euan Leask took over the company starting in February after Houston-based CEO Fernando Assing retired. Leask said APL “has a strong reputation across the UK in the powered access market, making it a great addition to our UK & Europe operations. “The acquisition not only expands our footprint across England, but also strengthens our position in the construction, infrastructure and retail markets, and grows the range of lifting services we can offer our customers.” Backed by private equity firm SCF Partners, Centurion has grown organically and by acquisition from around $200m in revenue when it was formed by the merger of SCF and ATR to approximately $500m at the end of 2024. Centurion added that it has an active pipeline of acquisition opportunities

Read More »

Electric utilities must disclose PJM votes under new Maryland law

Dive Brief: Maryland public electric utilities must disclose how they vote at PJM Interconnection stakeholder meetings under a law signed May 13 by Democratic Gov. Wes Moore. On or before February 1 each year, electric companies covered by the bill, or their local affiliates, must file a comprehensive report with the Maryland Public Service Commission that includes both public and nonpublic votes on matters before PJM. The first filing deadline comes next year. Similar bills are under consideration in Delaware, Pennsylvania and Illinois, all of which are partially or wholly within PJM territory. PJM is responsible for North America’s largest transmission grid.

Dive Insight: Maryland HB 121 passed both houses of the state legislature with overwhelming bipartisan support. A similar transparency requirement appeared last year in broader utility legislation that would have required all Maryland utilities to seek PJM membership and refrain from ratebasing lobbying expenses. The 2024 bill did not make it to Moore’s desk, but the prohibition on lobbying-expense ratebasing appeared in a comprehensive package this year that also requires separate rate structures for large-load customers, imposes new restrictions on gas infrastructure investments and authorizes solicitations for nearly 5 GW of dispatchable generation and energy storage resources. The 2025 bill now awaits the governor’s signature. HB 121 is a “common-sense bill” that prevents utilities from operating “in the dark” while “Maryland families struggle with soaring electric bills,” sponsor Lorig Charkoudian, a Democratic state lawmaker who represents parts of Maryland’s Washington, D.C. suburbs, said in a May 14 op-ed in the Baltimore Sun. “Until now, there was no requirement that utilities tell the public — or even state regulators — how they’re voting on transmission policies or market rules that can add hundreds of millions of dollars to our monthly bills,” Charkoudian said. A PJM representative indicated the new

Read More »

South Bow Sees Sequential Rise in Earnings

South Bow Corp. has reported rising figures for the first quarter of 2025, including a record throughput. The company said in its quarterly report that its net income was $88 million, up from $55 million for the previous quarter, but well below the $112 million reported for the corresponding quarter a year prior. The company recorded normalized earnings before interest, taxes, depreciation and amortization (EBITDA) of $266 million. A decline in demand for uncommitted capacity on South Bow’s pipeline systems led to an 8 percent reduction in normalized EBITDA compared to the fourth quarter of 2024. South Bow said its throughput in the first quarter reached 613,000 barrels per day (bbl/d) on the Keystone pipeline, with a System Operating Factor (SOF) of 98 percent, and approximately 726,000 bbl/d on the U.S. Gulf Coast segment of the Keystone pipeline system. In April, the company activated emergency response protocols due to an oil release at MP-171 of the Keystone pipeline near Fort Ransom, North Dakota. The company was able to restart the pipeline within days. South Bow’s Q1 revenue was $498 million, $10 million above that of the fourth quarter of 2024. However, the figure was below the $544 million reported for the corresponding quarter a year prior. In the Western Canadian Sedimentary Basin, pipeline capacity for crude oil remains greater than the supply, South Bow said. Looking ahead, the uncommitted capacity demand on South Bow’s Keystone Pipeline is anticipated to stay low in the short term. Moreover, swiftly evolving global trade policies and tariffs have created economic and geopolitical instability, resulting in considerable fluctuations in commodity prices and pricing differentials, the company said. To contact the author, email [email protected]

Read More »

North America Adds Rigs for First Time in Months

North America added five rigs week on week, according to Baker Hughes’ latest North America rotary rig count, which was released on May 16. Although the U.S. dropped a total of two rigs week on week, Canada added a total of seven rigs during the same period, taking the total North America rig count up to 697, comprising 576 rigs from the U.S. and 121 from Canada, the count outlined. Of the total U.S. rig count of 576, 563 rigs are categorized as land rigs, 11 are categorized as offshore rigs, and two are categorized as inland water rigs. The total U.S. rig count is made up of 473 oil rigs, 100 gas rigs, and three miscellaneous rigs, according to Baker Hughes’ count, which revealed that the U.S. total comprises 520 horizontal rigs, 41 directional rigs, and 15 vertical rigs. Week on week, the U.S. land and inland water rig counts each dropped by one, and the country’s offshore rig count remained unchanged, the count highlighted. The U.S. oil and gas rig counts each decreased by one week on week, and its miscellaneous rig count remained unchanged, the count showed. Baker Hughes revealed that the U.S. horizontal rig count dropped by two week on week, while its directional and vertical rig counts remained unchanged during the period. A major state variances subcategory included in the rig count showed that, week on week, New Mexico and Texas each dropped two rigs, and Wyoming and Ohio each added one rig. A major basin variances subcategory included in Baker Hughes’ rig count showed that, week on week, the Permian basin dropped three rigs and the Utica basin added one rig. Canada’s total rig count of 121 is made up of 74 oil rigs and 47 gas rigs, Baker Hughes pointed out. The country’s

Read More »

Hopes rise for EU and UK cooperation on energy trading and carbon capture and storage

The UK and the European Union (EU) are set to forge closer ties on energy trading and emerging technologies such as carbon capture and storage (CCS) following a landmark summit in London. UK Prime Minister Sir Keir Starmer hosted EU leaders as the two sides reached a post-Brexit deal which also covered fishing rights and defence. According to a joint UK-EU statement, policymakers will explore UK participation in the EU’s internal electricity market, including participating in EU trading platforms. The statement also outlined continued regulatory cooperation on emerging energy transition sectors such as hydrogen, CCS and biomethane. The deal could have implications for the future of interconnector projects between the UK and EU countries, as well as offshore carbon storage projects in the North Sea.

UK and EU cooperation on carbon capture and storage

Many of the UK’s CCS developments, including the track-2 Acorn and Viking projects, are aiming to eventually import captured CO2 from mainland European nations such as Germany.

© Supplied by Viking CCS: A map showing major industrial emitters in the UK in relation to the proposed Viking CCS project from Harbour Energy.

However, since the UK voted to leave the EU in 2016, the country has so far failed to secure any bilateral agreements with EU nations on cross-border CO2 transport and storage. Meanwhile, North Sea neighbour Norway has secured CO2 deals with EU members Denmark, Sweden, Belgium and the Netherlands, despite not being part of the EU. As a result, Norway’s Northern Lights CCS project is set to begin receiving international CO2 shipments later this year.

© Supplied by Northern Lights: The Northern Lights carbon capture and storage project in Norway.

Similarly, Denmark’s Greensand CCS project has already signed deals with a Swedish firm covering imported CO2 volumes. If the UK and the EU can align their regulatory schemes

Read More »

North Sea firms underestimating financial risks from net zero transition, study finds

Many UK oil and gas companies are underestimating the financial risks posed by the energy transition and are potentially exposing investors to significant losses, according to a study. Led by academics from the UK and France, the study explored how well transition risks were being accounted for by offshore firms. The study found the net zero shift was likely to reduce access to capital for fossil fuel companies, push up borrowing costs, and trigger large-scale write-downs – leading to some assets being stranded. Loughborough University lecturer Dr Freeman Owusu said these pressures “could have put the future viability of some companies in question”. “Our findings show that the transition to net zero presents significant risks for oil and gas companies in the UK,” Dr Owusu said. “These risks include rising operational costs, reduced access to finance, and increased financial pressure. “Together, these risks threaten the going concern of some oil and gas companies, lower market value, and have knock-on effects on the wider energy supply chain and government revenues.”

Smaller firms ‘most exposed’ to net zero risks

According to the study, smaller firms with higher emissions and fewer alternative business streams were seen as “most exposed” to these risks. The research identified issues surrounding the financial risks tied to the energy transition, as well as the need for clearer, more tailored company disclosures. The study found existing reporting frameworks do not fully capture the unique financial and accounting risks facing oil and gas firms during the energy transition.

© Supplied by Shutterstock: An oil rig in the North Sea.

Participants in the study called for greater transparency around environmental, social and governance (ESG) performance, alongside remaining reserves, plans for asset write-downs and evolving business models. Without these changes, oil and gas firms risked losing stakeholder trust and weakening their long-term prospects,

Read More »

Liquid cooling becoming essential as AI servers proliferate

“Facility water loops sometimes have good water quality, sometimes bad,” says My Truong, CTO at ZutaCore, a liquid cooling company. “Sometimes you have organics you don’t want to have inside the technical loop.” So there’s one set of pipes that goes around the data center, collecting the heat from the server racks, and another set of smaller pipes that lives inside individual racks or servers. “That inner loop is some sort of technical fluid, and the two loops exchange heat across a heat exchanger,” says Truong. The most common approach today, he says, is to use a single-phase liquid — one that stays in liquid form and never evaporates into a gas — such as water or propylene glycol. But it’s not the most efficient option. Evaporation is a great way to dissipate heat. That’s what our bodies do when we sweat. When water goes from a liquid to a gas it’s called a phase change, and it uses up energy and makes everything around it slightly cooler. Of course, few servers run hot enough to boil water — but they can boil other liquids. “Two phase is the most efficient cooling technology,” says Xianming (Simon) Dai, a professor at the University of Texas at Dallas. And it might be here sooner than you think. In a keynote address in March at Nvidia GTC, Nvidia CEO Jensen Huang unveiled the Rubin Ultra NVL576, due in the second half of 2027 — with 600 kilowatts per rack. “With the 600 kilowatt racks that Nvidia is announcing, the industry will have to shift very soon from single-phase approaches to two-phase,” says ZutaCore’s Truong. Another highly efficient cooling approach is immersion cooling. According to a Castrol survey released in March, 90% of 600 data center industry leaders say that they are considering switching to immersion
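
The physics argument in that excerpt is easy to quantify: vaporizing a fluid absorbs far more energy per kilogram than merely warming it. The back-of-the-envelope comparison below uses textbook values for water rather than the engineered dielectric fluids real two-phase systems rely on, so treat it as illustrative only.

```python
# Rough textbook constants for water (two-phase systems use engineered
# dielectric fluids with different numbers, so this is illustrative only).
SPECIFIC_HEAT = 4.18      # kJ per kg per degree C, liquid water
LATENT_HEAT_VAP = 2257.0  # kJ per kg, vaporization at ~100 C

# Single-phase loop: the coolant warms by, say, 10 C as it crosses the rack.
delta_t = 10.0
sensible_heat = SPECIFIC_HEAT * delta_t  # kJ absorbed per kg of coolant

# Two-phase loop: the same kilogram of fluid evaporates instead.
latent_heat = LATENT_HEAT_VAP            # kJ absorbed per kg of coolant

print(f"Single-phase (10 C rise): {sensible_heat:.0f} kJ/kg")
print(f"Two-phase (evaporation):  {latent_heat:.0f} kJ/kg")
print(f"Ratio: ~{latent_heat / sensible_heat:.0f}x more heat per kg of coolant")
```

For water, the phase change carries roughly 50 times more heat per kilogram than a 10 C temperature rise, which is the intuition behind the efficiency claim for two-phase cooling.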

Read More »

Cisco taps OpenAI’s Codex for AI-driven network coding

“If you want to ask Codex a question about your codebase, click “Ask”. Each task is processed independently in a separate, isolated environment preloaded with your codebase. Codex can read and edit files, as well as run commands including test harnesses, linters, and type checkers. Task completion typically takes between 1 and 30 minutes, depending on complexity, and you can monitor Codex’s progress in real time,” according to OpenAI. “Once Codex completes a task, it commits its changes in its environment. Codex provides verifiable evidence of its actions through citations of terminal logs and test outputs, allowing you to trace each step taken during task completion,” OpenAI wrote. “You can then review the results, request further revisions, open a GitHub pull request, or directly integrate the changes into your local environment. In the product, you can configure the Codex environment to match your real development environment as closely as possible.” OpenAI is releasing Codex as a research preview: “We prioritized security and transparency when designing Codex so users can verify its outputs – a safeguard that grows increasingly important as AI models handle more complex coding tasks independently and safety considerations evolve. Users can check Codex’s work through citations, terminal logs and test results,” OpenAI wrote. Internally, technical teams at OpenAI have started using Codex. “It is most often used by OpenAI engineers to offload repetitive, well-scoped tasks, like refactoring, renaming, and writing tests, that would otherwise break focus. It’s equally useful for scaffolding new features, wiring components, fixing bugs, and drafting documentation,” OpenAI stated.

Cisco’s view of agentic AI

Patel stated that Codex is part of the developing AI agent world, where Cisco envisions billions of AI agents will work together to transform and redefine the architectural assumptions the industry has relied on. Agents will communicate within and

Read More »

US companies are helping Saudi Arabia to build an AI powerhouse

AMD announced a five-year, $10 billion collaboration with Humain to deploy up to 500 megawatts of AI compute in Saudi Arabia and the US, aiming to deploy “multi-exaflop capacity by early 2026.” AWS, too, is expanding its data centers in Saudi Arabia to bolster Humain’s cloud infrastructure. Saudi Arabia has abundant oil and gas to power those data centers, and is growing its renewable energy resources with the goal of supplying 50% of the country’s power by 2030. “Commercial electricity rates, nearly 50% lower than in the US, offer potential cost savings for AI model training, though high local hosting costs due to land, talent, and infrastructure limit total savings,” said Eric Samuel, associate director, research at IDC. Located near Middle Eastern population centers and fiber optic cables to Asia, these data centers will offer enterprises low-latency cloud computing for real-time AI applications.

Late is great

There’s an advantage to being a relative latecomer to the technology industry, said IDC’s Samuel. “Saudi Arabia’s greenfield tech landscape offers a unique opportunity for rapid, ground-up AI integration, unburdened by legacy systems,” he said.

Read More »

AMD, Nvidia partner with Saudi startup to build multi-billion dollar AI service centers

Humain will deploy the Nvidia Omniverse platform as a multi-tenant system to drive acceleration of the new era of physical AI and robotics through simulation, optimization and operation of physical environments by new human-AI-led solutions. AMD did not disclose the number of chips involved in its deal, which is valued at $10 billion. AMD and Humain plan to develop a comprehensive AI infrastructure through a network of AMD-based AI data centers that will extend from Saudi Arabia to the US and support a wide range of AI workloads across corporate, start-up, and government markets. Think of it as AWS but only offering AI as a service. AMD will provide its AI compute portfolio – Epyc, Instinct, and FPGA networking – and the AMD ROCm open software ecosystem, while Humain will manage the delivery of the hyperscale data center, sustainable power systems, and global fiber interconnects. The partners expect to activate a multi-exaflop network by early 2026, supported by next-generation AI silicon, modular data center zones, and a software platform stack focused on developer enablement, open standards, and interoperability. Amazon Web Services also got a piece of the action, announcing a more than $5 billion investment to build an “AI zone” in the Kingdom. The zone is the first of its kind and will bring together multiple capabilities, including dedicated AWS AI infrastructure and servers, UltraCluster networks for faster AI training and inference, AWS services like SageMaker and Bedrock, and AI application services such as Amazon Q. Like the AMD project, the zone will be available in 2026. Humain only emerged this month, so little is known about it. But given that it is backed by Crown Prince Salman and has the full weight of the Kingdom’s Public Investment Fund (PIF), which ranks among the world’s largest and

Read More »

Check Point CISO: Network segregation can prevent blackouts, disruptions

Fischbein agrees 100% with his colleague’s analysis and adds that education and training can help prevent such incidents from occurring. “Simulating such a blackout is impossible, it has never been done,” he acknowledges, but he is committed to strengthening personal and team training and risk awareness.

Increased defense and cybersecurity budgets

In 2025, industry watchers expect there will be an increase in the public budget allocated to defense. In Spain, one-third of the budget will be allocated to increasing cybersecurity. But for Fischbein, training teams is much more important than the budget. “The challenge is to distribute the budget in a way that can be managed,” he notes, and to leverage intuitive and easy-to-use platforms, so that organizations don’t have to invest all the money in training. “When you have information, management, users, devices, mobiles, data centers, clouds, cameras, printers… the security challenge is very complex. You have to look for a security platform that makes things easier, faster, and simpler,” he says. “Today there are excellent tools that can stop all kinds of attacks.” “Since 2010, there have been cybersecurity systems, also from Check Point, that help prevent this type of incident from happening, but I’m not sure that [Spain’s electricity blackout] was a cyberattack.”

Leading the way in email security

According to Gartner’s Magic Quadrant, Check Point is the leader in email security platforms. Today email is still responsible for 88% of all malicious file distributions. These are attacks that, as Fischbein explains, enter through phishing, spam, SMS, or QR codes. “There are two challenges: to stop the threats and not to disturb, because if the security tool is a nuisance it causes more harm than good. It is very important that the solution does not annoy [users],” he stresses. “As almost all attacks enter via e-mail, it is

Read More »

HPE ‘morphs’ private cloud portfolio with improved virtualization, storage and data protection

What do you get when combining Morpheus with Aruba? As part of the extensible platform message that HPE is promoting with Morpheus, it’s also working in some capabilities from the broader HPE portfolio. One integration is with HPE Aruba for networking microsegmentation. Bhardwaj noted that a lot of HPE Morpheus users are looking for microsegmentation in order to make sure that the traffic between two virtual machines on a server is secure. “The traditional approach of doing that is on the hypervisor, but that costs cycles on the hypervisor,” Bhardwaj said. “Frankly, the way that’s being delivered today, customers have to pay extra cost on the server.” With the HPE Aruba plugin that now works with HPE Morpheus, the microsegmentation capability can be enabled at the switch level. Bhardwaj said that by doing the microsegmentation in the switch and not the hypervisor, costs can be lowered and performance can be increased. The integration brings additional capabilities, including the ability to support VPN and network address translation (NAT) in an integrated way between the switch and the hypervisor.

VMware isn’t the only hypervisor supported by HPE

The HPE Morpheus VM Essentials Hypervisor is another new element in the HPE cloud portfolio. The hypervisor is now being integrated into HPE’s private cloud offerings for both data center and edge deployments.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd).

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
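
The "LLM as a judge" idea mentioned above is a simple pattern to sketch: one model generates candidate answers and a second, often cheaper, model scores them against a rubric. The snippet below is purely illustrative; call_llm is a hypothetical placeholder for whatever provider client a team actually uses, and the rubric is an assumption, not anything from the article.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call (a hosted provider,
    a local Ollama model, etc.). Replace with your own client code."""
    raise NotImplementedError

JUDGE_PROMPT = """You are grading an AI agent's answer.
Task: {task}
Agent answer: {answer}
Return JSON with keys "score" (1-5) and "reason"."""

def judge(task: str, answer: str) -> dict:
    # Ask a (typically cheaper) model to score the agent's output against the task.
    raw = call_llm(JUDGE_PROMPT.format(task=task, answer=answer))
    return json.loads(raw)

def best_of(task: str, candidates: list[str]) -> str:
    # As models get cheaper, several candidate answers can be generated and the
    # judge used to keep only the highest-scoring one.
    return max(candidates, key=lambda ans: judge(task, ans)["score"])
```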

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »