Cloud quantum computing: A trillion-dollar opportunity with dangerous hidden risks

Quantum computing (QC) brings with it a mix of groundbreaking possibilities and significant risks. Major tech players like IBM, Google, Microsoft and Amazon have already rolled out commercial QC cloud services, while specialized firms like Quantinuum and PsiQuantum have quickly achieved unicorn status. Experts predict that the global QC market could add more than $1 trillion to the world’s economy between 2025 and 2035. However, can we say with certainty that the benefits outweigh the risks?

On the one hand, these cutting-edge systems hold the promise of revolutionizing areas such as drug discovery, climate modeling, AI and maybe even artificial general intelligence (AGI) development. On the other hand, they also introduce serious cybersecurity challenges that should be addressed right now, even though fully functional quantum computers capable of breaking today’s encryption standards are still several years away.

Understanding the QC threat landscape

The main cybersecurity fear tied to QC is its potential to break encryption algorithms that have been deemed unbreakable. A survey by KPMG revealed that around 78% of U.S. companies and 60% of Canadian companies anticipate that quantum computers will become mainstream by 2030. More alarmingly, 73% of U.S. respondents and 60% of Canadian respondents believe it’s just a matter of time before cybercriminals start using QC to undermine current security measures.

Modern encryption methods rely heavily on mathematical problems that are virtually unsolvable by classical computers, at least within a reasonable timeframe. For instance, factoring the large composite numbers used in RSA encryption (the products of two large primes) would take such a computer around 300 trillion years. However, with Shor’s algorithm (developed in 1994 to help quantum computers factor large numbers quickly), a sufficiently powerful quantum computer could potentially solve this exponentially faster.
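
To see where the quantum speedup enters, here is a minimal Python sketch of the classical scaffolding around Shor’s algorithm. The period-finding step is brute-forced below, which only works for toy numbers; that step is exactly what a quantum computer accelerates to polynomial time.

```python
from math import gcd

def order(a: int, n: int) -> int:
    # Brute-force the multiplicative order r of a mod n: the smallest
    # r with a^r = 1 (mod n). This is the step Shor's algorithm replaces
    # with quantum period-finding; classically it is infeasible at
    # cryptographic sizes.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n: int, a: int):
    # Classical post-processing of Shor's algorithm: turn the period r
    # into nontrivial factors of n. Returns None when the base a is unlucky.
    g = gcd(a, n)
    if g != 1:
        return g, n // g          # lucky: a already shares a factor with n
    r = order(a, n)
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None               # bad base, retry with a different a
    half = pow(a, r // 2, n)
    return gcd(half - 1, n), gcd(half + 1, n)

print(shor_factor(15, 7))  # -> (3, 5)
```

For a 2,048-bit RSA modulus, the brute-force loop above is hopeless, while quantum period-finding would complete in polynomial time.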

Grover’s algorithm, designed for unstructured search, is a real game-changer for symmetric encryption methods, as it effectively halves their security strength in bits. For instance, AES-128 encryption would offer only the same level of security as a 64-bit key, leaving it open to quantum attacks. This situation calls for a push towards more robust encryption standards, such as AES-256, which is expected to withstand quantum attacks for the foreseeable future.
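
The arithmetic behind that halving is simple: Grover’s algorithm searches a space of 2^n keys in roughly 2^(n/2) steps. The snippet below is purely illustrative:

```python
# Grover's algorithm searches a space of 2^n keys in roughly
# sqrt(2^n) = 2^(n/2) steps, so the effective security level
# in bits is halved.
for key_bits in (128, 192, 256):
    print(f"AES-{key_bits}: {key_bits}-bit classical security, "
          f"~{key_bits // 2}-bit security against Grover's algorithm")
```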

Harvesting now, decrypting later

Most concerning is the “harvest now, decrypt later” (HNDL) attack strategy, in which adversaries gather encrypted data today, only to decrypt it once QC technology becomes sufficiently advanced. This poses a significant risk to data that holds long-term value, like health records, financial details, classified government documents and military intelligence.

Given the potentially dire consequences of HNDL attacks, many organizations responsible for vital systems around the world must adopt “crypto agility.” This means they should be ready to swiftly swap out cryptographic algorithms and implementations whenever new vulnerabilities come to light. This concern is also reflected in the U.S. National Security Memorandum on Promoting U.S. Leadership in Quantum Computing While Mitigating Risk to Vulnerable Cryptographic Systems, which specifically points out this threat and calls for proactive measures to counter it.
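
What crypto agility can look like in code: below is a toy, hypothetical sketch (not from any specific library) in which callers resolve algorithms through a named registry rather than hard-coding a primitive, so a broken algorithm can be swapped out in one place.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class KemSuite:
    # Key-encapsulation primitives held as plain callables, so any
    # implementation (classical or post-quantum) can be slotted in.
    keygen: Callable[[], tuple[bytes, bytes]]
    encapsulate: Callable[[bytes], tuple[bytes, bytes]]
    decapsulate: Callable[[bytes, bytes], bytes]

REGISTRY: dict[str, KemSuite] = {}

# One configuration value controls which algorithm every caller uses;
# migrating to a post-quantum KEM means changing this single line.
ACTIVE_KEM = "x25519"  # e.g. flip to "ml-kem-768" once deployed

def current_kem() -> KemSuite:
    return REGISTRY[ACTIVE_KEM]
```

The design choice is the point: if every subsystem calls `current_kem()` instead of a specific cipher, a newly discovered vulnerability requires one configuration change rather than a codebase-wide rewrite.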

The threat timeline

When it comes to predicting the timeline for quantum threats, expert opinions are all over the map. A recent report from MITRE suggests that we probably won’t see a quantum computer powerful enough to crack RSA-2048 encryption until around 2055 to 2060, based on the current trends in quantum volume – a metric used to compare the quality of different quantum computers. 

At the same time, some experts are feeling more optimistic. They believe that recent breakthroughs in quantum error correction and algorithm design could speed things up, possibly allowing for quantum decryption capabilities as early as 2035. For instance, researchers Jaime Sevilla and Jess Riedel released a report in late 2020, expressing a 90% confidence that RSA-2048 could be factored before 2060. 

While the exact timeline is still up in the air, one thing is clear: Experts agree that organizations need to start preparing right away, no matter when the quantum threat actually arrives.

Quantum machine learning – the ultimate black box?

Apart from the questionable crypto agility of today’s organizations, security researchers and futurists have also been worrying about the seemingly inevitable future merging of AI and QC. Quantum technology has the potential to supercharge AI development because it can handle complex calculations at lightning speed. It can play a crucial role in reaching AGI, as today’s AI systems need trillions of parameters to become smarter, which leads to some serious computational hurdles. However, this synergy also opens up scenarios that might be beyond our ability to predict. 

You don’t need AGI to grasp the essence of the problem. Imagine if quantum computing were to be integrated into machine learning (ML). We could be looking at what experts call the ultimate black box problem. Deep neural networks (DNNs) are already known for being quite opaque, with hidden layers that even their creators struggle to interpret. While tools for understanding how classical neural networks make decisions already exist, quantum ML would lead to a more confusing situation.

The root of the issue lies in the very nature of QC, namely the fact that it uses superposition, entanglement and interference to process information in ways that don’t have any classical equivalents. When these quantum features are applied to ML algorithms, the models that emerge might involve processes that are tough to translate into reasoning that humans can grasp. This raises some rather obvious concerns for vital areas like healthcare, finance and autonomous systems, where understanding AI decisions is crucial for safety and compliance.

Will post-quantum cryptography be enough?

To tackle the rising threats posed by QC, the U.S. National Institute of Standards and Technology (NIST) kicked off its Post-Quantum Cryptography Standardization project back in 2016. This involved conducting a thorough review of 69 candidate algorithms from cryptographers around the globe. Upon completing the review, NIST chose several promising methods that rely on structured lattices and hash functions. These are mathematical challenges thought capable of withstanding attacks from both classical and quantum computers. 

In 2024, NIST rolled out detailed post-quantum cryptographic standards, and major tech companies have been taking steps to implement early protections ever since. For instance, Apple unveiled PQ3 — a post-quantum protocol — for its iMessage platform, aimed at safeguarding against advanced quantum attacks. On a similar note, Google has been experimenting with post-quantum algorithms in Chrome since 2016 and is steadily integrating them into its various services. 
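
To ground this, here is a minimal sketch of a post-quantum key-encapsulation round trip, assuming the open-source liboqs-python binding (`oqs`) and its documented `KeyEncapsulation` API. The algorithm name follows NIST’s 2024 FIPS 203 standard; older liboqs builds expose it as “Kyber768”.

```python
import oqs  # liboqs-python binding; API per its documented examples

ALG = "ML-KEM-768"  # FIPS 203 lattice KEM ("Kyber768" on older builds)

with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()
    with oqs.KeyEncapsulation(ALG) as sender:
        # Sender derives a shared secret plus a ciphertext for the receiver
        ciphertext, secret_sender = sender.encap_secret(public_key)
    # Receiver recovers the same shared secret from the ciphertext
    secret_receiver = receiver.decap_secret(ciphertext)

assert secret_sender == secret_receiver
```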

Meanwhile, Microsoft is making strides in enhancing qubit error correction without disturbing the quantum environment, marking a significant leap forward in the reliability of QC. Earlier this year, the company announced a chip built on what it describes as a “new state of matter” (one in addition to solid, liquid and gas), which it uses to create “topological qubits” that could lead to fully realized QCs in years, rather than decades.

Key transition challenges 

Still, the shift to post-quantum cryptography comes with a host of challenges that must be tackled head-on:

  • The implementation timeframe: U.S. officials are predicting it could take anywhere from 10 to 15 years to roll out new cryptographic standards across all systems. This is especially tricky for hardware that’s located in hard-to-reach places like satellites, vehicles and ATMs. 
  • The performance impact: Post-quantum encryption usually demands larger key sizes and more complex mathematical operations, which could slow down both encryption and decryption processes (see the size comparison after this list). 
  • A shortage of technical expertise: To successfully integrate quantum-resistant cryptography into existing systems, organizations need highly skilled IT professionals who are well-versed in both classical and quantum concepts. 
  • Vulnerability discovery: Even the most promising post-quantum algorithms might have hidden weaknesses, as we’ve seen with the NIST-selected CRYSTALS-Kyber algorithm. 
  • Supply chain concerns: Essential quantum components, like cryocoolers and specialized lasers, could be affected by geopolitical tensions and supply disruptions.
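
To make the performance point concrete, here is a rough on-the-wire size comparison between today’s X25519 key exchange and the NIST-standardized ML-KEM-768 lattice KEM. The figures are the published parameter sizes; treat them as approximate.

```python
# Approximate on-the-wire sizes in bytes (ML-KEM-768 per FIPS 203;
# X25519 per RFC 7748). The post-quantum material is ~35x larger.
sizes = {
    "X25519 public key":      32,
    "X25519 key share":       32,
    "ML-KEM-768 public key":  1184,
    "ML-KEM-768 ciphertext":  1088,
}
for name, nbytes in sizes.items():
    print(f"{name:24s} {nbytes:>5d} B")
```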

Last but certainly not least, being tech-savvy is going to be crucial in the quantum era. As companies rush to adopt post-quantum cryptography, it’s important to remember that encryption alone won’t shield them from employees who click on harmful links, open dubious email attachments or misuse their access to data. 

A recent example: Microsoft found two applications that unintentionally revealed their private encryption keys — while the underlying math was solid, human error made that protection ineffective. Mistakes in implementation often compromise systems that are theoretically secure. 

Preparing for the quantum future

Organizations need to take a few important steps to get ready for the challenges posed by quantum security threats. Here’s what they should do, in very broad terms: 

  • Conduct a cryptographic inventory — take stock of all systems that use encryption and might be at risk from quantum attacks (a starter sketch follows this list). 
  • Assess the lifetime value of data — figure out which pieces of information need long-term protection, and prioritize upgrading those systems. 
  • Develop migration timelines — set up realistic schedules for moving to post-quantum cryptography across all systems. 
  • Allocate appropriate resources — make sure to budget for the significant costs that come with implementing quantum-resistant security measures. 
  • Enhance monitoring capabilities — put systems in place to spot potential HNDL attacks. 
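
As one concrete starting point for a cryptographic inventory, the sketch below walks a directory of PEM certificates and flags quantum-vulnerable keys, using the widely deployed Python `cryptography` package. The directory path is hypothetical.

```python
# Flag certificates whose public keys fall to Shor's algorithm
# (both RSA and elliptic-curve keys are quantum-vulnerable).
from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

def inventory(cert_dir: str) -> None:
    for pem in Path(cert_dir).glob("*.pem"):
        cert = x509.load_pem_x509_certificate(pem.read_bytes())
        key = cert.public_key()
        if isinstance(key, rsa.RSAPublicKey):
            print(f"{pem.name}: RSA-{key.key_size} -- quantum-vulnerable")
        elif isinstance(key, ec.EllipticCurvePublicKey):
            print(f"{pem.name}: ECDSA {key.curve.name} -- quantum-vulnerable")
        else:
            print(f"{pem.name}: {type(key).__name__} -- review manually")

inventory("./certs")  # hypothetical path
```

A real inventory would also cover TLS endpoints, SSH keys, code-signing infrastructure and data at rest, but certificate scanning is a cheap first pass.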

Cryptographer Michele Mosca offers a simple inequality to help organizations plan for quantum security: If X (the time data needs to stay secure) plus Y (the time it takes to upgrade cryptographic systems) is greater than Z (the time until quantum computers can crack current encryption), organizations must take action right away.
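
The inequality is trivial to encode; the example numbers below are illustrative, not forecasts:

```python
def mosca_urgent(x_years: float, y_years: float, z_years: float) -> bool:
    # Mosca's inequality: act now if x + y > z, where
    #   x = how long the data must stay secret,
    #   y = how long the cryptographic migration will take,
    #   z = time until a cryptographically relevant quantum computer.
    return x_years + y_years > z_years

# Records must stay secret 20 years, migration takes 10, and a capable
# quantum computer arrives in 25 -> the organization is already late.
print(mosca_urgent(20, 10, 25))  # True
```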

Conclusion

We’re stepping into an era of quantum computing that brings with it some serious cybersecurity challenges, and we all need to act fast, even if we’re not entirely sure when these challenges will fully materialize. It might be decades before we see quantum computers that can break current encryption, but the risks of inaction are simply too great. 

Vivek Wadhwa of Foreign Policy magazine puts it bluntly: “The world’s failure to rein in AI — or rather, the crude technologies masquerading as such — should serve as a profound warning. There is an even more powerful emerging technology with the potential to wreak havoc, especially if it is combined with AI: Quantum computing.” 

To get ahead of this technological wave, organizations should start implementing post-quantum cryptography, keep an eye on adversarial quantum programs and secure their quantum supply chains. It’s crucial to prepare now — before quantum computers suddenly make our current security measures entirely obsolete.

Julius Černiauskas is CEO at Oxylabs.
