AI’s promise of opportunity masks a reality of managed displacement

Cognitive migration is underway. The station is crowded. Some have boarded while others hesitate, unsure whether the destination justifies the departure.

Future of work expert and Harvard University Professor Christopher Stanton commented recently that the uptake of AI has been tremendous and observed that it is an “extraordinarily fast-diffusing technology.” That speed of adoption and impact is a critical part of what differentiates the AI revolution from previous technology-led transformations, like the PC and the internet. Demis Hassabis, CEO of Google DeepMind, went further, predicting that AI could be “10 times bigger than the Industrial Revolution, and maybe 10 times faster.”

Intelligence, or at least thinking, is increasingly shared between people and machines. Some people have begun to regularly use AI in their workflows. Others have gone further, integrating it into their cognitive routines and creative identities. These are the “willing,” including the consultants fluent in prompt design, the product managers retooling systems and those building their own businesses that do everything from coding to product design to marketing. 

For them, the terrain feels new but navigable. Exciting, even. But for many others, this moment feels strange, and more than a little unsettling. The risk they face is not just being left behind. It is also not knowing how, when and whether to invest in an AI future that seems highly uncertain, one in which it is difficult to imagine their place. That is the double risk of AI readiness, and it is reshaping how people interpret the pace, promises and pressure of this transition.


Is it real?

Across industries, new roles and teams are forming, and AI tools are reshaping workflows faster than norms or strategies can keep up. But the significance is still hazy, the strategies unclear. The end game, if there is one, remains uncertain. Yet the pace and scope of change feels portentous. Everyone is being told to adapt, but few know exactly what that means or how far the changes will go. Some AI industry leaders claim huge changes are coming, and soon, with superintelligent machines emerging possibly within a few years. 

But maybe this AI revolution will go bust, as others have before, with another “AI winter” to follow. There have been two notable winters. The first was in the 1970s, brought about by computational limits. The second began in the late 1980s after a wave of unmet expectations with high-profile failures and under-delivery of “expert systems.” These winters were characterized by a cycle of lofty expectations followed by profound disappointment, leading to significant reductions in funding and interest in AI. 

Should the excitement around AI agents today mirror the failed promise of expert systems, this could lead to another winter. However, there are major differences between then and now. Today, there is far greater institutional buy-in, consumer traction and cloud computing infrastructure compared to the expert systems of the 1980s. There is no guarantee that a new winter will not emerge, but if the industry fails this time, it will not be for lack of money or momentum. It will be because trust and reliability broke first.

A major retrenchment occurred in 1988 after the AI industry failed to meet its promises. Credit: The New York Times

Cognitive migration has started

If “the great cognitive migration” is real, this remains the early part of the journey. Some have boarded the train while others still linger, unsure about whether or when to get on board. Amidst the uncertainty, the atmosphere at the station has grown restless, like travelers sensing a trip itinerary change that no one has announced.

Most people have jobs, but they wonder about the degree of risk they face. The value of their work is shifting. A quiet but mounting anxiety hums beneath the surface of performance reviews and company town halls.

Already, AI can accelerate software development by 10 to 100X, generate the majority of client-facing code and compress project timelines dramatically. Managers are now able to use AI to create employee performance evaluations. Even classicists and archaeologists have found value in AI, having used the technology to understand ancient Latin inscriptions.

The “willing” have an idea of where they are going and may find traction. But for the “pressured,” the “resistant” and even those not yet touched by AI, this moment feels like something between anticipation and grief. These groups have started to grasp that they may not be staying in their comfort zones for long. 

For many, this is not just about tools or a new culture, but whether that culture has space for them at all. Waiting too long is akin to missing the train and could lead to long-term job displacement. Even those I have spoken with who are senior in their careers and have begun using AI wonder if their positions are threatened.

The narrative of opportunity and upskilling hides a more uncomfortable truth. For many, this is not a migration. It is a managed displacement. Some workers are not choosing to opt out of AI. They are discovering that the future being built does not include them. Belief in the tools is different from belonging in the system those tools are reshaping. And without a clear path to participate meaningfully, “adapt or be left behind” begins to sound less like advice and more like a verdict.

These tensions are precisely why this moment matters. There is a growing sense that work, as people have known it, is beginning to recede. The signals are coming from the top. Microsoft CEO Satya Nadella acknowledged as much in a July 2025 memo following a reduction in force, noting that the transition to the AI era “might feel messy at times, but transformation always is.” But there is another layer to this unsettling reality: The technology driving this urgent transformation remains fundamentally unreliable.

The power and the glitch: Why AI still cannot be trusted

And yet, for all the urgency and momentum, this increasingly pervasive technology itself remains glitchy, limited, strangely brittle and far from dependable. This raises a second layer of doubt, not only about how to adapt, but about whether the tools we are adapting to can deliver. Perhaps these shortcomings should not be a surprise, considering that it was only several years ago that the output from large language models (LLMs) was barely coherent. Now, however, it is like having a PhD in your pocket; the idea of on-demand ambient intelligence, once science fiction, is almost realized.

Beneath their polish, however, chatbots built atop these LLMs remain fallible, forgetful and often overconfident. They still hallucinate, meaning that we cannot entirely trust their output. AI can answer with confidence, but not accountability. This is probably a good thing, as our knowledge and expertise are still needed. They also do not have persistent memory and have difficulty carrying forward a conversation from one session to another. 

They can also get lost. Recently, I had a session with a leading chatbot, and it answered a question with a complete non sequitur. When I pointed this out, it responded again off-topic, as if the thread of our conversation had simply vanished.

They also do not learn, at least not in any human sense. Once a model is released, whether by Google, Anthropic, OpenAI or DeepSeek, its weights are frozen. Its “intelligence” is fixed. Instead, continuity of a conversation with a chatbot is limited to the confines of its context window, which is, admittedly, quite large. Within that window and conversation, the chatbots can absorb knowledge and make connections that serve as learning in the moment, and they appear increasingly like savants. 
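
To make that limitation concrete, here is a minimal sketch in Python. It assumes a hypothetical call_model stand-in and an invented token budget rather than any vendor's actual API; it only illustrates why a chatbot's "memory" ends at the edge of its context window. Each turn, the application resends as much prior dialogue as fits, and anything trimmed to make room is simply gone, which is also why continuity across sessions fails unless the application stores the history and feeds it back in.

# Minimal sketch of context-window "memory" (illustrative only; call_model and
# the token budget are hypothetical stand-ins, not any vendor's real API).

def count_tokens(text: str) -> int:
    # Crude approximation; real systems use a proper tokenizer.
    return len(text.split())

def call_model(messages: list[dict]) -> str:
    # Placeholder for an LLM call; a real application would query a model here.
    return f"(model reply to: {messages[-1]['content']!r})"

CONTEXT_BUDGET = 200  # hypothetical limit on how many tokens the model can "see" at once

def chat_turn(history: list[dict], user_input: str) -> str:
    history.append({"role": "user", "content": user_input})
    # Trim the oldest turns until the whole conversation fits in the window.
    while len(history) > 1 and sum(count_tokens(m["content"]) for m in history) > CONTEXT_BUDGET:
        history.pop(0)  # whatever is dropped here is forgotten for good
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(chat_turn(history, "Summarize our project plan."))
print(chat_turn(history, "Remind me what I asked first."))  # answerable only while it still fits the window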

These gifts and flaws add up to an intriguing, beguiling presence. But can we trust it? Surveys such as the 2025 Edelman Trust Barometer show that AI trust is divided. In China, 72% of people express trust in AI. But in the U.S., that number drops to 32%. This divergence underscores how public faith in AI is shaped as much by culture and governance as by technical capability. If AI did not hallucinate, if it could remember, if it learned, if we understood how it worked, we would likely trust it more. But trust in the AI industry itself remains elusive. There are widespread fears that there will be no meaningful regulation of AI technology, and that ordinary people will have little say in how it is developed or deployed.

Without trust, will this AI revolution flounder and bring about another winter? And if so, what happens to those who have invested time, energy and their careers? Will those who have waited to embrace AI be better off for having done so? Will cognitive migration be a flop?

Some notable AI researchers have warned that AI in its current form — based primarily on deep learning neural networks upon which LLMs are built — will fall short of optimistic projections. They claim that additional technical breakthroughs will be needed for this approach to advance much further. Others do not buy into the optimistic AI projections. Novelist Ewan Morrison views the potential of superintelligence as a fiction dangled to attract investor funding. “It’s a fantasy,” he said, “a product of venture capital gone nuts.”

Perhaps Morrison’s skepticism is warranted. However, even with their shortcomings, today’s LLMs are already demonstrating huge commercial utility. If the exponential progress of the last few years stops tomorrow, the ripples from what has already been created will have an impact for years to come. But beneath this movement lies something more fragile: The reliability of the tools themselves.

The gamble and the dream

For now, exponential advances continue as companies pilot and increasingly deploy AI. Whether driven by conviction or fear of missing out, the industry is determined to move forward. It could all fall apart if another winter arrives, especially if AI agents fail to deliver. Still, the prevailing assumption is that today’s shortcomings will be solved through better software engineering. And they might be. In fact, they probably will, at least to a degree.

The bet is that the technology will work, that it will scale and that the disruption it creates will be outweighed by the productivity it enables. Success in this adventure assumes that what we lose in human nuance, value and meaning will be made up for in reach and efficiency. This is the gamble we are making. And then there is the dream: AI will become a source of abundance widely shared, will elevate rather than exclude, and expand access to intelligence and opportunity rather than concentrate it. 

The unsettling lies in the gap between the two. We are moving forward as if taking this gamble will guarantee the dream. It is the hope that acceleration will land us in a better place, and the faith that it will not erode the human elements that make the destination worth reaching. But history reminds us that even successful bets can leave many behind. The “messy” transformation now underway is not just an inevitable side effect. It is the direct result of speed overwhelming human and institutional capacity to adapt effectively and with care. For now, cognitive migration continues, as much on faith as belief.

The challenge is not just to build better tools, but to ask harder questions about where they are taking us. We are not just migrating to an unknown destination; we are doing it so fast that the map is changing while we run, moving across a landscape that is still being drawn. Every migration carries hope. But hope, unexamined, can be risky. It is time to ask not just where we are going, but who will get to belong when we arrive.

Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
