Between utopia and collapse: Navigating AI’s murky middle future

In the blog post The Gentle Singularity, OpenAI CEO Sam Altman painted a vision of the near future where AI quietly and benevolently transforms human life. There will be no sharp break, he suggests, only a steady, almost imperceptible ascent toward abundance. Intelligence will become as accessible as electricity. Robots will be performing useful real-world tasks by 2027. Scientific discovery will accelerate. And humanity, if properly guided by careful governance and good intentions, will flourish.

It is a compelling vision: calm, technocratic and suffused with optimism. But it also raises deeper questions. What kind of world must we pass through to get there? Who benefits and when? And what is left unsaid in this smooth arc of progress?

Science fiction author William Gibson offers a darker scenario. In his novel The Peripheral, the glittering technologies of the future are preceded by something called “the jackpot” — a slow-motion cascade of climate disasters, pandemics, economic collapse and mass death. Technology advances, but only after society fractures. The question he poses is not whether progress occurs, but whether civilization thrives in the process.

There is an argument that AI may help prevent the kinds of calamities envisioned in The Peripheral. However, whether AI will help us avoid catastrophes or merely accompany us through them remains uncertain. Belief in AI’s future power is not a guarantee of performance, and advancing technological capability is not destiny.

Between Altman’s gentle singularity and Gibson’s jackpot lies a murkier middle ground: A future where AI yields real gains, but also real dislocation. A future in which some communities thrive while others fray, and where our ability to adapt collectively — not just individually or institutionally — becomes the defining variable.

The murky middle

Other visions help sketch the contours of this middle terrain. In the near-future thriller Burn-In, society is flooded with automation before its institutions are ready. Jobs disappear faster than people can re-skill, triggering unrest and repression. In the novel, a successful lawyer loses his position to an AI agent and unhappily becomes an online, on-call concierge to the wealthy.

Researchers at AI lab Anthropic recently echoed this theme: “We should expect to see [white collar jobs] automated within the next five years.” While the causes are complex, there are signs this shift is already underway and that the job market is entering a new structural phase that is less stable, less predictable and perhaps less central to how society distributes meaning and security.

The film Elysium offers a blunt metaphor of the wealthy escaping into orbital sanctuaries with advanced technologies, while a degraded earth below struggles with unequal rights and access. A few years ago, a partner at a Silicon Valley venture capital firm told me he feared we were heading for this kind of scenario unless we equitably distribute the benefits produced by AI. These speculative worlds remind us that even beneficial technologies can be socially volatile, especially when their gains are unequally distributed.

We may, eventually, achieve something like Altman’s vision of abundance. But the route there is unlikely to be smooth. For all its eloquence and calm assurance, his essay is also a kind of pitch, as much persuasion as prediction. The narrative of a “gentle singularity” is comforting, even alluring, precisely because it bypasses friction. It offers the benefits of unprecedented transformation without fully grappling with the upheavals such transformation typically brings. As the timeless cliché reminds us: If it sounds too good to be true, it probably is.

This is not to say that his intent is disingenuous. Indeed, it may be heartfelt. My argument is simply a recognition that the world is a complex system, open to unlimited inputs that can have unpredictable consequences. From synergistic good fortune to calamitous Black Swan events, it is rarely one thing, or one technology, that dictates the future course of events. 

The impact of AI on society is already underway. This is not just a shift in skillsets and sectors; it is a transformation in how we organize value, trust and belonging. This is the realm of collective migration: Not only a movement of labor, but of purpose. 

As AI reconfigures the terrain of cognition, the fabric of our social world is quietly being tugged loose and rewoven, for better or worse. The question is not just how fast we move as societies, but how thoughtfully we migrate.

The cognitive commons: Our shared terrain of understanding

Historically, the commons referred to shared physical resources including pastures, fisheries and forests held in trust for the collective good. Modern societies, however, also depend on a cognitive commons: a shared domain of knowledge, narratives, norms and institutions that enable diverse individuals to think, argue and decide together with minimal conflict.

This intangible infrastructure is composed of public education, journalism, libraries, civic rituals and even widely trusted facts, and it is what makes pluralism possible. It is how strangers deliberate, how communities cohere and how democracy functions. As AI systems begin to mediate how knowledge is accessed and belief is shaped, this shared terrain risks becoming fractured. The danger is not simply misinformation, but the slow erosion of the very ground on which shared meaning depends.

If cognitive migration is a journey, it is not merely toward new skills or roles but also toward new forms of collective sensemaking. But what happens when the terrain we share begins to split apart beneath us?

When cognition fragments: AI and the erosion of the shared world

For centuries, societies have relied on a loosely held common reality: A shared pool of facts, narratives and institutions that shape how people understand the world and each other. It is this shared world — not just infrastructure or economy — that enables pluralism, democracy and social trust. But as AI systems increasingly mediate how people access knowledge, construct belief and navigate daily life, that common ground is fragmenting.

Already, large-scale personalization is transforming the informational landscape. AI-curated news feeds, tailored search results and recommendation algorithms are subtly fracturing the public sphere. Two people asking the same question of the same chatbot may receive different answers, in part due to the probabilistic nature of generative AI, but also due to prior interactions or inferred preferences. While personalization has long been a feature of the digital era, AI turbocharges its reach and subtlety. The result is not just filter bubbles; it is epistemic drift — a reshaping of knowledge and potentially of truth.
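To make that mechanism concrete, here is a minimal, hypothetical sketch of how temperature sampling and preference weighting can send the same prompt toward different answers. The word list, probabilities and "preference boost" are invented for illustration; this stands in for no particular vendor's system.

```python
import random

# Toy next-word distribution a model might assign after one shared prompt.
# Words and probabilities are invented for illustration only.
NEXT_WORD_PROBS = {
    "promising": 0.35,
    "risky": 0.30,
    "overhyped": 0.20,
    "inevitable": 0.15,
}

def sample_answer(probs, temperature=1.0, preference_boost=None, seed=None):
    """Sample one continuation. Temperature keeps the draw probabilistic;
    the preference boost mimics personalization nudging certain words upward."""
    rng = random.Random(seed)
    weights = {word: p ** (1.0 / temperature) for word, p in probs.items()}
    if preference_boost:
        for word, boost in preference_boost.items():
            weights[word] = weights.get(word, 0.0) * boost
    total = sum(weights.values())
    draw, cumulative = rng.random() * total, 0.0
    for word, weight in weights.items():
        cumulative += weight
        if draw <= cumulative:
            return word
    return word

# Two users ask the identical question; sampling alone can make answers diverge...
print(sample_answer(NEXT_WORD_PROBS, seed=1))
print(sample_answer(NEXT_WORD_PROBS, seed=2))

# ...and inferred preferences skew the odds further for each user.
print(sample_answer(NEXT_WORD_PROBS, preference_boost={"risky": 3.0}, seed=3))
print(sample_answer(NEXT_WORD_PROBS, preference_boost={"promising": 3.0}, seed=3))
```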

Historian Yuval Noah Harari has voiced urgent concern about this shift. In his view, the greatest threat of AI lies not in physical harm or job displacement, but in emotional capture. AI systems, he has warned, are becoming increasingly adept at simulating empathy, mimicking concern and tailoring narratives to individual psychology — granting them unprecedented power to shape how people think, feel and assign value. The danger, in Harari’s view, is enormous not because AI will lie, but because it will connect so convincingly while doing so. This does not bode well for The Gentle Singularity.

In an AI-mediated world, reality itself risks becoming more individualized, more modular and less collectively negotiated. That may be tolerable — or even useful — for consumer products or entertainment. But when extended to civic life, it poses deeper risks. Can we still hold democratic discourse if every citizen inhabits a subtly different cognitive map? Can we still govern wisely when institutional knowledge is increasingly outsourced to machines whose training data, system prompts and reasoning processes remain opaque?

There are other challenges too. AI-generated content including text, audio and video will soon be indistinguishable from human output. As generative models become more adept at mimicry, the burden of verification will shift from systems to individuals. This inversion may erode trust not only in what we see and hear, but in the institutions that once validated shared truth. The cognitive commons then become polluted, less a place for deliberation, more a hall of mirrors.

These are not speculative worries. AI-generated disinformation is complicating elections, undermining journalism and creating confusion in conflict zones. And as more people rely on AI for cognitive tasks, from summarizing the news to resolving moral dilemmas, the capacity to think together may degrade, even as the tools to think individually grow more powerful.

This trend toward the disintegration of shared reality is now well advanced. Avoiding it requires conscious counter-design: Systems that prioritize pluralism over personalization, transparency over convenience and shared meaning over tailored reality. In our algorithmic world driven by competition and profit, these choices seem unlikely, at least at scale. The question is not just how fast we move as societies, or even whether we can hold together, but how wisely we navigate this shared journey.

Navigating the archipelago: Toward wisdom in the age of AI

If the age of AI leads not to a unified cognitive commons but to a fractured archipelago of disparate individuals and communities, the task before us is not to rebuild the old terrain, but to learn how to live wisely among the islands.

As the speed and scope of change outstrip the ability of most people to adapt, many will feel unmoored. Jobs will be lost, as will long-held narratives of value, expertise and belonging. Cognitive migration will lead to new communities of meaning, some of which are already forming, even as they have less in common than in prior eras. These are the cognitive archipelagos: Communities where people gather around shared beliefs, aesthetic styles, ideologies, recreational interests or emotional needs. Some are benign gatherings of creativity, support or purpose. Others are more insular and dangerous, driven by fear, grievance or conspiratorial thinking.

Advancing AI will accelerate this trend. Even as it drives people apart through algorithmic precision, it will simultaneously help people find each other across the globe, curating ever finer alignments of identity. But in doing so, it may make it harder to maintain the rough but necessary friction of pluralism. Local ties may weaken. Common belief systems and perceptions of shared reality may erode. Democracy, which relies on both shared reality and deliberative dialog, may struggle to hold.

How do we navigate this new terrain with wisdom, dignity and connection? If we cannot prevent fragmentation, how do we live humanely within it? Perhaps the answer begins not with solutions, but with learning to hold the question itself differently.

Living with the question

We may not be able to reassemble the societal cognitive commons as it once was. The center may not hold, but that does not mean we must drift without direction. Across the archipelagos, the task will be learning to live wisely in this new terrain. 

It may require rituals that anchor us when our tools disorient, and communities that form not around ideological purity but around shared responsibility. We may need new forms of education, not to outpace or meld with machines, but to deepen our capacity for discernment, context and ethical thought.

If AI has pulled apart the ground beneath us, it also presents an opportunity to ask again what we are here for. Not as consumers of progress, but as stewards of meaning.

The road ahead is not likely smooth or gentle. As we move through the murky middle, perhaps the mark of wisdom is not the ability to master what is coming, but to walk through it with clarity, courage and care. We cannot stop the advance of technology or deny the deepening societal fractures, but we can choose to tend the spaces in between.

Gary Grossman is EVP of technology practice at Edelman.

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Chronosphere unveils logging package with cost control features

According to a study by Chronosphere, enterprise log data is growing at 250% year-over-year, and Chronosphere Logs helps engineers and observability teams to resolve incidents faster while controlling costs. The usage and volume analysis and proactive recommendations can help reduce data before it’s stored, the company says. “Organizations are drowning

Cisco CIO on the future of IT: AI, simplicity, and employee power

AI can democratize access to information to deliver a “white-glove experience” once reserved for senior executives, Previn said. That might include, for example, real-time information retrieval and intelligent process execution for every employee. “Usually, in a large company, you’ve got senior executives, and you’ve got early career hires, and it’s

AMI MegaRAC authentication bypass flaw is being exploited, CISA warns

The spoofing attack works by manipulating HTTP request headers sent to the Redfish interface. Attackers can add specific values to headers like “X-Server-Addr” to make their external requests appear as if they’re coming from inside the server itself. Since the system automatically trusts internal requests as authenticated, this spoofing technique
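The weakness described above is a trust decision driven by a client-supplied header. As a minimal illustrative sketch (not AMI's actual code, and with invented network ranges), the safer pattern is to base any internal-versus-external decision on the connection's peer address, which an attacker cannot set, rather than on request headers, which they fully control.

```python
import ipaddress

# Address ranges treated as "inside the appliance" in this illustration only.
INTERNAL_NETWORKS = [
    ipaddress.ip_network("127.0.0.0/8"),
    ipaddress.ip_network("169.254.0.0/16"),
]

def is_internal_request(peer_ip: str, headers: dict) -> bool:
    """Decide trust from the socket-level peer address alone. Header values
    such as X-Server-Addr never feed this decision, because a remote client
    can set them to anything it likes."""
    addr = ipaddress.ip_address(peer_ip)
    return any(addr in network for network in INTERNAL_NETWORKS)

# A spoofed header changes nothing when the check ignores it:
spoofed_headers = {"X-Server-Addr": "127.0.0.1"}
print(is_internal_request("203.0.113.50", spoofed_headers))  # False: external peer
print(is_internal_request("127.0.0.1", spoofed_headers))     # True: genuinely local
```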

Pertamina International Shipping Posts Higher Annual Revenue, Profit

PT Pertamina International Shipping (PIS) has reported $3.48 billion in revenue for 2024, up 4.4 percent from 2023. Profit grew 69.3 percent from $329.9 million in 2023 to $558.6 million in 2024. “This strong financial performance proves that the business transformation we have carried out is on the right path and affirms PIS’s position as one of Asia’s reputable maritime logistics companies. This business growth not only marks corporate advancement but also increases our contribution to national energy security”, PIS Corporate Secretary Muhammad Baron said. Throughout 2024, PIS transported 161 billion liters (42.5 billion gallons) of energy. It added 10 new tankers, including four VLGCs (Pertamina Gas Caspia, Dahlia, Tulip and Bergenia) as well as PIS Jawa, Kalimantan, Kerinci, Rinjani, Rokan and Natuna. It had 102 vessels by year-end, the company said. “PIS continues to strengthen its fleet and increase domestic cargo transportation capacity in line with growing national energy demand. PIS is targeting higher transport capacity to ensure energy availability and support Asta Cita’s national energy independence agenda”, Baron added. By 2024, PIS vessels operated 65 international routes, up from 11 in 2021. To meet rising global demand, PIS opened three international offices in Singapore, Dubai, and London through its subsidiary PIS Asia Pacific, increasing non-captive revenue from 4 percent in 2021 to 19 percent in 2024, the company said. “We are grateful that PIS’s achievements, driven by increasingly efficient business transformation, have had a positive impact on the development of the national maritime industry. This is part of PIS’s commitment to revitalize various domestic industries and drive Indonesia’s economy sustainably”, Baron said.

DNOW Acquires MRC Global for $1.5B

DNOW Inc. has agreed to buy MRC Global Inc. in an all-stock transaction valued at $1.5 billion, creating a premier energy and industrial solutions provider. In a joint statement, the companies said the combination brings together complementary portfolios, services, and supply chain solutions. The combined entity, which will retain the name DNOW, will have a footprint of more than 350 service and distribution locations across more than 20 countries, the statement said. Under the terms of the agreement, MRC Global shareholders will receive 0.9489 shares of DNOW common stock for each share of MRC Global common stock, representing an 8.5 percent premium to MRC Global’s 30-day volume-weighted average price of $12.77 as of June 25. Upon the completion of the transaction, DNOW and MRC Global shareholders will respectively own approximately 56.5 percent and approximately 43.5 percent of the resulting company. “The combination of DNOW and MRC Global will create a premier energy and industrial solutions provider with a balanced portfolio of businesses and a diversified customer base fortifying long-term profitability and cash flow generation”, David Cherechinsky, DNOW President and CEO, said. “MRC Global’s differentiated product offerings and complementary assets strengthen DNOW’s 160-year legacy as a worldwide supplier of energy and industrial products and packaged, engineered process and production equipment”. The two companies expect to generate $70 million of annual cost synergies within three years. Cherechinsky will take on the same role in the combined company. Mark Johnson will remain as Chief Financial Officer. The DNOW board will be expanded to 10 directors to accommodate two MRC Global board members. Dick Alario will remain as chairman of the board. The parties expect to close the transaction in the fourth quarter. The combined company will remain headquartered in Houston, Texas.
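As a quick sanity check on the deal terms above, the sketch below works backward from the figures in the announcement. Only the 0.9489 exchange ratio, the $12.77 VWAP and the 8.5 percent premium come from the statement; the implied per-share value and implied DNOW reference price are derived here, not reported by the companies.

```python
# Back-of-envelope arithmetic on the announced exchange-ratio premium.
exchange_ratio = 0.9489   # DNOW shares issued per MRC Global share
mrc_30day_vwap = 12.77    # USD, MRC Global 30-day VWAP as of June 25
stated_premium = 0.085    # 8.5 percent premium per the joint statement

implied_value_per_mrc_share = mrc_30day_vwap * (1 + stated_premium)
implied_dnow_reference_price = implied_value_per_mrc_share / exchange_ratio

print(f"Implied value per MRC Global share: ${implied_value_per_mrc_share:.2f}")   # ~13.86
print(f"Implied DNOW reference price:       ${implied_dnow_reference_price:.2f}")  # ~14.60
```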

Borouge Partners with Honeywell to Develop Autonomous Operations in UAE

Abu Dhabi-based petrochemicals company Borouge PLC has partnered with Honeywell to conduct a proof of concept for AI-powered autonomous operations. The company said in a media release that this collaboration has the potential to revolutionize its UAE plant operations. The collaboration between Borouge and Honeywell is set to deliver the petrochemical industry’s first AI-driven control room designed for full-scale, real-time operation, establishing a new standard for the future of AI in petrochemicals, Borouge said. “Borouge’s AI, Digitalization, and Technology (AIDT) transformation program is setting new standards in operations, innovation, and business performance. By collaborating with global AI leaders such as Honeywell, we are accelerating growth, driving efficiency, and enhancing shareholder value. This project further strengthens Borouge’s competitive edge as we continue to deliver on our ambitious AIDT roadmap,” Hazeem Sultan Al Suwaidi, Chief Executive Officer of Borouge, said. The companies agreed to bring their expertise in process technology and autonomous control capabilities to identify new opportunities to deploy agentic AI solutions and advanced machine learning algorithms, Borouge said. “Our collaboration with Borouge is a clear example of how joint efforts can accelerate innovation across industry. By integrating AI and automation technologies into core operations, we are helping unlock new levels of efficiency, safety, and performance. This agreement shows how advanced technologies, applied with purpose, can reshape industrial operations at scale”, George Bou Mitri, President of Honeywell Industrial Automation in the Middle East, Turkey, Africa and Central Asia, said. Borouge said the initiative seeks to implement proof-of-concept technologies that will improve its operations across its Ruwais facilities in the UAE. By embracing autonomous operations, Borouge said it can optimize production, cut energy consumption, and boost safety, all while driving down costs, at what will be the world’s largest petrochemical site. Borouge expects its AIDT program to bring in $575 million in

ICYMI: ENERGY SECRETARY: It’s Time to Stop Subsidizing Solar and Wind in Perpetuity

New York Post June 27, 2025 “How the Big Beautiful Bill will lower energy costs, shore up the electric grid — and unleash American prosperity” By Chris Wright How much would you pay for an Uber if you didn’t know when it would pick you up or where it was going to drop you off? Probably not much. Yet this is the same effect that variable generation sources like wind and solar have on our power grids. You never know if these energy sources will actually be able to produce electricity when you need it — because you don’t know if the sun will be shining or the wind blowing. Even so, the federal government has subsidized these sources for decades, resulting in higher electricity prices and a less stable grid. . . . President Donald Trump knows what to do: Eliminate green tax credits from the Democrats’ so-called Inflation Reduction Act, including those for wind and solar power. The One Big Beautiful Bill seeks to do that: Along with other proposals, like canceling billions in Biden Green New Deal money and making much-needed investments in the Strategic Petroleum Reserve, it aims to set an aggressive end date for these subsidies and build on the president’s push for affordable, abundant, and secure energy for the nation. . . . As Secretary of Energy — and someone who’s devoted his life to advancing energy innovation to better human lives — I, too, know how these Green New Deal subsidies are fleecing Americans. Wind and solar subsidies have been particularly wasteful and counterproductive. One example: The Renewable Electricity Production Tax Credit was first introduced in 1992, when wind energy was a nascent industry. This tax credit, originally set to phase out in 1999, was sold on a promise of low-cost energy with

FERC’s Christie calls for dispatchable resources after grid operators come ‘close to the edge’

The ability of Midcontinent and East Coast grid operators to narrowly handle this week’s extreme heat and humidity without blackouts reflects the urgent need to ensure the United States has adequate power supplies, according to Mark Christie, chairman of the Federal Energy Regulatory Commission. “We’re simply not building generation fast enough, and we’re not keeping generation that we need to keep,” Christie said Thursday during a media briefing after the agency’s open meeting. “Some of our systems really came close to the edge.” The PJM Interconnection, the largest U.S. grid operator, hit a peak load of about 161 GW on Monday, nearly 5% above its 154 GW peak demand forecast for this summer and the highest demand on its system since 2011. The grid operator had about 10 GW to spare at the peak, according to Christie. At that peak, PJM’s fuel mix included gas at about 44%, nuclear at 20%, coal at 19%, solar at 5% and wind at 4%, according to Christie. Also, PJM told Christie that demand response was “essential” at reducing load, he said. PJM used nearly 4,000 MW of demand response to reduce its load, according to FERC Commissioner Judy Chang. “I see load flexibility as a key tool for grid operators to meet the challenges that we face,” Chang said. PJM called on demand response resources on Monday in its mid-Atlantic and Dominion regions, on Tuesday across its footprint and on Wednesday in its eastern zones, according to Dan Lockwood, a PJM spokesman. PJM was within its reserve requirements, but used DR to provide additional resources for the grid, he said in an email. Resource adequacy is the “central issue” facing the U.S., according to Christie, who said blackouts during the extreme heat could have been deadly. “You never know about the next time,

New York Gov. Hochul hints at ‘fleet-style approach’ to nuclear deployments

Dive Brief: New York could take a page from Ontario’s playbook and deploy multiple reactors to reach and possibly exceed the 1-GW target Democratic Gov. Kathy Hochul announced on Monday, analysts with Clean Air Task Force said in an interview. Whether the New York Power Authority ultimately selects a large light-water reactor like the Westinghouse AP1000 or multiple units of a small modular design like the GE Hitachi BWRX-300, lessons learned on recent and ongoing nuclear builds could translate to lower final costs, said John Carlson, CATF’s senior Northeast regional policy manager. That could enable a “fleet-style approach” to deployment similar to Ontario Power Generation’s plan to build four 300-MW BWRX-300 reactors in sequence, lowering the final cost per unit, said Victor Ibarra, senior manager for CATF’s advanced nuclear energy program. On Monday, Hochul said the plan would “allow for future collaboration with other states and Ontario.” Dive Insight: Gov. Hochul on Monday directed NYPA and the New York Department of Public Service “to develop at least one new nuclear energy facility with a combined capacity of no less than one gigawatt of electricity, either alone or in partnership with private entities,” in upstate New York. As governor, Hochul has considerable influence over NYPA, the state-owned electric utility. In February, for example, she “demand[ed]” NYPA suspend a proposed rate hike. Hochul’s announcement made no mention of specific reactor types or designs, but the suggestion that multiple plants could be in the offing suggests NYPA could consider small modular designs alongside a large light-water reactor, Ibarra said. “It’s good that they’re taking a minute to explore both options,” Carlson said. “I don’t think they know which one is most beneficial yet.” Hochul said NYPA would immediately begin evaluating “technologies, business models and locations” for the first plant. The preconstruction process will

HPE-Juniper deal clears DOJ hurdle, but settlement requires divestitures

In HPE’s press release following the court’s decision, the vendor wrote that “After close, HPE will facilitate limited access to Juniper’s advanced Mist AIOps technology.” In addition, the DOJ stated that the settlement requires HPE to divest its Instant On business and mandates that the merged firm license critical Juniper software to independent competitors. Specifically, HPE must divest its global Instant On campus and branch WLAN business, including all assets, intellectual property, R&D personnel, and customer relationships, to a DOJ-approved buyer within 180 days. Instant On is aimed primarily at the SMB arena and offers a cloud-based package of wired and wireless networking gear that’s designed for so-called out-of-the-box installation and minimal IT involvement, according to HPE. HPE and Juniper focused on the positive in reacting to the settlement. “Our agreement with the DOJ paves the way to close HPE’s acquisition of Juniper Networks and preserves the intended benefits of this deal for our customers and shareholders, while creating greater competition in the global networking market,” HPE CEO Antonio Neri said in a statement. “For the first time, customers will now have a modern network architecture alternative that can best support the demands of AI workloads. The combination of HPE Aruba Networking and Juniper Networks will provide customers with a comprehensive portfolio of secure, AI-native networking solutions, and accelerate HPE’s ability to grow in the AI data center, service provider and cloud segments.” “This marks an exciting step forward in delivering on a critical customer need – a complete portfolio of modern, secure networking solutions to connect their organizations and provide essential foundations for hybrid cloud and AI,” said Juniper Networks CEO Rami Rahim. “We look forward to closing this transaction and turning our shared vision into reality for enterprise, service provider and cloud customers.”

Data center costs surge up to 18% as enterprises face two-year capacity drought

“AI workloads, especially training and archival, can absorb 10-20ms latency variance if offset by 30-40% cost savings and assured uptime,” said Gogia. “Des Moines and Richmond offer better interconnection diversity today than some saturated Tier-1 hubs.” Contract flexibility is also crucial. Rather than traditional long-term leases, enterprises are negotiating shorter agreements with renewal options and exploring revenue-sharing arrangements tied to business performance.

Maximizing what you have

With expansion becoming more costly, enterprises are getting serious about efficiency through aggressive server consolidation, sophisticated virtualization and AI-driven optimization tools that squeeze more performance from existing space. The companies performing best in this constrained market are focusing on optimization rather than expansion. Some embrace hybrid strategies blending existing on-premises infrastructure with strategic cloud partnerships, reducing dependence on traditional colocation while maintaining control over critical workloads.

The long wait

When might relief arrive? CBRE’s analysis shows primary markets had a record 6,350 MW under construction at year-end 2024, more than double 2023 levels. However, power capacity constraints are forcing aggressive pre-leasing and extending construction timelines to 2027 and beyond. The implications for enterprises are stark: with construction timelines extending years due to power constraints, companies are essentially locked into current infrastructure for at least the next few years. Those adapting their strategies now will be better positioned when capacity eventually returns.

Cisco backs quantum networking startup Qunnect

In partnership with Deutsche Telekom’s T-Labs, Qunnect has set up quantum networking testbeds in New York City and Berlin. “Qunnect understands that quantum networking has to work in the real world, not just in pristine lab conditions,” Vijoy Pandey, general manager and senior vice president of Outshift by Cisco, stated in a blog about the investment. “Their room-temperature approach aligns with our quantum data center vision.” Cisco recently announced it is developing a quantum entanglement chip that could ultimately become part of the gear that will populate future quantum data centers. The chip operates at room temperature, uses minimal power, and functions using existing telecom frequencies, according to Pandey.

HPE announces GreenLake Intelligence, goes all-in with agentic AI

Like a teammate who never sleeps

Agentic AI is coming to Aruba Central as well, with an autonomous supervisory module talking to multiple specialized models to, for example, determine the root cause of an issue and provide recommendations. David Hughes, SVP and chief product officer, HPE Aruba Networking, said, “It’s like having a teammate who can work while you’re asleep, work on problems, and when you arrive in the morning, have those proposed answers there, complete with chain of thought logic explaining how they got to their conclusions.” Several new services for FinOps and sustainability in GreenLake Cloud are also being integrated into GreenLake Intelligence, including a new workload and capacity optimizer, extended consumption analytics to help organizations control costs, and predictive sustainability forecasting and a managed service mode in the HPE Sustainability Insight Center. In addition, updates to the OpsRamp operations copilot, launched in 2024, will enable agentic automation including conversational product help, an agentic command center that enables AI/ML-based alerts, incident management, and root cause analysis across the infrastructure when it is released in the fourth quarter of 2025. It is now a validated observability solution for the Nvidia Enterprise AI Factory. OpsRamp will also be part of the new HPE CloudOps software suite, available in the fourth quarter, which will include HPE Morpheus Enterprise and HPE Zerto. HPE said the new suite will provide automation, orchestration, governance, data mobility, data protection, and cyber resilience for multivendor, multicloud, multi-workload infrastructures. Matt Kimball, principal analyst for datacenter, compute, and storage at Moor Insights & Strategy, sees HPE’s latest announcements aligning nicely with enterprise IT modernization efforts, using AI to optimize performance. “GreenLake Intelligence is really where all of this comes together. I am a huge fan of Morpheus in delivering an agnostic orchestration plane, regardless of operating stack

MEF goes beyond metro Ethernet, rebrands as Mplify with expanded scope on NaaS and AI

While MEF is only now rebranding, Vachon said that the scope of the organization had already changed by 2005. Instead of just looking at metro Ethernet, the organization at the time had expanded into carrier Ethernet requirements. The organization has also had a growing focus on solving the challenge of cross-provider automation, which is where the LSO framework fits in. LSO provides the foundation for an automation framework that allows providers to more efficiently deliver complex services across partner networks, essentially creating a standardized language for service integration.

NaaS leadership and industry blueprint

Building on the LSO automation framework, the organization has been working on efforts to help providers with network-as-a-service (NaaS) related guidance and specifications. The organization’s evolution toward NaaS reflects member-driven demands for modern service delivery models. Vachon noted that MEF member organizations were asking for help with NaaS, looking for direction on establishing common definitions and some standard work. The organization responded by developing comprehensive industry guidance. “In 2023 we launched the first blueprint, which is like an industry North Star document. It includes what we think about NaaS and the work we’re doing around it,” Vachon said. The NaaS blueprint encompasses the complete service delivery ecosystem, with APIs including last mile, cloud, data center and security services. (Read more about its vision for NaaS, including easy provisioning and integrated security across a federated network of providers)

AMD rolls out first Ultra Ethernet-compliant NIC

The UEC was launched in 2023 under the Linux Foundation. Members include major tech-industry players such as AMD, Intel, Broadcom, Arista, Cisco, Google, Microsoft, Meta, Nvidia, and HPE. The specification includes GPU and accelerator interconnects as well as support for data center fabrics and scalable AI clusters. AMD’s Pensando Pollara 400GbE NICs are designed for massive scale-out environments containing thousands of AI processors. Pollara is based on customizable hardware that supports using a fully programmable Remote Direct Memory Access (RDMA) transport and hardware-based congestion control. Pollara supports GPU-to-GPU communication with intelligent routing technologies to reduce latency, making it very similar to Nvidia’s NVLink c2c. In addition to being UEC-ready, Pollara 400 offers RoCEv2 compatibility and interoperability with other NICs.

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has become a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
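The multi-model, LLM-as-a-judge pattern the excerpt alludes to can be sketched in a few lines. Everything below is a hypothetical stand-in rather than any vendor's API: the model names are invented and call_model is stubbed with canned replies so the example runs on its own; in practice it would wrap whatever client library an enterprise actually uses.

```python
# Sketch of "LLM as a judge": several inexpensive models draft answers,
# and a separate model picks the strongest one.

def call_model(model: str, prompt: str) -> str:
    """Stand-in for a real model call; returns canned text for illustration."""
    canned = {
        "small-model-a": "Draft answer A",
        "small-model-b": "Draft answer B",
        "small-model-c": "Draft answer C",
        "judge-model": "2",  # pretend the judge prefers candidate #2
    }
    return canned[model]

def answer_with_judge(question: str, workers: list, judge: str) -> str:
    # Collect one candidate answer per worker model.
    candidates = [call_model(m, question) for m in workers]
    listing = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    # Ask the judge model to pick the best candidate by number.
    verdict = call_model(
        judge,
        f"Question: {question}\nCandidates:\n{listing}\n"
        "Reply with the number of the best answer.",
    )
    return candidates[int(verdict.strip()) - 1]

print(answer_with_judge(
    "Summarize our top operational risks for Q3.",
    ["small-model-a", "small-model-b", "small-model-c"],
    "judge-model",
))
```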

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle
