Weaving reality or warping it? The personalization trap in AI systems

AI represents the greatest cognitive offloading in the history of humanity. We once offloaded memory to writing, arithmetic to calculators and navigation to GPS. Now we are beginning to offload judgment, synthesis and even meaning-making to systems that speak our language, learn our habits and tailor our truths.

AI systems are growing increasingly adept at recognizing our preferences, our biases, even our peccadillos. Like attentive servants in one instance or subtle manipulators in another, they tailor their responses to please, to persuade, to assist or simply to hold our attention. 

While the immediate effects may seem benign, in this quiet and invisible tuning lies a profound shift: The version of reality each of us receives becomes progressively more uniquely tailored. Through this process, over time, each person becomes increasingly their own island. This divergence could threaten the coherence and stability of society itself, eroding our ability to agree on basic facts or navigate shared challenges.

AI personalization does not merely serve our needs; it begins to reshape them. The result of this reshaping is a kind of epistemic drift. Each person starts to move, inch by inch, away from the common ground of shared knowledge, shared stories and shared facts, and further into their own reality. 


This is not simply a matter of different news feeds. It is the slow divergence of moral, political and interpersonal realities. In this way, we may be witnessing the unweaving of collective understanding. It is an unintended consequence, yet deeply significant precisely because it is unforeseen. But this fragmentation, while now accelerated by AI, began long before algorithms shaped our feeds.

The unweaving

This unweaving did not begin with AI. As David Brooks reflected in The Atlantic, drawing on the work of philosopher Alasdair MacIntyre, our society has been drifting away from shared moral and epistemic frameworks for centuries. Since the Enlightenment, we have gradually replaced inherited roles, communal narratives and shared ethical traditions with individual autonomy and personal preference. 

What began as liberation from imposed belief systems has, over time, eroded the very structures that once tethered us to common purpose and personal meaning. AI did not create this fragmentation. But it is giving new form and speed to it, customizing not only what we see but how we interpret and believe.

It is not unlike the biblical story of Babel. A unified humanity once shared a single language, only to be fractured, confused and scattered by an act that made mutual understanding all but impossible. Today, we are not building a tower made of stone. We are building a tower of language itself. Once again, we risk the fall.

Human-machine bond

At first, personalization was a way to improve “stickiness” by keeping users engaged longer, returning more often and interacting more deeply with a site or service. Recommendation engines, tailored ads and curated feeds were all designed to keep our attention just a little longer, perhaps to entertain but often to move us to purchase a product. But over time, the goal has expanded. Personalization is no longer just about what holds us. It is about what the system knows about each of us: the dynamic graph of our preferences, beliefs and behaviors that becomes more refined with every interaction.
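That “dynamic graph” can be pictured as a running profile that sharpens with every interaction. A minimal sketch in Python, where the topics, signals and learning rate are all hypothetical illustrations rather than any real platform’s implementation:

```python
# Minimal sketch of a preference profile that refines with each interaction.
# Topics, signal values and the learning rate are invented for illustration.

def update_profile(profile: dict, interaction: dict, rate: float = 0.1) -> dict:
    """Blend a new interaction signal into the running profile (an
    exponential-moving-average update)."""
    updated = dict(profile)
    for topic, signal in interaction.items():
        prior = updated.get(topic, 0.0)
        updated[topic] = (1 - rate) * prior + rate * signal
    return updated

profile = {}
# Repeated engagement with the same topic steadily sharpens the profile.
for _ in range(20):
    profile = update_profile(profile, {"finance": 1.0, "sports": 0.0})

print(round(profile["finance"], 2))  # climbs toward 1.0 as interactions accrue
```

The point of the sketch is the one-way ratchet: each click nudges the profile, and the profile in turn shapes what is shown next, so the refinement compounds without the user ever choosing it.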

Today’s AI systems do not merely predict our preferences. They aim to create a bond through highly personalized interactions and responses, fostering a sense that the AI system understands and cares about the user and supports their uniqueness. The tone of a chatbot, the pacing of a reply and the emotional valence of a suggestion are calibrated not only for efficiency but for resonance, pointing toward a more helpful era of technology. It should not be surprising that some people have even fallen in love with, and married, their bots.

The machine adapts not just to what we click on, but to who we appear to be. It reflects us back to ourselves in ways that feel intimate, even empathic. A recent research paper cited in Nature refers to this as “socioaffective alignment,” the process by which an AI system participates in a co-created social and psychological ecosystem, where preferences and perceptions evolve through mutual influence.

This is not a neutral development. When every interaction is tuned to flatter or affirm, when systems mirror us too well, they blur the line between what resonates and what is real. We are not just staying longer on the platform; we are forming a relationship. We are slowly and perhaps inexorably merging with an AI-mediated version of reality, one that is increasingly shaped by invisible decisions about what we are meant to believe, want or trust. 

This process is not science fiction; its architecture is built on attention, reinforcement learning with human feedback (RLHF) and personalization engines. It is also happening without many of us — likely most of us — even knowing. In the process, we gain AI “friends,” but at what cost? What do we lose, especially in terms of free will and agency?

Author and financial commentator Kyla Scanlon spoke on the Ezra Klein podcast about how the frictionless ease of the digital world may come at the cost of meaning. As she put it: “When things are a little too easy, it’s tough to find meaning in it… If you’re able to lay back, watch a screen in your little chair and have smoothies delivered to you — it’s tough to find meaning within that kind of WALL-E lifestyle because everything is just a bit too simple.”

The personalization of truth

As AI systems respond to us with ever greater fluency, they also move toward increasing selectivity. Two users asking the same question today might receive similar answers, differentiated mostly by the probabilistic nature of generative AI. Yet this is merely the beginning. Emerging AI systems are explicitly designed to adapt their responses to individual patterns, gradually tailoring answers, tone and even conclusions to resonate most strongly with each user. 
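The mechanics of that divergence are simple enough to sketch. In the toy simulation below, two users ask the same question; a personalization weight blends a shared answer with each user’s inferred bias, and the gap between what they see widens as the weight grows. The numbers and the blending rule are invented, purely to make the drift concrete:

```python
# Toy simulation of "epistemic drift": the same question, two users,
# and a personalization weight that deepens over time. All values
# are illustrative, not drawn from any real system.

def personalized_answer(base: float, user_bias: float, weight: float) -> float:
    """Blend a shared 'base answer' with a user-specific bias."""
    return (1 - weight) * base + weight * user_bias

base_answer = 0.0                      # the shared, unpersonalized answer
user_a_bias, user_b_bias = -1.0, 1.0   # opposite inferred leanings

for step in range(0, 11, 5):
    weight = step / 10                 # personalization deepens over time
    a = personalized_answer(base_answer, user_a_bias, weight)
    b = personalized_answer(base_answer, user_b_bias, weight)
    print(f"weight={weight:.1f}  gap={b - a:.1f}")
```

At weight 0.0 both users receive the identical base answer; as the weight approaches 1.0, the gap between their answers grows to the full distance between their biases, even though neither user changed the question.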

Personalization is not inherently manipulative. But it becomes risky when it is invisible, unaccountable or engineered more to persuade than to inform. In such cases, it does not just reflect who we are; it steers how we interpret the world around us.

As the Stanford Center for Research on Foundation Models notes in its 2024 transparency index, few leading models disclose whether their outputs vary by user identity, history or demographics, even though the technical scaffolding for such personalization is increasingly in place and only beginning to be examined. While not yet fully realized across public platforms, this potential to shape responses based on inferred user profiles, producing increasingly tailored informational worlds, represents a profound shift that is already being prototyped and actively pursued by leading companies.

This personalization can be beneficial, and certainly that is the hope of those building these systems. Personalized tutoring shows promise in helping learners progress at their own pace. Mental health apps increasingly tailor responses to support individual needs, and accessibility tools adjust content to meet a range of cognitive and sensory differences. These are real gains. 

But if similar adaptive methods become widespread across information, entertainment and communication platforms, a deeper, more troubling shift looms ahead: A transformation from shared understanding toward tailored, individual realities. When truth itself begins to adapt to the observer, it becomes fragile and increasingly fungible. Instead of disagreements based primarily on differing values or interpretations, we could soon find ourselves struggling simply to inhabit the same factual world.

Of course, truth has always been mediated. In earlier eras, it passed through the hands of clergy, academics, publishers and evening news anchors who served as gatekeepers, shaping public understanding through institutional lenses. These figures were certainly not free from bias or agenda, yet they operated within broadly shared frameworks.

Today’s emerging paradigm promises something qualitatively different: AI-mediated truth through personalized inference that frames, filters and presents information, shaping what users come to believe. But unlike past mediators who, despite flaws, operated within publicly visible institutions, these new arbiters are commercially opaque, unelected and constantly adapting, often without disclosure. Their biases are not doctrinal but encoded through training data, architecture and unexamined developer incentives.

The shift is profound, from a common narrative filtered through authoritative institutions to potentially fractured narratives that reflect a new infrastructure of understanding, tailored by algorithms to the preferences, habits and inferred beliefs of each user. If Babel represented the collapse of a shared language, we may now stand at the threshold of the collapse of shared mediation.

If personalization is the new epistemic substrate, what might truth infrastructure look like in a world without fixed mediators? One possibility is the creation of AI public trusts, inspired by a proposal from legal scholar Jack Balkin, who argued that entities handling user data and shaping perception should be held to fiduciary standards of loyalty, care and transparency. 

AI models could be governed by transparency boards, trained on publicly funded data sets and required to show reasoning steps, alternate perspectives or confidence levels. These “information fiduciaries” would not eliminate bias, but they could anchor trust in process rather than purely in personalization. Builders can begin by adopting transparent “constitutions” that clearly define model behavior, and by offering chain-of-reasoning explanations that let users see how conclusions are shaped. These are not silver bullets, but they are tools that help keep epistemic authority accountable and traceable.
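One way to make the fiduciary idea concrete is a response envelope in which the answer travels with its confidence, a reasoning summary and alternate perspectives, with personalization disclosed rather than hidden. The schema below is a hypothetical sketch of such a norm, not any deployed standard:

```python
# Sketch of an "information fiduciary" style response envelope. The schema
# is hypothetical, illustrating the transparency norms discussed above.

from dataclasses import dataclass, field

@dataclass
class FiduciaryResponse:
    answer: str
    confidence: float                       # 0.0 to 1.0, the model's own estimate
    reasoning_summary: str                  # visible digest of the reasoning steps
    alternate_perspectives: list = field(default_factory=list)
    personalized: bool = False              # disclosed to the user, never silent

    def disclose(self) -> str:
        """Render the answer with its provenance attached."""
        alts = "; ".join(self.alternate_perspectives) or "none offered"
        return (f"{self.answer} (confidence {self.confidence:.0%}; "
                f"personalized: {self.personalized}; alternatives: {alts})")

resp = FiduciaryResponse(
    answer="The policy is likely to reduce emissions.",
    confidence=0.7,
    reasoning_summary="Synthesized from three published studies.",
    alternate_perspectives=["Some economists project smaller effects."],
)
print(resp.disclose())
```

The design choice worth noting is that the disclosure is structural: a client rendering this envelope cannot show the answer without also carrying the confidence and personalization flags, which anchors trust in process rather than in the answer alone.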

AI builders face a strategic and civic inflection point. They are not just optimizing performance; they are also confronting the risk that personalized optimization may fragment shared reality. This demands a new kind of responsibility to users: Designing systems that respect not only their preferences, but their role as learners and believers.

Unraveling and reweaving

What we may be losing is not simply the concept of truth, but the path through which we once recognized it. In the past, mediated truth — although imperfect and biased — was still anchored in human judgment and, often, only a layer or two removed from the lived experience of other humans whom you knew or could at least relate to. 

Today, that mediation is opaque and driven by algorithmic logic. And, while human agency has long been slipping, we now risk something deeper: The loss of the compass that once told us when we were off course. The danger is not only that we will believe what the machine tells us. It is that we will forget how we once discovered the truth for ourselves. What we risk losing is not just coherence, but the will to seek it. And with that, a deeper loss: The habits of discernment, disagreement and deliberation that once held pluralistic societies together. 

If Babel marked the shattering of a common tongue, our moment risks the quiet fading of shared reality. However, there are ways to slow or even to counter the drift. A model that explains its reasoning or reveals the boundaries of its design may do more than clarify output. It may help restore the conditions for shared inquiry. This is not a technical fix; it is a cultural stance. Truth, after all, has always depended not just on answers, but on how we arrive at them together. 

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Kyndryl service aims to control agentic AI across the enterprise

Kyndryl has launched a new service aimed at helping customers manage the growing use of AI agents across the enterprise. Its Agentic AI Framework is an orchestration platform built to deploy and manage autonomous, self-learning agents across business workflows in on-prem, cloud, or hybrid IT environments, according to the company. 

Read More »

Why enterprises need to drive telecom standards

Cutting access costs by supporting VPN-over-FWA or standardizing SD-WAN interconnects could save enterprises as much as a quarter of their VPN costs, but neither is provided in 5G or assured in 6G. Enterprises could change that if they applied appropriate pressure. Reason No. 3: Satellite, private mobile, public mobile, and

Read More »

TechnipFMC Enlisted for Equinor’s Heidrun Extension Project

Energy tech provider TechnipFMC has secured an integrated engineering, procurement, construction, and installation (iEPCI) contract for Equinor ASA’s Heidrun extension project. TechnipFMC said in a media release that the award follows an integrated front-end engineering and design study it had already completed. The new contracts, according to TechnipFMC, are valued between $75 million and $250 million. The project will enhance the current infrastructure and extend the production lifecycle of the Heidrun platform, TechnipFMC said. According to the Norwegian Offshore Directorate, the Heidrun field is in the Norwegian Sea 30 kilometers (18.6 miles) northeast of the Asgard field. Heidrun has a water depth of 350 meters (1,150 feet). “This direct award highlights the mutual benefit of early engagement, which led to an optimized field layout. We are excited to leverage our iEPCI integrated execution to upgrade this important asset for Equinor”, Jonathan Landes, President for Subsea at TechnipFMC, said. In 2024, Equinor increased its ownership in the Heidrun field to 34.4 percent following an asset swap with Petoro. Heidrun is among the fields with the longest remaining life on the Norwegian continental shelf, Equinor said at the time. Earlier this year, Equinor awarded a three-year well plugging contract involving Heidrun to Island Drilling Company AS, Archer Oiltools, and Baker Hughes Norge.  The scope of work under the contract includes mobilization, planned upgrading, and certain integrated drilling services, Equinor said. The semi-submersible rig Island Innovator will be deployed to carry out the contract. It will plug subsea wells at Heidrun, Snorre, and Norne, among others. Erik G. Kirkemo, Equinor senior vice president for drilling and well, said the company aims to drill 600 improved oil recovery wells and approximately 250 exploration wells to sustain its production on the Norwegian Continental Shelf until 2035. 
To contact the author, email [email protected] WHAT DO YOU

Read More »

SLB Sees ‘Constructive’ Second Half of 2025

SLB, the world’s largest oil-services provider, sees resiliency in the industry and remains constructive about the second half of 2025 despite uncertainties in customer demand.  “Despite pockets of activity adjustments in key markets, the industry has shown that it can operate through uncertainty without a significant drop in upstream spending,” SLB Chief Executive Officer Olivier Le Peuch said in a statement Friday. “This has been driven by the combination of capital discipline and the need for energy security.” His comments came as SLB posted second-quarter adjusted profit of 74 cents a share, exceeding analyst expectations. SLB, which gets about 82% of its revenue from international markets, has mitigated some of the negative impacts facing smaller peers that are more levered to domestic production. The company is seen as a gauge for the health of the sector through its broad footprint in all major crude-producing theaters.  US oil drilling has dropped 12% this year to the lowest since September 2021, driven by demand concerns triggered by US President Donald Trump’s tariff proposals and faster-than-expected increases in OPEC+ production. Government forecasters have trimmed domestic crude-production estimates for 2025, signaling a lower-for-longer activity environment for service companies. “Looking ahead, assuming commodity prices stay range bound, we remain constructive for the second half of the year,” Le Peuch said. Traders and analysts will also be listening closely to SLB’s quarterly conference call Friday for more details on the completion of the merger with ChampionX Corp. which the company announced Wednesday, according to a statement. SLB is a “leader in digital services for the energy industry and could soon become a leader in production services and equipment post the close of the acquisition,” Citigroup Global Markets Inc. analyst Scott Gruber wrote in a note to clients. SLB is the first of the biggest oilfield contractors

Read More »

WTI Flat as EU Targets Russian Refined Fuels

Oil ended the day little changed as traders weighed fresh efforts from the European Union to crimp Russian energy exports. West Texas Intermediate crude held steady to close near $67 a barrel after the EU agreed to a lower price cap for Moscow’s crude as part of a package of sanctions on Moscow. The measures include curbs on fuels made from Russian petroleum, additional banking limitations and a ban on a large oil refinery in India. The Asian country, which buys large amounts of Russian crude, is a major exporter of refined products to Europe, where markets for fuels like diesel have been tight. “While the EU measures may not drastically impact crude flows, the restrictions on refined products and expanded shadow fleet targeting are fueling concern in the diesel complex,” said Rebecca Babin, a senior energy trader at CIBC Private Wealth Group. Oil has trended higher since early May, with both Morgan Stanley and Goldman Sachs Group Inc. making the case that a buildup in global crude stockpiles has occurred in regions that don’t hold much sway in price-setting. Meanwhile, spreads in the diesel market are indicating tightness. The gap between the first and second month of New York heating oil futures climbed to $4.17 a gallon at one point in the session, up from $2.99 on Thursday. (Diesel and heating oil are the same product in the US, just taxed differently.) “The logic of diesel tightness propping up crude flat prices remains unchanged,” said Huang Wanzhe, an analyst at Dadi Futures Co., who added that the peak-demand season had seen a solid start. “The key question is how long this strength can last,” she said. In wider markets, strong US data on consumer sentiment eased concerns about the world’s largest economy, helping to underpin a risk-on mood. Crude

Read More »

EU Slaps New Sanctions on Russia and Its Oil Trade

European Union states have approved a fresh sanctions package on Russia over its war against Ukraine including a revised oil price cap, new banking restrictions, and curbs on fuels made from Russian petroleum.  The package, the bloc’s 18th since Moscow’s full scale invasion, will see about 20 more Russian banks cut off the international payments system SWIFT and face a full transaction ban, as well as restrictions imposed on Russian petroleum refined in third countries. A large oil refinery in India, part-owned by Russia’s state-run oil company, Rosneft PJSC, was also blacklisted. The cap on Russian oil, currently set at $60 per barrel, will be set dynamically at 15 percent below market rates moving forward. The new mechanism will see the threshold start off somewhere between $45-$50 and automatically revised at least twice a year based on market prices, Bloomberg previously reported. The latest sanctions by the European Union are aimed at further crimping the Kremlin’s energy revenue, the bulk of which comes from oil exports to India and China.  However, the original price cap imposed by the Group of Seven has had a limited impact on Russia’s oil flows, as the nation has built up a huge shadow fleet of tankers to haul its oil without using western services. The EU has also so far failed to convince the US to offer crucial support to the lower cap. Discussions are ongoing with other G-7 members but the US opposition is making it hard to reach agreement, according to people familiar with the matter. The UK, however, is expected to be on board with the move, the people said. The EU’s move to restrict fuels such as diesel made from Russian crude could have some market impact, as Europe imports the fuel from India, which in turn buys large amounts of

Read More »

Aramco Nears $10B Jafurah Pipeline Stake Sale to GIP

Saudi Aramco is in advanced talks to sell a roughly $10 billion stake in midstream infrastructure serving the giant Jafurah natural gas project to a group led by BlackRock Inc., according to people with knowledge of the matter.  The consortium is backed by BlackRock’s Global Infrastructure Partners unit and could reach an agreement as soon as the coming days, said the people, who asked not to be identified discussing confidential information.  The deal will involve pipelines and other infrastructure serving the $100 billion-plus Jafurah project, which Aramco is developing to supply domestic power plants as well as for export. It’s an unconventional field, meaning the gas is trapped in hard-to-access rock formations and requires special techniques to extract. Reuters reported on Thursday that GIP was nearing a deal, citing unidentified people. Aramco didn’t respond to emailed queries outside regular business hours in Saudi Arabia.  Bloomberg News first revealed in 2021 that Aramco was considering introducing outside investors into parts of the Jafurah project. Aramco was approaching infrastructure funds to gauge their interest in the midstream assets, people with knowledge of the matter said the next year.  State-controlled Aramco has been seeking to bring in international capital and sell stakes in some assets as the government pursues massive projects to build futuristic cities and diversify its economy. The kingdom is pushing ahead with a vast expansion, including developing new tourism destinations and building up a manufacturing base, to prepare for a future in which oil demand will begin to wane. BlackRock was earlier among investors that bought stakes in Aramco’s national gas pipeline network.  WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Read More »

Germany’s Top Performing Smallcap Surges Again

A breakneck rally in the shares of a German pipeline builder accelerated this week after the company won a role plugging LNG terminals on the coast into the nation’s gas grid.  Friedrich Vorwerk Group SE’s stock is up 24% since last Friday’s close, the biggest gain on Germany’s small-cap SDAX index. The bulk of the advance came after it secured a contract valued in the hundreds of millions of euros to build a 86km-long pipeline with a consortium of companies.  It’s an example of how European firms are benefiting from the wall of money Chancellor Friedrich Merz has unleashed to overhaul the nation’s infrastructure and military. The contract is the latest deal to help revive the fortunes of the builder of underground gas, electricity and hydrogen pipes, sending its stock price to a record high.  It’s “more like an add-on. It’s just nice to have,” said Nikolas Demeter, an analyst at B Metzler Seel Sohn & Co AG. For now, the company still has three buy ratings out of five from analysts. That may change because their targets trail the company’s current share price after this week’s contract win took its advance in the year past 200%. The shares now trade at almost 32 times forward blended earnings, compared with about 14 times for the SDAX index and the Stoxx 600 Index, the European benchmark. Labor Challenge Leon Mühlenbruch at mwb research AG, who has a valuation-driven sell rating on the stock, warns that Vorwerk’s full order book could become a problem. “Capacity constraints are becoming increasingly relevant,” Mühlenbruch said. “Further growth depends on expanding that capacity, a challenge due to the persistent shortage of specialized skilled labor.” But for now the Tostedt-based company is on a roll, and its rebound in recent years has been dramatic. After an initial

Read More »

Cisco upgrades 400G optical receiver to boost AI infrastructure throughput

“In the data center, what’s really changed in the last year or so is that with AI buildouts, there’s much, much more optics that are part of 400G and 800G. It’s not so much using 10G and 25G optics, which we still sell a ton of, for campus applications. But for AI infrastructure, the 400G and 800G optics are really the dominant optics for that application,” Gartner said. Most of the AI infrastructure builds have been for training models, especially in hyperscaler environments, Gartner said. “I expect, towards the tail end of this year, we’ll start to see more enterprises deploying AI infrastructure for inference. And once they do that, because it has an Nvidia GPU attached to it, it’s going to be a 400G or 800G optic.” Core enterprise applications – such as real-time trading, high-frequency transactions, multi-cloud communications, cybersecurity analytics, network forensics, and industrial IoT – can also utilize the higher network throughput, Gartner said. 

Read More »

Supermicro bets big on 4-socket X14 servers to regain enterprise trust

In April, Dell announced its PowerEdge R470, R570, R670, and R770 servers with Intel Xeon 6 Processors with P-cores, but with single and double-socket servers. Similarly, Lenovo’s ThinkSystem V4 servers are also based on the Intel Xeon 6 processor but are limited to dual socket configurations. The launch of 4-socket servers by Supermicro reflects a growing enterprise need for localized compute that can support memory-bound AI and reduce the complexity of distributed architectures. “The modern 4-socket servers solve multiple pain points that have intensified with GenAI and memory-intensive analytics. Enterprises are increasingly challenged by latency, interconnect complexity, and power budgets in distributed environments. High-capacity, scale-up servers provide an architecture that is more aligned with low-latency, large-model processing, especially where data residency or compliance constraints limit cloud elasticity,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “Launching a 4-socket Xeon 6 platform and packaging it within their modular ‘building block’ strategy shows Supermicro is focusing on staying ahead in enterprise and AI data center compute,” said Devroop Dhar, co-founder and MD at Primus Partner. A critical launch after major setbacks Experts peg this to be Supermicro’s most significant product launch since it became mired in governance and regulatory controversies. In 2024, the company lost Ernst & Young, its second auditor in two years, following allegations by Hindenburg Research involving accounting irregularities and the alleged export of sensitive chips to sanctioned entities. Compounding its troubles, Elon Musk’s AI startup xAI redirected its AI server orders to Dell, a move that reportedly cost Supermicro billions in potential revenue and damaged its standing in the hyperscaler ecosystem. Earlier this year, HPE signed a $1 billion contract to provide AI servers for X, a deal Supermicro was also bidding for. 
“The X14 launch marks a strategic reinforcement for Supermicro, showcasing its commitment

Read More »

Moving AI workloads off the cloud? A hefty data center retrofit awaits

“If you have a very specific use case, and you want to fold AI into some of your processes, and you need a GPU or two and a server to do that, then, that’s perfectly acceptable,” he says. “What we’re seeing, kind of universally, is that most of the enterprises want to migrate to these autonomous agents and agentic AI, where you do need a lot of compute capacity.” Racks of brand-new GPUs, even without new power and cooling infrastructure, can be costly, and Schneider Electric often advises cost-conscious clients to look at previous-generation GPUs to save money. GPU and other AI-related technology is advancing so rapidly, however, that it’s hard to know when to put down stakes. “We’re kind of in a situation where five years ago, we were talking about a data center lasting 30 years and going through three refreshes, maybe four,” Carlini says. “Now, because it is changing so much and requiring more and more power and cooling you can’t overbuild and then grow into it like you used to.”

Read More »

My take on the Gartner Magic Quadrant for LAN infrastructure? Highly inaccurate

Fortinet being in the leader quadrant may surprise some given they are best known as a security vendor, but the company has quietly built a broad and deep networking portfolio. I have no issue with them being considered a leader and believe for security conscious companies, Fortinet is a great option. Challenger Cisco is the only company listed as a challenger, and its movement out of the leader quadrant highlights just how inaccurate this document is. There is no vendor that sells more networking equipment in more places than Cisco, and it has led enterprise networking for decades. Several years ago, when it was a leader, I could argue the division of engineering between Meraki and Catalyst could have pushed them out, but it didn’t. So why now? At its June Cisco Live event, the company launched a salvo of innovation including AI Canvas, Cisco AI Assistant, and much more. It’s also continually improved the interoperability between Meraki and Catalyst and announced several new products. AI Canvas is a completely new take, was well received by customers at Cisco Live, and reinvents the concept of AIOps. As I stated above, because of the December cutoff time for information gathering, none of this was included, but that makes Cisco’s representation false. Also, I find this MQ very vague in its “Cautions” segment. As an example, it states: “Cisco’s product strategy isn’t well-aligned with key enterprise needs.” Some details here would be helpful. In my conversations with Cisco, which includes with Chief Product Officer and President Jeetu Patel, the company has reiterated that its strategy is to help customers be AI-ready with products that are easier to deploy and manage, more automated, and with a lower cost to run. That seems well-aligned with customer needs. If Gartner is hearing customers want networks

Read More »

Equinix, AWS embrace liquid cooling to power AI implementations

With AWS, it deployed In-Row Heat Exchangers (IRHX), a custom-built liquid cooling system designed specifically for servers using Nvidia’s Blackwell GPUs, its most powerful but also its hottest-running processors, used for AI training and inference. The IRHX unit has three components: a water‑distribution cabinet, an integrated pumping unit, and in‑row fan‑coil modules. It uses direct-to-chip liquid cooling, just like the Equinix servers: cold plates attached to the chips draw off heat, which is carried away by the liquid. The warmed coolant then flows through the coils of heat exchangers, where high‑speed fans blow across the coils to cool the liquid, like a car radiator. This type of cooling is nothing new; Vertiv, CoolIT, Motivair, and Delta Electronics all sell direct-to-chip liquid cooling options. But AWS separates the pumping unit from the fan-coil modules, allowing a single pumping system to support a large number of fan units. These modular fans can be added or removed as cooling requirements evolve, giving AWS the flexibility to adjust the system per row and site. This led to some concern that Amazon would disrupt the market for liquid cooling, but as a Dell’Oro Group analyst put it, Amazon develops custom technologies for itself and does not go into competition or business with other data center infrastructure companies.

Read More »

Intel CEO: We are not in the top 10 semiconductor companies

The Q&A session came on the heels of layoffs across the company. Tan was hired in March, and almost immediately he began to promise to divest and reduce non-core assets. Gelsinger had also begun divesting the company of losers, but those were nibbles around the edge. Tan is promising to take an axe to the place. In addition to discontinuing products, the company has outsourced marketing and media relations — for the first time in more than 25 years of covering this company, I have no internal contacts at Intel. Many more workers are going to lose their jobs in coming weeks. So far about 500 have been cut in Oregon and California, but many more are expected — as much as 20% of the overall company staff may go, according to published reports, and Intel has over 100,000 employees. Tan believes the company is bloated and too bogged down with layers of management to be reactive and responsive in the same way that AMD and Nvidia are. “The whole process of that (deciding) is so slow and eventually nobody makes a decision,” he is quoted as saying. Something he has decided on is AI, and he seems to have decided to give up. “On training, I think it is too late for us,” Tan said, adding that Nvidia’s position in that market is simply “too strong.” So there goes what sales Gaudi3 could muster. Instead, Tan said Intel will focus on “edge” artificial intelligence, where AI capabilities are brought to PCs and other remote devices, rather than to the big AI processors in data centers that Nvidia and AMD are pursuing. “That’s an area that I think is emerging, coming up very big and we want to make sure that we capture,” Tan said.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote, between them, $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet it has become a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
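The LLM-as-a-judge idea mentioned above can be sketched in a few lines. This is a hypothetical illustration, not the approach of any specific provider: the keyword-based judge functions stand in for real model calls, and the majority-vote rule is one simple way to combine three or more models, as the excerpt describes.

```python
# Hypothetical sketch of the "LLM as a judge" pattern: several cheaper
# models each render a pass/fail verdict on a candidate answer, and a
# simple majority vote decides the outcome. The keyword judges below are
# stand-ins for real model API calls.

from typing import Callable, List

# A judge maps (question, answer) to a pass/fail verdict.
Judge = Callable[[str, str], bool]

def judge_by_keyword(keyword: str) -> Judge:
    """Stand-in for a model call: 'passes' if the answer mentions a keyword."""
    def judge(question: str, answer: str) -> bool:
        return keyword.lower() in answer.lower()
    return judge

def majority_verdict(judges: List[Judge], question: str, answer: str) -> bool:
    """Use three or more judges and accept the majority opinion, which
    smooths over any single model's mistaken or hallucinated verdict."""
    votes = sum(judge(question, answer) for judge in judges)
    return votes > len(judges) / 2

judges = [judge_by_keyword("paris"),
          judge_by_keyword("france"),
          judge_by_keyword("capital")]

print(majority_verdict(judges, "What is the capital of France?",
                       "Paris is the capital of France."))  # True
```

In practice each judge would be a different (often cheaper) model prompted to grade the answer, and the vote threshold or a weighted score would replace the simple majority shown here.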

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models using these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »