How LLMs Work: Pre-Training to Post-Training, Neural Networks, Hallucinations, and Inference

With the recent explosion of interest in large language models (LLMs), they often seem almost magical. But let’s demystify them.

I wanted to step back and unpack the fundamentals — breaking down how LLMs are built, trained, and fine-tuned to become the AI systems we interact with today.

This two-part deep dive is something I’ve been meaning to do for a while, and it was also inspired by Andrej Karpathy’s hugely popular 3.5-hour YouTube video, which racked up 800,000+ views in just 10 days. Andrej is a founding member of OpenAI, and his insights are gold. You get the idea.

If you have the time, his video is definitely worth watching. But let’s be real — 3.5 hours is a long watch. So, for all the busy folks who don’t want to miss out, I’ve distilled the key concepts from the first 1.5 hours into this 10-minute read, adding my own breakdowns to help you build a solid intuition.

What you’ll get

Part 1 (this article): Covers the fundamentals of LLMs, including pre-training to post-training, neural networks, hallucinations, and inference.

Part 2: Reinforcement learning with human/AI feedback, investigating o1 models, DeepSeek R1, AlphaGo

Let’s go! I’ll start by looking at how LLMs are built.

At a high level, there are 2 key phases: pre-training and post-training.

1. Pre-training

Before an LLM can generate text, it must first learn how language works. This happens through pre-training, a highly computationally intensive task.

Step 1: Data collection and preprocessing

The first step in training an LLM is gathering as much high-quality text as possible. The goal is to create a massive and diverse dataset containing a wide range of human knowledge.

One source is Common Crawl, a free, open repository of web crawl data containing over 250 billion web pages collected over 18 years. However, raw web data is noisy — containing spam, duplicates, and low-quality content — so preprocessing is essential. If you’re interested in preprocessed datasets, FineWeb offers a curated version of Common Crawl and is available on Hugging Face.
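If you’d like to poke around the data yourself, here’s a minimal sketch that streams a slice of FineWeb with the Hugging Face `datasets` library. The dataset ID and the `sample-10BT` config follow FineWeb’s listing on Hugging Face; treat the exact names as assumptions worth double-checking.

```python
from datasets import load_dataset

# Stream the dataset so we don't download the multi-terabyte corpus.
fineweb = load_dataset(
    "HuggingFaceFW/fineweb",  # FineWeb's dataset ID on Hugging Face
    name="sample-10BT",       # a small published sample config
    split="train",
    streaming=True,
)

# Peek at a few cleaned documents.
for doc in fineweb.take(3):
    print(doc["text"][:200])
```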

Once cleaned, the text corpus is ready for tokenization.

Step 2: Tokenization

Before a neural network can process text, it must be converted into numerical form. This is done through tokenization, where words, subwords, or characters are mapped to unique numerical tokens.

Think of tokens as the fundamental building blocks of all language models. In GPT-4, there are 100,277 possible tokens. A popular web tool, Tiktokenizer, lets you experiment with tokenization and see how text is broken down into tokens. Try entering a sentence, and you’ll see each word or subword assigned its own numerical ID.
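You can reproduce what Tiktokenizer shows with OpenAI’s open-source tiktoken library. As a small sketch: `cl100k_base` is the encoding GPT-4 uses, and its vocabulary size matches the 100,277 figure above.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4's encoding
print(enc.n_vocab)  # 100277 possible tokens

ids = enc.encode("we are cooking")
print(ids)                             # the numerical token IDs
print([enc.decode([i]) for i in ids])  # the text piece behind each ID
```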

Step 3: Neural network training

Once the text is tokenized, the neural network learns to predict the next token based on its context. The model takes an input sequence of tokens (e.g., “we are cook ing”) and processes it through a giant mathematical expression — the model’s architecture — to predict the next token.

A neural network consists of 2 key parts:

  1. Parameters (weights) — the learned numerical values from training.
  2. Architecture (mathematical expression) — the structure defining how the input tokens are processed to produce outputs.

Initially, the model’s predictions are random, but as training progresses, it learns to assign probabilities to possible next tokens.

When the model’s prediction is compared against the correct token (e.g., “food”), it adjusts its billions of parameters (weights) through backpropagation — an optimization process that reinforces correct predictions by increasing their probabilities while reducing the likelihood of incorrect ones.

This process is repeated billions of times across massive datasets.
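To make the mechanics concrete, here’s a toy next-token training step in PyTorch. This is a deliberately tiny stand-in (a single linear layer rather than a transformer), and the token IDs are random placeholders; only the loss-and-backpropagation pattern mirrors the real thing.

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, context_len = 100_277, 64, 4

# A stand-in "giant mathematical expression" (real models are transformers).
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Flatten(),
    nn.Linear(embed_dim * context_len, vocab_size),
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

inputs = torch.randint(0, vocab_size, (1, context_len))  # e.g. "we are cook ing"
target = torch.randint(0, vocab_size, (1,))              # the true next token, e.g. "food"

logits = model(inputs)                             # a score for every possible token
loss = nn.functional.cross_entropy(logits, target)
opt.zero_grad()
loss.backward()                                    # backpropagation
opt.step()                                         # nudge weights toward "food"
```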

Base model — the output of pre-training

At this stage, the base model has learned:

  • How words, phrases and sentences relate to each other
  • Statistical patterns in its training data

However, base models are not yet optimised for real-world tasks. You can think of them as an advanced autocomplete system — they predict the next token based on probability, but with limited instruction-following ability.

A base model can sometimes recite training data verbatim and can be used for certain applications through in-context learning, where you guide its responses by providing examples in your prompt. However, to make the model truly useful and reliable, it requires further training.
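Here’s what in-context learning looks like in practice; the translation pairs are a made-up illustration:

```python
# A few-shot prompt "programs" a base model through examples alone.
prompt = """English: cheese -> French: fromage
English: bread -> French: pain
English: apple -> French:"""
# A capable base model will likely continue with " pomme" -- not because
# it follows instructions, but because that completion is statistically
# probable given the pattern in its context.
```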

2. Post-training — making the model useful

Base models are raw and unrefined. To make them helpful, reliable, and safe, they go through post-training, where they are fine-tuned on smaller, specialised datasets.

Because the model is a neural network, it cannot be explicitly programmed like traditional software. Instead, we “program” it implicitly by training it on structured labeled datasets that represent examples of desired interactions.

How post-training works

Specialised datasets are created, consisting of structured examples of how the model should respond in different situations.

Some types of post-training include:

  1. Instruction/conversation fine-tuning
    Goal: Teach the model to follow instructions, be task-oriented, engage in multi-turn conversations, follow safety guidelines, refuse malicious requests, etc.
    E.g., InstructGPT (2022): OpenAI hired some 40 contractors to create these labelled datasets. These human annotators wrote prompts and provided ideal responses based on safety guidelines. Today, many datasets are generated automatically, with humans reviewing and editing them for quality.
  2. Domain-specific fine-tuning
    Goal: Adapt the model for specialised fields like medicine, law, and programming.

Post-training also introduces special tokens — symbols that were not used during pre-training — to help the model understand the structure of interactions. These tokens signal where a user’s input starts and ends and where the AI’s response begins, ensuring that the model correctly distinguishes between prompts and replies.
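As a hedged illustration, here’s roughly how a conversation might be flattened into a token stream. The `<|im_start|>`/`<|im_end|>` markers below follow the ChatML convention used by some models; other models use different special tokens.

```python
def render_chat(messages):
    """Serialize a conversation using ChatML-style special tokens."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "\n".join(parts)

print(render_chat([{"role": "user", "content": "What is 2 + 2?"}]))
```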

Now, we’ll move on to some other key concepts.

Inference — how the model generates new text

Inference can be performed at any stage, even midway through pre-training, to evaluate how well the model has learned.

When given an input sequence of tokens, the model assigns probabilities to all possible next tokens based on patterns it has learned during training.

Instead of always choosing the most likely token, it samples from this probability distribution — similar to flipping a biased coin, where higher-probability tokens are more likely to be selected.

This process repeats iteratively, with each newly generated token becoming part of the input for the next prediction. 

Token selection is stochastic and the same input can produce different outputs. Over time, the model generates text that wasn’t explicitly in its training data but follows the same statistical patterns.
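A minimal sketch of that sampling loop in PyTorch; `model` here is a placeholder assumed to return logits of shape (batch, sequence, vocab):

```python
import torch

def generate(model, tokens, n_new, temperature=1.0):
    for _ in range(n_new):
        logits = model(tokens)[:, -1, :]                    # scores for the next token
        probs = torch.softmax(logits / temperature, dim=-1)
        next_tok = torch.multinomial(probs, num_samples=1)  # the biased coin flip
        tokens = torch.cat([tokens, next_tok], dim=1)       # feed it back as input
    return tokens
```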

Hallucinations — when LLMs generate false info

Why do hallucinations occur?

Hallucinations happen because LLMs do not “know” facts — they simply predict the most statistically likely sequence of words based on their training data.

Early models struggled significantly with hallucinations.

For instance, if the training data contains many “Who is…” questions with definitive answers, the model learns that such queries should always receive confident responses, even when it lacks the necessary knowledge.

When asked about an unknown person, the model does not default to “I don’t know” because this pattern was not reinforced during training. Instead, it generates its best guess, often leading to fabricated information.

How do you reduce hallucinations?

Method 1: Saying “I don’t know”

Improving factual accuracy requires explicitly training the model to recognise what it does not know — a task that is more complex than it seems.

This is done via self-interrogation, a process that helps define the model’s knowledge boundaries.

Self-interrogation can be automated using another AI model, which generates questions to probe for knowledge gaps. If the model being trained produces a false answer, new training examples are added where the correct response is: “I’m not sure. Could you provide more context?”
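A hedged sketch of that pipeline; `ask_model` and `answer_is_correct` are hypothetical helpers standing in for the model being probed and the fact-checking step:

```python
IDK_RESPONSE = "I'm not sure. Could you provide more context?"

def build_idk_examples(questions, ask_model, answer_is_correct):
    """Collect training examples where the model should admit uncertainty."""
    examples = []
    for q in questions:
        answer = ask_model(q)                 # hypothetical: query the model
        if not answer_is_correct(q, answer):  # hypothetical: verify vs. a source
            examples.append({"prompt": q, "response": IDK_RESPONSE})
    return examples
```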

If a model has seen a question many times in training, it will assign a high probability to the correct answer.

If the model has not encountered the question before, it distributes probability more evenly across multiple possible tokens, making the output more randomised. No single token stands out as the most likely choice.

Fine-tuning explicitly trains the model to handle low-confidence outputs with predefined responses.

For example, when I asked ChatGPT-4o, “Who is asdja rkjgklfj?”, it correctly responded: “I’m not sure who that is. Could you provide more context?”

Method 2: Doing a web search

A more advanced method is to extend the model’s knowledge beyond its training data by giving it access to external search tools.

At a high level, when a model detects uncertainty, it can trigger a web search. The search results are then inserted into the model’s context window — essentially allowing this new data to become part of its working memory. The model references this new information while generating a response.
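A hedged sketch of that loop. `model_step` and `web_search` are hypothetical helpers, and the `<search>`/`<results>` markers are illustrative stand-ins for the special tokens real systems use to trigger and return tool calls.

```python
def answer_with_search(question, model_step, web_search):
    """One round of the search-tool loop."""
    context = question
    output = model_step(context)
    if output.startswith("<search>"):  # the model signals it needs outside info
        query = output.removeprefix("<search>").removesuffix("</search>")
        results = web_search(query)
        # Insert the results into the context window -- the model's
        # working memory -- and generate again.
        context += f"\n<results>{results}</results>"
        output = model_step(context)
    return output
```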

Vague recollections vs working memory

Generally speaking, LLMs have two types of knowledge access.

  1. Vague recollections — the knowledge stored in the model’s parameters from pre-training. This is based on patterns it learned from vast amounts of internet data but is neither precise nor searchable.
  2. Working memory — the information available in the model’s context window, which is directly accessible during inference. Any text provided in the prompt acts as short-term memory, allowing the model to recall details while generating responses.

Adding relevant facts within the context window significantly improves response quality.

Knowledge of self 

When asked questions like “Who are you?” or “What built you?”, an LLM will generate a statistical best guess based on its training data, unless explicitly programmed to respond accurately. 

LLMs do not have true self-awareness; their responses depend on patterns seen during training.

One way to provide the model with a consistent identity is by using a system prompt, which sets predefined instructions about how it should describe itself, its capabilities, and its limitations.
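A minimal sketch using the common chat-messages convention; “HelpBot” and “ExampleCorp” are hypothetical names for illustration.

```python
messages = [
    {
        "role": "system",
        "content": "You are HelpBot, an assistant built by ExampleCorp. "  # hypothetical identity
                   "Describe yourself accordingly when asked who you are.",
    },
    {"role": "user", "content": "Who built you?"},
]
# The model now answers from its context window (working memory)
# rather than guessing from pre-training patterns.
```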

To end off

That’s a wrap for Part 1! I hope this has helped you build intuition on how LLMs work. In Part 2, we’ll dive deeper into reinforcement learning and some of the latest models.

Got questions or ideas for what I should cover next? Drop them in the comments — I’d love to hear your thoughts. See you in Part 2! 🙂
