
How LLMs Work: Pre-Training to Post-Training, Neural Networks, Hallucinations, and Inference


With the recent explosion of interest in large language models (LLMs), they often seem almost magical. But let’s demystify them.

I wanted to step back and unpack the fundamentals — breaking down how LLMs are built, trained, and fine-tuned to become the AI systems we interact with today.

This two-part deep dive is something I’ve been meaning to do for a while, and it was also inspired by Andrej Karpathy’s widely popular 3.5-hour YouTube video, which racked up 800,000+ views in just 10 days. Andrej is a founding member of OpenAI; his insights are gold. You get the idea.

If you have the time, his video is definitely worth watching. But let’s be real — 3.5 hours is a long watch. So, for all the busy folks who don’t want to miss out, I’ve distilled the key concepts from the first 1.5 hours into this 10-minute read, adding my own breakdowns to help you build a solid intuition.

What you’ll get

Part 1 (this article): Covers the fundamentals of LLMs, including pre-training, post-training, neural networks, hallucinations, and inference.

Part 2: Reinforcement learning with human/AI feedback, investigating o1 models, DeepSeek R1, AlphaGo

Let’s go! I’ll start by looking at how LLMs are built.

At a high level, there are 2 key phases: pre-training and post-training.

1. Pre-training

Before an LLM can generate text, it must first learn how language works. This happens through pre-training, a highly computationally intensive task.

Step 1: Data collection and preprocessing

The first step in training an LLM is gathering as much high-quality text as possible. The goal is to create a massive and diverse dataset containing a wide range of human knowledge.

One source is Common Crawl, a free, open repository of web crawl data containing 250 billion web pages collected over 18 years. However, raw web data is noisy — containing spam, duplicates, and low-quality content — so preprocessing is essential. If you’re interested in preprocessed datasets, FineWeb offers a curated version of Common Crawl and is available on Hugging Face.

Once cleaned, the text corpus is ready for tokenization.

Step 2: Tokenization

Before a neural network can process text, it must be converted into numerical form. This is done through tokenization, where words, subwords, or characters are mapped to unique numerical tokens.

Think of tokens as the fundamental building blocks of all language models. In GPT-4, there are 100,277 possible tokens. A popular tool, Tiktokenizer, lets you experiment with tokenization and see how text is broken down into tokens. Try entering a sentence, and you’ll see each word or subword assigned a numerical ID.
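To make this concrete, here is a toy sketch of tokenization. The tiny hand-made vocabulary and greedy longest-match rule are purely illustrative — real tokenizers like GPT-4’s learn a ~100K-entry subword vocabulary from data using byte-pair encoding — but the input/output shape is the same: text in, integer IDs out.

```python
# Toy tokenizer: map text to integer IDs via greedy longest-match against a
# tiny hand-made vocabulary. Real tokenizers (e.g. GPT-4's, with 100,277
# tokens) learn their subword vocabulary from data via byte-pair encoding.
VOCAB = {"we": 0, " are": 1, " cook": 2, "ing": 3, " food": 4}

def tokenize(text: str) -> list[int]:
    """Greedily match the longest vocabulary entry at each position."""
    ids = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):   # try longest piece first
            if text[i:j] in VOCAB:
                ids.append(VOCAB[text[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token covers {text[i:]!r}")
    return ids

print(tokenize("we are cooking"))  # [0, 1, 2, 3]
```

Note how “cooking” splits into two tokens (“ cook” + “ing”) — exactly the kind of subword split you’ll see in Tiktokenizer.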

Step 3: Neural network training

Once the text is tokenized, the neural network learns to predict the next token based on its context. The model takes an input sequence of tokens (e.g., “we are cook ing”) and processes it through a giant mathematical expression — which represents the model’s architecture — to predict the next token.

A neural network consists of 2 key parts:

  1. Parameters (weights) — the learned numerical values from training.
  2. Architecture (mathematical expression) — the structure defining how the input tokens are processed to produce outputs.

Initially, the model’s predictions are random, but as training progresses, it learns to assign probabilities to possible next tokens.

Training compares the prediction against the actual next token in the data (e.g., “food”). The model then adjusts its billions of parameters (weights) through backpropagation — an optimization process that reinforces correct predictions by increasing their probabilities while reducing the likelihood of incorrect ones.

This process is repeated billions of times across massive datasets.
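The single training step described above can be sketched in miniature. Everything here is a stand-in: a four-token vocabulary, a flat logit vector in place of the network’s output, and one hand-rolled gradient step of the cross-entropy loss instead of full backpropagation — but the direction of the update is the real mechanism.

```python
# Minimal sketch of one training step: compute softmax probabilities over a
# toy vocabulary, then nudge the logits toward the observed next token
# ("food") using the cross-entropy gradient, as backpropagation would.
import math

vocab = ["food", "dinner", "a", "the"]
logits = [0.0, 0.0, 0.0, 0.0]      # untrained: all tokens equally likely
target = vocab.index("food")       # the actual next token in the data
lr = 1.0                           # learning rate (illustrative)

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)            # [0.25, 0.25, 0.25, 0.25]
# Gradient of cross-entropy w.r.t. each logit is (prob - one_hot(target)).
for i in range(len(logits)):
    grad = probs[i] - (1.0 if i == target else 0.0)
    logits[i] -= lr * grad         # gradient descent step

print(softmax(logits))             # "food" now has the highest probability
```

Repeat this billions of times over trillions of tokens and the probabilities come to mirror the statistics of the training data.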

Base model — the output of pre-training

At this stage, the base model has learned:

  • How words, phrases, and sentences relate to each other
  • Statistical patterns in its training data

However, base models are not yet optimised for real-world tasks. You can think of them as an advanced autocomplete system — they predict the next token based on probability, but with limited instruction-following ability.

A base model can sometimes recite training data verbatim and can be used for certain applications through in-context learning, where you guide its responses by providing examples in your prompt. However, to make the model truly useful and reliable, it requires further training.

2. Post-training — Making the model useful

Base models are raw and unrefined. To make them helpful, reliable, and safe, they go through post-training, where they are fine-tuned on smaller, specialised datasets.

Because the model is a neural network, it cannot be explicitly programmed like traditional software. Instead, we “program” it implicitly by training it on structured labeled datasets that represent examples of desired interactions.

How post-training works

Specialised datasets are created, consisting of structured examples of how the model should respond in different situations.

Some types of post-training include:

  1. Instruction/conversation fine-tuning
    Goal: Teach the model to follow instructions, be task-oriented, engage in multi-turn conversations, follow safety guidelines, refuse malicious requests, etc.
    E.g., InstructGPT (2022): OpenAI hired some 40 contractors to create these labelled datasets. These human annotators wrote prompts and provided ideal responses based on safety guidelines. Today, many datasets are generated automatically, with humans reviewing and editing them for quality.
  2. Domain-specific fine-tuning
    Goal: Adapt the model to specialised fields like medicine, law, and programming.

Post-training also introduces special tokens — symbols that were not used during pre-training — to help the model understand the structure of interactions. These tokens signal where a user’s input starts and ends and where the AI’s response begins, ensuring that the model correctly distinguishes between prompts and replies.
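Here’s what that structure can look like. The exact template and token names vary by model; this sketch mimics the ChatML-style markers (`<|im_start|>`, `<|im_end|>`) used by some OpenAI models, purely as an illustration.

```python
# Illustrative chat template: wrap each turn in special tokens so the model
# can tell prompts from replies. Token names mimic ChatML; real templates
# differ from model to model.
def render(messages):
    out = []
    for role, content in messages:
        out.append(f"<|im_start|>{role}\n{content}<|im_end|>")
    out.append("<|im_start|>assistant\n")  # the model continues from here
    return "\n".join(out)

prompt = render([("user", "What is 2+2?")])
print(prompt)
```

During post-training the model sees millions of conversations rendered this way, so at inference time the trailing `<|im_start|>assistant` marker cues it to produce a reply rather than continue the user’s text.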

Now, we’ll move on to some other key concepts.

Inference — how the model generates new text

Inference can be performed at any stage, even midway through pre-training, to evaluate how well the model has learned.

When given an input sequence of tokens, the model assigns probabilities to all possible next tokens based on patterns it has learned during training.

Instead of always choosing the most likely token, it samples from this probability distribution — similar to flipping a biased coin, where higher-probability tokens are more likely to be selected.

This process repeats iteratively, with each newly generated token becoming part of the input for the next prediction. 

Token selection is stochastic, so the same input can produce different outputs. Over time, the model generates text that wasn’t explicitly in its training data but follows the same statistical patterns.
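The whole inference loop — sample a token, append it, repeat — fits in a few lines. In this sketch a hand-written table of next-token probabilities stands in for the neural network, and `random.choices` plays the role of the biased coin flip.

```python
# Sketch of autoregressive inference: at each step, sample the next token
# from a probability distribution, then feed it back in as context.
# The NEXT table is a toy stand-in for the neural network's output.
import random

NEXT = {
    "we":   {"are": 0.9, "cook": 0.1},
    "are":  {"cook": 0.8, "food": 0.2},
    "cook": {"ing": 1.0},
    "ing":  {"food": 1.0},
}

def generate(start, max_steps, seed=0):
    rng = random.Random(seed)      # fixed seed makes the run repeatable
    out = [start]
    for _ in range(max_steps):
        dist = NEXT.get(out[-1])
        if dist is None:           # no known continuation: stop
            break
        tokens, weights = zip(*dist.items())
        out.append(rng.choices(tokens, weights=weights)[0])  # biased coin
    return out

print(generate("we", 4))
```

Change the seed (or drop it) and the output can change — that is the stochasticity described above. Real systems expose knobs like temperature to sharpen or flatten the distribution before sampling.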

Hallucinations — when LLMs generate false info

Why do hallucinations occur?

Hallucinations happen because LLMs do not “know” facts — they simply predict the most statistically likely sequence of words based on their training data.

Early models struggled significantly with hallucinations.

For instance, if the training data contains many “Who is…” questions with definitive answers, the model learns that such queries should always have confident responses, even when it lacks the necessary knowledge.

When asked about an unknown person, the model does not default to “I don’t know” because this pattern was not reinforced during training. Instead, it generates its best guess, often leading to fabricated information.

How do you reduce hallucinations?

Method 1: Saying “I don’t know”

Improving factual accuracy requires explicitly training the model to recognise what it does not know — a task that is more complex than it seems.

This is done via self-interrogation, a process that helps define the model’s knowledge boundaries.

Self-interrogation can be automated using another AI model, which generates questions to probe knowledge gaps. If the model under test produces a false answer, new training examples are added where the correct response is: “I’m not sure. Could you provide more context?”

If a model has seen a question many times in training, it will assign a high probability to the correct answer.

If the model has not encountered the question before, it distributes probability more evenly across multiple possible tokens, making the output more randomised. No single token stands out as the most likely choice.
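The contrast between a peaked distribution (a well-learned answer) and a near-uniform one (a guess) can be quantified with entropy. The numbers below are illustrative, not from any real model, but they show the signal fine-tuning can latch onto.

```python
# Entropy (in bits) of a next-token distribution: low for a confident,
# peaked distribution; maximal for a uniform one. Values are illustrative.
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

confident = [0.94, 0.02, 0.02, 0.02]  # question seen many times in training
unsure    = [0.25, 0.25, 0.25, 0.25]  # probability spread evenly

print(round(entropy(confident), 2))   # low entropy: one clear winner
print(round(entropy(unsure), 2))      # 2.0 bits, the maximum for 4 outcomes
```

Intuitively, fine-tuning teaches the model that the high-entropy situation should map to “I’m not sure” rather than to its least-bad guess.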

Fine-tuning explicitly trains the model to handle low-confidence outputs with predefined responses.

For example, when I asked ChatGPT-4o, “Who is asdja rkjgklfj?”, it correctly responded: “I’m not sure who that is. Could you provide more context?”

Method 2: Doing a web search

A more advanced method is to extend the model’s knowledge beyond its training data by giving it access to external search tools.

At a high level, when a model detects uncertainty, it can trigger a web search. The search results are then inserted into the model’s context window — essentially allowing this new data to become part of its working memory. The model references this new information while generating a response.

Vague recollections vs working memory

Generally speaking, LLMs have two types of knowledge access.

  1. Vague recollections — the knowledge stored in the model’s parameters from pre-training. This is based on patterns learned from vast amounts of internet data but is neither precise nor searchable.
  2. Working memory — the information available in the model’s context window, which is directly accessible during inference. Any text provided in the prompt acts as short-term memory, allowing the model to recall details while generating responses.

Adding relevant facts within the context window significantly improves response quality.

Knowledge of self 

When asked questions like “Who are you?” or “What built you?”, an LLM will generate a statistical best guess based on its training data, unless explicitly programmed to respond accurately. 

LLMs do not have true self-awareness; their responses depend on patterns seen during training.

One way to provide the model with a consistent identity is by using a system prompt, which sets predefined instructions about how it should describe itself, its capabilities, and its limitations.

To end off

That’s a wrap for Part 1! I hope this has helped you build intuition on how LLMs work. In Part 2, we’ll dive deeper into reinforcement learning and some of the latest models.

Got questions or ideas for what I should cover next? Drop them in the comments — I’d love to hear your thoughts. See you in Part 2! 🙂

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Cloudflare firewall reacts badly to React exploit mitigation

During the same window, Downdetector saw a spike in problem reports for enterprise services including Shopify, Zoom, Claude AI, and Amazon Web Services, and a host of consumer services from games to dating apps. Cloudflare explained the outage on its service status page: “A change made to how Cloudflare’s Web

Read More »

CompTIA training targets workplace AI use

CompTIA AI Essentials (V2) delivers training to help employees, students, and other professionals strengthen the skills they need for effective business use of AI tools such as ChatGPT, Copilot, and Gemini. In its first iteration, CompTIA’s AI Essentials focused on AI fundamentals to help professionals learn how to apply AI technology

Read More »

OPEC Receives Updated Compensation Plans

A statement posted on OPEC’s website this week announced that the OPEC Secretariat has received updated compensation plans from Iraq, the United Arab Emirates (UAE), Kazakhstan, and Oman. A table accompanying this statement showed that these compensation plans amount to a total of 221,000 barrels per day in November, 272,000

Read More »

LogicMonitor closes Catchpoint buy, targets AI observability

The acquisition combines LogicMonitor’s observability platform with Catchpoint’s internet-level intelligence, which monitors performance from thousands of global vantage points. Once integrated, Catchpoint’s synthetic monitoring, network data, and real-user monitoring will feed directly into Edwin AI, LogicMonitor’s intelligence engine. The goal is to let enterprise customers shift from reactive alerting to

Read More »

Chevron, Gorgon Partners OK $2B to Drill for More Gas

Chevron Corp’s Australian unit and its joint venture partners have reached a final investment decision to further develop the massive Gorgon natural gas project in Western Australia, it said in a statement on Friday. Chevron Australia and its partners — including Exxon Mobil Corp. and Shell Plc — will spend A$3 billion ($2 billion) connecting two offshore natural gas fields to existing infrastructure and processing facilities on Barrow Island as part of the Gorgon Stage 3 development, it said in the statement. Six wells will also be drilled.  Gorgon, on the remote Barrow Island in northwestern Australia, is the largest resource development in Australia’s history, and produces about 15.6 million tons of liquefied natural gas a year. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Read More »

USA Crude Oil Stocks Rise Week on Week

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), increased by 0.6 million barrels from the week ending November 21 to the week ending November 28, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report. That EIA report was released on December 3 and included data for the week ending November 28. It showed that crude oil stocks, not including the SPR, stood at 427.5 million barrels on November 28, 426.9 million barrels on November 21, and 423.4 million barrels on November 29, 2024. Crude oil in the SPR stood at 411.7 million barrels on November 28, 411.4 million barrels on November 21, and 391.8 million barrels on November 29, 2024, the report revealed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.687 billion barrels on November 28, the report showed. Total petroleum stocks were up 5.5 million barrels week on week and up 58.5 million barrels year on year, the report pointed out. “At 427.5 million barrels, U.S. crude oil inventories are about three percent below the five year average for this time of year,” the EIA noted in its latest weekly petroleum status report. “Total motor gasoline inventories increased by 4.5 million barrels from last week and are about two percent below the five year average for this time of year. Finished gasoline and blending components inventories increased last week,” it added. “Distillate fuel inventories increased by 2.1 million barrels last week and are about seven percent below the five year average for this time of year. Propane/propylene inventories decreased 0.7 million barrels from last week and are about 15 percent above the five year average for this

Read More »

Today’s $67 Per Barrel Is Only $44 in 2008 Dollars

Today’s $67 per barrel is only $44 per barrel in 2008-dollars. That’s what Skandinaviska Enskilda Banken AB (SEB) Chief Commodities Analyst Bjarne Schieldrop said in a SEB report sent to Rigzone by the SEB team on Wednesday. “The ‘fair price’ of oil today ($67 per barrel) is nominally not much different from the average prices over the three years to April 2008,” Schieldrop highlighted in the report. “Since then, we have had 52 percent U.S. inflation. And still the nominal fair price of oil is more or less the same. Today’s $67 per barrel is only $44 per barrel in 2008-dollars,” he added. “In real terms the world is getting cheaper and cheaper oil – to the joy of consumers and to the terror of oil producers who have to chase every possible avenue of productivity improvements to counter inflation and maintain margins,” Schieldrop continued, noting that, as they successfully do so, “the consequence is a nominal oil price not going up”. In the report, Schieldrop went on to outline that a “cost-floor of around $40 per barrel” multiplied by “a natural cost inflation-drift of 2.4 percent” comes to $0.96 per barrel. He added that, since 2008, the oil industry has been able to counter this drift with an equal amount of productivity. “The very stable five year oil price at around $67 per barrel over the past three years, and still the same today, is implying that the market is expecting the global oil industry will be able to counter an ongoing 2.4 percent inflation per year to 2030 with an equal amount of productivity,” Schieldrop said. “The world consumes 38 billion barrels per year. A productivity improvement of $0.96 per barrel equals $36 billion in productivity/year or $182 billion to 2030,” he added. Schieldrop outlined in the report that the

Read More »

Crescent Says Signed Over $900MM Non-Core Divestments

Crescent Energy Co has entered into more than $900 million worth of sales involving non-core assets this year, the Houston, Texas-based oil and gas producer said. Crescent set a target of offloading non-core assets worth around $1 billion when it announced its acquisition of Vital Energy Inc on August 25. The latest divestment involves its non-operated DJ Basin assets, which will be acquired by an unnamed private buyer for $90 million. Mostly located in Weld County, Colorado, the assets produce about 7,000 barrels of oil equivalent a day (boed), with oil accounting for about 20 percent, Crescent said in an online statement. In its quarterly report November 2 Crescent said it had signed over $700 million worth of non-core divestitures including all its Barnett, conventional Rockies and Mid-Continent positions. In its latest divestment update, Crescent said, “The company has recently closed its previously announced conventional Rockies and Barnett divestitures and expects the remainder of its announced non-core asset sales to close before year-end”. On April 22 Crescent said it had sold non-operated Permian Basin assets to an unnamed private buyer for $83 million. The assets, in Reeves County, Texas, had a projected 2025 production of approximately 3,000 boed, with oil comprising over 35 percent.  On August 25 it announced its $3.1-billion all-stock, debt-inclusive purchase of Tulsa, Oklahoma-headquartered Vital. Expected to close before the year ends, the combination will create a “top-10 independent”, a joint statement said. The enlarged Crescent will have a “scaled and focused asset portfolio with flexible capital allocation across more than a decade of high-quality inventory in the Eagle Ford, Permian and Uinta Basins”, the companies said. Vital shareholders will receive 1.9062 Crescent shares for each Vital share. On a diluted basis, Crescent and Vital shareholders will own approximately 77 percent and 23 percent of the combined entity

Read More »

Oil Could be Significantly Impacted by USA-VEN Tensions

Benchmark crude oil prices could be impacted significantly by escalating military tensions between the U.S. and Venezuela, with the Trump administration tightening pressure on Nicolas Maduros’ regime and signaling the possibility of a U.S. incursion. That’s what Rystad Energy stated in a market update sent to Rigzone by the Rystad Energy team late Thursday, adding that Venezuela currently produces 1.1 million barrels per day of crude oil, “placing this volume at risk depending on the scale of military activity”. “Although the volume is small in terms of global trade flows, the quality is unique as over 67 percent of the output is heavy,” Rystad highlighted in the update. Rystad Energy’s Head of Geopolitical Analysis, Jorge Leon, noted in the update that “the loss of Venezuelan volumes would likely result in stronger crude oil prices in the Pacific Basin, with China and India dependent on the heavy supply”. “Dubai prices are likely to develop stronger premiums to Brent crude while other heavy grades will strengthen against the light grades. Some upward movement in Brent and West Texas Intermediate (WTI) prices is also expected with the overall loss of Venezuelan supply,” he added. Leon went on to warn in the update that volatility in the region is unlikely to subside in the immediate term. “The geopolitical risk premium remains firmly embedded in oil markets, with upside price risks persisting as traders brace for possible setbacks or renewed escalation,” he said. “Over the coming days and weeks, the balance between cautious optimism and entrenched uncertainty will continue to shape market sentiment,” he added. In the update, Rystad highlighted that Venezuela “claims to have the largest proven reserves in the world, around 300 billion barrels as of 2024, concentrated in the Orinoco belt with most being heavy oil”.  The company noted in the update

Read More »

Bilfinger Workers Support Strike Action

(Update) December 5, 2025, 10:58 AM GMT: Article updated with Unite spokesperson comment. UK union Unite announced, in a statement sent to Rigzone this week, that over 400 offshore members employed by Bilfinger UK Limited “have supported taking strike action in an escalating dispute over pensions”. A Unite spokesperson declined to confirm exact numbers in relation to a vote on industrial action among offshore members employed by Bilfinger UK. The spokesperson told Rigzone, however, that Unite has “a majority of members voting in favor of action and that … means over 400 are now mandated to be supportive of the strike action”. In the statement, Unite said “a majority of Bilfinger workers have emphatically backed strike action in a fight to secure a fairer pension deal”. “Unite members are demanding that Bilfinger move to a gross earnings pension scheme like many other private sector and offshore companies because workers are losing out on thousands of pounds in pension contributions due to their pattern of pay being weekly,” Unite added. Unite noted in the statement that the majority of Bilfinger workers are enrolled in a statutory minimum workplace pension scheme “where the company pays a maximum three percent of ‘qualifying earnings’ contribution”. “The qualifying earnings income is between GBP 6,240 [$8,322] and GBP 50,270 [$67,060]. Anything above or below that does not factor in pension contributions. It means Bilfinger’s annual pension contribution is capped at GBP 1,320.90 [$1,762.10] per year irrespective of income,” Unite said. The union estimated in the statement that around GBP 2,254 [$3,006] is being lost every year in employer pension contributions when compared with a gross salary pension scheme for a worker earning GBP 59,580.36 [$79,486.58]. “If Bilfinger fails to act on the pensions issue then strikes will be called in the coming weeks”, Unite warned in the

Read More »

With AI Factories, AWS aims to help enterprises scale AI while respecting data sovereignty

“The AWS AI Factory seeks to resolve the tension between cloud-native innovation velocity and sovereign control. Historically, these objectives lived in opposition. CIOs faced an unsustainable dilemma: choose between on-premises security or public cloud cost and speed benefits,” he said. “This is arguably AWS’s most significant move in the sovereign AI landscape.” On premises GPUs are already a thing AI Factories isn’t the first attempt to put cloud-managed AI accelerators in customers’ data centers. Oracle introduced Nvidia processors to its Cloud@Customer managed on-premises offering in March, while Microsoft announced last month that it will add Nvidia processors to its Azure Local service. Google Distributed Cloud also includes a GPU offering, and even AWS offers lower-powered Nvidia processors in its AWS Outposts. AWS’ AI Factories is also likely to square off against from a range of similar products, such as Nvidia’s AI Factory, Dell’s AI Factory stack, and HPE’s Private Cloud for AI — each tightly coupled with Nvidia GPUs, networking, or software, and all vying to become the default on-premises AI platform. But, said Sopko, AWS will have an advantage over rivals due to its hardware-software integration and operational maturity: “The secret sauce is the software, not the infrastructure,” he said. Omdia principal analyst Alexander Harrowell expects AWS’s AI Factories to combine the on-premises control of Outposts with the flexibility and ability to run a wider variety of services offered by AWS Local Zones, which puts small data centers close to large population centers to reduce service latency. Sopko cautioned that enterprises are likely to face high commitment costs, drawing a parallel with Oracle’s OCI Dedicated Region, one of its Cloud@Customer offerings.

Read More »

HPE loads up AI networking portfolio, strengthens Nvidia, AMD partnerships

On the hardware front, HPE is targeting the AI data center edge with a new MX router and the scale-out networking delivery with a new QFX switch. Juniper’s MX series is its flagship routing family aimed at carriers, large-scale enterprise data center and WAN customers, while the QFX line services data center customers anchoring spine/leaf networks as well as top-of-rack systems. The new 1U, 1.6Tbps MX301 multiservice edge router, available now, is aimed at bringing AI inferencing closer to the source of data generation and can be positioned in metro, mobile backhaul, and enterprise routing applications, Rahim said. It includes high-density support for 16 x 1/1025/50GbE, 10 x 100Gb and 4 x 400Gb interfaces. “The MX301 is essentially the on-ramp to provide high speed, secure connections from distributed inference cluster users, devices and agents from the edge all the way to the AI data center,” Rami said. “The requirements here are typically around high performance, but also very high logical skills and integrated security.” In the QFX arena, the new QFX5250 switch, available in 1Q 2026, is a fully liquid-cooled box aimed at tying together Nvidia Rubin and/or AMD MI400 GPUs for AI consumption across the data center. It is built on Broadcom Tomahawk 6 silicon and supports up to 102.4Tbps Ethernet bandwidth, Rahim said.  “The QFX5250 combines HPE liquid cooling technology with Juniper networking software (Junos) and integrated AIops intelligence to deliver a high-performance, power-efficient and simplified operations for next-generation AI inference,” Rami said. Partnership expansions Also key to HPE/Juniper’s AI networking plans are its partnerships with Nvidia and AMD. The company announced its relationship with Nvidia now includes HPE Juniper edge onramp and long-haul data center interconnect (DCI) support in its Nvidia AI Computing by HPE portfolio. This extension uses the MX and Junipers PTX hyperscaler routers to support high-scale, secure

Read More »

What is co-packaged optics? A solution for surging capacity in AI data center networks

When it announced its CPO-capable switches, Nvidia said they would improve resiliency by 10 times at scale compared to previous switch generations. Several factors contribute to this claim, including the fact that the optical switches require four times fewer lasers, Shainer says. Whereas the laser source was previously part of the transceiver, the optical engine is now incorporated onto the ASIC, allowing multiple optical channels to share a single laser. Additionally, in Nvidia’s implementation, the laser source is located outside of the switch. “We want to keep the ability to replace a laser source in case it has failed and needs to be replaced,” he says. “They are completely hot-swappable, so you don’t need to shut down the switch.” Nonetheless, you may often hear that when something fails in a CPO box, you need to replace the entire box. That may be true if it’s the photonics engine embedded in silicon inside the box. “But they shouldn’t fail that often. There are not a lot of moving parts in there,” Wilkinson says. While he understands the argument around failures, he doesn’t expect it to pan out as CPO gets deployed. “It’s a fallacy,” he says. There’s also a simple workaround to the resiliency issue, which hyperscalers are already talking about, Karavalas says: overbuild. “Have 10% more ports than you need or 5%,” he says. “If you lose a port because the optic goes bad, you just move it and plug it in somewhere else.” Which vendors are backing co-packaged optics? In terms of vendors that have or plan to have CPO offerings, the list is not long, unless you include various component players like TSMC. But in terms of major switch vendors, here’s a rundown: Broadcom has been making steady progress on CPO since 2021. It is now shipping “to

Read More »

Nvidia’s $2B Synopsys stake tests independence of open AI interconnect standard

But the concern for enterprise IT leaders is whether Nvidia’s financial stakes in UALink consortium members could influence the development of an open standard specifically designed to compete with Nvidia’s proprietary technology and to give enterprises more choices in the datacenter. Organizations planning major AI infrastructure investments view such open standards as critical to avoiding vendor lock-in and maintaining competitive pricing. “This does put more pressure on UALink since Intel is also a member and also took investment from Nvidia,” Sag said. UALink and Synopsys’s critical role UALink represents the industry’s most significant effort to prevent vendor lock-in for AI infrastructure. The consortium ratified its UALink 200G 1.0 Specification in April, defining an open standard for connecting up to 1,024 AI accelerators within computing pods at 200 Gbps per lane — directly competing with Nvidia’s NVLink for scale-up applications. Synopsys plays a critical role. The company joined UALink’s board in January and in December announced the industry’s first UALink design components, enabling chip designers to build UALink-compatible accelerators. Analysts flag governance concerns Gaurav Gupta, VP analyst at Gartner, acknowledged the tension. “The Nvidia-Synopsys deal does raise questions around the future of UALink as Synopsys is a key partner of the consortium and holds critical IP for UALink, which competes with Nvidia’s proprietary NVLink,” he said. Sanchit Vir Gogia, chief analyst at Greyhound Research, sees deeper structural concerns. “Synopsys is not a peripheral player in this standard; it is the primary supplier of UALink IP and a board member within the UALink Consortium,” he said. “Nvidia’s entry into Synopsys’ shareholder structure risks contaminating that neutrality.”

Read More »

Cooling crisis at CME: A wakeup call for modern infrastructure governance

Organizations should reassess redundancy However, he pointed out, “the deeper concern is that CME had a secondary data center ready to take the load, yet the failover threshold was set too high, and the activation sequence remained manually gated. The decision to wait for the cooling issue to self-correct rather than trigger the backup site immediately revealed a governance model that had not evolved to keep pace with the operational tempo of modern markets.” Thermal failures, he said, “do not unfold on the timelines assumed in traditional disaster recovery playbooks. They escalate within minutes and demand automated responses that do not depend on human certainty about whether a facility will recover in time.” Matt Kimball, VP and principal analyst at Moor Insights & Strategy, said that to some degree what happened in Aurora highlights an issue that may arise on occasion: “the communications gap that can exist between IT executives and data center operators. Think of ‘rack in versus rack out’ mindsets.” Often, he said, the operational elements of that data center environment, such as cooling, power, fire hazards, physical security, and so forth, fall outside the realm of an IT executive focused on delivering IT services to the business. “And even if they don’t fall outside the realm, these elements are certainly not a primary focus,” he noted. “This was certainly true when I was living in the IT world.” Additionally, said Kimball, “this highlights the need for organizations to reassess redundancy and resilience in a new light. Again, in IT, we tend to focus on resilience and redundancy at the app, server, and workload layers. Maybe even cluster level. But as we continue to place more and more of a premium on data, and the terms ‘business critical’ or ‘mission critical’ have real relevance, we have to zoom out

Microsoft loses two senior AI infrastructure leaders as data center pressures mount

Microsoft did not immediately respond to a request for comment.

Microsoft’s constraints

Analysts say the twin departures mark a significant setback for Microsoft at a critical moment in the AI data center race, with pressure mounting from both OpenAI’s model demands and Google’s infrastructure scale. “Losing some of the best professionals working on this challenge could set Microsoft back,” said Neil Shah, partner and co-founder at Counterpoint Research. “Solving the energy wall is not trivial, and there may have been friction or strategic differences that contributed to their decision to move on, especially if they saw an opportunity to make a broader impact and do so more lucratively at a company like Nvidia.” Even so, Microsoft has the depth and ecosystem strength to continue doubling down on AI data centers, said Prabhu Ram, VP for industry research at Cybermedia Research. According to Sanchit Gogia, chief analyst at Greyhound Research, the departures come at a sensitive moment because Microsoft is trying to expand its AI infrastructure faster than physical constraints allow. “The executives who have left were central to GPU cluster design, data center engineering, energy procurement, and the experimental power and cooling approaches Microsoft has been pursuing to support dense AI workloads,” Gogia said. “Their exit coincides with pressures the company has already acknowledged publicly. GPUs are arriving faster than the company can energize the facilities that will house them, and power availability has overtaken chip availability as the real bottleneck.”

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction, and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet it has been a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.) John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This makes it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize in their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for other companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability, and safety of AI models through these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model, because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and the U.S. National Institute of Standards and Technology (NIST), which had all released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see whether knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases, and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle
