
Method of Moments Estimation with Python Code


Let’s say you are in a customer care center and you would like to know the probability distribution of the number of calls per minute. In other words, you want to answer the question: what is the probability of receiving zero, one, two, … etc., calls per minute? You need this distribution in order to predict the probability of receiving different numbers of calls, based on which you can plan how many employees are needed, whether or not an expansion is required, etc.

To make our decision data-informed, we start by collecting data from which we try to infer this distribution. In other words, we want to generalize from the sample data to the unseen data, which is also known as the population in statistical terms. This is the essence of statistical inference.

From the collected data we can compute the relative frequency of each value of calls per minute. For example, the data collected over time may look something like this: 2, 2, 3, 5, 4, 5, 5, 3, 6, 3, 4, and so on, obtained by counting the number of calls received every minute. To compute the relative frequency of each value, count the number of occurrences of that value and divide by the total number of observations. This way you end up with something like the grey curve in the figure below, which is equivalent to the histogram of the data in this example.
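As a minimal sketch (using the short data snippet above purely for illustration), the relative frequencies can be computed as follows:

# Minimal sketch: relative frequency of each calls-per-minute value,
# using the short data snippet from the text purely for illustration
import numpy as np

calls_per_minute = np.array([2, 2, 3, 5, 4, 5, 5, 3, 6, 3, 4])
values, counts = np.unique(calls_per_minute, return_counts=True)
relative_freq = counts / counts.sum()  # occurrences divided by total observations

for v, f in zip(values, relative_freq):
    print(f"{v} calls/minute: relative frequency = {f:.2f}")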

Image generated by the Author

Another option is to assume that each data point in our data is a realization of a random variable (X) that follows a certain probability distribution. This probability distribution represents all the possible values that would be generated if we were to keep collecting this data long into the future; in other words, it represents the population from which our sample data was collected. Furthermore, we can assume that all the data points come from the same probability distribution, i.e., the data points are identically distributed. Moreover, we assume that the data points are independent, i.e., the value of one data point in the sample is not affected by the values of the other data points. The independence and identical distribution (iid) assumption of the sample data points allows us to proceed mathematically with our statistical inference problem in a systematic and straightforward way. In more formal terms, we assume that a generative probabilistic model is responsible for generating the iid data, as shown below.

Image generated by the Author

In this particular example, a Poisson distribution with mean value λ = 5 is assumed to have generated the data, as shown in the blue curve in the figure below. In other words, we assume here that we know the true value of λ, which is generally not known and needs to be estimated from the data.

Image generated by the Author

As opposed to the previous method, in which we had to compute the relative frequency of each value of calls per minute (e.g., 12 values to be estimated in this example, as shown in the grey figure above), now we only have one parameter to find, which is λ. Another advantage of this generative model approach is that it generalizes better from sample to population. The assumed probability distribution can be said to summarize the data in an elegant way that follows Occam’s razor.

Before proceeding further into how we aim to find this parameter λ, let’s first show the Python code that was used to generate the above figure.

# Import the Python libraries that we will need in this article
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import math
from scipy import stats

# Plotting color constants (hex values assumed; any valid matplotlib colors work)
BLUE2 = "#1f77b4"
GRAY9 = "#bfbfbf"
ORANGE1 = "#ff7f0e"

# Poisson distribution example
lambda_ = 5
sample_size = 1000
data_poisson = stats.poisson.rvs(lambda_, size=sample_size)  # generate data

# Plot the data histogram vs the PMF
x1 = np.arange(data_poisson.min(), data_poisson.max() + 1, 1)  # include the maximum observed value
fig1, ax = plt.subplots()
plt.bar(x1, stats.poisson.pmf(x1, lambda_),
        label="Poisson distribution (PMF)", color=BLUE2, linewidth=3.0, width=0.3, zorder=2)
ax.hist(data_poisson, bins=x1.size, density=True, label="Data histogram",
        color=GRAY9, width=1, zorder=1, align='left')

ax.set_title("Data histogram vs. Poisson true distribution", fontsize=14, loc='left')
ax.set_xlabel('Data value')
ax.set_ylabel('Probability')
ax.legend()
plt.savefig("Poisson_hist_PMF.png", format="png", dpi=800)

Our problem now is about estimating the value of the unknown parameter λ using the data we collected. This is where we will use the method of moments (MoM) approach that appears in the title of this article.

First, we need to define what is meant by the moment of a random variable. Mathematically, the kth moment of a discrete random variable (X) is defined as follows:
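E(X^k) = \sum_{x} x^k \, P(X = x)

where P(X = x) is the probability mass function of X and the sum runs over all possible values of X.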

Take the first moment E(X) as an example, which is also the mean μ of the random variable, and assume that our collected data is modeled as N iid realizations of the random variable X. A reasonable estimate of μ is the sample mean, which is defined as follows:
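\hat{\mu} = \bar{X} = \frac{1}{N} \sum_{i=1}^{N} X_i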

Thus, in order to obtain a MoM estimate of a model parameter that parametrizes the probability distribution of the random variable X, we first write the unknown parameter as a function of one or more of the kth moments of the random variable; then we replace each kth moment with its sample estimate. The more unknown parameters we have in our model, the more moments we need.
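More concretely, for a model with p unknown parameters θ₁, …, θ_p, the MoM estimates are obtained by matching the first p theoretical moments to their sample counterparts and solving the resulting system of equations:

E(X^k; \theta_1, \dots, \theta_p) = \frac{1}{N} \sum_{i=1}^{N} X_i^k, \qquad k = 1, \dots, p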

In our Poisson model example, this is very simple as shown below.
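For the Poisson distribution, the first moment is E(X) = λ, so matching it with the sample mean gives

\hat{\lambda} = \frac{1}{N} \sum_{i=1}^{N} X_i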

In the next part, we test our MoM estimator on the simulated data we had earlier. The Python code for obtaining the estimator and plotting the corresponding probability distribution using the estimated parameter is shown below.

# Method of moments estimator using the data (Poisson Dist)
lambda_hat = sum(data_poisson) / len(data_poisson)

# Plot the MoM estimated PMF vs the true PMF
x1 = np.arange(data_poisson.min(), data_poisson.max() + 1, 1)
fig2, ax = plt.subplots()
plt.bar(x1, stats.poisson.pmf(x1, lambda_hat),
        label="Estimated PMF", color=ORANGE1, linewidth=3.0, width=0.3)
plt.bar(x1 + 0.3, stats.poisson.pmf(x1, lambda_),
        label="True PMF", color=BLUE2, linewidth=3.0, width=0.3)

ax.set_title("Estimated Poisson distribution vs. true distribution", fontsize=14, loc='left')
ax.set_xlabel('Data value')
ax.set_ylabel('Probability')
ax.legend()
plt.savefig("Poisson_true_vs_est.png", format="png", dpi=800)

The figure below shows the estimated distribution versus the true distribution. The distributions are quite close, indicating that the MoM estimator is a reasonable estimator for our problem. In fact, replacing expectations with averages in the MoM estimator means that the estimator is consistent by the law of large numbers, which is a good justification for using such an estimator.

Image generated by the Author
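As a quick illustration of this consistency (a hypothetical sketch, not part of the original figures, assuming lambda_ and the imports defined above), the MoM estimate approaches the true λ as the sample size grows:

# Hypothetical sketch: the MoM estimate of lambda approaches the true value
# as the sample size grows, in line with the law of large numbers
# (assumes lambda_ = 5 and the scipy.stats import from the code above)
for n in [10, 100, 1000, 10000, 100000]:
    sample = stats.poisson.rvs(lambda_, size=n)
    print(f"N = {n:6d}: lambda_hat = {sample.mean():.3f} (true lambda = {lambda_})")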

Another MoM estimation example is shown below, assuming the iid data is generated by a normal distribution with mean μ and variance σ².

Image generated by the Author

In this particular example, a Gaussian (normal) distribution with mean value μ = 10 and standard deviation σ = 2 is assumed to have generated the data. The histogram of the generated data sample (sample size = 1000) is shown in grey in the figure below, while the true distribution is shown in the blue curve.

Image generated by the Author

The Python code that was used to generate the above figure is shown below.

# Normal distribution example
mu = 10
sigma = 2
sample_size = 1000
data_normal = stats.norm.rvs(loc=mu, scale=sigma, size=sample_size)  # generate data

# Plot the data histogram vs the PDF
x2 = np.linspace(data_normal.min(), data_normal.max(), sample_size)
fig3, ax = plt.subplots()
ax.hist(data_normal, bins=50, density=True, label="Data histogram", color=GRAY9)
ax.plot(x2, stats.norm(loc=mu, scale=sigma).pdf(x2),
        label="Normal distribution (PDF)", color=BLUE2, linewidth=3.0)

ax.set_title("Data histogram vs. true distribution", fontsize=14, loc='left')
ax.set_xlabel('Data value')
ax.set_ylabel('Probability')
ax.legend()
ax.grid()

plt.savefig("Normal_hist_PMF.png", format="png", dpi=800)

Now, we would like to use the MoM estimator to find an estimate of the model parameters, i.e., μ and σ² as shown below.
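Matching the first two moments with their sample counterparts gives

\hat{\mu} = \frac{1}{N} \sum_{i=1}^{N} X_i, \qquad
\hat{\sigma}^2 = \frac{1}{N} \sum_{i=1}^{N} X_i^2 - \hat{\mu}^2 = \frac{1}{N} \sum_{i=1}^{N} \left(X_i - \hat{\mu}\right)^2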

In order to test this estimator using our sample data, we plot the distribution with the estimated parameters (orange) in the figure below, versus the true distribution (blue). Again, the distributions are quite close. Of course, in order to quantify the quality of this estimator, we need to test it on multiple realizations of the data and observe properties such as bias and variance (a rough sketch of this idea is given after the plotting code below). Such important aspects have been discussed in an earlier article.

Image generated by the Author

The Python code that was used to estimate the model parameters using MoM, and to plot the above figure is shown below.

# Method of moments estimator using the data (Normal Dist)
mu_hat = sum(data_normal) / len(data_normal)  # MoM mean estimator
var_hat = sum(pow(x - mu_hat, 2) for x in data_normal) / len(data_normal)  # MoM variance estimator
sigma_hat = math.sqrt(var_hat)  # MoM standard deviation estimator

# Plot the MoM estimated PDF vs the true PDF
x2 = np.linspace(data_normal.min(), data_normal.max(), sample_size)
fig4, ax = plt.subplots()
ax.plot(x2, stats.norm(loc=mu_hat, scale=sigma_hat).pdf(x2),
        label="Estimated PDF", color=ORANGE1, linewidth=3.0)
ax.plot(x2, stats.norm(loc=mu, scale=sigma).pdf(x2),
        label="True PDF", color=BLUE2, linewidth=3.0)

ax.set_title("Estimated Normal distribution vs. true distribution", fontsize=14, loc='left')
ax.set_xlabel('Data value')
ax.set_ylabel('Probability')
ax.legend()
ax.grid()
plt.savefig("Normal_true_vs_est.png", format="png", dpi=800)

Another useful probability distribution is the Gamma distribution. An example of the application of this distribution in real life was discussed in a previous article. However, in this article we derive the MoM estimator of the Gamma distribution parameters α and β, as shown below, assuming the data is iid.

Image generated by the Author

In this particular example, a Gamma distribution with α = 6 and β = 0.5 is assumed to have generated the data. The histogram of the generated data sample (sample size = 1000) is shown in grey in the figure below, while the true distribution is shown in the blue curve.

Image generated by the Author

The Python code that was used to generate the above figure is shown below.

# Gamma distribution example
alpha_ = 6  # shape parameter
scale_ = 2  # scale parameter = 1/beta in the gamma dist.
sample_size = 1000
data_gamma = stats.gamma.rvs(alpha_, loc=0, scale=scale_, size=sample_size)  # generate data

# Plot the data histogram vs the PDF
x3 = np.linspace(data_gamma.min(), data_gamma.max(), sample_size)
fig5, ax = plt.subplots()
ax.hist(data_gamma, bins=50, density=True, label="Data histogram", color=GRAY9)
ax.plot(x3, stats.gamma(alpha_, loc=0, scale=scale_).pdf(x3),
        label="Gamma distribution (PDF)", color=BLUE2, linewidth=3.0)

ax.set_title("Data histogram vs. true distribution", fontsize=14, loc='left')
ax.set_xlabel('Data value')
ax.set_ylabel('Probability')
ax.legend()
ax.grid()
plt.savefig("Gamma_hist_PMF.png", format="png", dpi=800)

Now, we would like to use the MoM estimator to find an estimate of the model parameters, i.e., α and β, as shown below.
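For the Gamma distribution with shape α and rate β (scale = 1/β), the first two moments give E(X) = α/β and Var(X) = α/β². Matching them with the sample mean \bar{X} and sample variance S² yields

\hat{\alpha} = \frac{\bar{X}^2}{S^2}, \qquad \hat{\beta} = \frac{\bar{X}}{S^2} \quad \left(\text{equivalently, } \widehat{\text{scale}} = \frac{S^2}{\bar{X}}\right)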

In order to test this estimator using our sample data, we plot the distribution with the estimated parameters (orange) in the figure below, versus the true distribution (blue). Again, the distributions are quite close.

Image generated by the Author

The Python code that was used to estimate the model parameters using MoM, and to plot the above figure is shown below.

# Method of moments estimator using the data (Gamma Dist)
sample_mean = data_gamma.mean()
sample_var = data_gamma.var()
scale_hat = sample_var / sample_mean  # scale is equal to 1/beta in the gamma dist.
alpha_hat = sample_mean**2 / sample_var

# Plot the MoM estimated PDF vs the true PDF
x4 = np.linspace(data_gamma.min(), data_gamma.max(), sample_size)
fig6, ax = plt.subplots()

ax.plot(x4, stats.gamma(alpha_hat, loc=0, scale=scale_hat).pdf(x4),
        label="Estimated PDF", color=ORANGE1, linewidth=3.0)
ax.plot(x4, stats.gamma(alpha_, loc=0, scale=scale_).pdf(x4),
        label="True PDF", color=BLUE2, linewidth=3.0)

ax.set_title("Estimated Gamma distribution vs. true distribution", fontsize=14, loc='left')
ax.set_xlabel('Data value')
ax.set_ylabel('Probability')
ax.legend()
ax.grid()
plt.savefig("Gamma_true_vs_est.png", format="png", dpi=800)

Note that we used the following equivalent ways of writing the variance when deriving the estimators in the cases of Gaussian and Gamma distributions.
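\text{Var}(X) = E\left[(X - E(X))^2\right] = E(X^2) - \left(E(X)\right)^2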

Conclusion

In this article, we explored various examples of the method of moments estimator and its applications in different problems in data science. Moreover, we presented detailed Python code that implements the estimators from scratch and plots the different figures. I hope that you find this article helpful.
