
Mastering the Poisson Distribution: Intuition and Foundations

You’ve probably used the normal distribution one or two times too many. We all have — it’s a true workhorse. But sometimes, we run into problems. For instance, when predicting or forecasting values, simulating data given a particular data-generating process, or when we try to visualise model output and explain it intuitively to non-technical stakeholders. Suddenly, things don’t make much sense: can a user really have made -8 clicks on the banner? Or even 4.3 clicks? Both are examples of how count data doesn’t behave.

I’ve found that better encapsulating the data-generating process into my modelling has been key to having sensible model output. Using the Poisson distribution when it was appropriate has not only helped me convey more meaningful insights to stakeholders, but it has also enabled me to produce more accurate error estimates, better inference, and sound decision-making.

In this post, my aim is to help you get a deep intuitive feel for the Poisson distribution by walking through example applications, and taking a dive into the foundations — the maths. I hope you learn not just how it works, but also why it works, and when to apply the distribution.

If you know of a resource that has helped you grasp the concepts in this blog particularly well, you’re invited to share it in the comments!

Outline

  1. Examples and use cases: Let’s walk through some use cases and sharpen the intuition I just mentioned. Along the way, the relevance of the Poisson Distribution will become clear.
  2. The foundations: Next, let’s break down the equation into its individual components. By studying each part, we’ll uncover why the distribution works the way it does.
  3. The assumptions: Equipped with some formality, it will be easier to understand the assumptions that power the distribution, and at the same time set the boundaries for when it works, and when not.
  4. When real life deviates from the model: Finally, let’s explore the special links that the Poisson distribution has with the Negative Binomial distribution. Understanding these relationships can deepen our intuition, and provide alternatives when the Poisson distribution is not suited for the job.

Example in an online marketplace

I chose to deep dive into the Poisson distribution because it frequently appears in my day-to-day work. Online marketplaces rely on binary user choices from two sides: a seller deciding to list an item and a buyer deciding to make a purchase. These micro-behaviours drive supply and demand, both in the short and long term. A marketplace is born.

Binary choices aggregate into counts — the sum of many such decisions as they occur. Attach a timeframe to this counting process, and you’ll start seeing Poisson distributions everywhere. Let’s explore a concrete example next.

Consider a seller on a platform. In a given month, the seller may or may not list an item for sale (a binary choice). We would only know if she did because then we’d have a measurable count of the event. Nothing stops her from listing another item in the same month. If she does, we count those events. The total could be zero for an inactive seller or, say, 120 for a highly engaged seller.

Over several months, we would observe a varying number of listed items by this seller — sometimes fewer, sometimes more — hovering around an average monthly listing rate. That is essentially a Poisson process. When we get to the assumptions section, you’ll see what we had to assume away to make this example work.

Other examples

Other phenomena that can be modelled with a Poisson distribution include:

  • Sports analytics: The number of goals scored in a match between two teams.
  • Queuing: Customers arriving at a help desk or customer support calls.
  • Insurance: The number of claims made within a given period.

Each of these examples warrants further inspection, but for the remainder of this post, we’ll use the marketplace example to illustrate the inner workings of the distribution.

The mathy bit

… or foundations.

I find opening up the probability mass function (PMF) of distributions helpful for understanding why things work as they do. The PMF of the Poisson distribution goes like:
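P(𝑋 = 𝑘) = λᵏ 𝑒^⁻λ / 𝑘!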

Where λ is the rate parameter, and 𝑘 is the manifested count of the random variable (𝑘 = 0, 1, 2, 3, … events). Very neat and compact.

Graph: The probability mass function of the Poisson distribution, for a few different lambdas.
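To make this concrete, here is a minimal sketch (assuming Python with numpy and scipy are available) that evaluates the PMF for a few different lambdas, mirroring the figure above:

```python
import numpy as np
from scipy.stats import poisson

ks = np.arange(0, 15)                  # candidate counts k = 0, 1, ..., 14
for lam in (1, 4, 8):                  # a few different lambdas
    pmf = poisson.pmf(ks, mu=lam)      # P(X = k) = lam**k * exp(-lam) / k!
    print(f"lambda={lam}: P(X=0)={pmf[0]:.3f}, P(X=lambda)={pmf[lam]:.3f}")
```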

Contextualising λ and k: the marketplace example

In the context of our earlier example — a seller listing items on our platform — λ represents the seller’s average monthly listings. As the expected monthly value for this seller, λ orchestrates the number of items she would list in a month. Note that λ is a Greek letter, so read: λ is a parameter that we can estimate from data. On the other hand, 𝑘 does not hold any information about the seller’s idiosyncratic behaviour. It’s the target count we plug in: the number of events whose probability we want to learn.

The dual role of λ as the mean and variance

When I said that λ orchestrates the number of monthly listings for the seller, I meant it quite literally. Namely, λ is both the expected value and the variance of the distribution, for all values of λ. This means that the variance-to-mean ratio (the index of dispersion) is always 1.
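A quick simulation (a sketch, assuming numpy is available) illustrates this dual role: the sample mean and sample variance of Poisson draws both land near λ.

```python
import numpy as np

rng = np.random.default_rng(42)
lam = 4                                       # the seller's average monthly listings
samples = rng.poisson(lam=lam, size=100_000)  # simulate many months of listing counts

print("mean:    ", samples.mean())                              # ≈ 4
print("variance:", samples.var())                               # ≈ 4
print("index of dispersion:", samples.var() / samples.mean())   # ≈ 1
```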

To put this into perspective, the normal distribution requires two parameters — 𝜇 and 𝜎², the average and variance respectively — to fully describe it. The Poisson distribution achieves the same with just one.

Having to estimate only one parameter can be beneficial for parametric inference: it reduces the variance of the model and increases the statistical power. On the other hand, it can be too limiting an assumption. Alternatives like the Negative Binomial distribution can alleviate this limitation. We’ll explore that later.

Breaking down the probability mass function

Now that we know the smallest building blocks, let’s zoom out one step: what are λᵏ, 𝑒^⁻λ, and 𝑘!, and more importantly, what is each of these components’ function in the whole?

  • λᵏ is a weight that expresses how likely it is for 𝑘 events to happen, given that the expectation is λ. Note that “likely” here does not mean a probability, yet. It’s merely a signal strength.
  • 𝑘! is a combinatorial correction so that we can say that the order of the events is irrelevant. The events are interchangeable.
  • 𝑒^⁻λ normalises the PMF so that it sums to 1 over all 𝑘. It’s the normalising constant, closely related to the partition function of exponential-family distributions.

In more detail, λᵏ relates the observed value 𝑘 to the expected value of the random variable, λ. Intuitively, more probability mass lies around the expected value. Hence, if the observed value lies close to the expectation, the probability of occurring is larger than the probability of an observation far removed from the expectation. Before we can cross-check our intuition with the numerical behaviour of λᵏ, we need to consider what 𝑘! does.

Interchangeable events

Had we cared about the order of events, then each unique event could be ordered in 𝑘! ways. But because we don’t, and we deem each event interchangeable, we “divide out” 𝑘! from λᵏ to correct for the overcounting.

Since λᵏ is an exponential term, the output keeps growing as 𝑘 grows (for λ > 1), holding λ constant. That contradicts our intuition that the probability should peak when 𝑘 is near λ: λᵏ alone is already larger at 𝑘 = λ + 1 than at 𝑘 = λ. But now that we know about the interchangeable events assumption — and the overcounting issue — we know that we have to factor in 𝑘! like so: λᵏ 𝑒^⁻λ / 𝑘!, to see the behaviour we expect.

Now let’s check the intuition of the relationship between λ and 𝑘 through λᵏ, corrected for 𝑘!. For the same λ, say λ = 4, we should see λᵏ 𝑒^⁻λ / 𝑘! be smaller for values of 𝑘 that are far removed from 4 than for values of 𝑘 that lie close to 4. Like so (dropping the common factor 𝑒^⁻⁴): 4²/2! = 8 is smaller than 4⁴/4! ≈ 10.7. This is consistent with the intuition of a higher likelihood of 𝑘 when it’s near the expectation. The image below shows this relationship more generally, where you see that the output is larger as 𝑘 approaches λ.

Graph: The probability mass function without the normalising component e^-lambda.
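To reproduce those numbers (a small sketch, assuming Python), we can compute the un-normalised weight λᵏ/𝑘! around λ = 4 and compare it with the full PMF:

```python
from math import exp, factorial

lam = 4
for k in (0, 2, 4, 6, 10):
    weight = lam**k / factorial(k)   # lambda^k / k!, without the e^-lambda factor
    prob = weight * exp(-lam)        # the full Poisson PMF
    print(f"k={k:2d}  weight={weight:7.2f}  P(X=k)={prob:.3f}")
```

The weight peaks around 𝑘 = 4, exactly where the intuition says it should.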

The assumptions

First, let’s get one thing off the table: the difference between a Poisson process, and the Poisson distribution. The process is a stochastic, continuous-time model of points occurring in a given space: 1D, a line; 2D, an area; or higher dimensions. We data scientists most often deal with the one-dimensional case, where the “line” is time, and the points are the events of interest — I dare to say.

These are the assumptions of the Poisson process:

  1. The occurrence of one event does not affect the probability of a second event. Think of our seller going on to list another item tomorrow regardless of having done so already today, or five days ago for that matter. The point here is that there is no memory between events.
  2. The average rate at which events occur is independent of any occurrence. In other words, no event that happened (or will happen) alters λ, which remains constant throughout the observed timeframe. In our seller example, this means that listing an item today does not increase or decrease the seller’s motivation or likelihood of listing another item tomorrow.
  3. Two events cannot occur at exactly the same instant. If we were to zoom in at an infinitely granular level on the timescale, no two listings could have been placed simultaneously; they always occur sequentially.

From these assumptions — no memory, constant rate, events happening alone — it follows that 1) the number of events in any interval is Poisson-distributed with parameter λ𝑡 (the rate times the interval length), and 2) disjoint intervals are independent — two key properties of a Poisson process.
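To see these properties in action, here is a minimal simulation sketch (assuming numpy): generate memoryless, exponential inter-arrival times with rate λ, count how many events land in each unit interval, and check that the counts behave like Poisson(λ).

```python
import numpy as np

rng = np.random.default_rng(7)
lam, horizon = 4, 10_000                          # rate per unit time, number of unit intervals

# Memoryless (exponential) inter-arrival times define a homogeneous Poisson process
arrivals = np.cumsum(rng.exponential(scale=1 / lam, size=lam * horizon * 2))
arrivals = arrivals[arrivals < horizon]

# Count events per unit interval (e.g., listings per month)
counts = np.bincount(arrivals.astype(int), minlength=horizon)

print("mean count per interval:", counts.mean())   # ≈ lam
print("variance of counts:     ", counts.var())    # ≈ lam, as a Poisson process implies
```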

A Note on the distribution:
The distribution simply describes probabilities for various numbers of counts in an interval. Strictly speaking, one can use the distribution pragmatically whenever the data are nonnegative counts, unbounded on the right, with mean λ, and the distribution describes the data reasonably well. It is just convenient when the underlying process is an actual Poisson one, which then justifies using the distribution.

The marketplace example: Implications

So, can we justify using the Poisson distribution for our marketplace example? Let’s open up the assumptions of a Poisson process and take the test.

Constant λ

  • Why it may fail: The seller has patterned online activity; holidays; promotions; listings are seasonal goods.
  • Consequence: λ is not constant, leading to overdispersion (a variance-to-mean ratio larger than 1) or to temporal patterns.

Independence and memorylessness

  • Why it may fail: The propensity to list again is higher after a successful listing, or conversely, listing once depletes the stock and interferes with the propensity to list again.
  • Consequence: Two events are no longer independent, as the occurrence of one informs the occurrence of the other.

Simultaneous events

  • Why it may fail: Batch-listing, a new feature, was introduced to help the sellers.
  • Consequence: Multiple listings would come online at the same time, clumped together, and they would be counted simultaneously.

Balancing rigour and pragmatism

As Data Scientists on the job, we may feel trapped between rigour and pragmatism. The three steps below should give you a sound foundation to decide on which side to err, when the Poisson distribution falls short:

  1. Pinpoint your goal: is it inference, simulation or prediction, and is it about high-stakes output? List the worst thing that can happen, and the cost of it for the business.
  2. Identify the problem and solution: why does the Poisson distribution not fit, and what can you do about it? List 2-3 solutions, including changing nothing.
  3. Balance gains and costs: Will your workaround improve things, or make them worse? And at what cost: interpretability, new assumptions introduced, and resources used? Does it help you achieve your goal?

That said, here are some counters I use when needed.

When real life deviates from your model

Everything described so far pertains to the standard, or homogeneous, Poisson process. But what if reality begs for something different?

In the next section, we’ll cover two extensions of the Poisson distribution when the constant λ assumption does not hold. These are not mutually exclusive, but neither are they the same:

  1. Time-varying λ: a single seller whose listing rate ramps up before holidays and slows down afterward
  2. Mixed Poisson distribution: multiple sellers listing items, each with their own λ, can be seen as a mixture of various Poisson processes

Time-varying λ

The first extension allows λ to have its own value for each time t. The PMF then becomes
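P(𝐾(𝑇) = 𝑘) = Λ(𝑇)ᵏ 𝑒^(−Λ(𝑇)) / 𝑘!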

Where the number of events 𝐾(𝑇) in an interval 𝑇 follows the Poisson distribution with a rate no longer equal to a fixed λ, but one equal to:
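Λ(𝑇) = ∫ λ(𝑠) d𝑠, with the integral taken over the interval from 𝑡 to 𝑡 + 𝑖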

More intuitively, integrating over the interval 𝑡 to 𝑡 + 𝑖 gives us a single number: the expected number of events over that interval. The integral varies with each arbitrary interval, and that’s what makes λ change over time. To understand how that integration works, it was helpful for me to think of it like this: if the interval 𝑡 to 𝑡₁ integrates to 3, and 𝑡₁ to 𝑡₂ integrates to 5, then the interval 𝑡 to 𝑡₂ integrates to 8 = 3 + 5. That’s the two expectations summed up, now the expectation of the entire interval.
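A tiny simulation sketch (assuming numpy) of that additivity: counts over the two disjoint sub-intervals are independent Poisson draws with means 3 and 5, and their sum behaves like a Poisson draw with mean 8.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Counts over two disjoint sub-intervals whose rates integrate to 3 and 5
k1 = rng.poisson(lam=3, size=n)    # interval t to t1
k2 = rng.poisson(lam=5, size=n)    # interval t1 to t2

total = k1 + k2                    # counts over the whole interval t to t2
print("mean of the total:    ", total.mean())   # ≈ 8
print("variance of the total:", total.var())    # ≈ 8, consistent with Poisson(8)
```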

Practical implication 
One may want to model the expected value of the Poisson distribution as a function of time. For instance, to model an overall change in trend, or seasonality. In generative model notation:
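One possible way to write this down (a hypothetical sketch; the exact parameterisation of λ(𝑡) is up to the modeller):

𝑋ₜ ~ Poisson(λ(𝑡)), with log λ(𝑡) = β₀ + β₁·𝑡 + 𝑠(𝑡), where 𝑠(𝑡) captures seasonality.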

Time may be a continuous variable, or an arbitrary function of it.

Process-varying λ: Mixed Poisson distribution

But then there’s a gotcha. Remember when I said that λ has a dual role as the mean and variance? That still applies here. Looking at the “relaxed” PMF*, the only thing that changes is that λ can vary freely with time. But it’s still the one and only λ that orchestrates both the expected value and the dispersion of the PMF*. More precisely, 𝔼[𝑋] = Var(𝑋) still holds.

There are various reasons for this constraint not to hold in reality. Model misspecification, event interdependence and unaccounted-for heterogeneity could be the issues at hand. I’d like to focus on the latter case, as it justifies the Negative Binomial distribution — one of the topics I promised to open up.

Heterogeneity and overdispersion
Imagine we are not dealing with one seller, but with 10 of them listing at different intensity levels, λᵢ, where 𝑖 = 1, 2, 3, …, 10 sellers. Then, essentially, we have 10 Poisson processes going on. If we unify the processes and estimate the grand λ, we simplify the mixture away. Meaning, we get a correct estimate of all sellers on average, but the resulting grand λ is naive and does not know about the original spread of λᵢ. It still assumes that the variance and mean are equal, as per the axioms of the distribution. The data will be overdispersed relative to this naive model, and, in turn, our errors will be underestimated. Ultimately, this inflates the false positive rate and drives poor decision-making. We need a way to embrace the heterogeneity amongst sellers’ λᵢ.

Negative binomial: Extending the Poisson distribution
Among the few ways one can look at the Negative Binomial distribution, one way is to see it as a mixture of Poisson processes — 10 sellers, sounds familiar yet? That means counts from multiple independent Poisson processes, each with its own rate, are pooled into a single distribution. Mathematically, first we draw λ from a Gamma distribution: λ ~ Γ(r, θ), then we draw the count 𝑋 | λ ~ Poisson(λ).

In one image, it is as if we sample from many Poisson distributions, one for each seller.

A negative Binomial distribution arises from many Poisson distributions.

The more revealing alias of the Negative Binomial distribution is the Gamma-Poisson mixture distribution, and now we know why: the dictating λ comes from a continuous mixture. That’s what we needed to explain the heterogeneity amongst sellers.

Let’s simulate this scenario to gain more intuition.

Gamma mixture of lambda.

First, we draw λᵢ from a Gamma distribution: λᵢ ~ Γ(r, θ). Intuitively, the Gamma distribution tells us about the variety in the intensity — listing rate — amongst the sellers.

On a practical note, one can instill their assumptions about the degree of heterogeneity in this step of the model: how different are sellers? By varying the levels of heterogeneity, one can observe the impact on the final Poisson-like distribution. Doing this type of check (i.e., a posterior predictive check) is common in Bayesian modeling, where the assumptions are set explicitly.

Gamma-Poisson mixture distribution versus homogeneous Poisson distribution. The dashed line reflects λ, which is 4 for both distributions.

In the second step, we plug the obtained λ into the Poisson distribution: 𝑋 | λ ~ Poisson(λ), and obtain a Poisson-like distribution that represents the summed subprocesses. Notably, this unified process has a larger dispersion than expected from a homogeneous Poisson distribution, but it is in line with the Gamma mixture of λ.
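A minimal simulation sketch of those two steps (assuming numpy; the Gamma parameters here are illustrative, chosen so that the mixture mean is 4, as in the figure):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sellers, r, theta = 100_000, 2.0, 2.0      # shape r and scale theta, so E[lambda] = r * theta = 4

lambdas = rng.gamma(shape=r, scale=theta, size=n_sellers)   # step 1: each seller's own rate
counts = rng.poisson(lam=lambdas)                           # step 2: listings given that rate

print("mean:    ", counts.mean())            # ≈ 4, same as a homogeneous Poisson(4)
print("variance:", counts.var())             # ≈ 12 here: overdispersion induced by the Gamma mixture
```

The resulting counts follow a Negative Binomial (Gamma-Poisson) distribution, with a variance visibly larger than the mean.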

Heterogeneous λ and inference

A practical consequence of introducing flexibility into your assumed distribution is that inference becomes more challenging. More parameters (i.e., the Gamma parameters) need to be estimated. Parameters act as flexible explainers of the data, tending to overfit and explain away variance in your variable. The more parameters you have, the better the explanation may seem, but the model also becomes more susceptible to noise in the data. Higher variance reduces the power to identify a difference in means, if one exists, because — well — it gets lost in the variance.

Countering the loss of power

  1. Confirm whether you indeed need to extend the standard Poisson distribution. If not, fall back to the simplest model that fits. A quick check on overdispersion may suffice for this (see the sketch after this list).
  2. Pin down the estimates of the Gamma mixture distribution parameters using regularising, informative priors (think: Bayes).
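As a rough sketch of that overdispersion check (assuming Python with numpy and scipy; count_data is a hypothetical array of observed counts), compare the sample variance to the sample mean and run a chi-square dispersion test against a constant-rate Poisson model:

```python
import numpy as np
from scipy import stats

def dispersion_check(count_data):
    """Variance-to-mean ratio plus a Pearson chi-square dispersion test."""
    counts = np.asarray(count_data, dtype=float)
    mean = counts.mean()
    index = counts.var(ddof=1) / mean                  # ≈ 1 under a Poisson model
    chi2 = ((counts - mean) ** 2 / mean).sum()         # Pearson dispersion statistic
    p_value = stats.chi2.sf(chi2, df=len(counts) - 1)  # small p-value hints at overdispersion
    return index, p_value

index, p = dispersion_check([3, 9, 0, 7, 12, 1, 5, 15])  # hypothetical monthly listing counts
print(f"index of dispersion: {index:.2f}, p-value: {p:.3f}")
```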

During my research process for writing this blog, I learned a great deal about the connective tissue underlying all of this: how the binomial distribution plays a fundamental role in the processes we’ve discussed. And while I’d love to ramble on about this, I’ll save it for another post, perhaps. In the meantime, feel free to share your understanding in the comments section below 👍.

Conclusion

The Poisson distribution is a simple distribution that can be highly suitable for modelling count data. However, when the assumptions do not hold, one can extend the distribution by allowing the rate parameter to vary as a function of time or other factors, or by assuming subprocesses that collectively make up the count data. This added flexibility can address the limitations, but it comes at a cost: increased flexibility in your modelling raises the variance and, consequently, undermines the statistical power of your model.

If your end goal is inference, you may want to think twice and consider exploring simpler models for the data. Alternatively, switch to the Bayesian paradigm and leverage its built-in solution to regularise estimates: informative priors.

I hope this has given you what you came for — a better intuition about the Poisson distribution. I’d love to hear your thoughts about this in the comments!

Unless otherwise noted, all images are by the author.
Originally published at https://aalvarezperez.github.io on January 5, 2025.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Takeaways from Cisco’s AI Summit

Software development is in an absolute frenzy right now, Scott said. “You have very, very senior people, the best coders you’ve ever met in your life, who are just completely overwhelmed trying to keep up with the rate of progress that’s happening right now.” Optimizing AI development for agents or humans?

Read More »

Eying AI factories, Nvidia buys bigger stake in CoreWeave

Nvidia continues to throw its sizable bank account around, this time making a $2 billion investment in GPU cloud service provider CoreWeave. The company says the investment reflects Nvidia’s “confidence in CoreWeave’s business, team and growth strategy as a cloud platform built on Nvidia infrastructure.” CoreWeave is not the only

Read More »

AI, security tailwinds signal promising 2026 for Cisco

A big component of AI in communications is agentic agents talking to employees and customers, and bringing trust to the system is where Cisco should shine. It builds and runs its own infrastructure, which is secure by design. Cisco has relationships with governments all over the world, and between Webex

Read More »

BOEM Announces ‘Major Step’ Toward Expanding Offshore Energy

In a statement posted on its website, the Bureau of Ocean Energy Management (BOEM) announced “another major step toward expanding offshore energy development pursuant to the One Big Beautiful Bill Act”. BOEM noted in the statement that it has released the Final Notice of Sale for Lease Sale Big Beautiful Gulf 2 (BBG2), which it said is the second of 30 Gulf of America lease sales required by the One Big Beautiful Bill Act. The final notice of sale will be published in the Federal Register on February 5, BOEM revealed in the statement, outlining that this satisfies the requirement for the notice to publish at least 30 days prior to the scheduled lease sale date on March 11. Lease Sale BBG2 proposes to offer approximately 15,066 unleased blocks covering about 80.4 million acres on the U.S. Outer Continental Shelf in the Gulf of America, BOEM highlighted in the statement, adding that the blocks are located three to 231 miles offshore and span water depths from nine feet to more than 11,100 feet.   BOEM noted in the statement that certain areas will be excluded from the sale, “including blocks subject to the Sept. 8, 2020, presidential withdrawal; blocks adjacent to or beyond the U.S. Exclusive Economic Zone in the Eastern Gap; blocks within the boundaries of the Flower Garden Banks National Marine Sanctuary; [and] any block which received a bid in Lease Sale BBG1”. In the statement, BOEM Acting Director Matt Giacona said, “lease Sale BBG2 is a key step in advancing BOEM’s offshore oil and gas program in the Gulf of America”. “Following the strong industry response to Lease Sale BBG1, this proposed sale aims to ensure continued investment in the U.S. Outer Continental Shelf and support American energy independence,” he added. BOEM noted in its statement that the Gulf

Read More »

Pemex Slashes Debt to 11 Year Low

Mexico’s Petroleos Mexicanos reduced its debt to the lowest level in 11 years, a hopeful sign for the struggling state-owned oil company as it seeks to reverse a decades-long production decline and revive its money-losing refining business. The company cut its total debt to roughly $84.5 billion, according to a company presentation, after receiving more than $40 billion of support last year from Mexico’s finance ministry through debt purchases and cash injections. Pemex also made about 390.2 billion pesos ($22.7 billion) of payments to partners in 2025, Chief Executive Officer Victor Rodriguez said in a press briefing Wednesday, another indication the company is making headway in whittling down its large debts to service providers. The company’s crude oil output has dropped by about 50% from its peak more than two decades ago. For years, Pemex has struggled to bring new discoveries online as production fell at many of its most prolific fields.  Mexican President Claudia Sheinbaum is seeking a turnaround by attracting more private investment to the nation’s aging oil and gas fields, with the aim of making Pemex self-sufficient by 2027. Sheinbaum said Wednesday that government financial assistance for the company may continue next year. Pemex also is taking steps to improve efficiency at its aging refineries, which have been hit by accidents and outages in recent years. Mexico’s flagship Dos Bocas plant is now producing about 300,000 barrels of fuel per day, Sheinbaum said, lifting total output at the country’s domestic refineries to around 1.2 million barrels a day. Separately, Pemex has drawn criticism from US President Donald Trump over its oil sales to Cuba. Sheinbaum said Mexico is holding diplomatic talks to keep supplying oil to the communist nation on humanitarian grounds. Mexico sold nearly $500 million of crude to the island in 2025, she said Wednesday. WHAT DO YOU THINK?

Read More »

Shell’s Profit Falls

(Update) February 5, 2026, 9:57 AM GMT: Article updated with with shares, analyst and investor reaction and context throughout. Shell Plc profits slumped in the fourth quarter, undershooting expectations as lower crude prices, a weak oil-trading performance and struggling chemicals business overshadowed stronger refining margins. Europe’s largest oil company took on more debt, maintained its quarterly share buyback of $3.5 billion and raised its dividend even as volatile oil prices pressure its plan to boost investor returns through 2030. Gearing — the ratio of net debt to equity — rose, challenging the firm’s ability to organically stick with its level of share repurchases through this year.  Investors are increasingly focused on Shell’s growth outlook after Chief Executive Officer Wael Sawan cut costs and shed underperforming assets. His goal to close a large valuation gap with Exxon Mobil Corp. and Chevron Corp. has become harder this year after the shares of the US rivals soared, buoyed by strong production from low-cost oil fields in Guyana, the Permian Basin and Kazakhstan.  Shell shares fell as much as 2.6% on Thursday, outpacing declines of peers BP Plc and TotalEnergies SE. Shell’s stock was the best performer among the world’s top five oil majors in dollar terms last year, but since mid-November the gains have fizzled and so far in 2026 it’s been lagging its peers. Still, it has outperformed European rivals during Sawan’s three-year tenure. Shell’s adjusted net income of $3.26 billion for the quarter was down 11% from a year earlier and lower than the average analyst estimate of $3.51 billion. A slight production rise — in line with expectations — was unable to lift earnings. The London-based company’s 2% year-on-year production growth in the quarter pales in comparison to that of the Americans. Chevron increased output by 20% in the fourth quarter

Read More »

USA Expected to Produce Almost 1/4 of Global Oil Output

The U.S. is projected to produce almost a quarter of the world’s petroleum and other liquid fuels output in 2026 in the U.S. Energy Information Administration’s (EIA) latest short term energy outlook. According to the EIA’s January STEO, which was published last month, the EIA sees U.S. petroleum and other liquid fuels production averaging 23.88 million barrels per day in 2026. The EIA forecasts in the STEO that global production this year will come in at 107.65 million barrels per day, putting the U.S. contribution at 22.18 percent of the overall figure. A quarterly breakdown included in the EIA’s latest STEO outlined that the EIA sees U.S. petroleum and other liquid fuels production making up 22.18 percent of the world’s total petroleum and liquid fuels production in the first quarter of this year, 22.25 percent in the second quarter, 22.15 percent in the third quarter, and 22.14 percent in the fourth quarter. The U.S. share of global petroleum and other liquid fuels production is projected to drop to 21.84 percent in 2027 in the EIA’s latest STEO, which forecasts that U.S. output will average 23.63 million barrels per day and that global production will average 108.18 million barrels per day. Back in 2025, the U.S. share of global petroleum and other liquid fuels production came in at 22.22 percent, the EIA’s January STEO showed. U.S. output averaged 23.62 million barrels per day in 2025 and global production came in at 106.28 million barrels per day, the STEO highlighted. OPEC+ is projected to contribute 44.93 million barrels per day, or 41.73 percent, to the 2026 total, and 45.07 million barrels per day, or 41.66 percent, to the 2027 total, the EIA pointed out in its STEO. OPEC+ contributed 43.80 million barrels per day, or 41.21 percent, to the 2025 total, the

Read More »

Oil Ends Higher Amid Rising Middle East Risks

Oil edged higher as traders parsed conflicting reports on the status of nuclear talks between the US and Iran, clouding the outlook on whether Washington will proceed with military strikes against the major oil producer. West Texas Intermediate rose 3.1% to settle above $65 a barrel. Prices pared gains in post-settlement trading as Iranian Foreign Minister Abbas Araghchi confirmed in a social media post that negotiations will be held in Oman on Friday. The commodity surged earlier on reports that the US told Iran it will not agree to Tehran’s demands to change the location and format of talks planned for Friday, Axios said, citing two US officials. Adding to bullish momentum, US President Donald Trump said that Iran’s Supreme Leader Ayatollah Ali Khamenei “should be very worried” in an interview with NBC. Traders have been closely monitoring the risk of possible US military intervention in Iran, which could disrupt key shipping lanes as well as the country’s roughly 3.3 million barrels-per-day oil production. Doubts over whether talks surrounding Iran’s nuclear program would proceed as planned have intensified since Tuesday, when US and Iranian forces appeared to square off in the sea and air. An Iranian drone approached an American aircraft carrier in the Arabian Sea and was shot down just hours after a US-flagged oil tanker was hailed by small armed ships in the Strait of Hormuz off Iran’s coast. Concern over a potential conflict in the Middle East, a source of about a third of the world’s crude, helped lift prices last month despite signs of a growing oversupply. It has also kept the cost of bullish options high relative to bearish ones for the longest stretch in more than a year. “Geopolitical tensions are really driving it,” Equinor Chief Financial Officer Torgrim Reitan said in a Bloomberg

Read More »

Holtum Says LNG Projects Need New Financing Playbook

Trafigura Group Chief Executive Officer Richard Holtum said the liquefied natural gas industry needs a “bit of innovation” when it comes to financing projects. “I feel sorry for LNG projects in the US,” Holtum said on a panel at the LNG 2026 conference in Doha. “They would only get bank financing when they show that they’ve sold 80%-90% of their volume on long-term projects.”  LNG developers are scrambling to fully finance their projects to export more of the fuel, with the next wave of production from terminals under construction in the US and Qatar due to enter the market over the next decade. In the US, several projects including Delfin LNG off the coast of Louisiana, are working to finalize the debt and equity commitments. The current approach of LNG financing contrasts with oil, where banks are more comfortable with the inherent value of the commodity, the CEO said. “Whilst if your project financing some oil exploration, it’s simply the bank takes a view that, okay, oil is worth $40, $50, $60, $70, whatever it is, it doesn’t matter, they take a view, that’s what it’s worth in the long term,” he said. A similar model for LNG, where project financing is based on a long-term price forecast for the fuel, doesn’t seem to be developing, Holtum said. Still, global LNG capacity is set to rise by about 50% by the end of the decade — the biggest build-out in the industry’s history — led by the US.  The current arrangement, where 90% of LNG is sold to utilities under long-term contracts, risks running into difficulties because many companies and countries have made net-zero commitments, according to the Trafigura CEO. “If they have made those commitments, signing a 20-year contract or a 25-year contract that starts in 2030 is inherently problematic,” Holtum

Read More »

Azure outage disrupts VMs and identity services for over 10 hours

After multiple infrastructure scale-up attempts failed to handle the backlog and retry volumes, Microsoft ultimately removed traffic from the affected service to repair the underlying infrastructure without load. “The outage didn’t just take websites offline, but it halted development workflows and disrupted real-world operations,” said Pareekh Jain, CEO at EIIRTrend & Pareekh Consulting. Cloud outages on the rise Cloud outages have become more frequent in recent years, with major providers such as AWS, Google Cloud, and IBM all experiencing high-profile disruptions. AWS services were severely impacted for more than 15 hours when a DNS problem rendered the DynamoDB API unreliable. In November, a bad configuration file in Cloudflare’s Bot Management system led to intermittent service disruptions across several online platforms. In June, an invalid automated update disrupted the company’s identity and access management (IAM) system, resulting in users being unable to use Google to authenticate on third-party apps. “The evolving data center architecture is shaped by the shift to more demanding, intricate workloads driven by the new velocity and variability of AI. This rapid expansion is not only introducing complexities but also challenging existing dependencies. So any misconfiguration or mismanagement at the control layer can disrupt the environment,” said Neil Shah, co-founder and VP at Counterpoint Research. Preparing for the next cloud incident This is not an isolated incident. For CIOs, the event only reinforces the need to rethink resilience strategies. In the immediate aftermath when a hyperscale dependency fails, waiting is not a recommended strategy for CIOs, and they should focus on a strategy of stabilize, prioritize, and communicate, stated Jain. “First, stabilize by declaring a formal cloud incident with a single incident commander, quickly determining whether the issue affects control-plane operations or running workloads, and freezing all non-essential changes such as deployments and infrastructure updates.”

Read More »

Intel sets sights on data center GPUs amid AI-driven infrastructure shifts

Supply chain reliability is another underappreciated advantage. Hyperscalers want a credible second source, but only if Intel can offer stable, predictable roadmaps across multiple product generations. However, the company runs into a major constraint at the software layer. “The decisive bottleneck is software,” Rawat said. “CUDA functions as an industry operating standard, embedded across models, pipelines, and DevOps. Intel’s challenge is to prove that migration costs are low, and that ongoing optimization does not become a hidden engineering tax.” For enterprise buyers, that software gap translates directly into switching risk. Tighter integration of Intel CPUs, GPUs, and networking could improve system-level efficiency for enterprises and cloud providers, but the dominance of the CUDA ecosystem remains the primary barrier to switching, said Charlie Dai, VP and principal analyst at Forrester. “Even with strong hardware integration, buyers will hesitate without seamless compatibility with mainstream ML/DL frameworks and tooling,” Dai added.

Read More »

8 hot networking trends for 2026

Recurring license fees may have dissuaded enterprises from adopting AIOps in the past, but that’s changing, Morgan adds: “Over the past few years, vendors have added features and increased the value of those licenses, including 24×7 support. Now, by paying the equivalent of a fraction of a network engineer’s salary in license fees, a mid-sized enterprise can reduce hours spent on operations and level-one support in order to allocate more of their valuable networking experts’ time to AI projects. Every enterprise’s business case will be different, but with networking expertise in high demand, we predict that in 2026, the labor savings will outweigh the additional license costs for the majority of mid-to-large sized enterprises.” 2. AI boosts data center networking investments Enterprise data centers, which not so long ago were on the endangered species list, have made a remarkable comeback, driven by the reality that many AI workloads need to be hosted on premises, either for privacy, security, regulatory, latency or cost considerations. The global market for data center networking technologies was estimated at around $46 billion in 2025 and is projected to reach $103 billion by the end of 2030, a growth rate of nearly 18%, according to BCC Research: “The data center networking technologies market is rapidly changing due to increasing use of AI-powered solutions across data centers and sectors like telecom, IT, banking, financial services, insurance, government and commercial industries.” McKinsey predicts that global demand for data center capacity could nearly triple by 2030, with about 70% of that demand coming from AI workloads. McKinsey says both training and inference workloads are contributing to data center growth, with inference expected to become the dominant workload by 2030. 3. Private clouds roll in Clearly, the hyperscalers are driving most of the new data center construction, but enterprises are

Cisco: Infrastructure, trust, model development are key AI challenges

“The G200 chip was for the scale out, because what’s happening now is these models are getting bigger where they don’t just fit within a single data center. You don’t have enough power to just pull into a single data center,” Patel said. “So now you need to have data centers that might be hundreds of kilometers apart, that operate like an ultra-cluster that are coherent. And so that requires a completely different chip architecture to make sure that you have capabilities like deep buffering and so on and so forth… You need to make sure that these data centers can be scaled across physical boundaries.”

“In addition, we are reaching the physical limits of copper and optics, and coherent optics especially are going to be extremely important as we go start building out this data center infrastructure. So that’s an area that you’re starting to see a tremendous amount of progress being made,” Patel said.

The second constraint is the AI trust deficit, Patel said. “We currently need to make sure that these systems are trusted by the people that are using them, because if you don’t trust these systems, you’ll never use them,” Patel said. “This is the first time that security is actually becoming a prerequisite for adoption. In the past, you always ask the question whether you want to be secure, or you want to be productive. And those were kind of needs that offset each other,” Patel said. “We need to make sure that we trust not just using AI for cyber defense, but we trust AI itself,” Patel said.

The third constraint is the notion of a data gap. AI models get trained on human-generated data that’s publicly available on the Internet, but “we’re running out,” Patel said. “And what you’re starting to see happen

How Robotics Is Re-Engineering Data Center Construction and Operations

Physical AI: A Reusable Robotics Stack for Data Center Operations

This is where the recent collaboration between Multiply Labs and NVIDIA becomes relevant, even though the application is biomanufacturing rather than data centers. Multiply Labs has outlined a robotics approach built on three core elements:

- Digital twins using NVIDIA Isaac Sim to model hardware and validate changes in simulation before deployment.
- Foundation-model-based skill learning via NVIDIA Isaac GR00T, enabling robots to generalize tasks rather than rely on brittle, hard-coded behaviors.
- Perception pipelines, including FoundationPose and FoundationStereo, that convert expert demonstrations into structured training data.

Taken together, this represents a reusable blueprint for data center robotics.

Applying the Lesson to Data Center Environments

The same physical-AI techniques now being applied in lab and manufacturing environments map cleanly onto the realities of data center operations, particularly where safety, uptime, and variability intersect.

Digital-twin-first deployment

Before a robot ever enters a live data hall, it needs to be trained in simulation. That means modeling aisle geometry, obstacles, rack layouts, reflective surfaces, and lighting variation, along with “what if” scenarios such as blocked aisles, emergency egress conditions, ladders left in place, or spill events. Simulation-first workflows make it possible to validate behavior and edge cases before introducing any new system into a production environment (see the sketch below).

Skill learning beats hard-coded rules

Data centers appear structured, but in practice they are full of variability: temporary cabling, staged parts, mixed-vendor racks, and countless human exceptions. Foundation-model approaches to manipulation are designed to generalize across that messiness far better than traditional rule-based automation, which tends to break when conditions drift even slightly from the expected state.

Imitation learning captures tribal knowledge

Many operational tasks rely on tacit expertise developed over years in the field, such as how to manage stiff patch cords, visually confirm latch engagement, or stage a
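
To make that simulation-first gate concrete, here is a minimal sketch of the pattern in plain Python. It is illustrative only: the scenario names and the simulate helper are hypothetical stand-ins and do not reflect the Isaac Sim or GR00T APIs.

```python
from dataclasses import dataclass

@dataclass
class ScenarioResult:
    name: str
    passed: bool

# Hypothetical edge cases of the kind a data-hall digital twin would model.
SCENARIOS = ["blocked_aisle", "emergency_egress", "ladder_left_in_place", "spill_event"]

def simulate(skill: str, scenario: str) -> ScenarioResult:
    """Stand-in for one simulated run of a robot skill against a scenario."""
    # A real pipeline would launch the simulator here and score the rollout.
    return ScenarioResult(scenario, passed=True)

def cleared_for_deployment(skill: str) -> bool:
    """A skill is eligible for a live data hall only if every scenario passes."""
    failures = [r.name for r in (simulate(skill, s) for s in SCENARIOS) if not r.passed]
    if failures:
        print(f"{skill}: blocked by simulated failures in {failures}")
        return False
    return True

if cleared_for_deployment("swap_failed_drive"):
    print("Validated in simulation; eligible for a supervised pilot.")
```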

Applied Digital CEO Wes Cummins On the Hard Part of the AI Boom: Execution

Designing for What Comes After the Current AI Cycle

Applied Digital’s design philosophy starts with a premise many developers still resist: today’s density assumptions may not hold. “We’re designing for maximum flexibility for the future—higher density power, lower density power, higher voltage delivery, and more floor space,” Cummins said. “It’s counterintuitive because densities are going up, but we don’t know what comes next.” That choice, to allocate more floor space even as rack densities climb, signals a long-view approach. Facilities are engineered to accommodate shifts in voltage, cooling topology, and customer requirements without forcing wholesale retrofits. Higher-voltage delivery, mixed cooling configurations, and adaptable data halls are baked in from the start. The goal is not to predict the future perfectly, Cummins stressed, but to avoid painting infrastructure into a corner.

Supply Chain as Competitive Advantage

If flexibility is the design thesis, supply chain control is the execution weapon. “It’s a huge advantage that we locked in our MEP supply chain 18 to 24 months ago,” Cummins said. “It’s a tight environment, and more timelines are going to get missed in 2026 because of it.” Applied Digital moved early to secure long-lead mechanical, electrical, and plumbing components, well before demand pressure fully rippled through transformers, switchgear, chillers, generators, and breakers. That foresight now underpins the company’s ability to make credible delivery commitments while competitors confront procurement bottlenecks. Cummins was blunt: many delays won’t stem from poor planning, but from simple unavailability.

From 100 MW to 700 MW Without Losing Control

The past year marked a structural pivot for Applied Digital. What began as a single, 100-megawatt “field of dreams” facility in North Dakota has become more than 700 MW under construction, with expansion still ahead. “A hundred megawatts used to be considered scale,” Cummins said. “Now we’re at 700

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Microsoft President Brad Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet the non-tech company has become a regular at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd).

[Image: John Deere’s autonomous 9RX tractor, which farmers can oversee using an app.]

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for businesses and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
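
The “LLM as a judge” idea mentioned here is easy to sketch: fan a task out to several cheaper models and let another model score the candidates. The snippet below is a minimal, vendor-neutral illustration; the complete(model, prompt) helper is a hypothetical stand-in for whatever client SDK you actually use.

```python
def complete(model: str, prompt: str) -> str:
    """Hypothetical client call; replace with your provider's SDK."""
    raise NotImplementedError

def best_of_n(task: str, workers: list[str], judge: str) -> str:
    # Fan the task out to several (cheaper) worker models.
    candidates = {m: complete(m, task) for m in workers}

    # Ask the judge model to score each candidate against a fixed rubric.
    scores = {}
    for model, answer in candidates.items():
        rubric = (
            "Rate the following answer from 1 to 10 for correctness and completeness. "
            f"Reply with a single integer.\n\nTask: {task}\n\nAnswer: {answer}"
        )
        scores[model] = int(complete(judge, rubric).strip())

    # Return the highest-scoring answer.
    return candidates[max(scores, key=scores.get)]
```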

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models with these techniques. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that in-house testing techniques may have missed and that might otherwise have made it into a released model. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle
