Mastering the Poisson Distribution: Intuition and Foundations

You’ve probably used the normal distribution one or two times too many. We all have — it’s a true workhorse. But sometimes we run into problems: when predicting or forecasting values, when simulating data from a particular data-generating process, or when we try to visualise model output and explain it intuitively to non-technical stakeholders. Suddenly, things don’t make much sense: can a user really have made -8 clicks on the banner? Or even 4.3 clicks? Both are examples of how count data doesn’t behave.

I’ve found that better encapsulating the data-generating process in my modelling has been key to producing sensible model output. Using the Poisson distribution when it was appropriate has not only helped me convey more meaningful insights to stakeholders, it has also enabled me to produce more accurate error estimates, better inference, and sounder decision-making.

In this post, my aim is to help you get a deep intuitive feel for the Poisson distribution by walking through example applications, and taking a dive into the foundations — the maths. I hope you learn not just how it works, but also why it works, and when to apply the distribution.

If you know of a resource that has helped you grasp the concepts in this blog particularly well, you’re invited to share it in the comments!

Outline

  1. Examples and use cases: Let’s walk through some use cases and sharpen the intuition I just mentioned. Along the way, the relevance of the Poisson distribution will become clear.
  2. The foundations: Next, let’s break down the equation into its individual components. By studying each part, we’ll uncover why the distribution works the way it does.
  3. The assumptions: Equipped with some formality, it will be easier to understand the assumptions that power the distribution, and at the same time set the boundaries for when it works, and when not.
  4. When real life deviates from the model: Finally, let’s explore the special links that the Poisson distribution has with the Negative Binomial distribution. Understanding these relationships deepens our intuition and provides alternatives when the Poisson distribution is not suited for the job.

Example in an online marketplace

I chose to take a deep dive into the Poisson distribution because it frequently appears in my day-to-day work. Online marketplaces rely on binary user choices from two sides: a seller deciding to list an item and a buyer deciding to make a purchase. These micro-behaviours drive supply and demand, both in the short and the long term. A marketplace is born.

Binary choices aggregate into counts — the sum of many such decisions as they occur. Attach a timeframe to this counting process, and you’ll start seeing Poisson distributions everywhere. Let’s explore a concrete example next.

Consider a seller on a platform. In a given month, the seller may or may not list an item for sale (a binary choice). We only know that she did because we then have a measurable count of the event. Nothing stops her from listing another item in the same month. If she does, we count those events too. The total could be zero for an inactive seller or, say, 120 for a highly engaged one.

Over several months, we would observe a varying number of listed items by this seller — sometimes fewer, sometimes more — hovering around an average monthly listing rate. That is essentially a Poisson process. When we get to the assumptions section, you’ll see what we had to assume away to make this example work.
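
To make this concrete, here is a minimal simulation sketch of such a seller, assuming Python with NumPy and a purely hypothetical average rate of 8 listings per month:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical average monthly listing rate for this seller
monthly_rate = 8

# Twelve months of listing counts for one seller
monthly_listings = rng.poisson(lam=monthly_rate, size=12)

print(monthly_listings)         # counts hovering around 8
print(monthly_listings.mean())  # close to the rate we put in
```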

Other examples

Other phenomena that can be modelled with a Poisson distribution include:

  • Sports analytics: The number of goals scored in a match between two teams.
  • Queuing: Customers arriving at a help desk or customer support calls.
  • Insurance: The number of claims made within a given period.

Each of these examples warrants further inspection, but for the remainder of this post, we’ll use the marketplace example to illustrate the inner workings of the distribution.

The mathy bit

… or foundations.

I find opening up the probability mass function (PMF) of a distribution helpful for understanding why things work the way they do. The PMF of the Poisson distribution goes like:

P(X = 𝑘) = λᵏ 𝑒^⁻λ / 𝑘!

where λ is the rate parameter, and 𝑘 is the realised count of the random variable (𝑘 = 0, 1, 2, 3, … events). Very neat and compact.

Graph: The probability mass function of the Poisson distribution, for a few different values of λ.
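
As a sanity check on the formula, here is a small sketch, assuming Python with SciPy installed and the same hypothetical rate of 8 monthly listings, that evaluates the PMF both by hand and via scipy.stats.poisson:

```python
from math import exp, factorial

from scipy.stats import poisson

lam = 8  # hypothetical average monthly listings

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) = lam**k * exp(-lam) / k!"""
    return lam**k * exp(-lam) / factorial(k)

for k in (0, 4, 8, 12):
    print(f"k={k:>2}  manual={poisson_pmf(k, lam):.4f}  scipy={poisson.pmf(k, mu=lam):.4f}")
```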

Contextualising λ and k: the marketplace example

In the context of our earlier example — a seller listing items on our platform — λ represents the seller’s average monthly listings. As the expected monthly count for this seller, λ orchestrates the number of items she lists in a month. Note that λ is a Greek letter; by convention, that signals a parameter we estimate from data. On the other hand, 𝑘 does not hold any information about the seller’s idiosyncratic behaviour. It’s the value we choose for the number of events whose probability we want to evaluate.

The dual role of λ as the mean and variance

When I said that λ orchestrates the number of monthly listings for the seller, I meant it quite literally. Namely, λ is both the expected value and the variance of the distribution, and this holds for all values of λ. This means that the mean-to-variance ratio (index of dispersion) is always 1.
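
A quick empirical check of this dual role, sketched with NumPy and a hypothetical rate of 8:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.poisson(lam=8, size=100_000)

print(samples.mean())                  # ≈ 8
print(samples.var())                   # ≈ 8
print(samples.var() / samples.mean())  # index of dispersion ≈ 1
```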

To put this into perspective, the normal distribution requires two parameters — 𝜇 and 𝜎², the average and variance respectively — to fully describe it. The Poisson distribution achieves the same with just one.

Having to estimate only one parameter can be beneficial for parametric inference: it reduces the variance of the model and increases statistical power. On the other hand, it can be too limiting an assumption. Alternatives like the Negative Binomial distribution can alleviate this limitation. We’ll explore that later.

Breaking down the probability mass function

Now that we know the smallest building blocks, let’s zoom out one step: what are λᵏ, 𝑒^⁻λ, and 𝑘!, and more importantly, what is each of these components’ function in the whole?

  • λᵏ is a weight that expresses how likely it is for 𝑘 events to happen, given that the expectation is λ. Note that “likely” here does not mean a probability, yet. It’s merely a signal strength.
  • 𝑘! is a combinatorial correction so that we can say that the order of the events is irrelevant. The events are interchangeable.
  • 𝑒^⁻λ normalises the PMF so that the probabilities sum to 1. It is the reciprocal of the partition function that appears in exponential-family distributions.

In more detail, λᵏ relates the observed value 𝑘 to the expected value of the random variable, λ. Intuitively, more probability mass lies around the expected value, so an observed value close to the expectation has a larger probability than an observation far removed from it. Before we can cross-check this intuition against the numerical behaviour of λᵏ, we need to consider what 𝑘! does.

Interchangeable events

Had we cared about the order of events, then each unique event could be ordered in 𝑘! ways. But because we don’t, and we deem each event interchangeable, we “divide out” 𝑘! from λᵏ to correct for the overcounting.

Since λᵏ is exponential in 𝑘, its output keeps growing as 𝑘 grows (for λ > 1), holding λ constant; for instance, it is larger at 𝑘 = λ + 1 than at 𝑘 = λ. That contradicts our intuition that the probability should peak when 𝑘 is near λ. But now that we know about the interchangeable-events assumption — and the overcounting issue — we know that we have to factor in 𝑘! like so: λᵏ 𝑒^⁻λ / 𝑘!, to see the behaviour we expect.

Now let’s check the intuition of the relationship between λ and 𝑘 through λᵏ, corrected by 𝑘!. For the same λ, say λ = 4, λᵏ / 𝑘! should be smaller for values of 𝑘 far removed from 4 than for values of 𝑘 close to 4 (the constant 𝑒^⁻λ does not affect the comparison). For example: 4²/2! = 8 is smaller than 4⁴/4! ≈ 10.7. This is consistent with the intuition of a higher likelihood of 𝑘 when it’s near the expectation. The image below shows this relationship more generally: the output grows as 𝑘 approaches λ.

Graph: The probability mass function without the normalising component 𝑒^⁻λ.
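
The same comparison in code: a sketch of the unnormalised weights λᵏ / 𝑘! for λ = 4, using only the Python standard library:

```python
from math import factorial

lam = 4

# Unnormalised weight lam**k / k! for a range of k
for k in range(9):
    weight = lam**k / factorial(k)
    print(f"k={k}  weight={weight:.2f}")

# The weights peak at k = 3 and k = 4 (both 10.67), i.e. near lambda,
# and fall off for values of k further from the expectation.
```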

The assumptions

First, let’s get one thing off the table: the difference between a Poisson process and the Poisson distribution. The process is a stochastic, continuous-time model of points occurring in a given space: in 1D a line, in 2D an area, or in higher dimensions. We data scientists most often deal with the one-dimensional case, where the “line” is time and the points are the events of interest — I dare say.

These are the assumptions of the Poisson process:

  1. The occurrence of one event does not affect the probability of a second event. Think of our seller going on to list another item tomorrow regardless of having done so already today, or five days ago for that matter. The point here is that there is no memory between events.
  2. The average rate at which events occur is independent of any occurrence. In other words, no event that happened (or will happen) alters λ, which remains constant throughout the observed timeframe. In our seller example, this means that listing an item today does not increase or decrease the seller’s motivation or likelihood of listing another item tomorrow.
  3. Two events cannot occur at exactly the same instant. If we were to zoom in on the timescale at an infinitely granular level, no two listings could have been placed simultaneously; they always occur sequentially.

From these assumptions — no memory, constant rate, events happening alone — it follows that (1) the number of events in any interval of length 𝑡 is Poisson-distributed with parameter λ𝑡, and (2) disjoint intervals are independent — two key properties of a Poisson process.
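
To tie the assumptions to the counting behaviour, here is a sketch, assuming NumPy and a hypothetical rate of 4 events per unit of time, that simulates a homogeneous Poisson process through its memoryless (exponential) inter-arrival times and then counts events per unit interval:

```python
import numpy as np

rng = np.random.default_rng(1)

lam = 4.0       # events per unit of time (e.g. listings per month)
horizon = 1000  # number of time units to simulate

# No memory between events => exponential inter-arrival times
inter_arrivals = rng.exponential(scale=1 / lam, size=int(2 * lam * horizon))
arrival_times = np.cumsum(inter_arrivals)
arrival_times = arrival_times[arrival_times < horizon]

# Counts per unit interval should follow a Poisson(lam) distribution
counts, _ = np.histogram(arrival_times, bins=np.arange(0, horizon + 1))
print(counts.mean(), counts.var())  # both ≈ 4
```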

A Note on the distribution:
The distribution simply describes probabilities for various numbers of counts in an interval. Strictly speaking, one can use the distribution pragmatically whenever the data is non-negative, unbounded on the right, has mean λ, and is reasonably well described by it. It is just convenient when the underlying process is actually a Poisson one, because that justifies using the distribution.

The marketplace example: Implications

So, can we justify using the Poisson distribution for our marketplace example? Let’s open up the assumptions of a Poisson process and take the test.

Constant λ

  • Why it may fail: The seller has patterned online activity; there are holidays and promotions; the listed items are seasonal goods.
  • Consequence: λ is not constant, leading to overdispersion (a mean-to-variance ratio larger than 1) or to temporal patterns.

Independence and memorylessness

  • Why it may fail: The propensity to list again is higher after a successful listing, or, conversely, listing once depletes the stock and lowers the propensity to list again.
  • Consequence: Two events are no longer independent, as the occurrence of one informs the occurrence of the other.

Simultaneous events

  • Why it may fail: Batch-listing, a new feature, was introduced to help the sellers.
  • Consequence: Multiple listings would come online at the same time, clumped together, and they would be counted simultaneously.

Balancing rigour and pragmatism

As data scientists on the job, we may feel trapped between rigour and pragmatism. The three steps below should give you a sound foundation for deciding on which side to err when the Poisson distribution falls short:

  1. Pinpoint your goal: is it inference, simulation or prediction, and is it about high-stakes output? List the worst thing that can happen, and the cost of it for the business.
  2. Identify the problem and solution: why does the Poisson distribution not fit, and what can you do about it? List two or three solutions, including changing nothing.
  3. Balance gains and costs: will your workaround improve things or make them worse, and at what cost: interpretability, new assumptions introduced, and resources used? Does it help you achieve your goal?

That said, here are some counters I use when needed.

When real life deviates from your model

Everything described so far pertains to the standard, or homogeneous, Poisson process. But what if reality begs for something different?

In the next section, we’ll cover two extensions of the Poisson distribution for when the constant-λ assumption does not hold. These are not mutually exclusive, but neither are they the same:

  1. Time-varying λ: a single seller whose listing rate ramps up before holidays and slows down afterward
  2. Mixed Poisson distribution: multiple sellers listing items, each with their own λ; this can be seen as a mixture of several Poisson processes

Time-varying λ

The first extension allows λ to have its own value for each time 𝑡. The PMF then becomes

P(𝐾(𝑇) = 𝑘) = Λᵏ 𝑒^⁻Λ / 𝑘!

where the number of events 𝐾(𝑇) in an interval 𝑇 follows the Poisson distribution with a rate Λ that is no longer a fixed λ, but one equal to:

Λ = ∫ λ(𝑠) d𝑠, integrated over the interval from 𝑡 to 𝑡 + 𝑖

More intuitively, integrating λ(𝑡) over the interval 𝑡 to 𝑡 + 𝑖 gives us a single number: the expected number of events over that interval. The integral varies with each arbitrary interval, and that’s what makes λ change over time. To understand how that integration works, it helped me to think of it like this: if the interval 𝑡 to 𝑡₁ integrates to 3, and 𝑡₁ to 𝑡₂ integrates to 5, then the interval 𝑡 to 𝑡₂ integrates to 8 = 3 + 5. That’s the two expectations summed up, giving the expectation over the entire interval.

Practical implication
One may want to model the expected value of the Poisson distribution as a function of time, for instance to capture an overall trend or seasonality. In generative model notation:

λ(𝑡) = 𝑓(𝑡)
𝐾ₜ ~ Poisson(λ(𝑡))

Time may enter as a continuous variable, or through an arbitrary function of it.
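
As an illustration, here is a sketch, assuming NumPy, of drawing monthly counts with a λ that ramps up towards the holidays; the seasonal rate function below is made up for the example:

```python
import numpy as np

rng = np.random.default_rng(2)

months = np.arange(12)

# Hypothetical seasonal rate: a baseline of 6 listings, peaking in December
lam_t = 6 + 4 * np.exp(-((months - 11) ** 2) / 8)

# One draw per month, each month with its own rate
listings = rng.poisson(lam=lam_t)

for month, (rate, count) in enumerate(zip(lam_t, listings)):
    print(f"month={month:>2}  lambda={rate:.1f}  listings={count}")
```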

Process-varying λ: Mixed Poisson distribution

But then there’s a gotcha. Remember when I said that λ has a dual role as the mean and variance? That still applies here. Looking at the “relaxed” PMF*, the only thing that changes is that λ can vary freely with time. But it’s still the one and only λ that orchestrates both the expected value and the dispersion of the PMF*. More precisely, 𝔼[𝑋] = Var(𝑋) still holds.

There are various reasons for this constraint not to hold in reality: model misspecification, event interdependence, or unaccounted-for heterogeneity could be the issue at hand. I’d like to focus on the last case, as it justifies the Negative Binomial distribution — one of the topics I promised to open up.

Heterogeneity and overdispersion
Imagine we are not dealing with one seller, but with 10 of them listing at different intensity levels λᵢ, where 𝑖 = 1, 2, 3, …, 10. Then, essentially, we have 10 Poisson processes going on. If we pool the processes and estimate a grand λ, we simplify the mixture away. Meaning, we get a correct estimate of the sellers’ average rate, but the resulting grand λ is naive: it knows nothing about the original spread of the λᵢ. It still assumes that the variance equals the mean, as per the axioms of the distribution, while the pooled data is in fact overdispersed. This leads to underestimated errors, which in turn inflates the false positive rate and drives poor decision-making. We need a way to embrace the heterogeneity amongst the sellers’ λᵢ.

Negative binomial: Extending the Poisson distribution
Among the several ways one can look at the Negative Binomial distribution, one is to see it as a mixed Poisson process — 10 sellers, sounds familiar yet? That means we pool multiple independent Poisson processes, each with its own rate, into a single one. Mathematically, we first draw λ from a Gamma distribution, λ ~ Γ(r, θ), and then draw the count 𝑋 | λ ~ Poisson(λ).

In one image, it is as if we were sampling from plenty of Poisson distributions, one for each seller.

Graph: A Negative Binomial distribution arises from many Poisson distributions.

The more revealing alias of the Negative Binomial distribution is the Gamma-Poisson mixture distribution, and now we know why: the dictating λ comes from a continuous mixture. That’s exactly what we needed to explain the heterogeneity amongst sellers.

Let’s simulate this scenario to gain more intuition.

Graph: Gamma mixture of λ.

First, we draw λᵢ from a Gamma distribution: λᵢ ~ Γ(r, θ). Intuitively, the Gamma distribution tells us about the variety in the intensity — listing rate — amongst the sellers.

On a practical note, one can instil their assumptions about the degree of heterogeneity in this step of the model: how different are sellers? By varying the level of heterogeneity, one can observe the impact on the final Poisson-like distribution. Doing this type of check (i.e., a posterior predictive check) is common in Bayesian modelling, where the assumptions are set explicitly.

Graph: Gamma-Poisson mixture distribution versus homogeneous Poisson distribution. The dashed line reflects λ, which is 4 for both distributions.

In the second step, we plug the obtained λ into the Poisson distribution: 𝑋 | λ ~ Poisson(λ), and obtain a Poisson-like distribution that represents the summed subprocesses. Notably, this unified process has a larger dispersion than expected from a homogeneous Poisson distribution, but it is in line with the Gamma mixture of λ.
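
Here is a minimal sketch of that two-step simulation, assuming NumPy; the Gamma parameters (shape 2, scale 2) are chosen so that the mean rate is 4, matching the figure, but the exact values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

n_sellers = 100_000

# Step 1: heterogeneity in listing rates across sellers.
# Gamma(shape=2, scale=2) has mean 4 and variance 8.
lambdas = rng.gamma(shape=2.0, scale=2.0, size=n_sellers)

# Step 2: each seller's count, given their own rate
mixture_counts = rng.poisson(lam=lambdas)

# Homogeneous benchmark with the same mean rate
poisson_counts = rng.poisson(lam=4.0, size=n_sellers)

print("mixture mean/var:", mixture_counts.mean(), mixture_counts.var())
print("poisson mean/var:", poisson_counts.mean(), poisson_counts.var())
# Both means are ≈ 4, but the mixture's variance is ≈ 12 (= 4 + 8): overdispersion.
```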

Heterogeneous λ and inference

A practical consequence of introducing flexibility into your assumed distribution is that inference becomes more challenging. More parameters (i.e., the Gamma parameters) need to be estimated. Parameters act as flexible explainers of the data, tending to overfit and explain away variance in your variable. The more parameters you have, the better the fit may seem, but the model also becomes more susceptible to noise in the data, and the resulting higher variance reduces the power to identify a difference in means, if one exists, because — well — it gets lost in the variance.

Countering the loss of power

  1. Confirm whether you indeed need to extend the standard Poisson distribution. If not, simplify to the simplest model that fits. A quick check on overdispersion may suffice for this (see the sketch after this list).
  2. Pin down the estimates of the Gamma mixture distribution parameters using regularising, informative priors (think: Bayes).
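
For step 1, a minimal overdispersion check might look like the sketch below, assuming NumPy; counts stands in for your observed data and is simulated here only for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for observed count data; replace with your own counts
counts = rng.poisson(lam=4.0, size=5_000)

dispersion_index = counts.var(ddof=1) / counts.mean()
print(f"index of dispersion: {dispersion_index:.2f}")
# ≈ 1 suggests the plain Poisson may be enough; well above 1 points to
# overdispersion and possibly a Gamma-Poisson (Negative Binomial) model instead.
```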

During my research process for writing this blog, I learned a great deal about the connective tissue underlying all of this: how the binomial distribution plays a fundamental role in the processes we’ve discussed. And while I’d love to ramble on about this, I’ll save it for another post, perhaps. In the meantime, feel free to share your understanding in the comments section below 👍.

Conclusion

The Poisson distribution is a simple distribution that can be highly suitable for modelling count data. However, when the assumptions do not hold, one can extend the distribution by allowing the rate parameter to vary as a function of time or other factors, or by assuming subprocesses that collectively make up the count data. This added flexibility can address the limitations, but it comes at a cost: increased flexibility in your modelling raises the variance and, consequently, undermines the statistical power of your model.

If your end goal is inference, you may want to think twice and consider exploring simpler models for the data. Alternatively, switch to the Bayesian paradigm and leverage its built-in solution to regularise estimates: informative priors.

I hope this has given you what you came for — a better intuition about the Poisson distribution. I’d love to hear your thoughts about this in the comments!

Unless otherwise noted, all images are by the author.
Originally published at https://aalvarezperez.github.io on January 5, 2025.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Will Google throw gasoline on the AI chip arms race?

The Nvidia processors, he explains, are for processing massive, large language models (LLMs), while the Google TPU is used for inferencing, the next step after processing the LLM. So the two chips don’t compete with each other, they complement each other, according to Gold. Selling and supporting processors may not

Read More »

Nvidia moves deeper into AI infrastructure with SchedMD acquisition

“Slurm excels at orchestrating multi-node distributed training, where jobs span hundreds or thousands of GPUs,” said Lian Jye Su, chief analyst at Omdia. “The software can optimize data movement within servers by deciding where jobs should be placed based on resource availability. With strong visibility into the network topology, Slurm

Read More »

ExxonMobil bumps up 2030 target for Permian production

ExxonMobil Corp., Houston, is looking to grow production in the Permian basin to about 2.5 MMboe/d by 2030, an increase of 200,000 boe/d from executives’ previous forecasts and a jump of more than 45% from this year’s output. Helping drive that higher target is an expected 2030 cost profile that

Read More »

Strategists Forecast Week on Week USA Crude Build

In an oil and gas report sent to Rigzone by the Macquarie team this week, Macquarie strategists, including Walt Chancellor, revealed that they are forecasting that U.S. crude inventories will be up by 2.5 million barrels for the week ending December 12. “This follows a 1.8 million barrel draw in the prior week, with the crude balance realizing quite loose relative to our expectations amidst an apparent surge in Canadian imports,” the strategists said in the report. “While our balances point to a much looser fundamental picture this week, we note some potential for a ‘catch-up’ to the tighter side in this week’s data,” they added. “For this week’s balance, from refineries, we look for a minimal reduction in crude runs. Among net imports, we model a small increase, with exports lower (-0.1 million barrels per day) and imports higher (+0.1 million barrels per day) on a nominal basis,” they continued. The strategists warned in the report that the timing of cargoes remains a source of potential volatility in this week’s crude balance. “From implied domestic supply (prod.+adj.+transfers), we look for an increase (+0.4 million barrels per day) on a nominal basis this week,” the strategists went on to note. “Rounding out the picture, we anticipate another small increase (+0.3 million barrels) in SPR [Strategic Petroleum Reserve] stocks this week,” they added. The analysts also stated in the report that, “among products”, they “again look for across the board builds (gasoline/ distillate/jet +5.2/+2.0/+1.5 million barrels)”. “We model implied demand for these three products at ~14.3 million barrels per day for the week ending December 12,” they said. In its latest weekly petroleum status report at the time of writing, which was released on December 10 and included data for the week ending December 5, the U.S. Energy Information Administration (EIA)

Read More »

SK On pivots to stationary energy storage after Ford joint venture ends

Dive Brief: Korean battery maker SK On says it remains committed to building out a Tennessee plant originally intended to supply electric vehicle batteries to Ford after a joint venture with the car maker was called off, the company said in a statement. The manufacturer will maintain its strategic partnership with Ford and continue to supply EV batteries for its future vehicles, SK Americas spokesperson Joe Guy Collier said in an email. However, going forward, SK On plans to focus more on “profitable and sustainable growth” in the U.S. by supplying batteries produced in the Tennessee plant to other customers, including for stationary energy storage systems, the company said. “This agreement allows SK On to strategically realign assets and production capacity to improve its operational efficiency,” the battery maker said in a statement. “It also enables the company to enhance productivity, operational flexibility, and respond more effectively to evolving market dynamics and diverse customer needs.” Dive Insight: Ford and SK On reached a mutual agreement to dissolve their electric vehicle battery joint venture, BlueOval SK, Collier confirmed in an email last week.  The joint venture was established in September 2021 as part of a planned $11.4 billion investment by the two companies to build three large-scale manufacturing plants — one in Tennessee and two in Kentucky —  to produce advanced batteries for Ford’s future EVs.  Under the terms of the dissolution agreement, each company will independently own and operate the joint venture’s former production facilities, Collier said. A Ford subsidiary will take full ownership of the two battery plants in Kentucky, and SK On will assume full ownership and operate the battery plant in Tennessee. “SK On is committed to the Tennessee plant long-term,” the company said. “We plan to make it a key part of our manufacturing base for advanced batteries

Read More »

Shell Adds New Gas Customer in Nigeria

Shell PLC, through Shell Nigeria Gas Ltd (SNG), has signed an agreement to supply natural gas to SG Industrial FZE. The new customer is “a leading steel company in the Guandong industrial zone in the state”, the British company said on its Nigerian website. “The agreement adds to a growing list of clients for SNG which has developed as a dependable supplier of gas through distribution pipelines of some 150 kilometers [93.21 miles], serving over 150 clients in Abia, Bayelsa, Ogun and Rivers states”, Shell said. Shell did not disclose the contract volume or value. SNG managing director Ralph Gbobo said, “Our commitment is clear – to build, operate and maintain a gas distribution system that is not only reliable but resilient, transparent and designed to fuel growth”. SG Industrial vice general manager Moya Shua said, “This collaboration marks a major step forward in securing reliable energy that will power our growth and long-term ambitions”. Shell said it had previously signed agreements to supply pipeline gas to Nigeria Distilleries Ltd III, Reliance Chemical Products Limited II, Rumbu Industries Nigeria Ltd and Ultimum Ltd. Expanding its gas operations in the West African country, Shell recently announced a final investment decision to develop the HI field to supply up to 350 million standard cubic feet of gas a day, equivalent to about 60,000 oil barrels per day, to Nigeria LNG. The project is part of a joint venture in which Shell owns 40 percent through Shell Nigeria Exploration and Production Co Ltd. Sunlink Energies and Resources Ltd holds 60 percent. At Nigeria LNG, which has a declared capacity of 22 million metric tons of liquefied natural gas a year, Shell owns 25.6 percent. “The increase in feedstock to NLNG, via the train VII project that aims to expand the Bonny Island terminal’s production capacity,

Read More »

Energy Secretary Ensures Washington Coal Plant Remains Open to Ensure Affordable, Reliable and Secure Power Heading into Winter

Emergency order addresses critical grid reliability issues, lowering risk of blackouts and ensuring affordable electricity access WASHINGTON—U.S. Secretary of Energy Chris Wright today issued an emergency order to ensure Americans in the Northwestern region of the United States have access to affordable, reliable and secure electricity heading into the cold winter months. The order directs TransAlta to keep Unit 2 of the Centralia Generating Station in Centralia, Washington available to operate. Unit 2 of the coal plant was scheduled to shut down at the end of 2025. The reliable supply of power from the Centralia coal plant is essential for grid stability in the Northwest. The order prioritizes minimizing the risk and costs of blackouts. “The last administration’s energy subtraction policies had the United States on track to experience significantly more blackouts in the coming years — thankfully, President Trump won’t let that happen,” said Energy Secretary Wright. “The Trump administration will continue taking action to keep America’s coal plants running so we can stop the price spikes and ensure we don’t lose critical generation sources. Americans deserve access to affordable, reliable, and secure energy to heat their homes all the time, regardless of whether the wind is blowing or the sun is shining.” According to DOE’s Resource Adequacy Report, blackouts were on track to potentially increase 100 times by 2030 if the U.S. continued to take reliable power offline as it did during the Biden administration. The North American Electric Reliability Corporation (NERC) determined in its 2025-2026 Winter Reliability Assessment that the WECC Northwest region is at elevated risk during periods of extreme weather, such as prolonged, far-reaching cold snaps.  This order is in effect beginning on December 16, 2025, and continuing until March 16, 2026.  Background:  The NERC Winter Reliability Assessment warns that “extreme winter conditions extending over

Read More »

Wood Says Mideast Contract Wins Exceeded $1B in 2025

John Wood Group PLC said Tuesday it has won more than $1 billion in contracts across the Middle East this year, exceeding last year’s company record. “Wood has seen a near 20 percent increase in awards compared to 2024, with wins across United Arab Emirates, Iraq, Kingdom of Saudi Arabia, Bahrain, Kuwait, Oman and Qatar”, the Aberdeen, Scotland-based engineering and consulting company said in an online statement. Ellis Renforth, president of operations for Europe, Middle East and Africa at Wood, said, “This year we’ve delivered critical solutions across the Middle East to improve asset reliability and cut emissions”. “In 2026, we’ll build on this success by expanding our operations and maintenance services in the region. Our focus is on proven approaches to asset management and modifications that improve efficiency and reduce downtime – practical steps that strengthen energy security and decarbonization”, Renforth added. Stuart Turl, Wood vice president for Middle East consulting, said, “Decarbonization and digitalization remain central to how we support clients in the Middle East. This year, we launched our specialist Middle East Energy Transition and Digital & AI Hubs to further support clients in accelerating emissions reduction while unlocking efficiencies through AI-driven solutions”. “This in-region advisory enables practical pathways to carbon reduction while supporting national visions for a sustainable energy future. Delivery has already spanned initiatives such as minerals procurement, hydrogen production facilities and carbon capture and storage infrastructure”, Turl said. On May 27 Wood said it had secured a contract from TA’ZIZ, a joint venture of Abu Dhabi National Oil Co (ADNOC) PJSC, TA’ZIZ to provide project management consultancy for the development of the UAE’s first methanol production facility, to rise in Al Ruwais Industrial City. “Construction will be completed by 2028 and the plant will be one of the largest methanol plants in the world, producing 1.8 million tonnes per year. It will be powered using the latest clean energy technology”, Wood noted. On June 10 Wood said it

Read More »

EU to Scrap Combustion Engine Ban

The European Union is set to propose softening emissions rules for new cars, scrapping an effective ban on combustion engines following months of pressure from the automotive industry. The proposal will allow carmakers to slow the rollout of electric vehicles in Europe and aligns the region more closely with the US, where President Donald Trump is tearing up efficiency standards for cars put in place by the previous administration. Globally, automakers are struggling to make the shift profitable, with Ford Motor Co. announcing it will take $19.5 billion in charges tied to a sweeping overhaul of its EV business. The European stepback – to be unveiled Tuesday – follows a global pullback from green policies as economic realities of major transformations set in. Mounting trade tensions with the US and China are pushing Europe to further prioritize shoring up its own industry. Although the bloc is legally bound to reach climate neutrality by 2050, governments and companies are intensifying calls for more flexibility, warning that rigid targets could jeopardize economic stability. Under the new proposal, the European Commission will lower the requirements that would have halted sales of new gasoline and diesel-fueled cars starting in 2035, instead allowing a number of plug-in hybrids and electric vehicles with fuel-powered range extenders, according to people with knowledge of the matter.  Tailpipe emissions will have to be reduced by 90 percent by the middle of the next decade compared with the current goal of a 100 percent reduction, said the people, who asked not to be identified because talks on the proposal are private. The commission will set a condition that carmakers need to compensate for the additional pollution by using low-carbon or renewable fuels or locally produced green steel. The European Commission declined to comment. The proposal is set to be adopted by EU commissioners on

Read More »

Uptime Institute’s Max Smolaks: Power, Racks, and the Economics of the AI Data Center Boom

The latest episode of the Data Center Frontier Show opens not with a sweeping thesis, but with a reminder of just how quickly the industry’s center of gravity has shifted. Editor in Chief Matt Vincent is joined by Max Smolaks, research analyst at Uptime Institute, whom DCF met in person earlier this year at the Open Compute Project (OCP) Global Summit 2025 in San Jose. Since then, Smolaks has been closely tracking several of the most consequential—and least obvious—threads shaping the AI infrastructure boom. What emerges over the course of the conversation is not a single narrative, but a set of tensions: between power and place, openness and vertical integration, hyperscale ambition and economic reality. From Crypto to Compute: An Unlikely On-Ramp One of the clearest structural patterns Smolaks sees in today’s AI buildout is the growing number of large-scale AI data center projects that trace their origins back to cryptocurrency mining. It is a transition few would have predicted even a handful of years ago. Generative AI was not an anticipated workload in traditional capacity planning cycles. Three years ago, ChatGPT did not exist, and the industry had not yet begun to grapple with the scale, power density, and energy intensity now associated with AI training and inference. When demand surged, developers were left with only a limited set of viable options. Many leaned heavily on on-site generation—most often natural gas—to bypass grid delays. Others ended up in geographies that had already been “discovered” by crypto miners. For years, cryptocurrency operators had been quietly mapping underutilized power capacity. Latency did not matter. Proximity to population centers did not matter. Cheap, abundant electricity did—often in remote or unconventional locations that would never have appeared on a traditional data center site-selection short list. As crypto markets softened, those same sites became

Read More »
