
Mastering the Poisson Distribution: Intuition and Foundations


You’ve probably used the normal distribution one or two times too many. We all have — it’s a true workhorse. But sometimes we run into problems. For instance, when predicting or forecasting values, simulating data from a particular data-generating process, or trying to visualise model output and explain it intuitively to non-technical stakeholders. Suddenly, things don’t make much sense: can a user really have made -8 clicks on the banner? Or even 4.3 clicks? Both are examples of how count data doesn’t behave.

I’ve found that better encapsulating the data-generating process into my modelling has been key to producing sensible model output. Using the Poisson distribution when it was appropriate has not only helped me convey more meaningful insights to stakeholders, but it has also enabled me to produce more accurate error estimates, better inference, and sound decision-making.

In this post, my aim is to help you get a deep intuitive feel for the Poisson distribution by walking through example applications, and taking a dive into the foundations — the maths. I hope you learn not just how it works, but also why it works, and when to apply the distribution.

If you know of a resource that has helped you grasp the concepts in this blog particularly well, you’re invited to share it in the comments!

Outline

  1. Examples and use cases: Let’s walk through some use cases and sharpen the intuition I just mentioned. Along the way, the relevance of the Poisson distribution will become clear.
  2. The foundations: Next, let’s break down the equation into its individual components. By studying each part, we’ll uncover why the distribution works the way it does.
  3. The assumptions: Equipped with some formality, it will be easier to understand the assumptions that power the distribution, and that at the same time set the boundaries for when it works and when it doesn’t.
  4. When real life deviates from the model: Finally, let’s explore the special links that the Poisson distribution has with the Negative Binomial distribution. These relationships deepen our understanding and provide alternatives when the Poisson distribution is not suited for the job.

Example in an online marketplace

I chose to deep dive into the Poisson distribution because it frequently appears in my day-to-day work. Online marketplaces rely on binary user choices from two sides: a seller deciding to list an item and a buyer deciding to make a purchase. These micro-behaviours drive supply and demand, both in the short and long term. A marketplace is born.

Binary choices aggregate into counts — the sum of many such decisions as they occur. Attach a timeframe to this counting process, and you’ll start seeing Poisson distributions everywhere. Let’s explore a concrete example next.

Consider a seller on a platform. In a given month, the seller may or may not list an item for sale (a binary choice). We would only know that she did, because then we’d have a measurable event to count. Nothing stops her from listing another item in the same month. If she does, we count those events too. The total could be zero for an inactive seller or, say, 120 for a highly engaged seller.

Over several months, we would observe a varying number of listed items by this seller — sometimes fewer, sometimes more — hovering around an average monthly listing rate. That is essentially a Poisson process. When we get to the assumptions section, you’ll see what we had to assume away to make this example work.

Other examples

Other phenomena that can be modelled with a Poisson distribution include:

  • Sports analytics: The number of goals scored in a match between two teams.
  • Queuing: Customers arriving at a help desk or customer support calls.
  • Insurance: The number of claims made within a given period.

Each of these examples warrants further inspection, but for the remainder of this post, we’ll use the marketplace example to illustrate the inner workings of the distribution.

The mathy bit

… or foundations.

I find opening up the probability mass function (PMF) of a distribution helpful for understanding why things work as they do. The PMF of the Poisson distribution is:

P(𝑋 = 𝑘) = λᵏ 𝑒^⁻λ / 𝑘!

where λ is the rate parameter, and 𝑘 is the manifested count of the random variable (𝑘 = 0, 1, 2, 3, … events). Very neat and compact.
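
To make the formula concrete, here is a minimal sketch in Python that evaluates the PMF by hand and cross-checks it against scipy; the values λ = 4 and 𝑘 = 2 are arbitrary picks for illustration.

```python
import math

from scipy.stats import poisson

lam, k = 4, 2  # illustrative values: the expected rate and the count we ask about

# The PMF written out: lambda^k * e^(-lambda) / k!
manual = lam**k * math.exp(-lam) / math.factorial(k)

# Cross-check against scipy's implementation
assert abs(manual - poisson.pmf(k, mu=lam)) < 1e-12
print(f"P(K = {k} | lambda = {lam}) = {manual:.4f}")  # ~0.1465
```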

Graph: The probability mass function of the Poisson distribution, for a few different lambdas.

Contextualising λ and k: the marketplace example

In the context of our earlier example — a seller listing items on our platform — λ represents the seller’s average monthly listings. As the expected monthly value for this seller, λ orchestrates the number of items she lists in a month. Note that λ is a Greek letter, so read: λ is a parameter, one that we can estimate from data. On the other hand, 𝑘 does not hold any information about the seller’s idiosyncratic behaviour. It’s the count value whose probability we want to evaluate: we pick a number of events 𝑘, and the PMF tells us how probable it is.

The dual role of λ as the mean and variance

When I said that λ orchestrates the number of monthly listings for the seller, I meant it quite literally. Namely, λ is both the expected value and the variance of the distribution, whatever the value of λ. This means that the variance-to-mean ratio (the index of dispersion) is always 1.

To put this into perspective, the normal distribution requires two parameters — 𝜇 and 𝜎², the average and variance respectively — to fully describe it. The Poisson distribution achieves the same with just one.

Having to estimate only one parameter can be beneficial for parametric inference: it reduces the variance of the model and increases statistical power. On the other hand, it can be too limiting an assumption. Alternatives like the Negative Binomial distribution can alleviate this limitation. We’ll explore that later.
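
A quick way to see this dual role is to simulate. A minimal sketch, assuming a hypothetical rate of λ = 4 listings per month:

```python
import numpy as np

rng = np.random.default_rng(42)
lam = 4  # hypothetical average monthly listings

months = rng.poisson(lam, size=100_000)  # simulate many months of listing counts
print(months.mean(), months.var())       # both hover around 4
print(months.var() / months.mean())      # index of dispersion, close to 1
```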

Breaking down the probability mass function

Now that we know the smallest building blocks, let’s zoom out one step: what are λᵏ, 𝑒^⁻λ, and 𝑘!, and more importantly, what is each component’s function in the whole?

  • λᵏ is a weight that expresses how likely it is for 𝑘 events to happen, given that the expectation is λ. Note that “likely” here does not mean a probability, yet. It’s merely a signal strength.
  • 𝑘! is a combinatorial correction so that we can say that the order of the events is irrelevant. The events are interchangeable.
  • 𝑒^⁻λ normalises the PMF so that it sums to 1 over all 𝑘. In exponential-family terms, it is the inverse of the partition function.

In more detail, λᵏ relates the observed value 𝑘 to the expected value of the random variable, λ. Intuitively, more probability mass lies around the expected value. Hence, if the observed value lies close to the expectation, its probability is larger than that of an observation far removed from the expectation. Before we can cross-check this intuition with the numerical behaviour of λᵏ, we need to consider what 𝑘! does.

Interchangeable events

Had we cared about the order of events, then each unique set of 𝑘 events could be ordered in 𝑘! ways. But because we don’t — we deem the events interchangeable — we “divide out” 𝑘! from λᵏ to correct for the overcounting.

Since λᵏ is an exponential term, its output keeps growing as 𝑘 grows (for λ > 1), holding λ constant. That contradicts our intuition that the probability should peak around 𝑘 = λ. But now that we know about the interchangeable events assumption — and the overcounting issue — we know that we have to factor in 𝑘!, like so: λᵏ 𝑒^⁻λ / 𝑘!, to see the behaviour we expect.

Now let’s check the intuition of the relationship between λ and 𝑘 through λᵏ, corrected for 𝑘!. For the same λ, say λ = 4, λᵏ/𝑘! should be smaller for values of 𝑘 far removed from 4 than for values of 𝑘 close to 4 (the common factor 𝑒^⁻λ doesn’t change the comparison). Like so: 4²/2! = 8 is smaller than 4⁴/4! ≈ 10.7. This is consistent with the intuition of a higher likelihood of 𝑘 when it’s near the expectation. The image below shows this relationship more generally: the output is larger as 𝑘 approaches λ.

Graph: The probability mass function without the normalising component e^-lambda.
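
To see the same pattern numerically, here is a small sketch that prints the un-normalised weight λᵏ/𝑘! for λ = 4; the constant factor 𝑒^⁻λ is left out because it rescales every 𝑘 equally.

```python
import math

lam = 4
for k in range(9):
    weight = lam**k / math.factorial(k)  # e^(-lambda) omitted: it scales all k equally
    print(k, round(weight, 2))
# The weight climbs towards k = 3-4 and falls off afterwards,
# matching the intuition that mass concentrates around lambda.
```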

The assumptions

First, let’s get one thing off the table: the difference between a Poisson process and the Poisson distribution. The process is a stochastic, continuous-time model of points occurring in a given space: in 1D, a line; in 2D, an area; or in higher dimensions. We, data scientists, most often deal with the one-dimensional case, where the “line” is time and the points are the events of interest — I dare say.

These are the assumptions of the Poisson process:

  1. The occurrence of one event does not affect the probability of a second event. Think of our seller going on to list another item tomorrow indifferently of having done so already today, or the one from five days ago for that matter. The point here is that there is no memory between events.
  2. The average rate at which events occur is independent of any occurrence. In other words, no event that happened (or will happen) alters λ, which remains constant throughout the observed timeframe. In our seller example, this means that listing an item today does not increase or decrease the seller’s motivation or likelihood of listing another item tomorrow.
  3. Two events cannot occur at exactly the same instant. If we were to zoom in at an infinitely granular level on the timescale, no two listings could have been placed simultaneously; they always happen sequentially.

From these assumptions — no memory, constant rate, events happening alone — it follows that 1) the number of events in any interval of length 𝑡 is Poisson-distributed with parameter λ𝑡, and 2) disjoint intervals are independent — two key properties of a Poisson process.
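
These assumptions are easy to turn into a simulation. A minimal sketch of a homogeneous Poisson process, assuming a hypothetical λ = 4 listings per month: memoryless waiting times are exponential, and counting the arrivals per unit interval should give counts whose mean and variance are both close to λ.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 4.0          # hypothetical constant rate: 4 listings per month
n_months = 10_000

# No memory between events => exponential waiting times between listings
waits = rng.exponential(scale=1 / lam, size=int(2 * lam * n_months))
arrival_times = np.cumsum(waits)
arrival_times = arrival_times[arrival_times < n_months]

# Count how many listings fall in each (monthly) unit interval
counts = np.bincount(arrival_times.astype(int), minlength=n_months)
print(counts.mean(), counts.var())  # both close to lambda = 4
```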

A Note on the distribution:
The distribution simply describes probabilities for various numbers of counts in an interval. Strictly speaking, one can use the distribution pragmatically whenever the data are nonnegative counts, can be unbounded on the right, have mean λ, and the distribution models the data reasonably well. It is just more convenient when the underlying process is a Poisson one, because that actually justifies using the distribution.

The marketplace example: Implications

So, can we justify using the Poisson distribution for our marketplace example? Let’s open up the assumptions of a Poisson process and take the test.

Constant λ

  • Why it may fail: The seller has patterned online activity; holidays; promotions; listings are seasonal goods.
  • Consequence: λ is not constant, leading to overdispersion (a variance-to-mean ratio larger than 1) or to temporal patterns.

Independence and memorylessness

  • Why it may fail: The propensity to list again is higher after a successful listing; or, conversely, listing once depletes the stock and lowers the propensity to list again.
  • Consequence: Two events are no longer independent, as the occurrence of one informs the occurrence of the other.

Simultaneous events

  • Why it may fail: Batch-listing, a new feature, was introduced to help the sellers.
  • Consequence: Multiple listings would come online at the same time, clumped together, and they would be counted simultaneously.

Balancing rigour and pragmatism

As data scientists on the job, we may feel trapped between rigour and pragmatism. The three steps below should give you a sound foundation for deciding on which side to err when the Poisson distribution falls short:

  1. Pinpoint your goal: is it inference, simulation or prediction, and is it about high-stakes output? List the worst thing that can happen, and the cost of it for the business.
  2. Identify the problem and solution: why does the Poisson distribution not fit, and what can you do about it? List 2-3 solutions, including changing nothing.
  3. Balance gains and costs: Will your workaround improve things, or make them worse? And at what cost: interpretability, new assumptions introduced, and resources used? Does it help you achieve your goal?

That said, here are some counters I use when needed.

When real life deviates from your model

Everything described so far pertains to the standard, or homogeneous, Poisson process. But what if reality begs for something different?

In the next section, we’ll cover two extensions of the Poisson distribution for when the constant-λ assumption does not hold. These are not mutually exclusive, but neither are they the same:

  1. Time-varying λ: a single seller whose listing rate ramps up before holidays and slows down afterward.
  2. Mixed Poisson distribution: multiple sellers listing items, each with their own λ; this can be seen as a mixture of various Poisson processes.

Time-varying λ

The first extension allows λ to have its own value for each time 𝑡. The PMF then becomes

P(𝐾(𝑇) = 𝑘) = Λ(𝑇)ᵏ 𝑒^(−Λ(𝑇)) / 𝑘!

where the number of events 𝐾(𝑇) in an interval 𝑇 follows the Poisson distribution with a rate no longer equal to a fixed λ, but one equal to the integrated rate:

Λ(𝑇) = ∫_𝑇 λ(𝑡) d𝑡

More intuitively, integrating λ(𝑡) over an interval gives us a single number: the expected number of events in that interval. The integral will vary by interval, and that’s what makes λ change over time. To understand how that integration works, it was helpful for me to think of it like this: if the interval 𝑡₀ to 𝑡₁ integrates to 3, and 𝑡₁ to 𝑡₂ integrates to 5, then the interval 𝑡₀ to 𝑡₂ integrates to 8 = 3 + 5. That’s the two expectations summed up, and now the expectation of the entire interval.

Practical implication
One may want to model the expected value of the Poisson distribution as a function of time. For instance, to model an overall change in trend, or seasonality. In generative model notation:

𝐾ₜ ~ Poisson(λ(𝑡))

Time may enter λ as a continuous variable, or through an arbitrary function of it.
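
As a sketch of what that could look like, assume a hypothetical rate function with a mild trend and a yearly seasonal swing (the functional form is made up for illustration). Integrating it over an interval gives the expected count for that interval, and the interval expectations add up, which then feeds the Poisson draw.

```python
import numpy as np

rng = np.random.default_rng(1)

def lam(t):
    # hypothetical rate: baseline + mild trend + yearly seasonality (t in months)
    return 4 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12)

def expected_count(t0, t1, steps=10_000):
    # numerically integrate lam(t) over [t0, t1]
    grid = np.linspace(t0, t1, steps, endpoint=False)
    return float(np.sum(lam(grid)) * (t1 - t0) / steps)

mu_a, mu_b = expected_count(0, 3), expected_count(3, 6)
print(mu_a, mu_b, expected_count(0, 6))   # the two pieces add up to the whole

# Counts per interval are then Poisson with those integrated rates
print(rng.poisson(mu_a), rng.poisson(mu_b))
```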

Process-varying λ: Mixed Poisson distribution

But then there’s a gotcha. Remember when I said that λ has a dual role as the mean and variance? That still applies here. Looking at the “relaxed” PMF*, the only thing that changes is that λ can vary freely with time. But it’s still the one and only λ that orchestrates both the expected value and the dispersion of the PMF*. More precisely, 𝔼[𝑋] = Var(𝑋) still holds.

There are various reasons for this constraint not to hold in reality. Model misspecification, event interdependence, and unaccounted-for heterogeneity could be the issues at hand. I’d like to focus on the last case, as it justifies the Negative Binomial distribution — one of the topics I promised to open up.

Heterogeneity and overdispersion
Imagine we are not dealing with one seller, but with 10 of them listing at different intensity levels, λᵢ, where 𝑖 = 1, 2, 3, …, 10 sellers. Then, essentially, we have 10 Poisson processes going on. If we unify the processes and estimate the grand λ, we simplify the mixture away. Meaning, we get a correct estimate of all sellers on average, but the resulting grand λ is naive and does not know about the original spread of λᵢ. It still assumes that the variance and mean are equal, as per the axioms of the distribution. This will lead to overdispersion and, in turn, to underestimated errors. Ultimately, it inflates the false positive rate and drives poor decision-making. We need a way to embrace the heterogeneity amongst sellers’ λᵢ.

Negative binomial: Extending the Poisson distribution
Among the few ways one can look at the Negative Binomial distribution, one is to see it as a mixed Poisson process — 10 sellers, sounds familiar yet? The counts of multiple Poisson processes, each with its own rate, are pooled into a single distribution. Mathematically, first we draw λ from a Gamma distribution: λ ~ Γ(r, θ); then we draw the count 𝑋 | λ ~ Poisson(λ).

In one image, it is as if we sampled from plenty of Poisson distributions, one for each seller.

Graph: A Negative Binomial distribution arises from many Poisson distributions.

The more revealing alias of the Negative Binomial distribution is the Gamma-Poisson mixture distribution, and now we know why: the dictating λ comes from a continuous mixture. That’s what we needed to explain the heterogeneity amongst sellers.

Let’s simulate this scenario to gain more intuition.

Graph: Gamma mixture of lambda.

First, we draw λᵢ from a Gamma distribution: λᵢ ~ Γ(r, θ). Intuitively, the Gamma distribution tells us about the variety in the intensity — listing rate — amongst the sellers.

On a practical note, one can encode their assumptions about the degree of heterogeneity in this step of the model: how different are sellers? By varying the level of heterogeneity, one can observe the impact on the final Poisson-like distribution. Doing this type of check (i.e., a posterior predictive check) is common in Bayesian modeling, where the assumptions are set explicitly.

Graph: Gamma-Poisson mixture distribution versus homogeneous Poisson distribution. The dashed line reflects λ, which is 4 for both distributions.

In the second step, we plug the obtained λ into the Poisson distribution: 𝑋 | λ ~ Poisson(λ), and obtain a Poisson-like distribution that represents the summed subprocesses. Notably, this unified process has a larger dispersion than expected from a homogeneous Poisson distribution, but it is in line with the Gamma mixture of λ.
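
Here is a minimal sketch of that two-step simulation. The Gamma(shape = 2, scale = 2) choice is arbitrary: it keeps the mixture mean at 4 while letting sellers differ, and the pooled counts come out visibly overdispersed compared to a single Poisson with λ = 4.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Homogeneous Poisson: every seller shares lambda = 4
homog = rng.poisson(4, size=n)

# Gamma-Poisson mixture: each seller draws her own lambda_i ~ Gamma(2, 2),
# which also has mean 4 but spreads the intensity across sellers
lambdas = rng.gamma(shape=2.0, scale=2.0, size=n)
mixed = rng.poisson(lambdas)

print(homog.mean(), homog.var())  # ~4, ~4
print(mixed.mean(), mixed.var())  # ~4, ~12: overdispersed, in line with the Gamma spread
```

For the Gamma-Poisson mixture, the variance works out to mean + mean²/shape = 4 + 16/2 = 12, which matches the Negative Binomial variance.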

Heterogeneous λ and inference

A practical consequence of introducing flexibility into your assumed distribution is that inference becomes more challenging. More parameters (i.e., the Gamma parameters) need to be estimated. Parameters act as flexible explainers of the data, tending to overfit and explain away variance in your variable. The more parameters you have, the better the explanation may seem, but the model also becomes more susceptible to noise in the data. Higher variance reduces the power to identify a difference in means, if one exists, because — well — it gets lost in the variance.

Countering the loss of power

  1. Confirm whether you indeed need to extend the standard Poisson distribution. If not, fall back to the simplest model that fits. A quick check on overdispersion may suffice for this (see the sketch after this list).
  2. Pin down the estimates of the Gamma mixture distribution parameters using regularising, informative priors (think: Bayes).
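
For the first point, one simple classical check is Fisher’s dispersion test: under a Poisson null, the statistic Σ(xᵢ − x̄)²/x̄ approximately follows a χ² distribution with n − 1 degrees of freedom. A minimal sketch, with monthly counts made up for illustration:

```python
import numpy as np
from scipy import stats

def dispersion_check(counts):
    """Index of dispersion plus Fisher's dispersion test against a Poisson null."""
    counts = np.asarray(counts, dtype=float)
    n, mean = len(counts), counts.mean()
    index = counts.var(ddof=1) / mean
    chi2_stat = ((counts - mean) ** 2).sum() / mean
    p_value = stats.chi2.sf(chi2_stat, df=n - 1)
    return index, p_value

# Hypothetical monthly listing counts for one seller
counts = [3, 5, 4, 8, 2, 6, 15, 4, 1, 7, 5, 12]
print(dispersion_check(counts))  # index ~2.8 with a small p-value: evidence of overdispersion
```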

During my research process for writing this blog, I learned a great deal about the connective tissue underlying all of this: how the binomial distribution plays a fundamental role in the processes we’ve discussed. And while I’d love to ramble on about this, I’ll save it for another post, perhaps. In the meantime, feel free to share your understanding in the comments section below 👍.

Conclusion

The Poisson distribution is a simple distribution that can be highly suitable for modelling count data. However, when the assumptions do not hold, one can extend the distribution by allowing the rate parameter to vary as a function of time or other factors, or by assuming subprocesses that collectively make up the count data. This added flexibility can address the limitations, but it comes at a cost: increased flexibility in your modelling raises the variance and, consequently, undermines the statistical power of your model.

If your end goal is inference, you may want to think twice and consider exploring simpler models for the data. Alternatively, switch to the Bayesian paradigm and leverage its built-in solution to regularise estimates: informative priors.

I hope this has given you what you came for — a better intuition about the Poisson distribution. I’d love to hear your thoughts about this in the comments!

Unless otherwise noted, all images are by the author.
Originally published at https://aalvarezperez.github.io on January 5, 2025.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

SATORP halts processing activities at Jubail refinery

Saudi Aramco Total Refinery & Petrochemicals Co.—a joint venture of Saudi Aramco (62.5%) and TotalEnergies SE (37.5%)—has temporarily shuttered units at its 460,000 b/d full-conversion refinery complex at Jubail, on Saudi Arabia’s eastern coast, following disruptions resulting from the ongoing war in the Middle East. In an Apr. 10 update

Read More »

Intel secures Google cloud and AI infrastructure deal

“Scaling AI requires more than accelerators – it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand,” said Lip-Bu Tan, CEO  of Intel in a statement. Google does offer custom Armv9-based Axion processors as an alternative to x86 based instances

Read More »

Broadcom strikes chip deals with Google, Anthropic

Anthropic said this week that the AI startup’s annual revenue run rate has now crossed $30 billion, up from about $9 billion the previous year. “We are making our most significant compute commitment to date to keep pace with our unprecedented growth,” said Krishna Rao, CFO of Anthropic, in a

Read More »

BW Energy granted 25-year extension of license offshore Gabon

BW Energy Gabon has received approval from the Ministry of Oil and Gas of the Gabonese Republic to extend the Dussafu Marin production license offshore Gabon, West Africa. The license period has been extended to 2053 from 2028, inclusive of three 5-year option periods from 2038 onwards. The prior contract was until 2038 inclusive of two 5-year option periods from 2028 onwards. The extra time “provides long-term visibility for production, investments, and reserve development” of the operator’s “core producing asset,” the company said in a release Apr. 7. Ongoing license projects include MaBoMo Phase 2, with planned first oil in second-half 2026, and the Bourdon development following its discovery last year. The timeline also “strengthens the foundation for future infrastructure‑led growth opportunities across the adjacent Niosi and Guduma licenses, both operated by BW Energy,” the company continued. The Dussafu Marin permit is a development and exploitation license with multiple discoveries and prospects lying within a proven oil and gas play fairway within Southern Gabon basin. To the northwest of the block is the Etame-Ebouri Trend, a collection of fields producing from the pre-salt Gamba and Dentale sandstones, and to the north are Lucina and M’Bya fields which produce from the syn-rift Lucina sandstones beneath the Gamba. Oil fields within the Dussafu Permit include Moubenga, Walt Whitman, Ruche, Ruche North East, Tortue, Hibiscus, and Hibiscus North. BW Energy Gabon is operator at Dussafu (73.50%) with partners Panoro Energy ASA (17.5%) and Gabon Oil Co. (9%). Dussafu.

Read More »

Santos plans development of North Slope’s Quokka Unit

Santos Ltd. has started development planning in the Quokka Unit on Alaska’s North Slope after further delineating the Nanushuk reservoir. The Quokka-1 appraisal well spudded on Jan. 1, 2026, about 6 six miles from the Mitquq-1 discovery well drilled in 2020. It was drilled to 4,787 ft TD and encountered a high-quality reservoir with about 143 ft of net oil pay in the Nanushuk formation, demonstrating an average porosity of 19%. Following a single stage fracture stimulation, the well achieved a flow rate of 2,190 bo/d. Reservoir sands correlated between the two discoveries, coupled with fluid analyses, confirm the presence of high‑quality, light‑gravity oil, supporting strong well performance and improved pricing relative to Pikka oil. Together with additional geological data, these results underpin the potential for a two‑drill‑site development with production capacity comparable to Pikka phase 1, the company said.  Rate and resource potential for the two-drill-site development is being evaluated. Resource estimation is ongoing and appraisal results will be evaluated as part of the FY26 contingent resource assessment. In FY25, Santos reported 2C contingent resources of 177 MMboe for the Quokka Unit. Based on these results, Santos has started development planning, including the initiation of key permitting activities. Santos is operator of the Quokka Unit (51%) with partner Repsol (49%).

Read More »

Fluor, Axens secure contracts for US grassroots refinery project

Fluor Corp. and Axens Group have been awarded key contracts for America First Refining’s (AFR) proposed grassroots refinery at the Port of Brownsville, Tex., advancing development of what would be the first new US refinery to be built in more than 50 years. Fluor will execute front-end engineering and design (FEED) for the project, while Axens will serve as technology licensor of core refining process technologies to be used at the site, the service providers said in separate Apr. 7 releases. The AFR refinery is designed to process more than 60 million bbl/year—or about 164,400 b/d—of US light shale crude into transportation fuels, including gasoline, diesel, and jet fuel. Contract details Without disclosing a specific value of its contract, Fluor said the scope of its FEED study will cover early-stage engineering and design required to define project execution, cost, and schedule based on a complex that will incorporate commercially proven technologies to improve efficiency and emissions performance while processing domestic shale crude. As technology licensor, Axens said it will deliver process technologies for key refining units at the site, including those for: Naphtha, diesel hydrotreating. Continuous catalytic reforming. Isomerization. Alongside supporting improved fuel-quality specifications, the unspecified technologies to be supplied for the refinery will also help to reduce overall energy consumption at the site. Axens—which confirmed its involvement since 2017 in working with AFR on early-stage development of the project—said this latest licensing agreement will also cover engineering support, equipment, catalysts, and services across the refinery’s process configuration. Project background, commercial framework Upon first announcing the project in March 2026, AFR said the proposed development came alongside an already signed 20-year offtake agreement with a global integrated oil company covering 1.2 billion bbl of US light shale crude, as well as capital investment to support construction. As part of the

Read More »

EIA: US crude inventories up 3.1 million bbl

US crude oil inventories for the week ended Apr. 3, excluding the Strategic Petroleum Reserve, increased by 3.1 million bbl from the previous week, according to data from the US Energy Information Administration (EIA). At 464.7 million bbl, US crude oil inventories are about 2% above the 5-year average for this time of year, the EIA report indicated. EIA said total motor gasoline inventories decreased by 1.6 million bbl from last week and are about 3% above the 5-year average for this time of year. Finished gasoline inventories increased while blending components inventories decreased last week. Distillate fuel inventories decreased by 3.1 million bbl last week and are about 5% below the 5-year average for this time of year. Propane-propylene inventories increased by 600,000 bbl from last week and are 71% above the 5-year average for this time of year, EIA said. US crude oil refinery inputs averaged 16.3 million b/d for the week ended Apr. 3, which was 129,000 b/d less than the previous week’s average. Refineries operated at 92% of capacity. Gasoline production decreased, averaging 9.4 million b/d. Distillate fuel production increased, averaging 5.0 million b/d. US crude oil imports averaged 6.3 million b/d, down 130,000 b/d from the previous week. Over the last 4 weeks, crude oil imports averaged about 6.6 million b/d, 9.1% more than the same 4-week period last year. Total motor gasoline imports averaged 571,000 b/d. Distillate fuel imports averaged 152,000 b/d.

Read More »

Oil prices plunge as Iran war tensions ease amid tentative Hormuz reopening

Crude oil prices plunged sharply on Apr. 7 after US President Donald Trump announced a conditional 2-week ceasefire agreement with Iran, contingent on reopening the Strait of Hormuz and restoring safe passage for energy shipments. Both Brent and WTI crude oil fell towards $95/bbl, marking their largest single-day decline since 2020. Under the agreement, Iran signaled willingness to halt attacks on shipping and allow transit through Hormuz while broader negotiations continue. The US also indicated it would assist in clearing a backlog of tankers and stabilizing maritime traffic. Benchmark crude prices initially surged above $110/bbl in early April amid fears of prolonged supply disruption after Iran effectively restricted traffic through the strait—a corridor responsible for roughly 20% of global oil flows. The blockade, triggered by escalating US-Iran hostilities, caused tanker traffic to collapse and stranded millions of barrels of crude and refined products in the region. Despite the price correction, analysts caution that supply disruptions and infrastructure damage will continue to constrain markets. The conflict has already impaired regional energy assets, including LNG infrastructure in Qatar, and forced producers across the Middle East to curtail output or delay exports. The US Energy Information Administration (EIA) warned that fuel prices may remain elevated for months even if flows normalize, citing logistical bottlenecks, depleted inventories, and continued geopolitical uncertainty. “In theory, the 10–13 million b/d of crude oil and product supply stranded behind the Strait should now be gradually released. Whether the pre-March status quo will be re-established depends entirely on whether the truce can be turned into a permanent peace during the negotiations in Pakistan,” said Tamas Varga, analyst, PVM Oil Associates. “What appears evident, at least for now, is that the current quarter, the April–June period, will be the tightest, as the scarcity of available oil, both crude and refined

Read More »

EIA: Brent crude to reach $115/bbl in second-quarter 2026

Global oil markets have entered a period of acute volatility, with prices expected to surge into second-quarter 2026 as war-driven supply disruptions in the Middle East constrain flows through the Strait of Hormuz, according to the US Energy Information Administration (EIA)’s April Short-Term Energy Outlook. The agency estimates that Brent crude averaged $103/bbl in March and will climb further to a quarterly peak of about $115/bbl in second-quarter 2026, reflecting a sharp tightening in global supply following widespread production shut-ins across key Gulf producers. The disruption stems from the effective closure of the Strait of Hormuz, a critical chokepoint that typically carries nearly 20% of global oil supply. The US-Iran war in the region has forced producers including Saudi Arabia, Iraq, Kuwait, and the UAE to curtail output significantly. EIA estimates that crude production shut-ins averaged 7.5 million b/d in March and will rise to a peak of 9.1 million b/d in April. In this outlook, EIA assumes the conflict does not persist past April and that traffic through the Strait of Hormuz gradually resumes. Under those assumptions, EIA expects production shut-ins will fall to 6.7 million b/d in May and return close to pre-conflict levels in late 2026. The scale of the outage has rapidly flipped the market from prior expectations of oversupply into a pronounced deficit, with global inventories drawing sharply during the second quarter. Despite an assumption that the conflict does not persist beyond April, the agency warns that supply chains will take months to normalize, keeping a geopolitical risk premium embedded in prices through late 2026. EIA forecasts the Brent crude oil price will fall below $90/bbl in fourth-quarter 2026 and average $76/bbl in 2027, about $23/bbl higher than in its February STEO forecast. This price forecast is highly dependent on EIA’s assumptions of both the

Read More »

OpenAI puts part of Stargate project on hold over runaway power costs

OpenAI has postponed plans to open one of the data centers central to its Stargate project. It announced its plan to open the data center in the UK with great fanfare last September, when it was regarded as a major boost for the country’s nascent AI industry, as well as proving a step up for OpenAI’s international credentials. At the time, Sam Altman, CEO of OpenAI, said, “The UK has been a longstanding pioneer of AI, and is now home to world-class researchers, millions of ChatGPT users, and a government that quickly recognized the potential of this technology.” All of that has been quietly forgotten. The plans for the data center in Northumberland, in the Northeast of England, have been put on hold, with the project ready to be revived when the conditions are ripe for major infrastructure investment, according to a report by the BBC.

Read More »

Neoclouds gain momentum in a supply-constrained world

And since they used the same hardware, both neoclouds and traditional cloud providers are subject to the same shortage problem. Component suppliers are reporting significant shortages due to demand for AI data centers and Synergy sees neoclouds also experiencing delays just like traditional cloud providers. “Demand is currently outstripping supply,” said Dinsmore. “It will take a while before that starts to come into more balance.” Among neoclouds, CoreWeave stands out as the most direct challenger to traditional hyperscale cloud providers. Meanwhile, OpenAI and Anthropic represent a distinct but increasingly important category, and that is platform-centric providers offering cloud-like access to foundational models and AI development environments. Synergy says that as demand for AI infrastructure accelerates, neoclouds are positioning themselves as focused alternatives to traditional hyperscale providers such as Amazon, Microsoft and Google.

Read More »

What is AI networking? How it adds intelligence to your infrastructure

The end goal is to make networks more reliable, efficient and performant. Enterprises are already seeing notable results when AI is applied to IT operations, including shorter deployment times, a decrease in trouble tickets, and faster time to resolution. With the help of AI, networks  will become more autonomous and self-healing (that is, able to address issues without the need for human intervention). In fact, Tier 1 and Tier 2 infrastructure is moving toward ‘no human in the loop,’ Nick Lippis, co-founder and co-chair of enterprise user community ONUG, recently told Network World. In time, humans will only need to step in for policy exceptions and high-risk decisions. “Layering in AI capabilities makes LAN management applications easier to use and more accessible across an organization,” Dell’Oro Group analyst Sian Morgan said. Gartner predicts that, by 2030, AI agents will drive most network activities, up from “minimal adoption” in 2025. The firm emphasizes that leaders who overlook the AI networking shift “risk higher MTTR [meantime to repair], rising costs, and growing security exposure.” The core components of AI networking It’s important to note that the use of AI and machine learning (ML) in network management is not new. AI for IT operations (AIOps), for instance, is a common practice that uses automation to improve broader IT operations. AI networking is specific to the network itself, covering domains including multi-cloud software, wired and wireless LAN, data center switching, SD-WAN and managed network services (MNS). The incorporation of generative AI, in particular, has brought AI networking to the fore, as enterprise leaders are rethinking every single aspect of their business, networking included.

Read More »

Aria Networks raises $125M and debuts its approach for AI-optimized networks

That embedded telemetry feeds adaptive tuning of Dynamic Load Balancing parameters, Data Center Quantized Congestion Notification (DCQCN) and failover logic without waiting for a threshold breach or a manual intervention. The platform architecture is layered. At the lowest levels, agents react in microseconds to link-level events such as transceiver flaps, rerouting leaf-spine traffic in milliseconds. At higher layers, agents make more strategic decisions about flow placement across the cluster. At the cloud layer, a large language model-based agent surfaces correlated insights to operators in natural language, allowing them to ask questions about specific jobs or alert conditions and receive context-aware responses. Karam argued that simply bolting an LLM onto an existing architecture does not deliver the same result. “If you ask it to do anything, it could hallucinate and bring down the network,” he said. “It doesn’t have any of the context or the data that’s required for this approach to be made safe.” Aria also exposes an MCP server, allowing external systems such as job schedulers and LLM routers to query network state directly and integrate it into their own decision-making. MFU and token efficiency as the target metrics Traditional networking is often evaluated in terms of bandwidth and latency. Aria is centering its platform around two metrics: Model FLOPS Utilization (MFU) and token efficiency. MFU is defined as the ratio of achieved FLOPS per accelerator to the theoretical peak. In practice, Karam said, MFU for training workloads typically runs between 33% and 45%, and inference often comes in below 30%. “The network has a major impact on the MFU, and therefore the token efficiency, because the network touches every aspect, every other component in your cluster,” Karam said.

Read More »

New v2 UALink specification aims to catch up to NVLink

But given there are no products currently available using UALink 1.0, UALink 2.0 might be viewed as a premature launch.

Need to play catch up

David Harold, senior analyst with Jon Peddie Research, was guarded in his reaction. “While 2.0 is a significant step forward from 1.0, we need to bear in mind that even 1.0 solutions aren’t shipping yet – they aren’t due until later this year. So, Nvidia is way ahead of the open alternatives on connectivity, indeed ahead of the proprietary or Ethernet based solutions too,” he said. What this means, he added, is that non-Nvidia alternatives are currently lagging in the market. “They need to play catch up on several fronts, not just networking. … I can’t think of a single shipping product that meaningfully has advantages over a Nvidia solution,” he said. “Ultimately UALink remains desirable since it will enable heterogeneous, multi-vendor environments but it’s quite a way behind NVLink today.”

There are plenty of signs that organizations will find it hard to break free of the Nvidia dominance, however. A couple of months ago, RISC-V pioneer SiFive signed a deal with Nvidia to incorporate Nvidia NVLink Fusion into its data center products, a departure for RISC companies. According to Harold, other companies could be joining it. “Custom ASIC company MediaTek is an NVLink partner, and they told me last week that they are planning to integrate it directly into next-generation custom silicon for AI applications,” he said. “This will enable a wider range of companies to use NVLink as their high-speed interconnect.”

Other options

And, Harold noted, Nvidia is already looking at other options. “Nvidia is now shifting to look at the copper limit for networking speed, with an interest in using optical connectivity instead,” said Harold.

Read More »

Nvidia’s SchedMD acquisition puts open-source AI scheduling under scrutiny

Is the concern valid? Dr. Danish Faruqui, CEO of Fab Economics, a US-based AI hardware and datacenter advisory, said the risk was real. “The skepticism that Nvidia may prioritize its own hardware in future software updates, potentially delaying or under-optimizing support for rivals, is a feasible outcome,” he said. As the primary developer, Nvidia now controls Slurm’s official development roadmap and code review process, Faruqui said, “which could influence how quickly competing chips are integrated on new development or continuous improvement elements.” Owning the control plane alongside GPUs and networking infrastructure such as InfiniBand, he added, allows Nvidia to create a tightly vertically integrated stack that can lead to what he described as “shallow moats, where advanced features are only available or performant on Nvidia hardware.”

One concrete test of that, industry observers say, will be how quickly Nvidia integrates support for AMD’s next-generation chips into Slurm’s codebase compared with how quickly it integrates its own forthcoming hardware and networking technologies, such as InfiniBand.

Does the Bright Computing precedent hold?

Analysts point to Nvidia’s 2022 acquisition of Bright Computing as a reference point, saying the software became optimized for Nvidia chips in ways that disadvantaged users of competing hardware. Nvidia disputed that characterization, saying Bright Computing supports “nearly any CPU or GPU-accelerated cluster.” Rawat said the comparison was instructive but imperfect. “Nvidia’s acquisition of Bright Computing highlights its preference for vertical integration, embedding Bright tightly into DGX and AI Factory stacks rather than maintaining a neutral, multi-vendor orchestration role,” he said. “This reflects a broader strategic pattern — Nvidia seeks to control the full-stack AI infrastructure experience.”

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads.

Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet it has become a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail.

Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge (a minimal sketch of that pattern follows below), and as models get cheaper (something we’ll cover below), companies can use three or more models to
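For readers unfamiliar with the LLM-as-a-judge pattern mentioned above, here is a minimal Python sketch; the `call_model` helper, the model names, and the scoring rubric are hypothetical placeholders for whatever client and models you actually use, not any vendor's API.

```python
# Minimal sketch of the LLM-as-a-judge pattern: one model drafts an answer, and
# several (typically cheaper) judge models score it independently; scores are averaged.
# `call_model` is a hypothetical placeholder, not a real library function.
from statistics import mean

def call_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call to `model`."""
    raise NotImplementedError("wire up your own LLM client here")

def judged_answer(task: str, worker: str, judges: list[str]) -> tuple[str, float]:
    draft = call_model(worker, task)
    rubric = (
        "Rate the following answer from 0 to 10 for correctness and completeness. "
        f"Reply with a single number.\n\nTask: {task}\n\nAnswer: {draft}"
    )
    scores = [float(call_model(judge, rubric)) for judge in judges]
    return draft, mean(scores)

# Usage sketch (model names are placeholders): three judges score one worker's output.
# answer, score = judged_answer("Summarize this incident report...",
#                               worker="model-a", judges=["model-b", "model-c", "model-d"])
```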

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »

Financial services

This page brings together essential resources to help financial institutions evaluate, adopt, and scale AI in regulated environments. Whether you’re exploring early use cases or

Read More »