
😲 Quantifying Surprise – A Data Scientist’s Intro To Information Theory – Part 1/4: Foundations

Surprise! Generated using Gemini.

During the telecommunication boom, Claude Shannon, in his seminal 1948 paper¹, posed a question that would revolutionise technology:

How can we quantify communication?

Shannon’s findings remain fundamental to expressing information quantification, storage, and communication. These insights made major contributions to the creation of technologies ranging from signal processing, data compression (e.g., Zip files and compact discs) to the Internet and artificial intelligence. More broadly, his work has significantly impacted diverse fields such as neurobiology, statistical physics and computer science (e.g, cybersecurity, cloud computing, and machine learning).

[Shannon’s paper is the] Magna Carta of the Information Age – Scientific American

This is the first article in a series that explores information quantification – an essential tool for data scientists. Its applications range from enhancing statistical analyses to serving as a go-to decision heuristic in cutting-edge machine learning algorithms.

Broadly speaking, quantifying information is assessing uncertainty, which may be phrased as: “how surprising is an outcome?”.

This article idea quickly grew into a series since I found this topic both fascinating and diverse. Most researchers, at one stage or another, come across commonly used metrics such as entropy, cross-entropy/KL-divergence and mutual information. Diving into this topic, I found that to fully appreciate these one needs to learn a bit about the basics, which we cover in this first article.

By reading this series you will gain an intuition and tools to quantify:

  • Bits/Nats – Unit measures of information.
  • Self-Information – The amount of information in a specific event.
  • Pointwise Mutual Information – The amount of information shared between two specific events.
  • Entropy – The average amount of information of a variable’s outcome.
  • Cross-entropy – The misalignment between two probability distributions (also expressed by its derivative KL-Divergence – a distance measure).
  • Mutual Information – The co-dependency of two variables by their conditional probability distributions. It expresses the information gain of one variable given another.

No prior knowledge is required – just a basic understanding of probabilities.

I demonstrate using common statistics such as coin and dice 🎲 tosses as well as machine learning applications such as supervised classification, feature selection, model monitoring and clustering assessment. As for real-world applications, I’ll discuss a case study of quantifying DNA diversity 🧬. Finally, for fun, I also apply these to the popular brain teaser commonly known as the Monty Hall problem 🚪🚪 🐐 .

Throughout I provide python code 🐍 , and try to keep formulas as intuitive as possible. If you have access to an integrated development environment (IDE) 🖥 you might want to plug 🔌 and play 🕹 around with the numbers to gain a better intuition.

This series is divided into four articles, each exploring a key aspect of Information Theory:

  1. 😲 Quantifying Surprise: 👈 👈 👈 YOU ARE HERE
    In this opening article, you’ll learn how to quantify the “surprise” of an event using self-information and understand its units of measurement, such as bits and nats. Mastering self-information is essential for building intuition about the subsequent concepts, as all later heuristics are derived from it.

  2. 🤷 Quantifying Uncertainty: Building on self-information, this article shifts focus to the uncertainty – or “average surprise” – associated with a variable, known as entropy. We’ll dive into entropy’s wide-ranging applications, from Machine Learning and data analysis to solving fun puzzles, showcasing its adaptability.
  3. 📏 Quantifying Misalignment: Here, we’ll explore how to measure the distance between two probability distributions using entropy-based metrics like cross-entropy and KL-divergence. These measures are particularly valuable for tasks like comparing predicted versus true distributions, as in classification loss functions and other alignment-critical scenarios.
  4. 💸 Quantifying Gain: Expanding from single-variable measures, this article investigates the relationships between two. You’ll discover how to quantify the information gained about one variable (e.g., target Y) by knowing another (e.g., predictor X). Applications include assessing variable associations, feature selection, and evaluating clustering performance.

Each article is crafted to stand alone while offering cross-references for deeper exploration. Together, they provide a practical, data-driven introduction to information theory, tailored for data scientists, analysts and machine learning practitioners.

Disclaimer: Unless otherwise mentioned the formulas analysed are for categorical variables with c≥2 classes (2 meaning binary). Continuous variables will be addressed in a separate article.

🚧 Articles (3) and (4) are currently under construction. I will share links once available. Follow me to be notified 🚧


Quantifying Surprise with Self-Information

Self-information is considered the building block of information quantification.

It is a way of quantifying the amount of “surprise” of a specific outcome.

Formally, self-information – also referred to as Shannon information or information content – quantifies the surprise of an event x occurring based on its probability, p(x). Here we denote it as hₓ:

hₓ = -log₂(p(x))

Self-information hₓ is the information of event x that occurs with probability p(x).

The units of measure are called bits. One bit (binary digit) is the amount of information for an event x that has a probability of p(x)=½. Let’s plug in to verify: hₓ=-log₂(½)=log₂(2)=1 bit.
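A quick sanity check of this in python (a minimal sketch, assuming numpy is available):

import numpy as np

print(-np.log2(0.5))  # 1.0 -> one bit, as expected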

This heuristic serves as an alternative to probabilities, odds and log-odds, with certain mathematical properties which are advantageous for information theory. We discuss these below when learning about Shannon’s axioms behind this choice.

It’s always informative to explore how an equation behaves with a graph:

Bernoulli trial self-information h(p). Key features: monotonic, h(p=1)=0, h(p→0)→∞.

To deepen our understanding of self-information, we’ll use this graph to explore the axioms that justify its logarithmic formulation. Along the way, we’ll also build intuition about key features of this heuristic.

To emphasise the logarithmic nature of self-information, I’ve highlighted three points of interest on the graph:

  • At p=1 an event is guaranteed, yielding no surprise and hence zero bits of information. A useful analogy is a trick coin (where both sides show HEADS).
  • Reducing the probability by a factor of two (p=½) increases the information to hₓ=1 bit. This, of course, is the case of a fair coin.
  • Further reducing it by a factor of four results in hₓ(p=⅛)=3 bits.

If you are interested in coding the graph, here is a python script (a minimal sketch using matplotlib that reproduces the figure above):
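import numpy as np
import matplotlib.pyplot as plt

# Self-information h(p) = -log2(p) of a Bernoulli outcome
p = np.linspace(0.001, 1, 1000)
h = -np.log2(p)

plt.plot(p, h)

# Highlight the three points of interest discussed above
for p_point in [1.0, 0.5, 0.125]:
    h_point = -np.log2(p_point)
    plt.scatter(p_point, h_point, zorder=3)
    plt.annotate(f"h({p_point:g}) = {h_point:g} bits", (p_point, h_point),
                 textcoords="offset points", xytext=(8, 8))

plt.xlabel("p(x)")
plt.ylabel("h(x) [bits]")
plt.title("Bernoulli trial self-information h(p) = -log2(p)")
plt.show()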

To summarise this section:

Self-Information hₓ=-log₂(p(x)) quantifies the amount of “surprise” of a specific outcome x.

Three Axioms

Referencing prior work by Ralph Hartley, Shannon chose -log₂(p) because it satisfies three axioms. We’ll use the equation and graph to examine how these are manifested:

  1. An event with probability 100% is not surprising and hence does not yield any information.
    In the trick coin case this is evident by p(x)=1 yielding hₓ=0.

  2. Less probable events are more surprising and provide more information.
    This is apparent from self-information decreasing monotonically with increasing probability.

  3. The property of Additivity – the total self-information of two independent events equals the sum of individual contributions. This will be explored further in the upcoming fourth article on Mutual Information.

There are mathematical proofs (which are beyond the scope of this series) that show that only the log function adheres to all three².
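To make the third axiom (additivity) concrete, here is a minimal sketch: two independent fair-coin flips each carry 1 bit, and their joint outcome (probability ¼) carries exactly 2 bits.

import numpy as np

p_heads = 0.5
h_single = -np.log2(p_heads)           # 1 bit for one fair coin flip

p_joint = p_heads * p_heads            # independence: p(x,y) = p(x)p(y)
h_joint = -np.log2(p_joint)            # 2 bits for the joint outcome

print(h_joint == h_single + h_single)  # True: h(x,y) = h(x) + h(y)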

The application of these axioms reveals several intriguing and practical properties of self-information:

Important properties:

  • Minimum bound: The first axiom hₓ(p=1)=0 establishes that self-information is non-negative, with zero as its lower bound. This is highly practical for many applications.
  • Monotonically decreasing: The second axiom ensures that self-information decreases monotonically with increasing probability.
  • No maximum bound: At the extreme where p→0, monotonicity leads to self-information growing without bound, hₓ(p→0)→∞, a feature that requires careful consideration in some contexts. However, when averaging self-information – as we will later see in the calculation of entropy – probabilities act as weights, effectively limiting the contribution of highly improbable events to the overall average. This relationship will become clearer when we explore entropy in detail.

It is useful to understand the close relationship to log-odds. To do so we define p(x) as the probability of event x happening and p(¬x)=1-p(x) as the probability of it not happening. Then log-odds(x) = log₂(p(x)/p(¬x)) = h(¬x) - h(x).
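A quick numeric check of this identity (a minimal sketch; p_x=0.8 is an arbitrary choice for illustration):

import numpy as np

p_x = 0.8                                   # arbitrary example probability of event x
h_x = -np.log2(p_x)                         # h(x)
h_not_x = -np.log2(1 - p_x)                 # h(¬x)
log_odds = np.log2(p_x / (1 - p_x))

print(np.isclose(log_odds, h_not_x - h_x))  # True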

The main takeaways from this section are:

Axiom 1: An event with probability 100% is not surprising.

Axiom 2: Less probable events are more surprising and, when they occur, provide more information.

Self-information (1) decreases monotonically, (2) has a minimum bound of zero and (3) has no upper bound.

In the next two sections we further discuss units of measure and choice of normalisation.

Information Units of Measure

Bits or Shannons?

A bit, as mentioned, represents the amount of information associated with an event that has a 50% probability of occurring.

The unit is also sometimes referred to as a shannon, a naming convention proposed by mathematician and physicist David MacKay to avoid confusion with the term ‘bit’ in the context of digital processing and storage.

After some deliberation, I decided to use ‘bit’ throughout this series for several reasons:

  • This series focuses on quantifying information, not on digital processing or storage, so ambiguity is minimal.
  • Shannon himself, encouraged by mathematician and statistician John Tukey, used the term ‘bit’ in his landmark paper.
  • ‘Bit’ is the standard term in much of the literature on information theory.
  • For convenience – it’s more concise.

Normalisation: Log Base 2 vs. Natural

Throughout this series we use base 2 for logarithms, reflecting the intuitive notion of a 50% chance of an event as a fundamental unit of information.

An alternative commonly used in machine learning is the natural logarithm, which introduces a different unit of measure called nats (short for natural units of information). One nat corresponds to the information gained from an event occurring with a probability of 1/e where e is Euler’s number (≈2.71828). In other words, 1 nat = -ln(p=(1/e)).

The relationship between bits (base 2) and nats (natural log) is as follows:

1 bit = ln(2) nats ≈ 0.693 nats.

Think of it as similar to a currency exchange or converting centimetres to inches.
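Since the conversion is just a change of logarithm base, it takes one line of code (a minimal sketch, assuming numpy):

import numpy as np

p = 0.5
h_bits = -np.log2(p)  # 1 bit
h_nats = -np.log(p)   # ln(2) ≈ 0.693 nats

print(h_nats)                                  # ≈ 0.693: 1 bit = ln(2) nats
print(np.isclose(h_bits, h_nats / np.log(2)))  # True: divide by ln(2) to convert nats to bits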

In his seminal publication, Shannon explained that the optimal choice of base depends on the specific system being analysed (paraphrased slightly from his original work):

  • “A device with two stable positions […] can store one bit of information” (bit as in binary digit).
  • “A digit wheel on a desk computing machine that has ten stable positions […] has a storage capacity of one decimal digit.”³
  • “In analytical work where integration and differentiation are involved the base e is sometimes useful. The resulting units of information will be called natural units.”

Key aspects of machine learning, such as popular loss functions, often rely on integrals and derivatives. The natural logarithm is a practical choice in these contexts because it can be differentiated and integrated without introducing additional constants: d/dp[-ln(p)] = -1/p, whereas d/dp[-log₂(p)] = -1/(p·ln(2)) drags the ln(2) constant along. This likely explains why the machine learning community frequently uses nats as the unit of information – it simplifies the mathematics by avoiding the need to account for factors like ln(2).

As shown earlier, I personally find base 2 more intuitive for interpretation. In cases where normalisation to another base is more convenient, I will make an effort to explain the reasoning behind the choice.

To summarise this section on units of measure:

bit = amount of information to distinguish between two equally likely outcomes.

Now that we are familiar with self-information and its unit of measure let’s examine a few use cases.

Quantifying Event Information with Coins and Dice

In this section, we’ll explore examples to help internalise the self-information axioms and key features demonstrated in the graph. Gaining a solid understanding of self-information is essential for grasping its derivatives, such as entropy, cross-entropy (or KL divergence), and mutual information – all of which are averages over self-information.

The examples are designed to be simple, approachable, and lighthearted, accompanied by practical Python code to help you experiment and build intuition.

Note: If you feel comfortable with self-information, feel free to skip these examples and go straight to the Quantifying Uncertainty article.

Generated using Gemini.

To further explore self-information and bits, I find coin flips and dice rolls particularly effective, as they are useful analogies for real-world phenomena. Formally, these can be described as multinomial trials with n=1 trial. Specifically:

  • A coin flip is a Bernoulli trial, where there are c=2 possible outcomes (e.g., heads or tails).
  • Rolling a die represents a categorical trial, where c≥3 outcomes are possible (e.g., rolling a six-sided or eight-sided die).

As a use case we’ll use simplistic weather reports limited to featuring sun 🌞 , rain 🌧 , and snow ⛄️.

Now, let’s flip some virtual coins 👍 and roll some funky-looking dice 🎲 …

Fair Coins and Dice

Generated using Gemini.

We’ll start with the simplest case of a fair coin (i.e., a 50% chance for success/heads or failure/tails).

Imagine an area where on any given day there is a 50:50 chance of sun or rain. We can write the probability of each event as p(🌞)=p(🌧)=½.

As seen above, according to the self-information formulation, when 🌞 or 🌧 is reported we are provided with h(🌞)=h(🌧)=-log₂(½)=1 bit of information.

We will continue to build on this analogy later on, but for now let’s turn to a variable that has more than two outcomes (c≥3).

Before we address the standard six-sided die, to simplify the maths and intuition, let’s assume an eight-sided one (c=8) as in Dungeons & Dragons and other tabletop games. In this case each event (i.e., landing on each side) has a probability of p(🔲)=⅛.

When a die lands on one side facing up, e.g., value 7️⃣, we are provided with h(🔲=7️⃣)=-log₂(⅛)=3 bits of information.

For a standard six-sided fair die: p(🔲)=⅙ → an event yields h(🔲)=-log₂(⅙)=2.58 bits.

Comparing the amount of information from the fair coin (1 bit), six-sided die (2.58 bits) and eight-sided die (3 bits), we identify the second axiom: the less probable an event is, the more surprising it is and the more information it yields.
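Reproducing these three numbers in python (a minimal sketch, assuming numpy):

import numpy as np

# One outcome of a fair coin, six-sided die and eight-sided die
for name, n_outcomes in [('coin', 2), ('6-sided die', 6), ('8-sided die', 8)]:
    h = -np.log2(1 / n_outcomes)
    print(f'fair {name}: p=1/{n_outcomes} -> h={h:.2f} bits')
# fair coin: p=1/2 -> h=1.00 bits
# fair 6-sided die: p=1/6 -> h=2.58 bits
# fair 8-sided die: p=1/8 -> h=3.00 bits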

Self-information becomes even more interesting when probabilities are skewed to prefer certain events.

Loaded Coins and Dice

Generated using Gemini.

Let’s assume a region where p(🌞 ) = ¾ and p(🌧 )= ¼.

When rain is reported the amount of information conveyed is not 1 bit but rather h(🌧 )=-log₂(¼)=2 bits.

When sun is reported less information is conveyed: h(🌞)=-log₂(¾)=0.42 bits.

As per the second axiom, a rarer event, like p(🌧)=¼, reveals more information than a more likely one, like p(🌞)=¾ – and vice versa.

To further drive this point let’s now assume a desert region where p(🌞 ) =99% and p(🌧 )= 1%.

If sunshine is reported – that is kind of expected – so nothing much is learnt (“nothing new under the sun” 🥁) and this is quantified as h(🌞 )=0.01 bits. If rain is reported, however, you can imagine being quite surprised. This is quantified as h(🌧 )=6.64 bits.

In the following python scripts you can examine all the above examples, and I encourage you to play with your own to get a feeling.

First let’s define the calculation and printout function:

import numpy as np

def print_events_self_information(probs):
    for ps in probs:
        print(f"Given distribution {ps}")
        for event in ps:
            if ps[event] != 0:
                self_information = -np.log2(ps[event]) #same as: -np.log(ps[event])/np.log(2) 
                text_ = f'When `{event}` occurs {self_information:0.2f} bits of information is communicated'
                print(text_)
            else:
                print(f'a `{event}` event cannot happen p=0 ')
        print("=" * 20)

Next we’ll set a few example distributions of weather frequencies

# Setting multiple probability distributions (each sums to 100%)
# Fun fact - 🐍 💚 Emojis!
probs = [
    {'🌞': 0.5, '🌧': 0.5},    # half-half
    {'🌞': 0.75, '🌧': 0.25},  # more sun than rain
    {'🌞': 0.99, '🌧': 0.01},  # mostly sunshine
]

print_events_self_information(probs)

This yields printout

Given distribution {'🌞': 0.5, '🌧': 0.5}
When `🌞` occurs 1.00 bits of information is communicated
When `🌧` occurs 1.00 bits of information is communicated
====================
Given distribution {'🌞': 0.75, '🌧': 0.25}
When `🌞` occurs 0.42 bits of information is communicated
When `🌧` occurs 2.00 bits of information is communicated
====================
Given distribution {'🌞': 0.99, '🌧': 0.01}
When `🌞` occurs 0.01 bits of information is communicated
When `🌧` occurs 6.64 bits of information is communicated

Let’s examine the case of a loaded three-sided die, e.g., the weather in an area that reports sun, rain and snow with uneven probabilities: p(🌞)=0.2, p(🌧)=0.7, p(⛄️)=0.1.

Running the following

print_events_self_information([{'🌞': 0.2, '🌧': 0.7, '⛄️': 0.1}])

yields

Given distribution {'🌞': 0.2, '🌧': 0.7, '⛄️': 0.1}
When `🌞` occurs 2.32 bits of information is communicated
When `🌧` occurs 0.51 bits of information is communicated
When `⛄️` occurs 3.32 bits of information is communicated

What we saw for the binary case applies to higher dimensions.

To summarise – we clearly see the implications of the second axiom:

  • When a highly expected event occurs – we do not learn much, the bit count is low.
  • When an unexpected event occurs – we learn a lot, the bit count is high.

Event Information Summary

In this article we embarked on a journey into the foundational concepts of information theory, defining how to measure the surprise of an event. The notions introduced serve as the bedrock of many tools in information theory, from assessing data distributions to unraveling the inner workings of machine learning algorithms.

Through simple yet insightful examples like coin flips and dice rolls, we explored how self-information quantifies the unpredictability of specific outcomes. Expressed in bits, this measure encapsulates Shannon’s second axiom: rarer events convey more information.

While we’ve focused on the information content of specific events, this naturally leads to a broader question: what is the average amount of information associated with all possible outcomes of a variable?

In the next article, Quantifying Uncertainty, we build on the foundation of self-information and bits to explore entropy – the measure of average uncertainty. Far from being just a beautiful theoretical construct, it has practical applications in data analysis and machine learning, powering tasks like decision tree optimisation, estimating diversity and more.

Claude Shannon. Credit: Wikipedia

Loved this post? ❤️🍕

💌 Follow me here, join me on LinkedIn or 🍕 buy me a pizza slice!

About This Series

Even though I have twenty years of experience in data analysis and predictive modelling I always felt quite uneasy about using concepts in information theory without truly understanding them.

The purpose of this series was to put me more at ease with concepts of information theory and hopefully provide for others the explanations I needed.

🤷 Quantifying Uncertainty – A Data Scientist’s Intro To Information Theory – Part 2/4: Entropy. Gain intuition into Entropy and master its applications in Machine Learning and Data Analysis. Python code included. 🐍 (medium.com)

Check out my other articles which I wrote to better understand Causality and Bayesian Statistics:

Footnotes

¹ A Mathematical Theory of Communication, Claude E. Shannon, Bell System Technical Journal 1948.

It was later republished as a book, The Mathematical Theory of Communication, in 1949.

[Shannon’s “A Mathematical Theory of Communication” is] the blueprint for the digital era – Historian James Gleick

² See the Wikipedia page on Information Content (i.e., self-information) for a detailed derivation showing that only the log function meets all three axioms.

³ The decimal-digit was later renamed to a hartley (symbol Hart), a ban or a dit. See Hartley (unit) Wikipedia page.

Credits

Unless otherwise noted, all images were created by the author.

Many thanks to Will Reynolds and Pascal Bugnion for their useful comments.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

North America Goes Back to Adding Rigs

North America added six rigs week on week, according to Baker Hughes’ latest North America rotary rig count, which was published on November 7. The total U.S. rig count increased by two week on week and the total Canada rig count increased by four during the same period, taking the total North America rig count up to 739, comprising 548 rigs from the U.S. and 191 rigs from Canada, the count outlined. Of the total U.S. rig count of 548, 527 rigs are categorized as land rigs, 19 are categorized as offshore rigs, and two are categorized as inland water rigs. The total U.S. rig count is made up of 414 oil rigs, 128 gas rigs, and six miscellaneous rigs, according to Baker Hughes’ count, which revealed that the U.S. total comprises 478 horizontal rigs, 59 directional rigs, and 11 vertical rigs. Week on week, the U.S. offshore and inland water rig counts remained unchanged, and the country’s land rig count increased by two, Baker Hughes highlighted. The U.S. oil rig count remained unchanged, its gas rig count increased by three, and its miscellaneous rig count dropped by one, week on week, the count showed. The U.S. horizontal and vertical rig counts remained unchanged week on week, while the country’s directional rig count increased by two during the period, the count revealed. A major state variances subcategory included in the rig count showed that, week on week, Louisiana added two rigs, Alaska and California each added one rig, and Texas and Wyoming each dropped one rig. A major state variances subcategory included in the rig count showed that, week on week, the Haynesville basin added one rig and the Cana Woodford, Eagle Ford, and Granite Wash basins each dropped one rig week on week. Canada’s total rig count of 191

Read More »

Oil Rises on Shutdown Hopes

Oil rose as a push to end the US government shutdown buoyed wider markets, with crude traders also looking toward a data-heavy week that will yield insights into whether a long-awaited global surplus is forming. West Texas Intermediate rose around 0.6% to settle above $60 a barrel after two weekly declines, while Brent closed around $64. In the US, the White House expressed support for a bipartisan deal to reopen the US government after its longest-ever shutdown. Markets took the progress as a breakthrough, with tech shares driving the equities rally. Crude has dropped in five of the past six weeks as jitters over surplus supply gained greater traction. The Organization of the Petroleum Exporting Countries and its allies have been loosening output curbs in an apparent effort to gain market share, while drillers from outside the alliance, including the US, have also been adding barrels. OPEC is due to release its monthly analysis on Wednesday, with the International Energy Agency issuing an annual energy outlook the same day, followed by its regular monthly snapshot on Thursday. US sanctions also remain in focus after the Trump administration last month targeted Russia’s Rosneft PJSC and Lukoil PJSC in a bid to raise pressure on the Kremlin to end its war in Ukraine. Governments across Europe and the Middle East are rushing to ensure Lukoil’s sprawling oil operations can keep running after the US sanctions and a quashed bid by energy merchant Gunvor Group for its assets last week. Iraq is said to have transferred operations at Lukoil’s West Qurna 2 field to two state firms in an effort to ensure production continues. Earlier in the day Lukoil declared force majeure, allowing it to exercise the right to skip contractual obligations on the field, according to a person familiar with the matter.

Read More »

After rate case, Con Edison Q3 electric revenues up 10.6% on flat sales

By the numbers: Consolidated Edison Q3 2025 -1.5% Consolidated Edison Co. of New York sold 15,692 million kWh in the third quarter of 2025, down slightly from 15,923 million kWh in the same period of 2024. After adjusting for weather and other variations, the utility said delivery volumes increased 0.3%. +10.6% CECONY’s sales of electricity reached $3.73 billion in the third quarter of 2025, up from $3.38 billion in the same period last year. $688 million Utility holding company Consolidated Edison saw its third quarter net income reach $688 million compared with $588 million in the same period of 2024. Rate case drives revenues Consolidated Edison Co. of New York, the electric utility serving New York City, warned customers heading into the summer that their bills would be going up. Now those sales are showing up on the utility parent company’s bottom line. Parent company Consolidated Edison also owns Orange and Rockland Utilities and Rockland Electric Co., as well as transmission and clean energy development businesses. “Third quarter 2025 results reflect [an] increase in electric rate base at CECONY,” the full name of the New York City distribution utility, the company said in its third quarter earnings presentation. Residential sales were flat but third quarter electric sale revenues at CECONY rose more than 5%, year over year. Commercial sales rose 3% and revenues jumped 13.9%. Retrieved from Consolidated Edison. And the utility has proposed spending almost $17 billion in New York City and Westchester County to build out its gas and electric systems from 2026 to 2028. Electric system spending is about $12 billion of the total.  If regulators approve, the spending plan “will fund critical infrastructure investments while keeping affordability and reliability front and center,” Chairman and CEO Tim Cawley said in a statement. “At the same time, the settlement

Read More »

Shell Cancels Plans to Build 2 Wind Farms Off Scotland

Shell Plc canceled plans to build two wind farms off the coast of Scotland as the British oil major pulls back from significant investments in the sector.  Shell had previously been in two joint ventures with Iberdrola SA’s ScottishPower Renewables division to develop the CampionWind farm and the MarramWind farm. The companies swapped stakes in the projects, leaving Shell as the sole owner of the CampionWind project. It subsequently returned the lease for the wind farm to Crown Estate Scotland, according to an emailed statement from Shell. “Shell believes that returning the CampionWind lease to CES will offer the best opportunity for any potential future the site may have,” the statement said. “Substantial pre-investment work has already been undertaken to de-risk the site, which Shell hopes will support any possible future.” Under Chief Executive Officer Wael Sawan, Shell has pulled back from previous ambitions to be a major developer of offshore wind farms. It also canceled plans earlier this year for a project in the US that has faced opposition from the Trump administration.  Shell won the leases for the two sites in a massive auction held in 2022. Since then, the cost of offshore wind has risen sharply, including for the nascent floating technology that will likely be deployed at the CampionWind and MarramWind projects. Bloomberg News previously reported that the company had sought to sell its stakes in Scottish wind projects.  ScottishPower Renewables will continue to develop the larger of the two sites, the 3-gigawatt MarramWind project off the northeast coast. It could be one of the first commercial floating wind farms in the world, the developer said in a statement.  WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will

Read More »

USA Natural Gas Price Pulls Back Before Skyrocketing

In an EBW Analytics Group report sent to Rigzone by the EBW team on Monday, Eli Rubin, an energy analyst at the company, noted that the December natural gas contract “pulled back to $4.268 [per million British thermal units (MMBtu)] intraday Friday before skyrocketing to test $4.509 [per MMBtu] this morning”. Rubin highlighted in the report that “a cold spell lifted weekend Henry Hub prices to a seven-month high of $3.76 [per MMBtu], LNG exports pushed a step-change higher to a record 18.1 Bcf/d [billion cubic feet per day], and weekend weather forecasts added eight HDDs [heating degree days] since Friday”. The EBW energy analyst went on to note in the report, however, that gas production set a record high on Sunday as Marcellus producers raised output into the first cold spell of the season. “Further, current cold weather may end mid-week, with daily weather-driven demand to slide 12.7 Bcf/d by Thursday,” Rubin added. “U.S. natural gas storage may reach Thanksgiving only slightly below 3,900 Bcf [billion cubic feet],” Rubin continued. Rubin went on to warn in the report that technical momentum may fizzle after having achieved the $4.50 per MMBtu target. “Chances for a cold December lie beyond the 1-15 day forecast window,” Rubin pointed out. “While another test of support is probable within the next 7-10 days, the bullish long-term structural outlook has limited both the duration and magnitude of any pullbacks – and may continue to offer support,” he added. In an exclusive interview with Rigzone on Monday morning, Art Hogan, Chief Market Strategist at B. Riley Wealth, said the fundamental backdrop continues to move natural gas prices higher. “Futures rose nearly three percent to around $4.45 per MMBtu, the highest since March and close to levels last seen in December 2022, lifted by strong export demand

Read More »

EOG Completes $5.7B Purchase of Encino Acquisition Partners

EOG Resources Inc has consummated its takeover of Encino Acquisition Partners (EAP) from the Canada Pension Plan (CPP) Investment Board and Encino Energy for $5.7 billion subject to post-closing adjustments. “In the Utica, the integration of the Encino assets is proceeding exceptionally well, with continued incremental efficiency gains”, EOG chair and chief executive Ezra Yacob said in the company’s quarterly report. The transaction involved the purchase of CPP’s 98 percent stake and Encino Energy’s two percent stake in EAP, which the two formed 2017, CPP said May 30 announcing the deal. The acquisition grows EOG’s Utica shale position by 675,000 net acres to 1.1 million net acres with over two billion net barrels of oil equivalent undeveloped resources, Houston, Texas-based EOG said in a separate statement May 30. “Pro forma production totals 275,000 barrels of oil equivalent per day creating a leading producer in the Utica shale play”, EOG said then. “The acquisition expands EOG’s core acreage in the volatile oil window, which averages 65 percent liquids production, by 235,000 net acres for a combined contiguous position of 485,000 net acres”, EOG said at the time. “In the natural gas window, the acquisition adds 330,000 net acres along with existing natural gas production with firm transportation exposed to premium end markets. “In the northern acreage, where the company has delivered outstanding well results, EOG increases its existing average working interest by more than 20 percent”. EOG raised its regular dividend by five percent to $1.02 per share in light of the transaction. “EOG expects to generate more than $150 million of synergies in the first year driven by lower capital, operating and debt financing costs”, the May statement said. Yacob said then, “This acquisition combines large, premier acreage positions in the Utica, creating a third foundational play for EOG alongside our Delaware Basin and Eagle Ford assets”. The acquisition

Read More »

Buyer’s guide to AI networking technology

Extreme Networks: AI management over AI hardware Extreme deliberately prioritizes AI-powered network management over building specialized hyperscale AI infrastructure, a pragmatic positioning for a vendor targeting enterprise and mid-market.Named a Leader in IDC MarketScape: Worldwide Enterprise Wireless LAN 2025 (October 2025) for AI-powered automation, flexible deployment options and expertise in high-density environments. The company specializes in challenging wireless environments including stadiums, airports and historic venues (Fenway Park, Lambeau Field, Dubai World Trade Center, Liverpool FC’s Anfield Stadium). Key AI networking hardware 8730 Switch: 32×400GbE QSFP-DD fixed configuration delivering 12.8 Tbps throughput in 2RU for IP fabric spine/leaf designs. Designed for AI and HPC workloads with low latency, robust traffic management and power efficiency. Runs Extreme ONE OS (microservices architecture). Supports integrated application hosting with dedicated CPU for VM-based apps. Available Q3 2025. 7830 Switch: High-density 100G/400G fixed-modular core switch delivering 32×100Gb QSFP28 + 8×400Gb QSFP-DD ports with two VIM expansion slots. VIM modules enable up to 64×100Gb or 24×400Gb total capacity with 12.8 Tbps throughput in 2RU. Powered by Fabric Engine OS. Announced May 2025, available Q3 2025. Wi-Fi 7 access points: AP4020 (indoor) and AP4060 (outdoor with external antenna support, GA September 2025) completing premium Wi-Fi 7 portfolio. Extreme Platform ONE:Generally available Q3 2025 with 265+ customers. Integrates conversational, multimodal and agentic AI with three agents (AI Expert, AI Canvas, Service AI Agent) cutting resolution times 98%. Includes embedded Universal ZTNA and two-tier simplified licensing. ExtremeCloud IQ: Cloud-based network management integrating wireless, wired and SD-WAN with AI/ML capabilities and digital twin support for testing configurations before deployment. Extreme Fabric: Native SPB-based Layer 2 fabric with sub-second convergence, automated macro and micro-segmentation and free licensing (no controllers required). Multi-area fabric architecture solves traditional SPB scaling limitations. Analyst Rankings: Market leadership in AI networking Foundry Each of the vendors has its

Read More »

Microsoft’s In-Chip Microfluidics Technology Resets the Limits of AI Cooling

Raising the Thermal Ceiling for AI Hardware As Microsoft positions it, the significance of in-chip microfluidics goes well beyond a novel way to cool silicon. By removing heat at its point of generation, the technology raises the thermal ceiling that constrains today’s most power-dense compute devices. That shift could redefine how next-generation accelerators are designed, packaged, and deployed across hyperscale environments. Impact of this cooling change: Higher-TDP accelerators and tighter packing. Where thermal density has been the limiting factor, in-chip microfluidics could enable denser server sleds—such as NVL- or NVL-like trays—or allow higher per-GPU power budgets without throttling. 3D-stacked and HBM-heavy silicon. Microsoft’s documentation explicitly ties microfluidic cooling to future 3D-stacked and high-bandwidth-memory (HBM) architectures, which would otherwise be heat-limited. By extracting heat inside the package, the approach could unlock new levels of performance and packaging density for advanced AI accelerators. Implications for the AI Data Center If microfluidics can be scaled from prototype to production, its influence will ripple through every layer of the data center, from the silicon package to the white space and plant. The technology touches not only chip design but also rack architecture, thermal planning, and long-term cost models for AI infrastructure. Rack densities, white space topology, and facility thermals Raising thermal efficiency at the chip level has a cascading effect on system design: GPU TDP trajectory. Press materials and analysis around Microsoft’s collaboration with Corintis suggest the feasibility of far higher thermal design power (TDP) envelopes than today’s roughly 1–2 kW per device. Corintis executives have publicly referenced dissipation targets in the 4 kW to 10 kW range, highlighting how in-chip cooling could sustain next-generation GPU power levels without throttling. Rack, ring, and row design. By removing much of the heat directly within the package, microfluidics could reduce secondary heat spread into boards and

Read More »

Designing the AI Century: 7×24 Exchange Fall ’25 Charts the New Data Center Industrial Stack

SMRs and the AI Power Gap: Steve Fairfax Separates Promise from Physics

If NVIDIA’s Sean Young made the case for AI factories, Steve Fairfax offered a sobering counterweight: even the smartest factories can’t run without power, and not just any power, but constant, high-availability, clean generation at a scale utilities are increasingly struggling to deliver. In his keynote “Small Modular Reactors for Data Centers,” Fairfax, president of Oresme and one of the data center industry’s most seasoned voices on reliability, walked through the long arc from nuclear fusion research to today’s resurgent interest in fission at modular scale. His presentation blended nuclear engineering history with pragmatic counsel for AI-era infrastructure leaders: SMRs are promising, but their road to reality is paved with physics, fuel, and policy, not PowerPoint.

From Fusion Research to Data Center Reliability

Fairfax began with his own story, a career that bridges nuclear reliability and data center engineering. As a young physicist and electrical engineer at MIT, he helped build the Alcator C-MOD fusion reactor, a 400-megawatt research facility that heated plasma to 100 million degrees with 3 million amps of current. The magnet system alone drew 265,000 amps at 1,400 volts, producing forces measured in millions of pounds. It was an extreme experiment in controlled power, and one that shaped his later philosophy: design for failure, test for truth, and assume nothing lasts forever.

When the U.S. cooled on fusion power in the 1990s, Fairfax applied nuclear reliability methods to data center systems, quantifying uptime and redundancy with the same math used for reactor safety. By 1994, he was consulting for hyperscale pioneers still calling 10 MW “monstrous.” Today’s 400 MW campuses, he noted, are beginning to look a lot more like reactors in their energy intensity, and increasingly, in their regulatory scrutiny.

Defining the Small Modular Reactor

Fairfax defined SMRs
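The excerpt’s numbers invite a quick check, and the reactor-style reliability math it alludes to is compact enough to sketch. The figures and formulas below are my own illustration, not Fairfax’s worked examples:

```python
# 1) Magnet power draw quoted in the excerpt: 265,000 A at 1,400 V.
magnet_mw = 265_000 * 1_400 / 1e6  # watts -> megawatts
print(f"Magnet system: ~{magnet_mw:.0f} MW")  # ~371 MW, in line with a 400 MW facility

# 2) Reactor-safety-style redundancy math applied to power feeds:
#    n independent parallel feeds fail only if all n fail at once.
def parallel_availability(a: float, n: int) -> float:
    return 1 - (1 - a) ** n

single = 0.999  # hypothetical feed with "three nines" availability
for n in (1, 2, 3):
    a = parallel_availability(single, n)
    downtime_min = (1 - a) * 365 * 24 * 60
    print(f"{n} feed(s): {a:.9f} availability, ~{downtime_min:.2f} min downtime/year")
```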

Read More »

Top network and data center events 2025 & 2026

Denise Dubie is a senior editor at Network World with nearly 30 years of experience writing about the tech industry. Her coverage areas include AIOps, cybersecurity, networking careers, network management, observability, SASE, SD-WAN, and how AI transforms enterprise IT. A seasoned journalist and content creator, Denise writes breaking news and in-depth features, and she delivers practical advice for IT professionals while making complex technology accessible to all. Before returning to journalism, she held senior content marketing roles at CA Technologies, Berkshire Grey, and Cisco. Denise is a trusted voice in the world of enterprise IT and networking.

Read More »

Google’s cheaper, faster TPUs are here, while users of other AI processors face a supply crunch

Opportunities for the AI industry

LLM vendors such as OpenAI and Anthropic, which still have relatively young code bases and are continuously evolving them, also have much to gain from the arrival of Ironwood for training their models, said Forrester vice president and principal analyst Charlie Dai. In fact, Anthropic has already agreed to procure 1 million TPUs for training its models and using them for inferencing. Other, smaller vendors using Google’s TPUs for training models include Lightricks and Essential AI.

Google has seen a steady increase in demand for its TPUs (which it also uses to run internal services) and is expected to buy $9.8 billion worth of TPUs from Broadcom this year, compared to $6.2 billion and $2.04 billion in 2024 and 2023 respectively, according to Harrowell. “This makes them the second-biggest AI chip program for cloud and enterprise data centers, just trailing Nvidia, with approximately 5% of the market. Nvidia owns about 78% of the market,” Harrowell said.

The legacy problem

While some analysts were optimistic about the prospects for TPUs in the enterprise, IDC research director Brandon Hoff said enterprises will most likely stay away from Ironwood, or TPUs in general, because their existing code base is written for other platforms. “For enterprise customers who are writing their own inferencing, they will be tied into Nvidia’s software platform,” Hoff said, referring to CUDA, the software platform that runs on Nvidia GPUs. CUDA was released to the public in 2007, while the first version of TensorFlow has only been around since 2015.
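For scale, the spending trajectory Harrowell cites works out to steep year-over-year growth. A quick computation (my arithmetic, using only the article’s figures):

```python
# Broadcom TPU spend trajectory quoted above, in USD.
spend = {2023: 2.04e9, 2024: 6.2e9, 2025: 9.8e9}

years = sorted(spend)
for prev, curr in zip(years, years[1:]):
    growth = (spend[curr] / spend[prev] - 1) * 100
    print(f"{prev} -> {curr}: ${spend[prev]/1e9:.2f}B -> ${spend[curr]/1e9:.2f}B ({growth:+.0f}% YoY)")
```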

Read More »

Cisco launches AI infrastructure, AI practitioner certifications

“This new certification focuses on artificial intelligence and machine learning workloads, helping technical professionals become AI-ready and successfully embed AI into their workflows,” said Pat Merat, vice president at Learn with Cisco, in a blog detailing the new AI Infrastructure Specialist certification. “The certification validates a candidate’s comprehensive knowledge in designing, implementing, operating, and troubleshooting AI solutions across Cisco infrastructure.”

Separately, the AITECH certification is part of the Cisco AI Infrastructure track, which complements its existing networking, data center, and security certifications. Cisco says the AITECH cert training is intended for network engineers, system administrators, solution architects, and other IT professionals who want to learn how AI impacts enterprise infrastructure. The training curriculum covers topics such as:

- Utilizing AI for code generation, refactoring, and using modern AI-assisted coding workflows.
- Using generative AI for exploratory data analysis, data cleaning, transformation, and generating actionable insights.
- Designing and implementing multi-step AI-assisted workflows and understanding complex agentic systems for automation.
- Learning AI-powered requirements, evaluating customization approaches, considering deployment strategies, and designing robust AI workflows.
- Evaluating, fine-tuning, and deploying pre-trained AI models, and implementing Retrieval Augmented Generation (RAG) systems.
- Monitoring, maintaining, and optimizing AI-powered workflows, ensuring data integrity and security.

AITECH certification candidates will learn how to use AI to enhance productivity, automate routine tasks, and support the development of new applications. The training program includes hands-on labs and simulations to demonstrate practical use cases for AI within Cisco and multi-vendor environments.
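Since the curriculum calls out Retrieval Augmented Generation, here is a minimal sketch of the core idea: retrieve the documents most similar to a query and prepend them to the model prompt. Everything below (the toy corpus, the bag-of-words stand-in for an embedding model, the stubbed `call_llm`) is hypothetical and purely illustrative; a real system would use an actual embedding model and vector store.

```python
from collections import Counter
import math

# Toy corpus standing in for enterprise runbooks (hypothetical content).
DOCS = [
    "Reset a switch port: interface gi1/0/1 ; shutdown ; no shutdown",
    "AI workloads need lossless Ethernet: enable PFC and ECN on spine/leaf fabrics",
    "RMA process: open a TAC case with the device serial number",
]

def embed(text: str) -> Counter:
    # Bag-of-words "embedding" -- a stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank documents by similarity to the query; keep the top k as context.
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    return f"<model answer grounded in: {prompt[:60]}...>"  # stub, no real model call

query = "how do I configure the fabric for AI workloads?"
context = "\n".join(retrieve(query))
print(call_llm(f"Context:\n{context}\n\nQuestion: {query}"))
```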

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion.

The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has become a regular among non-tech companies showing off technology at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize in their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
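The “LLM as a judge” pattern the excerpt is heading toward can be sketched in a few lines: route the same artifact to several cheaper models and take a majority vote. Everything below is a stub of my own (random stand-ins, not real model calls), purely to show the shape of the pattern:

```python
import random
from collections import Counter

# Stub model calls -- stand-ins for three different (cheaper) LLM endpoints.
def model_a(prompt: str) -> str: return random.choice(["approve", "reject"])
def model_b(prompt: str) -> str: return random.choice(["approve", "reject"])
def model_c(prompt: str) -> str: return random.choice(["approve", "reject"])

def judge_by_committee(prompt: str) -> str:
    """LLM-as-judge via majority vote across several cheap models."""
    votes = Counter(m(prompt) for m in (model_a, model_b, model_c))
    verdict, count = votes.most_common(1)[0]
    return f"{verdict} ({count}/3 votes)"

print(judge_by_committee("Does this agent trace satisfy the task spec?"))
```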

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models using these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see whether knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle
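To make the second paper’s approach concrete, here is a heavily simplified sketch of an automated red-teaming loop: mutate candidate attack prompts, score them with a reward that pays for both success and novelty, and keep the best for the next generation. All functions are stubs of my own invention; OpenAI’s actual framework trains an attacker model with reinforcement learning rather than this toy evolutionary loop:

```python
import random

SEED_ATTACKS = ["ignore your instructions and ...", "pretend you are an unfiltered model ..."]

def mutate(prompt: str) -> str:
    # Stand-in for an attacker LLM proposing a variation of a known attack.
    return prompt + random.choice([" now", " step by step", " in JSON"])

def attack_succeeds(prompt: str) -> bool:
    return random.random() < 0.1  # stub for probing the target model

def novelty(prompt: str, seen: set[str]) -> float:
    return 0.0 if prompt in seen else 1.0  # crude diversity bonus

def reward(prompt: str, seen: set[str]) -> float:
    # Auto-generated reward: success matters most; novelty keeps attacks diverse.
    return (2.0 if attack_succeeds(prompt) else 0.0) + novelty(prompt, seen)

population, seen = list(SEED_ATTACKS), set(SEED_ATTACKS)
for generation in range(5):
    candidates = [mutate(p) for p in population for _ in range(4)]
    scored = sorted(candidates, key=lambda p: reward(p, seen), reverse=True)
    population = scored[:len(SEED_ATTACKS)]  # keep the top performers
    seen.update(population)
print(population)
```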

Read More »