Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Bitcoin

Datacenter

Energy

Featured Articles

When You Just Can’t Decide on a Single Action

In Game Theory, the players typically have to make assumptions about the other players’ actions. What will the other player do? Will he use rock, paper or scissors? You never know, but in some cases, you might have an idea of the probability of some actions being higher than others. Adding such a notion of probability or randomness opens up a new chapter in game theory that lets us analyse more complicated scenarios. 

This article is the third in a four-chapter series on the fundamentals of game theory. If you haven’t checked out the first two chapters yet, I’d encourage you to do that to become familiar with the basic terms and concepts used in the following. If you feel ready, let’s go ahead!

Mixed Strategies

To the best of my knowledge, soccer is all about hitting the goal, although that happens very infrequently. Photo by Zainu Color on Unsplash

So far we have always considered games where each player chooses exactly one action. Now we will extend our games by allowing each player to select different actions with given probabilities, which we call a mixed strategy. If you play rock-paper-scissors, you do not know which action your opponent takes, but you might guess that they select each action with a probability of 33%, and if you play 99 games of rock-paper-scissors, you might indeed find your opponent to choose each action roughly 33 times. With this example, you directly see the main reasons why we want to introduce probability. First, it allows us to describe games that are played multiple times, and second, it enables us to consider a notion of the (assumed) likelihood of a player’s actions. 
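To make the notion of a mixed strategy concrete, here is a minimal Python sketch (not from the article; names are illustrative) that stores the rock-paper-scissors strategy described above as a probability distribution and samples actions from it:

```python
import random

# A mixed strategy is nothing more than a probability distribution over the available actions.
mixed_strategy = {"rock": 1/3, "paper": 1/3, "scissors": 1/3}

def play(strategy: dict[str, float]) -> str:
    """Sample one action according to the given mixed strategy."""
    actions, weights = zip(*strategy.items())
    return random.choices(actions, weights=weights, k=1)[0]

# Over 99 games we expect each action to come up roughly 33 times.
games = [play(mixed_strategy) for _ in range(99)]
print({action: games.count(action) for action in mixed_strategy})
```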

Let me demonstrate the latter point in more detail. We come back to the soccer game we saw in chapter 2, where the keeper decides on a corner to jump into and the other player decides on a corner to aim for.

A game matrix for a penalty kick.

If you are the keeper, you win (reward of 1) if you choose the same corner as the opponent and you lose (reward of -1) if you choose the other one. For your opponent, it is the other way round: they win if you select different corners. This game only makes sense if both the keeper and the opponent select a corner randomly. To be precise, if one player knows that the other always selects the same corner, they know exactly what to do to win. So, the key to success in this game is to choose the corner by some random mechanism. The main question now is: what probability should the keeper and the opponent assign to each corner? Would it be a good strategy to choose the right corner with a probability of 80%? Probably not. 

To find the best strategy, we need to find the Nash equilibrium, because that is the state where no player can get any better by changing their behaviour. In the case of mixed strategies, such a Nash equilibrium is described by a probability distribution over the actions, where no player wants to increase or decrease any probability anymore. In other words, it is optimal (because if it were not optimal, one player would want to change). We can find this optimal probability distribution if we consider the expected reward. As you might guess, the expected reward is composed of the reward (also called utility) the players get (which is given in the matrix above) times the likelihood of that reward.

Let’s say the shooter chooses the left corner with probability p and the right corner with probability 1-p. What reward can the keeper expect? Well, if they choose the left corner, they can expect a reward of p*1 + (1-p)*(-1). Do you see how this is derived from the game matrix? If the keeper chooses the left corner, there is a probability of p that the shooter chooses the same corner, which is good for the keeper (reward of 1). But with a chance of (1-p), the shooter chooses the other corner and the keeper loses (reward of -1). Likewise, if the keeper chooses the right corner, they can expect a reward of (1-p)*1 + p*(-1). Consequently, if the keeper chooses the left corner with probability q and the right corner with probability (1-q), the overall expected reward for the keeper is q times the expected reward for the left corner plus (1-q) times the expected reward for the right corner. 

Now let’s take the perspective of the shooter. He wants the keeper to be indifferent between the corners. In other words, he wants the keeper to see no advantage in either corner, so that the keeper chooses randomly. Mathematically, that means the expected rewards for both corners should be equal, i.e.

p*1 + (1-p)*(-1) = (1-p)*1 + p*(-1)

which can be solved to p=0.5. So the optimal strategy for the shooter to keep the keeper indifferent is to choose the left corner with a probability of p=0.5 and hence the right corner with an equal probability of 1-p=0.5. 
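Here is a minimal Python sketch (an illustration, not the article’s code) that encodes the keeper’s payoffs described above (+1 for guessing the same corner, -1 otherwise) and confirms that p=0.5 makes the keeper indifferent, while a shooter who aims left 80% of the time hands the keeper a clearly better corner:

```python
# Keeper's payoffs in the penalty game: +1 if both pick the same corner, -1 otherwise.
KEEPER_PAYOFF = {
    ("left", "left"): 1, ("left", "right"): -1,
    ("right", "left"): -1, ("right", "right"): 1,
}

def keeper_expected_reward(keeper_corner: str, p_shooter_left: float) -> float:
    """Keeper's expected reward for jumping into keeper_corner,
    given that the shooter aims left with probability p_shooter_left."""
    return (p_shooter_left * KEEPER_PAYOFF[(keeper_corner, "left")]
            + (1 - p_shooter_left) * KEEPER_PAYOFF[(keeper_corner, "right")])

# At p = 0.5 the keeper is indifferent (both corners give 0.0);
# at p = 0.8 the left corner is clearly better, so 80/20 is not an equilibrium.
for p in (0.5, 0.8):
    print(p, keeper_expected_reward("left", p), keeper_expected_reward("right", p))
```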

But now imagine a shooter who is well known for his tendency to choose the right corner. You might not expect a 50/50 probability for each corner, but instead assume he will choose the right corner with a probability of 70%. If the keeper stays with their 50/50 split for choosing a corner, their expected reward is 0.5 times the expected reward for the left corner plus 0.5 times the expected reward for the right corner:

0.5*(0.3*1 + 0.7*(-1)) + 0.5*(0.7*1 + 0.3*(-1)) = 0.5*(-0.4) + 0.5*0.4 = 0

That does not sound too bad, but there is a better option still. If the keeper always chooses the right corner (i.e., q=0), they get an expected reward of 0.4, which is better than 0. In this case, there is a clear best answer for the keeper, which is to favour the corner the shooter prefers. That, however, would lower the shooter’s reward. If the keeper always chooses the right corner, the shooter would get a reward of -1 with a probability of 70% (because the shooter themself chooses the right corner with a probability of 70%) and a reward of 1 in the remaining 30% of cases, which yields an expected reward of 0.7*(-1) + 0.3*1 = -0.4. That is worse than the reward of 0 they got when they chose 50/50. Do you remember that a Nash equilibrium is a state where no player has any reason to change their action unless another player does? This scenario is not a Nash equilibrium, because the shooter has an incentive to change their strategy back towards a 50/50 split, even if the keeper does not change theirs. The 50/50 split, however, is a Nash equilibrium, because in that scenario neither the shooter nor the keeper gains anything from changing their probability of choosing one corner or the other. 
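The arithmetic for the biased shooter can be checked in a few lines of Python; a sketch using the same payoffs and the 0.3/0.7 split from the text, where q denotes the keeper’s probability of jumping left, as defined earlier:

```python
# The shooter is known to aim right 70% of the time, i.e. left with probability 0.3.
p_left = 0.3

# Keeper's expected reward for each pure choice of corner:
reward_left = p_left * 1 + (1 - p_left) * (-1)    # 0.3 - 0.7 = -0.4
reward_right = (1 - p_left) * 1 + p_left * (-1)   # 0.7 - 0.3 = +0.4

# q is the keeper's probability of jumping left:
# mixing 50/50 gives 0, while always jumping right (q = 0) gives +0.4.
for q in (0.5, 0.0):
    print(q, q * reward_left + (1 - q) * reward_right)
```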

Fighting birds

Food can be a reason for birds to fight each other. Photo by Viktor Keri on Unsplash

From the previous example, we saw that a player’s assumptions about the other player’s actions influence the first player’s action selection as well. If a player wants to behave rationally (and this is what we always expect in game theory), they would choose actions such that they maximize their expected reward given the other players’ mixed strategies. In the soccer scenario, it is quite simple to jump into a corner more often if you assume that the opponent will choose that corner more often, so let us continue with a more complicated example that takes us outside into nature. 

As we walk across the forest, we notice some interesting behaviour in wild animals. Say two birds come to a place where there is something to eat. If you were a bird, what would you do? Would you share the food with the other bird, which means less food for both of you? Or would you fight? If you threaten your opponent, they might give in and you have all the food for yourself. But if the other bird is as aggressive as you, you end up in a real fight and you hurt each other. Then again you might have preferred to give in in the first place and just leave without a fight. As you see, the outcome of your action depends on the other bird. Preparing to fight can be very rewarding if the opponent gives in, but very costly if the other bird is willing to fight as well. In matrix notation, this game looks like this:

A matrix for a game that is sometimes called hawk vs. dove.

The question is, what would be the rational behaviour for a given distribution of birds who fight or give in? If you are in a very dangerous environment, where most birds are known to be aggressive fighters, you might prefer giving in to not get hurt. But if you assume that most other birds are cowards, you might see a potential benefit in preparing for a fight to scare the others away. By calculating the expected reward, we can figure out the exact proportions of birds fighting and birds giving in that form an equilibrium. Say the probability to fight is denoted p for bird 1 and q for bird 2; then the probability of giving in is 1-p for bird 1 and 1-q for bird 2. In a Nash equilibrium, no player wants to change their strategy unless any other player does. Formally, that means that both options need to yield the same expected reward. So, for bird 2, fighting with a probability of q needs to be as good as giving in with a probability of (1-q). This leads us to the following formula, which we can solve for q:

For bird 2 it would be optimal to fight with a probability of 1/3 and give in with a probability of 2/3, and the same holds for bird 1 because of the symmetry of the game. In a big population of birds, that would mean that a third of the birds are fighters, who usually seek the fight, and the other two-thirds prefer giving in. As this is an equilibrium, these ratios stay stable over time. If more birds became cowards who always give in, fighting would become more rewarding, as the chance of winning would increase. Then, however, more birds would choose to fight, the number of cowardly birds would decrease, and the stable equilibrium would be reached again. 
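The article’s payoff matrix is shown as an image and is not reproduced here, so the following sketch assumes a common hawk-vs-dove parameterisation (value of the food V = 2, cost of an escalated fight C = 6) that happens to reproduce the 1/3 fighting probability; the numbers are an assumption for illustration, not the article’s exact values:

```python
# Hypothetical hawk-vs-dove payoffs (an assumption, not the article's exact matrix):
# value of the food V = 2, cost of an escalated fight C = 6.
V, C = 2.0, 6.0
PAYOFF = {  # (my action, other bird's action) -> my reward
    ("fight", "fight"): (V - C) / 2,    # -2.0: both birds get hurt and split the food
    ("fight", "give_in"): V,            # +2.0: the other bird flees, I keep everything
    ("give_in", "fight"): 0.0,          #  0.0: I walk away unharmed but hungry
    ("give_in", "give_in"): V / 2,      # +1.0: the food is shared peacefully
}

def expected_reward(my_action: str, q_other_fights: float) -> float:
    """Expected reward of my_action if the other bird fights with probability q."""
    return (q_other_fights * PAYOFF[(my_action, "fight")]
            + (1 - q_other_fights) * PAYOFF[(my_action, "give_in")])

# At q = 1/3 fighting and giving in yield the same expected reward (2/3 each),
# so neither bird has an incentive to change its probabilities: a mixed equilibrium.
print(expected_reward("fight", 1/3), expected_reward("give_in", 1/3))
```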

Report a crime

There is nothing to see here. Move on and learn more about game theory. Photo by JOSHUA COLEMAN on Unsplash

Now that we have understood that we can find Nash equilibria by comparing the expected rewards of the different options, we will use this strategy on a more sophisticated example to show the power that game-theoretic analysis can have for realistic, complex scenarios. 

Say a crime happens in the middle of the city centre and there are multiple witnesses to it. The question is, who calls the police now? As there are many people around, everybody might expect others to call the police and hence refrain from doing it themself. We can model this scenario as a game again. Let’s say we have n players and everybody has two options, namely calling the police or not calling it. And what is the reward? For the reward, we distinguish three cases. If nobody calls the police, the reward is zero, because then the crime is not reported. If you call the police, you have some costs (e.g. the time you have to spend to wait and tell the police what happened), but the crime is reported, which helps keep your city safe. If somebody else reports the crime, the city is still kept safe, but you didn’t have the costs of calling the police yourself. Formally, we can write this down as follows:

reward = v, if somebody else calls the police
reward = v - c, if you call the police yourself
reward = 0, if nobody calls the police

v is the reward of keeping the city safe, which you get either if somebody else calls the police (first row) or if you call the police yourself (second row). In the second case, however, your reward is diminished a little by the costs c you have to bear. Let us assume that c is smaller than v, which means that the costs of calling the police never exceed the reward you get from keeping your city safe. In the last case, where nobody calls the police, your reward is zero.
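As a compact restatement of this reward structure, here is a minimal Python sketch; the function name, argument names, and the example values of v and c are illustrative, not from the article:

```python
def witness_reward(i_call: bool, somebody_else_calls: bool, v: float, c: float) -> float:
    """Reward of a single witness in the crime-reporting game described above."""
    if i_call:
        return v - c              # the crime is reported, but I bear the cost c
    if somebody_else_calls:
        return v                  # the crime is reported and I pay nothing
    return 0.0                    # nobody calls: the crime goes unreported

# Example with assumed values v = 1.0, c = 0.2:
print(witness_reward(True, False, v=1.0, c=0.2),   # 0.8
      witness_reward(False, True, v=1.0, c=0.2),   # 1.0
      witness_reward(False, False, v=1.0, c=0.2))  # 0.0
```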

This game looks a little different from the previous ones we had, mainly because we didn’t display it as a matrix. In fact, it is more complicated. We didn’t specify the exact number of players (we just called it n), and we also didn’t specify the rewards explicitly but just introduced some values v and c. However, this helps us model a quite complicated real situation as a game and will allow us to answer more interesting questions: First, what happens if more people witness the crime? Will it become more likely that somebody will report the crime? Second, how do the costs c influence the likelihood of the crime being reported? We can answer these questions with the game-theoretic concepts we have learned already. 

As in the previous examples, we will use the Nash equilibrium’s property that in an optimal state, nobody should want to change their action. That means that, for every individual, calling the police should be as good as not calling it, which leads us to the following formula:

v - c = v * (1 - P(nobody else calls the police))
v - c = v * (1 - (1-p)^(n-1))

On the left, you have the reward if you call the police yourself (v-c). This should be as good as a reward of v times the likelihood that anybody else calls the police. Now, the probability of anybody else calling the police is the same as 1 minus the probability that nobody else calls the police. If we denote the probability that an individual calls the police with p, the probability that a single individual does not call the police is 1-p. Consequently, the probability that two individuals don’t call the police is the product of the single probabilities, (1-p)*(1-p). For n-1 individuals (all individuals except you), this gives us the term (1-p) to the power of n-1 in the last row. We can solve this equation and finally arrive at:

v - c = v - v*(1-p)^(n-1)
c = v*(1-p)^(n-1)
(1-p)^(n-1) = c/v
p = 1 - (c/v)^(1/(n-1))

This last row gives you the probability of a single individual calling the police. What happens if there are more witnesses to the crime? If n gets larger, the exponent becomes smaller (1/(n-1) goes towards 0), which finally leads to:

p = 1 - (c/v)^0 = 1 - 1 = 0

Given that x to the power of 0 is always 1, p becomes zero. In other words, the more witnesses are around (higher n), the less likely it becomes that you call the police, and for an infinite number of other witnesses, the probability drops to zero. This sounds reasonable. The more other people are around, the more likely you are to expect that somebody else will call the police, and the smaller you see your own responsibility. Naturally, all other individuals will have the same chain of thought. But that also sounds a little tragic, doesn’t it? Does this mean that nobody will call the police if there are many witnesses? 

Well, not necessarily. We just saw that the probability of a single person calling the police declines with higher n, but there are still more people around. Maybe the sheer number of people around counteracts this diminishing probability. A hundred people with a small probability of calling the police each might still be worth more than a few people with moderate individual probabilities. Let us now take a look at the probability that anybody calls the police.

The probability that anybody calls the police is equal to 1 minus the probability that nobody calls the police. Like in the example before, the probability of nobody calling the police is described by 1-p to the power of n. We then use an equation we derived previously (see formulas above) to replace (1-p)^(n-1) with c/v. 

P(anybody calls the police) = 1 - (1-p)^n
= 1 - (1-p)*(1-p)^(n-1)
= 1 - (1-p)*(c/v)

When we look at the last line of our calculations, what happens for big n now? We already know that p drops to zero, leaving us with a probability of 1 - c/v. This is the likelihood that anybody will call the police if there are many people around (note that this is different from the probability that a single individual calls the police). We see that this likelihood heavily depends on the ratio of c and v. The smaller c, the more likely it is that anybody calls the police. If c is (close to) zero, it is almost certain that the police will be called, but if c is almost as big as v (that is, the costs of calling the police eat up the reward of reporting the crime), it becomes unlikely that anybody calls the police. This gives us a lever to influence the probability of reporting crimes. Calling the police and reporting a crime should be as effortless and low-threshold as possible.
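Putting the two formulas together, a short Python sketch shows both the vanishing individual probability and the limiting group probability 1 - c/v; the cost-to-reward ratio c/v = 0.2 is an assumed value purely for illustration:

```python
# Assumed cost-to-reward ratio c/v (illustrative value, not from the article).
c_over_v = 0.2

def individual_call_probability(n: int) -> float:
    """p = 1 - (c/v)^(1/(n-1)): probability that one particular witness calls."""
    return 1 - c_over_v ** (1 / (n - 1))

def anybody_calls_probability(n: int) -> float:
    """1 - (1-p)^n = 1 - (1-p)*(c/v): probability that at least one witness calls."""
    p = individual_call_probability(n)
    return 1 - (1 - p) * c_over_v

for n in (2, 5, 10, 100, 10_000):
    print(f"n={n:>6}: p={individual_call_probability(n):.4f}, "
          f"P(anybody calls)={anybody_calls_probability(n):.4f}")
# The individual probability shrinks towards 0, while the group probability
# approaches 1 - c/v = 0.8: the cheaper reporting is, the closer this gets to certainty.
```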

Summary

We have learned a lot about probabilities and choosing actions randomly today. Photo by Robert Stump on Unsplash

In this chapter on our journey through the realms of game theory, we have introduced so-called mixed strategies, which allowed us to describe games by the probabilities with which different actions are taken. We can summarize our key findings as follows: 

A mixed strategy is described by a probability distribution over the different actions.

In a mixed-strategy Nash equilibrium, all actions a player plays with positive probability must yield the same expected reward.

In mixed strategies, a Nash equilibrium means that no player wants to change the probabilities of their actions.

We can find out the probabilities of different actions in a Nash equilibrium by setting the expected rewards of two (or more) options equal.

Game-theoretic concepts allow us to analyze scenarios with an arbitrarily large number of players. Such analyses can also tell us how the exact shaping of the reward can influence the probabilities in a Nash equilibrium. This can be used to inspire decisions in the real world, as we saw in the crime-reporting example.

We are almost through with our series on the fundamentals of game theory. In the next and final chapter, we will introduce the idea of taking turns in games. Stay tuned!

References

The topics introduced here are typically covered in standard textbooks on game theory. I mainly used this one, which is written in German though:

Bartholomae, F., & Wiens, M. (2016). Spieltheorie. Ein anwendungsorientiertes Lehrbuch. Wiesbaden: Springer Fachmedien Wiesbaden.

An alternative in English language could be this one:

Espinola-Arredondo, A., & Muñoz-Garcia, F. (2023). Game Theory: An Introduction with Step-by-step Examples. Springer Nature.

Game theory is a rather young field of research, with the first main textbook being this one:

Von Neumann, J., & Morgenstern, O. (1944). Theory of Games and Economic Behavior. Princeton: Princeton University Press.

Like this article? Follow me to be notified of my future posts.

Read More »

EVOL X Fugro International Women’s Day special

Join Energy Voice News Editor Erikka Askeland, who speaks to two high-profile energy industry business leaders for International Women’s Day. We speak to Nicola Welsh, UK Country Director at geo-data specialist Fugro, alongside Linda Stewart, Director Marine Geophysical Europe, also at Fugro. Tune in to hear Nicola discuss her route from mining camps in the Australian outback to a senior leadership role, while Linda charts her 19-year career journey to become Fugro’s first female director in her role in Scotland. There’s serious discussion about leaning in, the “double bind” and what the IWD 2025 call to “accelerate action” really means. This special podcast also serves as the opening of Energy Voice’s highly anticipated Women in New Energy Event, which takes place in Aberdeen in June.

Read More »

Repsol to slash North Sea jobs

Repsol has blamed UK government tax “policies and adverse economic conditions” as it confirmed plans to cut jobs in its North Sea operations. The Spanish energy firm said 21 in-house roles could be cut, although it did not confirm how many jobs would have to go as it announced its “new and more efficient operating model”. However, all of the operator’s 1,000 North Sea staff and contractor roles will be up for review, with Petrofac and Altrad the firm’s biggest employers. Many firms are citing the general market and UK fiscal policies for the cuts. This week, North Sea decommissioning firm Well-Safe Solutions announced plans to cut dozens of jobs onshore as well as on its vessel, the Well-Safe Guardian. The firm, which has invested tens of millions in repurposing drilling rigs into units that can remove subsea oil and gas infrastructure, said the cuts were due to a business downturn caused by the “knock-on effects” of the windfall tax. “Repsol UK has undertaken a review of its operations at our offshore sites, which will result in a new and more efficient operating model. The health and safety of our people and delivery of safe operations remain our priority. “We remain committed to thrive in the UK North Sea basin, but the UK government’s policies and adverse economic conditions make these changes necessary. “There will be organisational changes, and we are in dialogue with the affected employees and will seek to redeploy where possible.” More to follow.

Read More »

BP CEO Sees 30 Pct Pay Cut After Profit Miss, Elliott Intervention

BP Plc Chief Executive Officer Murray Auchincloss’ total compensation dropped to £5.36 million ($6.91 million) in 2024, about 30% less than the previous year, after the energy giant’s profits disappointed. The London-based company’s 2024 earnings results reported in February showed a steep drop in profits compared with the previous year. That set the stage for a subsequent strategic switch back to oil and gas after years of shifting away from fossil fuels, as it strives to catch up with rivals such as Shell Plc which were quicker to pivot back to core businesses. While Auchincloss saw his base salary rise to £1.45 million from £1.02 million, his share awards dropped to £2.75 million from £4.36 million, according to the annual report published on Thursday. His annual bonus was sharply reduced in his first full year as boss. Auchincloss is in the middle of a roadshow meeting with investors in London in the hope of enlisting support for the company’s new direction. Activist investor Elliott Investment Management, which had bought about 5% of the oil major, is ramping up pressure on the company’s management after the new strategy fell short of its expectations. BP’s shares have declined about 6% since the strategy announcement on Feb. 26.  BP chair Helge Lund is looking for new board members who can bring skills and experience that align with the company’s revised oil and gas-focused strategy, he said in the annual report. The board is particularly keen to recruit an oil and gas expert, according to a person familiar with the matter who asked not to be identified because the information is private. Grafton Group Chair Ian Tyler was appointed to BP’s board to lead the remuneration committee, the company said Thursday. Tyler is also a director at Anglo American Plc. BP’s previous strategy, unveiled in 2020, focused on shifting away from oil

Read More »

Lenovo introduces entry-level, liquid cooled AI edge server

Lenovo has announced the ThinkEdge SE100, an entry-level AI inferencing server designed to make edge AI affordable for enterprises as well as small and medium-sized businesses. AI systems are not normally associated with being small and compact; they’re big, decked-out servers with lots of memory, GPUs, and CPUs. But the server is for inferencing, which is the less compute-intensive portion of AI processing, Lenovo stated. GPUs are considered overkill for inferencing, and there are multiple startups making small PC cards with inferencing chips on them instead of the more power-hungry CPUs and GPUs. This design brings AI to the data rather than the other way around. Instead of sending the data to the cloud or data center to be processed, edge computing uses devices located at the data source, reducing latency and the amount of data being sent up to the cloud for processing, Lenovo stated.

Read More »

Mayo Clinic’s secret weapon against AI hallucinations: Reverse RAG in action

Even as large language models (LLMs) become ever more sophisticated and capable, they continue to suffer from hallucinations: offering up inaccurate information, or, to put it more harshly, lying.  This can be particularly harmful in areas like healthcare, where wrong information can have dire results.  Mayo Clinic, one of the top-ranked hospitals in the U.S., has adopted a novel technique to address this challenge. To succeed, the medical facility must overcome the limitations of retrieval-augmented generation (RAG). That’s the process by which large language models (LLMs) pull information from specific, relevant data sources. The hospital has employed what is essentially backwards RAG, where the model extracts relevant information, then links every data point back to its original source content.  Remarkably, this has eliminated nearly all data-retrieval-based hallucinations in non-diagnostic use cases — allowing Mayo to push the model out across its clinical practice. “With this approach of referencing source information through links, extraction of this data is no longer a problem,” Matthew Callstrom, Mayo’s medical director for strategy and chair of radiology, told VentureBeat. Accounting for every single data point Dealing with healthcare data is a complex challenge — and it can be a time sink. Although vast amounts of data are collected in electronic health records (EHRs), data can be extremely difficult to find and parse out.  Mayo’s first use case for AI in wrangling all this data was discharge summaries (visit wrap-ups with post-care tips), with its models using traditional RAG. As Callstrom explained, that was a natural place to start because it is simple extraction and summarization, which is what LLMs generally excel at.  “In the first phase, we’re not trying to come up with a diagnosis, where

Read More »

Enbridge to Invest $1.39 Billion until 2028 in Mainline Pipeline

Enbridge Inc. has earmarked an investment of up to CAD 2 billion ($1.39 billion) until 2028 for a Canada-United States liquids pipeline with a capacity of about 3 million barrels a day of crude oil. That will be spent on “further enhancing and sustaining reliability and efficiency aimed at ensuring the Mainline system continues to operate safely and at full capacity to support maximum throughput for years to come”, the Calgary, Canada-based energy transporter and gas utility said in an online statement. Mainline, which started service seven decades ago and has grown to be Canada’s biggest crude conveyor, carries production from the Canadian province of Alberta to eastern Canada and the U.S. Midwest. Besides petroleum, it also transports refined products and natural gas liquids. Mainline stretches nearly 8,600 miles, according to Enbridge. The optimization will “support the growing need for ratable egress out of Alberta”, said chief executive Greg Ebel. Enbridge also announced additional investments in two pipelines: CAD 400 million for the BC Pipeline and CAD 100 million for the T15 project. The investment for the BC Pipeline is for the Birch Grove project under the pipeline’s T-North section. Expected to raise the BC Pipeline’s capacity by 179 million cubic feet per day to about 3.7 billion cubic feet a day by 2028, the Birch Grove project will provide additional egress for gas producers in northeastern British Columbia to access markets for their growing production, driven by the Montney formation. The investment for T15 phase 2 is meant for the installation of additional compression to double the original pipeline’s capacity. Expected to go onstream in 2027, the expanded pipeline will deliver around 510 million cubic feet a day of natural gas to Duke Energy Corp.’s Roxboro plant in North Carolina as it transitions from coal to gas-fired generation. The investments come despite President Donald Trump imposing tariffs

Read More »

ComEd offering $100M in rebates to drive EV growth in Illinois

Dive Brief: Chicago-based utility provider ComEd is offering $100 million in rebates to reduce the costs of installing electric vehicle charging hubs in homes, businesses and public sites around northern Illinois, the power company announced Feb. 6. The rebate program is part of a broader statewide initiative to promote widespread adoption of electric vehicles and get one million EVs on Illinois roads by 2030. “The ComEd rebates that support EV adoption and accelerate the expansion of charging infrastructure are pivotal in driving a sustainable future,” Megha Lakhchaura, state EV officer of Illinois, said in the announcement. “These initiatives will empower consumers to make cleaner choices and support the transition to zero emission transportation.” Dive Insight: ComEd’s rebate program is driven by the Climate and Equitable Jobs Act passed by Illinois lawmakers and signed by Gov. JB Pritzker in September 2021. In addition to pushing for EV adoption, it also required Illinois electric utilities to develop plans for rapid deployment of statewide charging infrastructure. The latest ComEd effort follows up its February 2024 program which provided $87 million in rebates to build out charging infrastructure in its service area, which includes the Chicago area and most of northern Illinois. The utility credits its initiative for helping to offset installation costs of nearly 4,000 residential and commercial Level 2 and Level 3 charging ports, as well as public and private charging stations. “ComEd is focused on ensuring that not only is the grid equipped for increased electrification, but that our customers and communities have the support needed to navigate the transition to EVs and the benefits they provide for customers as well as the environment,” Melissa Washington, ComEd SVP of customer operations and strategic initiatives, said in the release. ComEd’s 2025 program is providing: $53 million in rebates for business and

Read More »

Avista, PG&E, Ameren AI demonstrations show big potential – but are other utilities ready?

Utilities and system operators are discovering new ways for artificial intelligence and machine learning to help meet reliability threats in the face of growing loads, utilities and analysts say. There has been an “explosion into public consciousness of generative AI models,” according to a 2024 Electric Power Research Institute, or EPRI, paper. The explosion has resulted in huge 2025 AI financial commitments like the $500 billion U.S. Stargate Project and the $206 billion European Union fund. And utilities are beginning to realize the new possibilities. “Utility executives who were skeptical of AI even five years ago are now using cloud computing, drones, and AI in innovative projects,” said Electric Power Research Institute Executive Director, AI and Quantum, Jeremy Renshaw. “Utilities’ rapid adoption may make what is impossible today standard operating practice in a few years.” Concerns remain that artificial intelligence and machine learning, or AI/ML, algorithms could bypass human decision-making and cause the reliability failures they are intended to avoid. “But any company that has not taken its internal knowledge base into a generative AI model that can be queried as needed is not leveraging the data it has long paid to store,” said NVIDIA Senior Managing Director Marc Spieler. For now, humans will remain in the loop and AI/ML algorithms will allow better decision-making by making more, and more relevant, data available faster, he added. In real-world demonstrations, utilities and software providers are using AI/ML algorithms to improve tasks as varied as nuclear power plant design and electric vehicle, or EV, charging. But utilities and regulators must face the conundrum of making proprietary data more accessible for the new digital intelligence to increase reliability and reduce customer costs while also protecting it. The power system has already put AI/ML algorithms to work in cybersecurity applications

Read More »

Norway Opens Application for One CO2 Storage Exploration Area

Norway’s Energy Ministry has designated another area of the North Sea for application for licenses to explore the potential of carbon dioxide (CO2) storage. The acreage comprises defined blocks on the Norwegian side of the sea, upstream regulator the Norwegian Offshore Directorate said in an online statement. This is the eighth time acreage is being offered for CO2 storage exploration or exploitation on the Norwegian continental shelf, it noted. The application window for the latest acreage offer closes April 23. “In line with the regulations on transportation and storage of CO2 into subsea reservoirs on the continental shelf, the ministry normally expects to award an exploration license prior to awarding an exploitation license in a relevant area”, the Energy Ministry said separately. Norway has so far awarded 13 CO2 storage licenses: 12 for exploration and one for exploitation. Energy Minister Terje Aasland commented, “The purpose of allocating land is to be able to offer stakeholders in Europe large-scale CO2 storage on commercial terms”. Licensing for CO2 storage is part of Norwegian regulations passed December 2014 to support CO2 storage to mitigate climate change.  “Norway has great potential for storage on the continental shelf”, the ministry added. The Norwegian continental shelf holds a theoretical CO2 storage capacity of 80 billion metric tons, representing about 1,600 years of Norwegian CO2 emissions at current levels, according to a statement by the ministry April 30, 2024. In the latest awards two consortiums with Norway’s majority state-owned Equinor ASA won two exploration licenses in the North Sea. Equinor and London-based Harbour Energy PLC together won a permit straddling blocks 15/8, 15/9, 15/11 and 15/12. The permit, EXL012, lasts four years with three phases. Harbour Energy Norge AS holds a 60 percent stake as operator while Equinor Low Carbon Solution AS has 40 percent, according to a work

Read More »

MP for Truro and Falmouth calls for Cornwall offshore wind strategy

A Labour politician in Cornwall has called for the region to ramp up its domestic offshore wind supply chain. Jayne Kirkham, member of parliament for Truro and Falmouth, said: “At a recent Celtic Sea Power event, I saw just how many brilliant companies are doing amazing things here.” She made the comments months after The Crown Estate entered the second stage of leasing acreage in the Celtic Seas last autumn. “Cornwall has a long history of industrial innovation,” Kirkham said while meeting with marine construction firm MintMech in Penryn. “We’ve got the heritage and the expertise, now we need a strategy that ensures Cornwall maximises the benefits of offshore wind.” The Crown Estate entered the latest phase in its fifth offshore wind leasing round to establish floating offshore wind farms in the Celtic Sea, off the south-west of England and South Wales coast, in August. The second phase of the leasing round was launched, in which bidders must lay out plans to deliver new wind farms and explain how they will benefit local communities. The round has the potential to source up to 4.5GW of new wind capacity and spur investment in the local supply chain. Kirkham expressed hope that Cornish companies will soon be busy on UK projects. She said there are ongoing conversations with the National Energy System Operator (NESO) about ensuring potential wind energy hubs are well connected to the grid. The minister also referenced The Crown Estate’s £50 million Supply Chain Development Fund, which was launched to ensure the UK is prepared to meet offshore wind demands. The first £5m from the fund was awarded in 2024. Kirkham met with directors of Penryn-based marine construction firm MintMech in Jubilee Wharf to discuss the role Cornwall can play in the expansion of the UK’s offshore wind industry.

Read More »

Payroll in USA Oil and Gas Totals $168 Billion in 2024

Payroll in the U.S. oil and gas industry totaled $168 billion in 2024. That’s what the Texas Independent Producers & Royalty Owners Association (TIPRO) said in its latest State of Energy report, which was released this week, highlighting that this figure was “an increase of nearly $5 billion compared to the previous year”. Texas had the highest oil and gas payroll in the country in 2024, according to the report, which pointed out that this figure stood at $62 billion. The report outlined that California was “a distant second” with an oil and gas payroll figure of $15 billion, and that Louisiana was third, with an oil and gas payroll figure of $10 billion. Gasoline Stations with Convenience Stores had the highest U.S. oil and gas payroll by industry figure last year, at $26.8 billion, the report showed. Support Activities for Oil and Gas Operations had the second highest U.S. oil and gas payroll by industry figure in 2024, at $23.9 billion, and Crude Petroleum Extraction had the third highest, at $19.1 billion, the report outlined. The number of U.S. oil and gas businesses totaled 165,110, subject to revisions, TIPRO’s latest report stated. It highlighted that direct oil and natural gas Gross Regional Product exceeded $1 trillion last year and said the U.S. oil and natural gas industry purchased goods and services from over 900 different U.S. industry sectors in the amount of $865 billion in 2024. According to the report, Texas had the highest number of oil and gas businesses in the nation last year, with 23,549. This was followed by California, with 9,486 oil and gas businesses, Florida, with 7,695 oil and gas businesses, Georgia, with 6,453 oil and gas businesses, and New York, with 5,768 oil and gas businesses, the report outlined. The report noted that, in

Read More »

National Grid, Con Edison urge FERC to adopt gas pipeline reliability requirements

The Federal Energy Regulatory Commission should adopt reliability-related requirements for gas pipeline operators to ensure fuel supplies during cold weather, according to National Grid USA and affiliated utilities Consolidated Edison Co. of New York and Orange and Rockland Utilities. In the wake of power outages in the Southeast and the near collapse of New York City’s gas system during Winter Storm Elliott in December 2022, voluntary efforts to bolster gas pipeline reliability are inadequate, the utilities said in two separate filings on Friday at FERC. The filings were in response to a gas-electric coordination meeting held in November by the Federal-State Current Issues Collaborative between FERC and the National Association of Regulatory Utility Commissioners. National Grid called for FERC to use its authority under the Natural Gas Act to require pipeline reliability reporting, coupled with enforcement mechanisms, and pipeline tariff reforms. “Such data reporting would enable the commission to gain a clearer picture into pipeline reliability and identify any problematic trends in the quality of pipeline service,” National Grid said. “At that point, the commission could consider using its ratemaking, audit, and civil penalty authority preemptively to address such identified concerns before they result in service curtailments.” On pipeline tariff reforms, FERC should develop tougher provisions for force majeure events — an unforeseen occurence that prevents a contract from being fulfilled — reservation charge crediting, operational flow orders, scheduling and confirmation enhancements, improved real-time coordination, and limits on changes to nomination rankings, National Grid said. FERC should support efforts in New England and New York to create financial incentives for gas-fired generators to enter into winter contracts for imported liquefied natural gas supplies, or other long-term firm contracts with suppliers and pipelines, National Grid said. Con Edison and O&R said they were encouraged by recent efforts such as North American Energy Standard

Read More »

US BOEM Seeks Feedback on Potential Wind Leasing Offshore Guam

The United States Bureau of Ocean Energy Management (BOEM) on Monday issued a Call for Information and Nominations to help it decide on potential leasing areas for wind energy development offshore Guam. The call concerns a contiguous area around the island that comprises about 2.1 million acres. The area’s water depths range from 350 meters (1,148.29 feet) to 2,200 meters (7,217.85 feet), according to a statement on BOEM’s website. Closing April 7, the comment period seeks “relevant information on site conditions, marine resources, and ocean uses near or within the call area”, the BOEM said. “Concurrently, wind energy companies can nominate specific areas they would like to see offered for leasing. “During the call comment period, BOEM will engage with Indigenous Peoples, stakeholder organizations, ocean users, federal agencies, the government of Guam, and other parties to identify conflicts early in the process as BOEM seeks to identify areas where offshore wind development would have the least impact”. The next step would be the identification of specific WEAs, or wind energy areas, in the larger call area. BOEM would then conduct environmental reviews of the WEAs in consultation with different stakeholders. “After completing its environmental reviews and consultations, BOEM may propose one or more competitive lease sales for areas within the WEAs”, the Department of the Interior (DOI) sub-agency said. BOEM Director Elizabeth Klein said, “Responsible offshore wind development off Guam’s coast offers a vital opportunity to expand clean energy, cut carbon emissions, and reduce energy costs for Guam residents”. Late last year the DOI announced the approval of the 2.4-gigawatt (GW) SouthCoast Wind Project, raising the total capacity of federally approved offshore wind power projects to over 19 GW. The project owned by a joint venture between EDP Renewables and ENGIE received a positive Record of Decision, the DOI said in

Read More »

Biden Bars Offshore Oil Drilling in USA Atlantic and Pacific

President Joe Biden is indefinitely blocking offshore oil and gas development in more than 625 million acres of US coastal waters, warning that drilling there is simply “not worth the risks” and “unnecessary” to meet the nation’s energy needs.  Biden’s move is enshrined in a pair of presidential memoranda being issued Monday, burnishing his legacy on conservation and fighting climate change just two weeks before President-elect Donald Trump takes office. Yet unlike other actions Biden has taken to constrain fossil fuel development, this one could be harder for Trump to unwind, since it’s rooted in a 72-year-old provision of federal law that empowers presidents to withdraw US waters from oil and gas leasing without explicitly authorizing revocations.  Biden is ruling out future oil and gas leasing along the US East and West Coasts, the eastern Gulf of Mexico and a sliver of the Northern Bering Sea, an area teeming with seabirds, marine mammals, fish and other wildlife that indigenous people have depended on for millennia. The action doesn’t affect energy development under existing offshore leases, and it won’t prevent the sale of more drilling rights in Alaska’s gas-rich Cook Inlet or the central and western Gulf of Mexico, which together provide about 14% of US oil and gas production.  The president cast the move as achieving a careful balance between conservation and energy security. “It is clear to me that the relatively minimal fossil fuel potential in the areas I am withdrawing do not justify the environmental, public health and economic risks that would come from new leasing and drilling,” Biden said. “We do not need to choose between protecting the environment and growing our economy, or between keeping our ocean healthy, our coastlines resilient and the food they produce secure — and keeping energy prices low.” Some of the areas Biden is protecting

Read More »

Biden Admin Finalizes Hydrogen Tax Credit Favoring Cleaner Production

The Biden administration has finalized rules for a tax incentive promoting hydrogen production using renewable power, with lower credits for processes using abated natural gas. The Clean Hydrogen Production Credit is based on carbon intensity, which must not exceed four kilograms of carbon dioxide equivalent per kilogram of hydrogen produced. Qualified facilities are those whose start of construction falls before 2033. These facilities can claim credits for 10 years of production starting on the date of service placement, according to the draft text on the Federal Register’s portal. The final text is scheduled for publication Friday. Established by the 2022 Inflation Reduction Act, the four-tier scheme gives producers that meet wage and apprenticeship requirements a credit of up to $3 per kilogram of “qualified clean hydrogen”, to be adjusted for inflation. Hydrogen whose production process makes higher lifecycle emissions gets less. The scheme will use the Energy Department’s Greenhouse Gases, Regulated Emissions and Energy Use in Transportation (GREET) model in tiering production processes for credit computation. “In the coming weeks, the Department of Energy will release an updated version of the 45VH2-GREET model that producers will use to calculate the section 45V tax credit”, the Treasury Department said in a statement announcing the finalization of rules, a process that it said had considered roughly 30,000 public comments. However, producers may use the GREET model that was the most recent when their facility began construction. “This is in consideration of comments that the prospect of potential changes to the model over time reduces investment certainty”, explained the statement on the Treasury’s website. “Calculation of the lifecycle GHG analysis for the tax credit requires consideration of direct and significant indirect emissions”, the statement said. For electrolytic hydrogen, electrolyzers covered by the scheme include not only those using renewables-derived electricity (green hydrogen) but

Read More »

Xthings unveils Ulticam home security cameras powered by edge AI

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Xthings announced that its Ulticam security camera brand has a new model out today: the Ulticam IQ Floodlight, an edge AI-powered home security camera. The company also plans to showcase two additional cameras, Ulticam IQ, an outdoor spotlight camera, and Ulticam Dot, a portable, wireless security camera. All three cameras offer free cloud storage (seven days rolling) and subscription-free edge AI-powered person detection and alerts. The AI at the edge means that it doesn’t have to go out to an internet-connected data center to tap AI computing to figure out what is in front of the camera. Rather, the processing for the AI is built into the camera itself, and that sets a new standard for value and performance in home security cameras. It can identify people, faces and vehicles. CES 2025 attendees can experience Ulticam’s entire lineup at Pepcom’s Digital Experience event on January 6, 2025, and at the Venetian Expo, Halls A-D, booth #51732, from January 7 to January 10, 2025. These new security cameras will be available for purchase online in the U.S. in Q1 and Q2 2025 at U-tec.com, Amazon, and Best Buy. The Ulticam IQ Series: smart edge AI-powered home security cameras Ulticam IQ home security camera. The Ulticam IQ Series, which includes IQ and IQ Floodlight, takes home security to the next level with the most advanced AI-powered recognition. Among the very first consumer cameras to use edge AI, the IQ Series can quickly and accurately identify people, faces and vehicles, without uploading video for server-side processing, which improves speed, accuracy, security and privacy. Additionally, the Ulticam IQ Series is designed to improve over time with over-the-air updates that enable new AI features. Both cameras

Read More »

Intel unveils new Core Ultra processors with 2X to 3X performance on AI apps

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Intel unveiled new Intel Core Ultra 9 processors today at CES 2025 with as much as two or three times the edge performance on AI apps as before. The chips under the Intel Core Ultra 9 and Core i9 labels were previously codenamed Arrow Lake H, Meteor Lake H, Arrow Lake S and Raptor Lake S Refresh. Intel said it is pushing the boundaries of AI performance and power efficiency for businesses and consumers, ushering in the next era of AI computing. In other performance metrics, Intel said the Core Ultra 9 processors are up to 5.8 times faster in media performance, 3.4 times faster in video analytics end-to-end workloads with media and AI, and 8.2 times better in terms of performance per watt than prior chips. Intel hopes to kick off the year better than in 2024. CEO Pat Gelsinger resigned last month without a permanent successor after a variety of struggles, including mass layoffs, manufacturing delays and poor execution on chips including gaming bugs in chips launched during the summer. Intel Core Ultra Series 2 Michael Masci, vice president of product management at the Edge Computing Group at Intel, said in a briefing that AI, once the domain of research labs, is integrating into every aspect of our lives, including AI PCs where the AI processing is done in the computer itself, not the cloud. AI is also being processed in data centers in big enterprises, from retail stores to hospital rooms. “As CES kicks off, it’s clear we are witnessing a transformative moment,” he said. “Artificial intelligence is moving at an unprecedented pace.” The new processors include the Intel Core 9 Ultra 200 H/U/S models, with up to

Read More »

How to Spot and Prevent Model Drift Before it Impacts Your Business

Despite the AI hype, many tech companies still rely heavily on machine learning to power critical applications, from personalized recommendations to fraud detection. 

I’ve seen firsthand how undetected drifts can result in significant costs — missed fraud detection, lost revenue, and suboptimal business outcomes, just to name a few. So, it’s crucial to have robust monitoring in place if your company has deployed or plans to deploy machine learning models into production.

Undetected Model Drift can lead to significant financial losses, operational inefficiencies, and even damage to a company’s reputation. To mitigate these risks, it’s important to have effective model monitoring, which involves:

Tracking model performance

Monitoring feature distributions

Detecting both univariate and multivariate drifts

A well-implemented monitoring system can help identify issues early, saving considerable time, money, and resources.

In this comprehensive guide, I’ll provide a framework for how to think about and implement effective Model Monitoring, helping you stay ahead of potential issues and ensuring the stability and reliability of your models in production.

What’s the difference between feature drift and score drift?

Score drift refers to a gradual change in the distribution of model scores. If left unchecked, this could lead to a decline in model performance, making the model less accurate over time.

On the other hand, feature drift occurs when one or more features experience changes in the distribution. These changes in feature values can affect the underlying relationships that the model has learned, and ultimately lead to inaccurate model predictions.

Simulating score shifts

To model real-world fraud detection challenges, I created a synthetic dataset with five financial transaction features.

The reference dataset represents the original distribution, while the production dataset introduces shifts to simulate an increase in high-value transactions without PIN verification on newer accounts, indicating an increase in fraud.

Each feature has different underlying distributions:

Transaction Amount: Log-normal distribution (right-skewed with a long tail)

Account Age (months): clipped normal distribution between 0 and 60 (assuming a 5-year-old company)

Time Since Last Transaction: Exponential distribution

Transaction Count: Poisson distribution

Entered PIN: Binomial distribution.

To approximate model scores, I randomly assigned weights to these features and applied a sigmoid function to constrain predictions between 0 and 1. This mimics how a logistic regression fraud model generates risk scores.
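For readers who want to reproduce a similar setup, here is a minimal sketch (not the author’s actual code) of how such synthetic reference and production datasets could be generated. The column names, distribution parameters, and the ref_data, prod_data, numeric_cols and categorical_cols identifiers reused in the later snippets are illustrative assumptions:

import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

def make_dataset(n=10_000, amount_scale=1.0, age_shift=0, pin_p=0.9, count_lam=5):
    """Generate synthetic transactions; the keyword arguments shift the production set."""
    return pd.DataFrame({
        # Log-normal: right-skewed transaction amounts with a long tail
        "transaction_amount": rng.lognormal(mean=3.0, sigma=1.0, size=n) * amount_scale,
        # Normal clipped to 0-60 months (a 5-year-old company)
        "account_age_in_months": np.clip(rng.normal(30 + age_shift, 12, size=n), 0, 60),
        # Exponential: time since the last transaction
        "time_since_last_transaction": rng.exponential(scale=5.0, size=n),
        # Poisson: number of transactions
        "transaction_count": rng.poisson(lam=count_lam, size=n),
        # Binomial: whether a PIN was entered
        "entered_pin": rng.binomial(n=1, p=pin_p, size=n),
    })

def add_model_score(df, weights, ref_stats):
    """Mimic a logistic-regression-style fraud score: standardize with reference
    statistics, apply fixed random weights, and squash through a sigmoid."""
    z = (df - ref_stats["mean"]) / ref_stats["std"]
    out = df.copy()
    out["model_score"] = 1 / (1 + np.exp(-(z.values @ weights)))
    return out

ref_raw = make_dataset()
# Production drift: higher amounts, newer accounts, more transactions, fewer PINs
prod_raw = make_dataset(amount_scale=1.5, age_shift=-12, count_lam=8, pin_p=0.7)
ref_stats = {"mean": ref_raw.mean(), "std": ref_raw.std()}

weights = rng.uniform(-1, 1, size=ref_raw.shape[1])  # randomly assigned feature weights
ref_data = add_model_score(ref_raw, weights, ref_stats)
prod_data = add_model_score(prod_raw, weights, ref_stats)

numeric_cols = ["transaction_amount", "account_age_in_months",
                "time_since_last_transaction", "transaction_count"]
categorical_cols = ["entered_pin"]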

As shown in the plot below:

Drifted features: Transaction Amount, Account Age, Transaction Count, and Entered PIN all experienced shifts in distribution, scale, or relationships.

Distribution of drifted features (image by author)

Stable feature: Time Since Last Transaction remained unchanged.

Distribution of stable feature (image by author)

Drifted scores: As a result of the drifted features, the distribution in model scores has also changed.

Distribution of model scores (image by author)

This setup allows us to analyze how feature drift impacts model scores in production.

Detecting model score drift using PSI

To monitor model scores, I used population stability index (PSI) to measure how much model score distribution has shifted over time.

PSI works by binning continuous model scores and comparing the proportion of scores in each bin between the reference and production datasets. It compares the differences in proportions and their logarithmic ratios to compute a single summary statistic to quantify the drift.
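Written out, with p_i^ref and p_i^prod denoting the proportion of scores falling into bin i (out of B bins), the statistic computed below is:

\[ \mathrm{PSI} = \sum_{i=1}^{B} \left( p_i^{\mathrm{ref}} - p_i^{\mathrm{prod}} \right) \ln\!\left( \frac{p_i^{\mathrm{ref}}}{p_i^{\mathrm{prod}}} \right) \]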

Python implementation:

import numpy as np

# Define function to calculate PSI given two datasets
def calculate_psi(reference, production, bins=10):
    # Discretize scores into bins
    min_val, max_val = 0, 1
    bin_edges = np.linspace(min_val, max_val, bins + 1)

    # Calculate proportions in each bin
    ref_counts, _ = np.histogram(reference, bins=bin_edges)
    prod_counts, _ = np.histogram(production, bins=bin_edges)

    ref_proportions = ref_counts / len(reference)
    prod_proportions = prod_counts / len(production)

    # Avoid division by zero
    ref_proportions = np.clip(ref_proportions, 1e-8, 1)
    prod_proportions = np.clip(prod_proportions, 1e-8, 1)

    # Calculate PSI contribution for each bin and sum
    psi = np.sum((ref_proportions - prod_proportions) * np.log(ref_proportions / prod_proportions))

    return psi

# Calculate PSI for the model scores
psi_value = calculate_psi(ref_data['model_score'], prod_data['model_score'], bins=10)
print(f"PSI Value: {psi_value}")

Below is a summary of how to interpret PSI values:

PSI < 0.1: No drift, or very minor drift (distributions are almost identical).

0.1 ≤ PSI < 0.25: Some drift. The distributions are somewhat different.

0.25 ≤ PSI < 0.5: Moderate drift. A noticeable shift between the reference and production distributions.

PSI ≥ 0.5: Significant drift. There is a large shift, indicating that the distribution in production has changed substantially from the reference data.

Histogram of model score distributions (image by author)

The PSI value of 0.6374 suggests a significant drift between our reference and production datasets. This aligns with the histogram of model score distributions, which visually confirms the shift towards higher scores in production — indicating an increase in risky transactions.

Detecting feature drift

Kolmogorov-Smirnov test for numeric features

The Kolmogorov-Smirnov (K-S) test is my preferred method for detecting drift in numeric features because it is non-parametric, meaning it makes no assumptions about the shape of the underlying distribution.

The test compares a feature’s distribution in the reference and production datasets by measuring the maximum difference between the empirical cumulative distribution functions (ECDFs). The resulting K-S statistic ranges from 0 to 1:

0 indicates no difference between the two distributions.

Values closer to 1 suggest a greater shift.
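Formally, the K-S statistic is the largest vertical gap between the two empirical CDFs:

\[ D = \sup_x \left| F_{\mathrm{ref}}(x) - F_{\mathrm{prod}}(x) \right| \]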

Python implementation:

import pandas as pd
from scipy.stats import ks_2samp

# Create an empty dataframe to collect results
ks_results = pd.DataFrame(columns=['Feature', 'KS Statistic', 'p-value', 'Drift Detected'])

# Loop through all numeric features and perform the K-S test
for col in numeric_cols:
    ks_stat, p_value = ks_2samp(ref_data[col], prod_data[col])
    drift_detected = p_value < 0.05

    # Store results in the dataframe
    ks_results = pd.concat([
        ks_results,
        pd.DataFrame({
            'Feature': [col],
            'KS Statistic': [ks_stat],
            'p-value': [p_value],
            'Drift Detected': [drift_detected]
        })
    ], ignore_index=True)

Below are ECDF charts of the four numeric features in our dataset:

ECDFs of four numeric features (image by author)

Let’s look at the account age feature as an example: the x-axis represents account age (0-50 months), while the y-axis shows the ECDF for both reference and production datasets. The production dataset skews towards newer accounts, as a larger proportion of its observations have lower account ages.
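As a side note, an ECDF like the ones above takes only a few lines to plot. This is a minimal sketch that assumes the account-age column is named account_age_in_months, as in the correlation example later on:

import numpy as np
import matplotlib.pyplot as plt

def plot_ecdf(ref_values, prod_values, feature_name):
    # Plot the empirical CDF of one feature for the reference and production datasets
    for values, label in [(ref_values, "reference"), (prod_values, "production")]:
        x = np.sort(values)
        y = np.arange(1, len(x) + 1) / len(x)  # cumulative share of observations
        plt.step(x, y, where="post", label=label)
    plt.xlabel(feature_name)
    plt.ylabel("ECDF")
    plt.legend()
    plt.show()

plot_ecdf(ref_data["account_age_in_months"], prod_data["account_age_in_months"],
          "Account Age (months)")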

Chi-Square test for categorical features

To detect shifts in categorical and boolean features, I like to use the Chi-Square test.

This test compares the frequency distribution of a categorical feature in the reference and production datasets, and returns two values:

Chi-Square statistic: A higher value indicates a greater shift between the reference and production datasets.

P-value: A p-value below 0.05 suggests that the difference between the reference and production datasets is statistically significant, indicating potential feature drift.
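Under the hood, the test compares the observed counts O_ij in the two-row contingency table (reference vs. production) with the counts E_ij expected if both datasets shared the same category proportions:

\[ \chi^2 = \sum_{i} \sum_{j} \frac{\left( O_{ij} - E_{ij} \right)^2}{E_{ij}} \]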

Python implementation:

import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

# Create empty dataframe with corresponding column names
chi2_results = pd.DataFrame(columns=['Feature', 'Chi-Square Statistic', 'p-value', 'Drift Detected'])

for col in categorical_cols:
    # Get normalized value counts for both reference and production datasets
    ref_counts = ref_data[col].value_counts(normalize=True)
    prod_counts = prod_data[col].value_counts(normalize=True)

    # Ensure all categories are represented in both
    all_categories = set(ref_counts.index).union(set(prod_counts.index))
    ref_counts = ref_counts.reindex(all_categories, fill_value=0)
    prod_counts = prod_counts.reindex(all_categories, fill_value=0)

    # Create contingency table of counts (proportions scaled back up by dataset size)
    contingency_table = np.array([ref_counts * len(ref_data), prod_counts * len(prod_data)])

    # Perform Chi-Square test
    chi2_stat, p_value, _, _ = chi2_contingency(contingency_table)
    drift_detected = p_value < 0.05

    # Store results in chi2_results dataframe
    chi2_results = pd.concat([
        chi2_results,
        pd.DataFrame({
            'Feature': [col],
            'Chi-Square Statistic': [chi2_stat],
            'p-value': [p_value],
            'Drift Detected': [drift_detected]
        })
    ], ignore_index=True)

The Chi-Square statistic of 57.31 with a p-value of 3.72e-14 confirms a large shift in our categorical feature, Entered PIN. This finding aligns with the histogram below, which visually illustrates the shift:

Distribution of categorical feature (image by author)

Detecting multivariate shifts

Spearman Correlation for shifts in pairwise interactions

In addition to monitoring individual feature shifts, it’s important to track shifts in relationships or interactions between features, known as multivariate shifts. Even if the distributions of individual features remain stable, multivariate shifts can signal meaningful differences in the data.

By default, Pandas’ .corr() function calculates Pearson correlation, which only captures linear relationships between variables. However, relationships between features are often non-linear yet still follow a consistent trend.

To capture this, we use Spearman correlation to measure monotonic relationships between features. It captures whether features change together in a consistent direction, even if their relationship isn’t strictly linear.

To assess shifts in feature relationships, we compare:

Reference correlation (ref_corr): Captures historical feature relationships in the reference dataset.

Production correlation (prod_corr): Captures new feature relationships in production.

Absolute difference in correlation: Measures how much feature relationships have shifted between the reference and production datasets. Higher values indicate more significant shifts.

Python implementation:

# Calculate Spearman correlation matrices for both datasets
ref_corr = ref_data.corr(method='spearman')
prod_corr = prod_data.corr(method='spearman')

# Calculate the absolute difference between the correlation matrices
corr_diff = abs(ref_corr - prod_corr)

Example: Change in correlation

Now, let’s look at the correlation between transaction_amount and account_age_in_months:

In ref_corr, the correlation is 0.00095, indicating a weak relationship between the two features.

In prod_corr, the correlation is -0.0325, indicating a weak negative correlation.

Absolute difference in the Spearman correlation is 0.0335, which is a small but noticeable shift.

The absolute difference in correlation indicates a shift in the relationship between transaction_amount and account_age_in_months.

There used to be no relationship between these two features, but the production dataset indicates that there is now a weak negative correlation, meaning that newer accounts have higher transaction amounts. This is spot on!
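To read those numbers straight off the matrices computed earlier (using the column names assumed in the data-generation sketch):

# Look up the pair discussed above in the reference, production, and difference matrices
pair = ("transaction_amount", "account_age_in_months")
print(f"Reference Spearman correlation:  {ref_corr.loc[pair]:.5f}")
print(f"Production Spearman correlation: {prod_corr.loc[pair]:.5f}")
print(f"Absolute difference:             {corr_diff.loc[pair]:.5f}")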

Autoencoder for complex, high-dimensional multivariate shifts

In addition to monitoring pairwise interactions, we can also look for shifts across more dimensions in the data.

Autoencoders are powerful tools for detecting high-dimensional multivariate shifts, where multiple features collectively change in ways that may not be apparent from looking at individual feature distributions or pairwise correlations.

An autoencoder is a neural network that learns a compressed representation of data through two components:

Encoder: Compresses input data into a lower-dimensional representation.

Decoder: Reconstructs the original input from the compressed representation.

To detect shifts, we compare the reconstructed output to the original input and compute the reconstruction loss.

Low reconstruction loss → The autoencoder successfully reconstructs the data, meaning the new observations are similar to what it has seen and learned.

High reconstruction loss → The production data deviates significantly from the learned patterns, indicating potential drift.

Unlike traditional drift metrics that focus on individual features or pairwise relationships, autoencoders capture complex, non-linear dependencies across multiple variables simultaneously.

Python implementation:

import numpy as np
from sklearn.preprocessing import StandardScaler
# Keras (via TensorFlow) imports for the layers and model used below
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

ref_features = ref_data[numeric_cols + categorical_cols]
prod_features = prod_data[numeric_cols + categorical_cols]

# Normalize the data
scaler = StandardScaler()
ref_scaled = scaler.fit_transform(ref_features)
prod_scaled = scaler.transform(prod_features)

# Split reference data into train and validation sets
np.random.shuffle(ref_scaled)
train_size = int(0.8 * len(ref_scaled))
train_data = ref_scaled[:train_size]
val_data = ref_scaled[train_size:]

# Build autoencoder
input_dim = ref_features.shape[1]
encoding_dim = 3

# Input layer
input_layer = Input(shape=(input_dim,))
# Encoder
encoded = Dense(8, activation="relu")(input_layer)
encoded = Dense(encoding_dim, activation="relu")(encoded)
# Decoder
decoded = Dense(8, activation="relu")(encoded)
decoded = Dense(input_dim, activation="linear")(decoded)
# Autoencoder
autoencoder = Model(input_layer, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Train autoencoder on the reference data only
history = autoencoder.fit(
    train_data, train_data,
    epochs=50,
    batch_size=64,
    shuffle=True,
    validation_data=(val_data, val_data),
    verbose=0
)

# Calculate reconstruction error for both datasets
ref_pred = autoencoder.predict(ref_scaled, verbose=0)
prod_pred = autoencoder.predict(prod_scaled, verbose=0)

ref_mse = np.mean(np.power(ref_scaled - ref_pred, 2), axis=1)
prod_mse = np.mean(np.power(prod_scaled - prod_pred, 2), axis=1)

The charts below show the distribution of reconstruction loss between both datasets.

Distribution of reconstruction loss between actuals and predictions (image by author)

The production dataset has a higher mean reconstruction error than the reference dataset, indicating a shift in the overall data. This aligns with the changes introduced in the production dataset, which contains a higher number of newer accounts making high-value transactions.
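One lightweight way to turn the reconstruction errors computed above into a monitoring signal is to compare their averages and count how many production rows exceed a reference-based cutoff. The 95th-percentile threshold below is an illustrative choice, not a rule from this article:

import numpy as np

# Compare the average reconstruction error of the two datasets
print(f"Reference mean reconstruction error:  {ref_mse.mean():.4f}")
print(f"Production mean reconstruction error: {prod_mse.mean():.4f}")

# Illustrative alerting rule: flag production rows whose error exceeds the
# 95th percentile of the reference errors
threshold = np.percentile(ref_mse, 95)
print(f"Production rows above threshold: {(prod_mse > threshold).mean():.1%}")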

Summarizing

Model monitoring is an essential, yet often overlooked, responsibility for data scientists and machine learning engineers.

All the statistical methods led to the same conclusion, which aligns with the observed shifts in the data: they detected a trend in production towards newer accounts making higher-value transactions. This shift resulted in higher model scores, signaling an increase in potential fraud.

In this post, I covered techniques for detecting drift on three different levels:

Model score drift: Using Population Stability Index (PSI)

Individual feature drift: Using Kolmogorov-Smirnov test for numeric features and Chi-Square test for categorical features

Multivariate drift: Using Spearman correlation for pairwise interactions and autoencoders for high-dimensional, multivariate shifts.

These are just a few of the techniques I rely on for comprehensive monitoring — there are plenty of other equally valid statistical methods that can also detect drift effectively.

Detected shifts often point to underlying issues that warrant further investigation. The root cause could be as serious as a data collection bug, or as minor as a time change like daylight savings time adjustments.

There are also fantastic Python packages, like evidently.ai, that automate many of these comparisons. However, I believe there’s significant value in deeply understanding the statistical techniques behind drift detection, rather than relying solely on these tools.

What’s the model monitoring process like at places you’ve worked?

Want to build your AI skills?

👉🏻 I run the AI Weekender and write weekly blog posts on data science, AI weekend projects, career advice for professionals in data.

Resources

Read More »

Peer raises $10.5M for metaverse engine, launches 3D personal planets

Peer Global Inc announced today that it has raised $10.5 million in its latest round of funding for its metaverse game engine, which it plans to use to build out its team and to accelerate AI product development. In addition to the funding, the company also launched its personal planets feature — an in-engine feature that allows users to create their own 3D social hubs. According to founder Tony Tran, this new form of social engagement is intended to be a tonic to more addictive, static forms of social media.

Peer’s total investment numbers sit at $65.5 million, all from angel investors. The Family Office of Tommy Mai is the sole investor in this round of funding. The company will build out its AI features, which make up the backbone of its persistent world. AI is also one of the tools that the company offers for developers who wish to build their experiences on Peer. Within the game’s engine, all games and experiences would be connected to each other.

Mai said in a statement, “Websites, social networks, and digital brand experiences today are flat. People have short attention spans. AI will push everything into spatial experiences, and Peer is leading the way. We’re really excited about the potential for this technology and think Tony and team are the ones to get this right.”

Peer wants to redefine the social experience

Speaking with GamesBeat, Tran said that Peer reshapes those same social engagement forces into something that gets users going outside and engaging with the world around them. “We use location sharing dynamically within the platform, to create a living map where people can see each other moving in real-time, sparking spontaneous interactions and collaboration rather than passive consumption. This transforms the energy of traditional FOMO into something constructive towards exploration, discovery, and shared experiences. Instead of feeling left out, users are invited into the action, whether it’s meeting up with friends, joining an event, or co-creating within the AI-driven world.”

Tran also notes the advantages of using AI for developers: “What we have is a social interface where AI can create to its maximum potential—generating games, characters, and entire experiences on demand—for mass consumption. Peer leverages AI to bring the visual side of the metaverse to life in a way that other experiences can’t. All other metaverses exist in isolation, where in Peer, AI acts as the connective tissue. It links people, places, and experiences in real time, forming an instant information layer that keeps everything fluid, responsive, and intelligent.”

According to Tran, Peer plans to launch its location-based mechanics in the near-future, as well as nascent monetization mechanics, such as digital property sales and premium experiences. For the long-term, the company plans to make the Peer experience accessible on any device. To build it at scale, they plan to offer subscription tiers, AI-based advertising and a full digital economy.

Tran told GamesBeat, “Peer’s AI integration allows for dynamic, procedurally generated environments, meaning developers can create living worlds that adapt to player actions… Peer gives developers a platform to build games that are not just played but lived in—unlocking new possibilities for immersive, connected gameplay.”

Read More »

A standard, open framework for building AI agents is coming from Cisco, LangChain and Galileo

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More One goal for an agentic future is for AI agents from different organizations to freely and seamlessly talk to one another. But getting to that point requires interoperability, and these agents may have been built with different LLMs, data frameworks and code. To achieve interoperability, developers of these agents must agree on how they can communicate with each other. This is a challenging task.  A group of companies, including Cisco, LangChain, LlamaIndex, Galileo and Glean, have now created AGNTCY, an open-source collective with the goal of creating an industry-standard agent interoperability language. AGNTCY aims to make it easy for any AI agent to communicate and exchange data with another. Uniting AI Agents “Just like when the cloud and the internet came about and accelerated applications and all social interactions at a global scale, we want to build the Internet of Agents that accelerate all of human work at a global scale,” said Vijoy Pandey, head of Outshift by Cisco, Cisco’s incubation arm, in an interview with VentureBeat.  Pandey likened AGNTCY to the advent of the Transmission Control Protocol/Internet Protocol (TCP/IP) and the domain name system (DNS), which helped organize the internet and allowed for interconnections between different computer systems.  “The way we are thinking about this problem is that the original internet allowed for humans and servers and web farms to all come together,” he said. “This is the Internet of Agents, and the only way to do that is to make it open and interoperable.” Cisco, LangChain and Galileo will act as AGNTCY’s core maintainers, with Glean and LlamaIndex as contributors. However, this structure may change as the collective adds more members.  Standardizing a fast-moving industry AI agents cannot be

Read More »

Hugging Face co-founder Thomas Wolf just challenged Anthropic CEO’s vision for AI’s future — and the $130 billion industry is taking notice

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Thomas Wolf, co-founder of AI company Hugging Face, has issued a stark challenge to the tech industry’s most optimistic visions of artificial intelligence, arguing that today’s AI systems are fundamentally incapable of delivering the scientific revolutions their creators promise. In a provocative blog post published on his personal website this morning, Wolf directly confronts the widely circulated vision of Anthropic CEO Dario Amodei, who predicted that advanced AI would deliver a “compressed 21st century” where decades of scientific progress could unfold in just years. “I’m afraid AI won’t give us a ‘compressed 21st century,’” Wolf writes in his post, arguing that current AI systems are more likely to produce “a country of yes-men on servers” rather than the “country of geniuses” that Amodei envisions. The exchange highlights a growing divide in how AI leaders think about the technology’s potential to transform scientific discovery and problem-solving, with major implications for business strategies, research priorities, and policy decisions. From straight-A student to ‘mediocre researcher’: Why academic excellence doesn’t equal scientific genius Wolf grounds his critique in personal experience. Despite being a straight-A student who attended MIT, he describes discovering he was a “pretty average, underwhelming, mediocre researcher” when he began his PhD work. This experience shaped his view that academic success and scientific genius require fundamentally different mental approaches — the former rewarding conformity, the latter demanding rebellion against established thinking. “The main mistake people usually make is thinking Newton or Einstein were just scaled-up good students,” Wolf explains. “A real science breakthrough is Copernicus proposing, against all the knowledge of his days — in ML terms we would say ‘despite all his training dataset’ — that the earth may orbit the sun

Read More »

The Download: Denmark’s robot city, and Google’s AI-only search results

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Welcome to robot city

The city of Odense, in Denmark, is best known as the site where King Canute, Denmark’s last Viking king, was murdered during the 11th century. Today, Odense is also home to more than 150 robotics, automation, and drone companies. It’s particularly renowned for collaborative robots, or cobots—those designed to work alongside humans, often in an industrial setting. Odense’s robotics success has its roots in the more traditional industry of shipbuilding. During the ‘90s, the Mærsk shipping company funded the creation of the Mærsk Mc-Kinney Møller Institute (MMMI), a center dedicated to autonomous systems that drew students keen to study robotics. But there are challenges to being based in a city that, though the third-largest in Denmark, is undeniably small on the global scale. Read the full story. —Victoria Turk
This story is from our latest print issue, which is all about how technology is changing our relationships with each other—and ourselves. If you haven’t already, subscribe now to receive future issues once they land. If you’re interested in robotics, why not check out: 
+ Will we ever trust robots? If most robots still need remote human operators to be safe and effective, why should we welcome them into our homes? Read the full story.+ Why robots need to become lazier before they can be truly useful. + AI models let robots carry out tasks in unfamiliar environments. “Robot utility models” sidestep the need to tweak the data used to train robots every time they try to do something in unfamiliar settings. Read the full story.+ What’s next for robots in 2025, from humanoid bots to new developments in military applications. The must-reads I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 1 Google has started testing AI-only search resultsWhat could possibly go wrong? (Ars Technica)+ It’s also rolling out more AI Overview result summaries. (The Verge)+ AI means the end of internet search as we’ve known it. (MIT Technology Review)   2 Elon Musk’s DOGE is coming for consultantsDeloitte, Accenture and others will be told to justify the billions of dollars they receive from the US government. (FT $)+ One federal agency has forbidden DOGE workers from entering its office. (WP $)+ Anti-Musk protestors set up camp inside a Portland Tesla store. (Reuters) 3 The US military will use AI tools to plan maneuversThanks to a new deal with startup Scale AI. (WP $)+ Meanwhile, Europe’s defense sector is on the ascendancy. (FT $)+ We saw a demo of the new AI system powering Anduril’s vision for war. (MIT Technology Review)

4 Global sea ice levels have fallen to a record lowThe north pole experienced a period of extreme heat last month. (The Guardian)+ The ice cores that will let us look 1.5 million years into the past. (MIT Technology Review) 5 Where are all the EV chargers?Lack of charging infrastructure is still a major roadblock to wider adoption. So why haven’t we solved it? (IEEE Spectrum)+ Why EV charging needs more than Tesla. (MIT Technology Review) 6 We need new tests to measure AI progressTraining models on questions they’re later tested on is a poor metric. (The Atlantic $)+ The way we measure progress in AI is terrible. (MIT Technology Review) 7 American cities have a plan to combat extreme heatwavesData mapping projects are shedding new light on how to save lives. (Knowable Magazine)+ A successful air monitoring program has come to an abrupt halt. (Wired $) 8 Chatbots need love tooNew research suggests models can tweak their behavior to appear more likeable. (Wired $)+ The AI relationship revolution is already here. (MIT Technology Review)  9 McDonald’s is being given an AI makeover 🍔In a bid to reduce stress for customers and its workers alike. (WSJ $) 10 How to stop doom scrollingSpoiler: those screen time reports aren’t helping. (Vox)+ How to log off. (MIT Technology Review)
Quote of the day “What happens when you get to a point where every video, audio, everything you read and see online can be fake? Where’s our shared sense of reality?”
—Hany Farid, a professor at the University of California, tells the Guardian why it’s essential to question the veracity of the media we come across online. The big story What Africa needs to do to become a major AI player November 2024 Africa is still early in the process of adopting AI technologies. But researchers say the continent is uniquely hospitable to it for several reasons, including a relatively young and increasingly well-educated population, a rapidly growing ecosystem of AI startups, and lots of potential consumers. 
However, ambitious efforts to develop AI tools that answer the needs of Africans face numerous hurdles. The biggest are inadequate funding and poor infrastructure. Limited internet access and a scarcity of domestic data centers also mean that developers might not be able to deploy cutting-edge AI capabilities. Complicating this further is a lack of overarching policies or strategies for harnessing AI’s immense benefits—and regulating its downsides. Taken together, researchers worry, these issues will hold Africa’s AI sector back and hamper its efforts to pave its own pathway in the global AI race. Read the full story. —Abdullahi Tsanni We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)+ Are you a summer or winter? Warm or cool? If you don’t know, it’s time to get your colors done.+ Why more women are choosing to explore the world on women-only trips.+ Whitetop the llama, who spends his days comforting ill kids, is a true hero 🦙+ If you missed the great sourdough craze of 2020, fear not—here are some great tips to get you started.

Read More »

Kubernetes — Understanding and Utilizing Probes Effectively

Introduction

Let’s talk about Kubernetes probes and why they matter in your deployments. When managing production-facing containerized applications, even small optimizations can have enormous benefits.

Reducing deployment times, helping your applications react better to scaling events, and managing the health of running pods all require fine-tuning your container lifecycle management. This is exactly why proper configuration — and implementation — of Kubernetes probes is vital for any critical deployment. They help your cluster make intelligent decisions about traffic routing, restarts, and resource allocation.

Properly configured probes dramatically improve your application reliability, reduce deployment downtime, and handle unexpected errors gracefully. In this article, we’ll explore the three types of probes available in Kubernetes and how utilizing them alongside each other helps configure more resilient systems.

Quick refresher

Understanding exactly what each probe does and some common configuration patterns is essential. Each of them serves a specific purpose in the container lifecycle and when used together, they create a rock-solid framework for maintaining your application availability and performance.

Startup: Optimizing start-up times

Start-up probes are evaluated once, when a new pod is spun up because of a scale-up event or a new deployment. They serve as a gatekeeper for the rest of the container checks, and fine-tuning them will help your applications better handle increased load or service degradation.

Sample Config:

startupProbe:
  httpGet:
    path: /health
    port: 80
  failureThreshold: 30
  periodSeconds: 10

Key takeaways:

Keep periodSeconds low, so that the probe fires often, quickly detecting a successful deployment.

Increase failureThreshold to a value high enough to accommodate the worst-case start-up time.

The Startup probe checks whether your container has started by querying the configured path. It additionally holds back the Liveness and Readiness probes until it succeeds.

Liveness: Detecting dead containers

Your liveness probes answer a very simple question: “Is this pod still running properly?” If not, K8s will restart it.

Sample Config:

livenessProbe:
  httpGet:
    path: /health
    port: 80
  periodSeconds: 10
  failureThreshold: 3

Key takeaways:

Since K8s will completely restart your container and spin up a new one, set failureThreshold high enough to tolerate intermittent abnormalities.

Avoid using initialDelaySeconds as it is too restrictive — use a Start-up probe instead.

Be mindful that a failing Liveness probe will bring down your currently running pod and spin up a new one, so avoid making it too aggressive — aggressive checks are the job of the next probe.

Readiness: Handling unexpected errors

The readiness probe determines whether a pod should start — or continue — to receive traffic. It is extremely useful in situations where your container has lost its connection to the database or is otherwise over-utilized and should not receive new requests.

Sample Config:

readinessProbe:
  httpGet:
    path: /health
    port: 80
  periodSeconds: 3
  failureThreshold: 1
  timeoutSeconds: 1

Key takeaways:

Since this is your first guard for stopping traffic to unhealthy targets, make the probe aggressive and keep periodSeconds low.

Keep failureThreshold at a minimum; you want to fail fast.

The timeout period should also be kept at a minimum, so that slow-responding containers are flagged quickly.

Give the readinessProbe ample time to recover by having a longer-running livenessProbe.

Readiness probes ensure that traffic does not reach a container that is not ready for it, which makes them one of the most important probes in the stack.

Putting it all together

As you can see, even if all of the probes have their own distinct uses, the best way to improve your application’s resilience strategy is using them alongside each other.

Your startup probe will assist you in scale-up scenarios and new deployments, allowing your containers to be brought up quickly. It fires only once and also stops the execution of the rest of the probes until it successfully completes.

The liveness probe helps in dealing with dead containers suffering from non-recoverable errors and tells the cluster to bring up a new, fresh pod just for you.

The readiness probe is the one that tells K8s whether a pod should receive traffic. It can be extremely useful for dealing with intermittent errors or high resource consumption that results in slower response times.

Additional configurations

Probes can be further configured to use a command in their checks instead of an HTTP request, as well as giving ample time for the container to safely terminate. While these are useful in more specific scenarios, understanding how you can extend your deployment configuration can be beneficial, so I’d recommend doing some additional reading if your containers handle unique use cases.
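For example, a liveness probe can run a command inside the container instead of an HTTP check, and the pod spec can grant extra time for graceful shutdown. The fragment below is only an illustrative sketch; the image name and file path are placeholders:

spec:
  terminationGracePeriodSeconds: 60   # time allowed for a clean shutdown before the container is killed
  containers:
    - name: app
      image: example/app:latest       # placeholder image
      livenessProbe:
        exec:
          command:                    # the probe succeeds if this command exits with code 0
            - cat
            - /tmp/healthy
        periodSeconds: 10
        failureThreshold: 3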

Further reading: Liveness, Readiness, and Startup Probes | Configure Liveness, Readiness and Startup Probes

Read More »

When You Just Can’t Decide on a Single Action

In Game Theory, the players typically have to make assumptions about the other players’ actions. What will the other player do? Will he use rock, paper or scissors? You never know, but in some cases, you might have an idea of the probability of some actions being higher than others. Adding such a notion of probability or randomness opens up a new chapter in game theory that lets us analyse more complicated scenarios. 

This article is the third in a four-chapter series on the fundamentals of game theory. If you haven’t checked out the first two chapters yet, I’d encourage you to do that to become familiar with the basic terms and concepts used in the following. If you feel ready, let’s go ahead!

Mixed Strategies

To the best of my knowledge, soccer is all about hitting the goal, although that happens very infrequently. Photo by Zainu Color on Unsplash

So far we have always considered games where each player chooses exactly one action. Now we will extend our games by allowing each player to select different actions with given probabilities, which we call a mixed strategy. If you play rock-paper-scissors, you do not know which action your opponent takes, but you might guess that they select each action with a probability of 33%, and if you play 99 games of rock-paper-scissors, you might indeed find your opponent to choose each action roughly 33 times. With this example, you directly see the main reasons why we want to introduce probability. First, it allows us to describe games that are played multiple times, and second, it enables us to consider a notion of the (assumed) likelihood of a player’s actions. 

Let me demonstrate the latter point in more detail. We come back to the soccer game we saw in chapter 2, where the keeper decides on a corner to jump into and the other player decides on a corner to aim for.

A game matrix for a penalty shooting.

If you are the keeper, you win (reward of 1) if you choose the same corner as the opponent and you lose (reward of -1) if you choose the other one. For your opponent, it is the other way round: They win, if you select different corners. This game only makes sense, if both the keeper and the opponent select a corner randomly. To be precise, if one player knows that the other always selects the same corner, they know exactly what to do to win. So, the key to success in this game is to choose the corner by some random mechanism. The main question now is, what probability should the keeper and the opponent assign to both corners? Would it be a good strategy to choose the right corner with a probability of 80%? Probably not. 

To find the best strategy, we need to find the Nash equilibrium, because that is the state where no player can get any better by changing their behaviour. In the case of mixed strategies, such a Nash Equilibrium is described by a probability distribution over the actions, where no player wants to increase or decrease any probability anymore. In other words, it is optimal (because if it were not optimal, one player would like to change). We can find this optimal probability distribution if we consider the expected reward. As you might guess, the expected reward is composed of the reward (also called utility) the players get (which is given in the matrix above) times the likelihood of that reward. Let’s say the shooter chooses the left corner with probability p and the right corner with probability 1-p. What reward can the keeper expect? Well, if they choose the left corner, they can expect a reward of p*1 + (1-p)*(-1). Do you see how this is derived from the game matrix? If the keeper chooses the left corner, there is a probability of p, that the shooter chooses the same corner, which is good for the keeper (reward of 1). But with a chance of (1-p), the shooter chooses the other corner and the keeper loses (reward of -1). In a likewise fashion, if the keeper chooses the right corner, he can expect a reward of (1-p)*1 + p*(-1). Consequently, if the keeper chooses the left corner with probability q and the right corner with probability (1-q), the overall expected reward for the keeper is q times the expected reward for the left corner plus (1-q) times the reward for the right corner. 

Now let’s take the perspective of the shooter. He wants the keeper to be indecisive between the corners. In other words, he wants the keeper to see no advantage in any corner so he chooses randomly. Mathematically that means that the expected rewards for both corners should be equal, i.e.
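Setting the keeper’s two expected rewards equal gives the indifference condition

\[ p \cdot 1 + (1 - p) \cdot (-1) = (1 - p) \cdot 1 + p \cdot (-1) \]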

which can be solved for p, giving p = 0.5. So the optimal strategy for the shooter to keep the keeper indifferent is to choose the right corner with a probability of 0.5 and hence the left corner with an equal probability of 0.5. 

But now imagine a shooter who is well known for his tendency to choose the right corner. You might not expect a 50/50 probability for each corner, but you assume he will choose the right corner with a probability of 70%. If the keeper stays with their 50/50 split for choosing a corner, their expected reward is 0.5 times the expected reward for the left corner plus 0.5 times the expected reward for the right corner:
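
0.5 * (0.3*1 + 0.7*(-1)) + 0.5 * (0.7*1 + 0.3*(-1)) = 0.5*(-0.4) + 0.5*0.4 = 0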

That does not sound too bad, but there is still a better option. If the keeper always chooses the right corner (i.e., q=0), they get a reward of 0.4, which is better than 0. In this case, there is a clear best answer for the keeper, which is to favour the corner the shooter prefers. That, however, would lower the shooter’s reward. If the keeper always chooses the right corner, the shooter would get a reward of -1 with a probability of 70% (because the shooter chooses the right corner with a probability of 70%) and a reward of 1 in the remaining 30% of cases, which yields an expected reward of 0.7*(-1) + 0.3*1 = -0.4. That is worse than the reward of 0 they got when the keeper played the 50/50 split. Do you remember that a Nash equilibrium is a state where no player has any reason to change their action unless another player does? This scenario is not a Nash equilibrium, because the shooter has an incentive to change their strategy and aim at the left corner more often, even if the keeper does not change theirs. A 50/50 split by both players, however, is a Nash equilibrium, because in that scenario neither the shooter nor the keeper gains anything from changing the probability with which they choose one corner or the other. 
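
To make these numbers easy to verify, here is a minimal Python sketch of the expected-reward calculation (as above, p is the shooter’s probability of aiming at the left corner and q is the keeper’s probability of jumping left); it reproduces the values 0, 0.4 and -0.4 from the discussion.

```python
from fractions import Fraction

def expected_rewards(p, q):
    # p: probability that the shooter aims at the left corner
    # q: probability that the keeper jumps into the left corner
    # The keeper gets +1 if both pick the same corner and -1 otherwise;
    # the shooter gets the opposite (zero-sum game).
    keeper = q * (p * 1 + (1 - p) * (-1)) + (1 - q) * ((1 - p) * 1 + p * (-1))
    return keeper, -keeper

print(expected_rewards(Fraction(3, 10), Fraction(1, 2)))  # shooter favours the right corner, keeper plays 50/50 -> 0 for both
print(expected_rewards(Fraction(3, 10), Fraction(0)))     # keeper always jumps right -> 2/5 (0.4) for the keeper, -2/5 for the shooter
print(expected_rewards(Fraction(1, 2), Fraction(1, 2)))   # both play 50/50, the Nash equilibrium -> 0 for both
```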

Fighting birds

Food can be a reason for birds to fight each other. Photo by Viktor Keri on Unsplash

From the previous example, we saw that a player’s assumptions about the other player’s actions influence the first player’s choice of action as well. If a player wants to behave rationally (and this is what we always assume in game theory), they will choose actions that maximize their expected reward, given the other players’ mixed strategies. In the soccer scenario, this is quite simple: jump into a corner more often if you assume that the opponent will choose that corner more often. So let us continue with a more complicated example that takes us outside into nature. 

As we walk across the forest, we notice some interesting behaviour in wild animals. Say two birds come to a place where there is something to eat. If you were a bird, what would you do? Would you share the food with the other bird, which means less food for both of you? Or would you fight? If you threaten your opponent, they might give in and you have all the food for yourself. But if the other bird is as aggressive as you, you end up in a real fight and you hurt each other. Then again you might have preferred to give in in the first place and just leave without a fight. As you see, the outcome of your action depends on the other bird. Preparing to fight can be very rewarding if the opponent gives in, but very costly if the other bird is willing to fight as well. In matrix notation, this game looks like this:

A matrix for a game that is sometimes called hawk vs. dove.

The question is, what would be the rational behaviour for a given distribution of birds who fight or give in? If you are in a very dangerous environment where most birds are known to be aggressive fighters, you might prefer giving in so as not to get hurt. But if you assume that most other birds are cowards, you might see a potential benefit in preparing for a fight to scare the others away. By calculating the expected reward, we can figure out the exact proportions of fighting birds and yielding birds that form an equilibrium. Say the probability of fighting is denoted p for bird 1 and q for bird 2; then the probability of giving in is 1-p for bird 1 and 1-q for bird 2. In a Nash equilibrium, no player wants to change their strategy unless another player does. Formally, that means both options need to yield the same expected reward: fighting must be exactly as good as giving in, given the other bird’s strategy. This leads us to the following formula, which we can solve for q:

For bird 2, it is optimal to fight with a probability of 1/3 and give in with a probability of 2/3, and the same holds for bird 1 because of the symmetry of the game. In a big population of birds, that would mean that a third of the birds are fighters, who usually seek a fight, and the other two-thirds prefer giving in. As this is an equilibrium, these ratios stay stable over time. If more birds became cowards who always give in, fighting would become more rewarding, as the chance of winning would increase. Then, however, more birds would choose to fight, the number of cowardly birds would decrease, and the stable equilibrium would be reached again. 
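
As an illustration, here is a small Python sketch with an assumed set of hawk-vs.-dove payoffs: -2 each for a mutual fight, 2 for fighting a bird that gives in, 0 for giving in against a fighter, and 1 each for sharing peacefully. These values are not taken from the matrix above; they are example numbers chosen only because they reproduce the 1/3 equilibrium derived in the text.

```python
# Hawk vs. dove with ASSUMED illustrative payoffs (not necessarily the matrix above):
# mutual fight = -2, fight vs. give-in = 2, give-in vs. fight = 0, mutual give-in = 1.
from fractions import Fraction

FIGHT_FIGHT = Fraction(-2)  # my payoff if both birds fight
FIGHT_GIVE = Fraction(2)    # my payoff if I fight and the other bird gives in
GIVE_FIGHT = Fraction(0)    # my payoff if I give in and the other bird fights
GIVE_GIVE = Fraction(1)     # my payoff if both birds give in (share the food)

def equilibrium_fight_probability() -> Fraction:
    # Indifference condition: expected reward of fighting equals that of giving in,
    #   q*FIGHT_FIGHT + (1-q)*FIGHT_GIVE = q*GIVE_FIGHT + (1-q)*GIVE_GIVE,
    # where q is the other bird's probability of fighting. Solve the linear equation.
    a = (FIGHT_FIGHT - FIGHT_GIVE) - (GIVE_FIGHT - GIVE_GIVE)
    b = GIVE_GIVE - FIGHT_GIVE
    return b / a

print(equilibrium_fight_probability())  # 1/3
```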

Report a crime

There is nothing to see here. Move on and learn more about game theory. Photo by JOSHUA COLEMAN on Unsplash

Now that we have understood that we can find Nash equilibria by comparing the expected rewards of the different options, we will use this strategy on a more sophisticated example to show the power that game-theoretic analyses can have for realistic, complex scenarios. 

Say a crime happened in the middle of the city centre and there are multiple witnesses to it. The question is: who calls the police now? As there are many people around, everybody might expect others to call the police and hence refrain from doing it themselves. We can model this scenario as a game again. Let’s say we have n players, and everybody has two options, namely calling the police or not calling them. And what is the reward? For the reward, we distinguish three cases. If nobody calls the police, the reward is zero, because the crime is not reported. If you call the police, you have some costs (e.g., the time you spend waiting and telling the police what happened), but the crime is reported, which helps keep your city safe. If somebody else reports the crime, the city is still kept safe, but you don’t have the costs of calling the police yourself. Formally, we can write this down as follows:
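
reward = v (if somebody else calls the police)
reward = v - c (if you call the police yourself)
reward = 0 (if nobody calls the police)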

v is the reward of keeping the city safe, which you get either if somebody else calls the police (first row) or if you call the police yourself (second row). However, in the second case, your reward is diminished a little by the cost c you incur. Let us assume that c is smaller than v, which means that the cost of calling the police never exceeds the reward you get from keeping your city safe. In the last case, where nobody calls the police, your reward is zero.

This game looks a little different from the previous ones we had, mainly because we didn’t display it as a matrix. In fact, it is more complicated. We didn’t specify the exact number of players (we just called it n), and we also didn’t specify the rewards explicitly but just introduced some values v and c. However, this helps us model a quite complicated real situation as a game and will allow us to answer more interesting questions: First, what happens if more people witness the crime? Will it become more likely that somebody will report the crime? Second, how do the costs c influence the likelihood of the crime being reported? We can answer these questions with the game-theoretic concepts we have learned already. 

As in the previous examples, we will use the Nash equilibrium’s property that in an optimal state, nobody wants to change their action. That means that, for every individual, calling the police should be as good as not calling it, which leads us to the following formula:
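
v - c = v * P(at least one of the other n-1 witnesses calls the police)
v - c = v * (1 - (1-p)^(n-1))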

On the left, you have the reward if you call the police yourself (v-c). This should be as good as a reward of v times the likelihood that anybody else calls the police. Now, the probability of anybody else calling the police is the same as 1 minus the probability that nobody else calls the police. If we denote the probability that an individual calls the police with p, the probability that a single individual does not call the police is 1-p. Consequently, the probability that two individuals don’t call the police is the product of the single probabilities, (1-p)*(1-p). For n-1 individuals (all individuals except you), this gives us the term (1-p)^(n-1) in the last row. We can solve this equation and finally arrive at:
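
v - c = v - v*(1-p)^(n-1)
c = v*(1-p)^(n-1)
(1-p)^(n-1) = c/v
p = 1 - (c/v)^(1/(n-1))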

This last row gives you the probability of a single individual calling the police. What happens if there are more witnesses to the crime? If n gets larger, the exponent becomes smaller (1/(n-1) goes towards 0), which finally leads to:
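
p = 1 - (c/v)^(1/(n-1)) → 1 - (c/v)^0 for very large n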

Given that x to the power of 0 is always 1, p becomes zero. In other words, the more witnesses are around (higher n), the less likely it becomes that you call the police, and for an infinite number of other witnesses, the probability drops to zero. This sounds reasonable: the more other people are around, the more you expect somebody else to call the police, and the smaller you perceive your own responsibility. Naturally, all other individuals will have the same chain of thought. But that also sounds a little tragic, doesn’t it? Does this mean that nobody will call the police if there are many witnesses? 

Well, not necessarily. We just saw that the probability of a single person calling the police declines with higher n, but there are still more people around. Maybe the sheer number of people around counteracts this diminishing probability. A hundred people with a small probability of calling the police each might still be worth more than a few people with moderate individual probabilities. Let us now take a look at the probability that anybody calls the police.
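
P(anybody calls the police) = 1 - (1-p)^n
                            = 1 - (1-p) * (1-p)^(n-1)
                            = 1 - (1-p) * (c/v)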

The probability that anybody calls the police is equal to 1 minus the probability that nobody calls the police. As in the example before, the probability of nobody calling the police is given by (1-p)^n. We then use the equation we derived previously (see the formulas above) to replace (1-p)^(n-1) with c/v. 

When we look at the last line of our calculation, what happens for big n now? We already know that p drops to zero, leaving us with a probability of 1 - c/v. This is the likelihood that anybody will call the police if there are many people around (note that this is different from the probability that a single individual calls the police). We see that this likelihood heavily depends on the ratio of c to v. The smaller c, the more likely it is that somebody calls the police. If c is (close to) zero, it is almost certain that the police will be called, but if c is almost as big as v (that is, the cost of calling the police eats up the reward of reporting the crime), it becomes unlikely that anybody calls the police. This gives us a lever to influence the probability of crimes being reported: calling the police and reporting a crime should be made as effortless and accessible as possible.
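
To get a feeling for how these two probabilities behave, here is a small Python sketch; the values v = 4 and c = 1 are arbitrary example numbers (so c/v = 0.25), not taken from the text.

```python
# Crime-reporting game: v = reward of a reported crime, c = cost of calling
# the police (0 < c < v), n = number of witnesses.
def individual_call_probability(n: int, c: float, v: float) -> float:
    # Equilibrium probability that one witness calls: p = 1 - (c/v)^(1/(n-1)).
    # With a single witness there is nobody else to rely on, so they always call.
    if n == 1:
        return 1.0
    return 1 - (c / v) ** (1 / (n - 1))

def anybody_calls_probability(n: int, c: float, v: float) -> float:
    # Probability that at least one of the n witnesses calls: 1 - (1-p)^n.
    p = individual_call_probability(n, c, v)
    return 1 - (1 - p) ** n

for n in (2, 5, 20, 1000):
    p = individual_call_probability(n, c=1, v=4)
    anyone = anybody_calls_probability(n, c=1, v=4)
    print(f"n={n:5d}  p={p:.3f}  P(anybody calls)={anyone:.3f}")
# p shrinks towards 0 as n grows, while P(anybody calls) approaches 1 - c/v = 0.75.
```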

Summary

We have learned a lot about probabilities and choosing actions randomly today. Photo by Robert Stump on Unsplash

In this chapter on our journey through the realms of game theory, we have introduced so-called mixed strategies, which allowed us to describe games by the probabilities with which different actions are taken. We can summarize our key findings as follows: 

A mixed strategy is described by a probability distribution over the different actions.

In a mixed-strategy Nash equilibrium, all actions a player uses with positive probability must yield the same expected reward.

In mixed strategies, a Nash equilibrium means that no player wants to change the probabilities of their actions.

We can find out the probabilities of different actions in a Nash equilibrium by setting the expected rewards of two (or more) options equal.

Game-theoretic concepts allow us to analyze scenarios with an arbitrarily large number of players. Such analyses can also tell us how the exact shape of the reward influences the probabilities in a Nash equilibrium. This can be used to inform decisions in the real world, as we saw in the crime-reporting example.

We are almost through with our series on the fundamentals of game theory. In the next and final chapter, we will introduce the idea of taking turns in games. Stay tuned!

References

The topics introduced here are typically covered in standard textbooks on game theory. I mainly used this one, which is written in German though:

Bartholomae, F., & Wiens, M. (2016). Spieltheorie. Ein anwendungsorientiertes Lehrbuch. Wiesbaden: Springer Fachmedien Wiesbaden.

An alternative in English could be this one:

Espinola-Arredondo, A., & Muñoz-Garcia, F. (2023). Game Theory: An Introduction with Step-by-step Examples. Springer Nature.

Game theory is a rather young field of research, with the first main textbook being this one:

Von Neumann, J., & Morgenstern, O. (1944). Theory of Games and Economic Behavior. Princeton: Princeton University Press.

Like this article? Follow me to be notified of my future posts.

Read More »

EVOL X Fugro International Women’s Day special

Join Energy Voice News Editor Erikka Askeland, who speaks to two high-profile energy industry business leaders for International Women’s Day. We speak to Nicola Welsh, UK Country Director at geo-data specialist Fugro, alongside Linda Stewart, Director Marine Geophysical Europe, also at Fugro. Tune in to hear Nicola discuss her route from mining camps in the Australian outback to a senior leadership role, while Linda charts her 19-year career journey to become Fugro’s first female director in her role in Scotland. There’s serious discussion about leaning in, the “double bind” and what the IWD 2025 call to “accelerate action” really means. This special podcast also serves as the opening of Energy Voice’s highly anticipated Women in New Energy event, which takes place in Aberdeen in June.

Read More »

Repsol to slash North Sea jobs

Repsol has blamed UK government tax “policies and adverse economic conditions” as it confirmed plans to cut jobs in its North Sea operations. The Spanish energy firm said 21 in-house roles could be cut, although it did not confirm how many jobs would ultimately have to go as it announced its “new and more efficient operating model”. However, all of the operator’s 1,000 North Sea staff and contractor roles will be up for review, with Petrofac and Altrad the firm’s biggest employers. Many firms are citing the general market and UK fiscal policies for the cuts. This week, North Sea decommissioning firm Well-Safe Solutions announced plans to cut dozens of jobs onshore as well as on its vessel, the Well-Safe Guardian. The firm, which has invested tens of millions in repurposing drilling rigs into units that can remove subsea oil and gas infrastructure, said the cuts were due to a business downturn which was a “knock-on effect” of the windfall tax. “Repsol UK has undertaken a review of its operations at our offshore sites, which will result in a new and more efficient operating model. The health and safety of our people and delivery of safe operations remain our priority. “We remain committed to thrive in the UK North Sea basin, but the UK government’s policies and adverse economic conditions make these changes necessary. “There will be organisational changes, and we are in dialogue with the affected employees and will seek to redeploy where possible.” More to follow.

Read More »

BP CEO Sees Pay Cut 30 Pct After Profit Miss, Elliott Intervention

BP Plc Chief Executive Officer Murray Auchincloss’ total compensation dropped to £5.36 million ($6.91 million) in 2024, about 30% less than the previous year, after the energy giant’s profits disappointed. The London-based company’s 2024 earnings results reported in February showed a steep drop in profits compared with the previous year. That set the stage for a subsequent strategic switch back to oil and gas after years of shifting away from fossil fuels, as it strives to catch up with rivals such as Shell Plc, which were quicker to pivot back to core businesses. While Auchincloss saw his base salary rise to £1.45 million from £1.02 million, his share awards dropped to £2.75 million from £4.36 million, according to the annual report published on Thursday. His annual bonus was sharply reduced in his first full year as boss. Auchincloss is in the middle of a roadshow meeting with investors in London in the hope of enlisting support for the company’s new direction. Activist investor Elliott Investment Management, which had bought about 5% of the oil major, is ramping up pressure on the company’s management after the new strategy fell short of its expectations. BP’s shares have declined about 6% since the strategy announcement on Feb. 26. BP chair Helge Lund is looking for new board members who can bring skills and experience that align with the company’s revised oil and gas-focused strategy, he said in the annual report. The board is particularly keen to recruit an oil and gas expert, according to a person familiar with the matter who asked not to be identified because the information is private. Grafton Group Chair Ian Tyler was appointed to BP’s board to lead the remuneration committee, the company said Thursday. Tyler is also a director at Anglo American Plc. BP’s previous strategy, unveiled in 2020, focused on shifting away from oil

Read More »

Lenovo introduces entry-level, liquid cooled AI edge server

Lenovo has announced the ThinkEdge SE100, an entry-level AI inferencing server designed to make edge AI affordable for enterprises as well as small and medium-sized businesses. AI systems are not normally associated with being small and compact; they’re big, decked-out servers with lots of memory, GPUs, and CPUs. But this server is built for inferencing, the less compute-intensive portion of AI processing, Lenovo stated. GPUs are considered overkill for inferencing, and there are multiple startups making small PC cards with an inferencing chip on them instead of more power-hungry CPUs and GPUs. This design brings AI to the data rather than the other way around. Instead of sending the data to the cloud or data center to be processed, edge computing uses devices located at the data source, reducing latency and the amount of data being sent up to the cloud for processing, Lenovo stated. 

Read More »

Mayo Clinic’s secret weapon against AI hallucinations: Reverse RAG in action

Even as large language models (LLMs) become ever more sophisticated and capable, they continue to suffer from hallucinations: offering up inaccurate information or, to put it more harshly, lying. This can be particularly harmful in areas like healthcare, where wrong information can have dire results. Mayo Clinic, one of the top-ranked hospitals in the U.S., has adopted a novel technique to address this challenge. To succeed, the medical facility must overcome the limitations of retrieval-augmented generation (RAG), the process by which LLMs pull information from specific, relevant data sources. The hospital has employed what is essentially backwards RAG, where the model extracts relevant information, then links every data point back to its original source content. Remarkably, this has eliminated nearly all data-retrieval-based hallucinations in non-diagnostic use cases, allowing Mayo to push the model out across its clinical practice. “With this approach of referencing source information through links, extraction of this data is no longer a problem,” Matthew Callstrom, Mayo’s medical director for strategy and chair of radiology, told VentureBeat. Dealing with healthcare data is a complex challenge, and it can be a time sink. Although vast amounts of data are collected in electronic health records (EHRs), data can be extremely difficult to find and parse out. Mayo’s first use case for AI in wrangling all this data was discharge summaries (visit wrap-ups with post-care tips), with its models using traditional RAG. As Callstrom explained, that was a natural place to start because it is simple extraction and summarization, which is what LLMs generally excel at. “In the first phase, we’re not trying to come up with a diagnosis, where

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on a week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE