
Learning How to Play Atari Games Through Deep Neural Networks

In July 1959, Arthur Samuel developed one of the first agents to play the game of checkers. What constitutes an agent that plays checkers can be best described in Samuel’s own words, “…a computer [that] can be programmed so that it will learn to play a better game of checkers than can be played by the person who wrote the program” [1]. The checkers agent tries to follow the idea of simulating every possible move given the current situation and selecting the most advantageous one, i.e. the one that brings the player closer to winning. A move’s “advantageousness” is determined by an evaluation function, which the agent improves through experience. Naturally, the concept of an agent is not restricted to the game of checkers, and many practitioners have sought to match or surpass human performance in popular games. Notable examples include IBM’s Deep Blue (which managed to defeat Garry Kasparov, a chess world champion at the time), and Tesauro’s TD-Gammon, a temporal-difference approach, where the evaluation function was modelled using a neural network. In fact, TD-Gammon’s playing style was so uncommon that some experts even adopted some of the strategies it conjured up [2].

Unsurprisingly, research into creating such ‘agents’ only skyrocketed, with novel approaches able to reach peak human performance in complex games. In this post, we explore one such approach: the DQN approach introduced in 2013 by Mnih et al., in which playing Atari games is approached through a synthesis of Deep Neural Networks and TD-Learning (NB: the original paper came out in 2013, but we will focus on the 2015 version, which comes with some technical improvements) [3, 4]. Before we continue, you should note that in the ever-expanding space of new approaches, DQN has been superseded by faster and more refined state-of-the-art methods. Yet, it remains an ideal stepping stone in the field of Deep Reinforcement Learning, widely recognized for combining deep learning with reinforcement learning. Hence, readers aiming to dive into Deep-RL are encouraged to begin with DQN.

This post is sectioned as follows: first, I define the problem of playing Atari games and explain why some traditional methods can be intractable. Then, I present the specifics of the DQN approach and dive into the technical implementation.

The Problem At Hand

For the remainder of the post, I’ll assume that you know the basics of supervised learning, neural networks (basic FFNs and CNNs) and also basic reinforcement learning concepts (Bellman equations, TD-learning, Q-learning etc.). If some of these RL concepts are foreign to you, then this playlist is a good introduction.

Figure 2: Pong as shown in the ALE environment. [All media hereafter is created by the author unless otherwise noted]

Atari is a nostalgia-laden term, featuring iconic games such as Pong, Breakout, Asteroids and many more. In this post, we restrict ourselves to Pong. Pong is a 2-player game, where each player controls a paddle and can use said paddle to hit the incoming ball. Points are scored when the opponent is unable to return the ball, in other words, the ball goes past them. A player wins when they reach 21 points. 

Considering the sequential nature of the game, it might be appropriate to frame the problem as an RL problem, and then apply one of the solution methods. We can frame the game as an MDP:

The states would represent the current game state (where the ball or player paddle is etc, analogous to the idea of a search state). The rewards encapsulate our idea of winning and the actions correspond to the buttons on the Atari 2600 console. Our goal now becomes finding a policy

also known as the optimal policy. Let’s see what might happen if we try to train an agent using some classical RL algorithms. 

A straightforward solution might entail solving the problem using a tabular approach. We could enumerate all states (and actions) and associate each state with a corresponding state or state-action value. We could then apply one of the classical RL methods (Monte-Carlo, TD-Learning, Value Iteration etc.), taking a dynamic programming approach. However, this approach runs into large pitfalls rather quickly. What do we consider as states? How many states do we have to enumerate?

It quickly becomes quite difficult to answer these questions. Defining a state is difficult, as many elements are in play when considering the idea of a state (i.e. the states need to be Markovian, encapsulate a search state etc.). What about using the visual output (frames) to represent a state? After all, this is how we as humans interact with Atari games. We see frames, deduce information regarding the game state and then choose the appropriate action. However, there are impossibly many states under this representation, which would make our tabular approach quite intractable, memory-wise.

Now, for the sake of argument, imagine that we have enough memory to hold a table of this size. Even then, we would need to visit all possible states (or state-action pairs) a good number of times to get good approximations of the value function. Herein lies the runtime hurdle: with effectively infinite states, it would be quite infeasible for the values of all entries in the table to converge in a reasonable amount of time.
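To see why, here is a rough, deliberately naive count of the number of distinct raw frames. It assumes every pixel can independently take any of 256 values per channel, which vastly overestimates the frames that actually occur in play, but it illustrates the scale of the problem:

```python
import math

# Back-of-the-envelope count of distinct raw 210x160 RGB frames,
# assuming 256 intensity levels per channel (an intentional overestimate).
pixels = 210 * 160 * 3                      # values per frame
digits = int(pixels * math.log10(256)) + 1  # number of digits in 256**pixels
print(f"256**{pixels} has roughly {digits} digits")
```

Even one byte per table entry would dwarf the memory of any conceivable machine.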

Perhaps instead of framing it as a reinforcement learning problem, can we instead rephrase it into a supervised learning problem? Perhaps a formulation in which the states are samples and the labels are the actions performed. Even this perspective brings forth new problems. Atari games are inherently sequential, each state is sampled based on the previous. This breaks the i.i.d assumptions applied in supervised learning, negatively affecting supervised learning-based solutions. Similarly, we would need to create a hand-labelled dataset, perhaps employing a human expert to hand label actions for each frame. This would be expensive and laborious, and still might yield insufficient results.

Solely relying on either supervised learning or RL may lead to inefficient learning, whether due to computational constraints or suboptimal policies. This calls for a more efficient approach to solving Atari games.

DQN: Intuition & Implementation

I assume you have some basic knowledge of PyTorch, Numpy and Python, though I’ll try to be as articulate as possible. For those unfamiliar, I recommend consulting: pytorch & numpy

Deep-Q Networks aim to overcome the aforementioned barriers through a variety of techniques. Let’s go through each of the problems step-by-step and address how DQN mitigates or solves these challenges.

It’s quite hard to come up with a formal state definition for Atari games due to their diversity. DQN is designed to work for most Atari games, and as a result, we need a state formalization that is compatible with said games. To this end, the visual representation (pixel values) of the game at any given moment is used to fashion a state. Naturally, this entails a continuous state space. This connects to our previous discussion on potential ways to represent states.

  Figure 3: The function approximation visualized. Image from [3].

The challenge of continuous states is solved through function approximation. Function approximation (FA) aims to approximate the state-action value function directly using a parameterized function (in our case, a neural network) instead of a table. Let’s go through the steps to understand what the FA does.

Imagine that we have a network that, given a state, outputs the value of being in said state and performing a certain action. We then select the action with the highest predicted value. However, this network would be short-sighted, only taking into account one timestep. Can we incorporate possible rewards from further down the line? Yes, we can! This is the idea of the expected return. From this view, the FA becomes quite simple to understand; we aim to find a function:

In other words, a function that outputs the expected return of being in a given state after performing a given action.

This idea of approximation becomes crucial due to the continuous nature of the state space. By using a FA, we can exploit the idea of generalization. States close to each other (similar pixel values) will have similar Q-values, meaning that we don’t need to cover the entire (infinite) state space, greatly lowering our computational overhead. 

DQN employs FA in tandem with Q-learning. As a small refresher, Q-learning aims to find the expected return for being in a state and performing a certain action using bootstrapping. Bootstrapping models the expected return that we mentioned using the current Q-function. This ensures that we don’t need to wait till the end of an episode to update our Q-function. Q-learning is also off-policy, which means that the data we use to learn the Q-function is different from the actual policy being learned. The resulting Q-function then corresponds to the optimal Q-function and can be used to find the optimal policy (just find the action that maximizes the Q-value in a given state). Moreover, Q-learning is a model-free solution, meaning that we don’t need to know the dynamics of the environment (transition functions etc.) to learn an optimal policy, unlike in value iteration. Thus, DQN is also off-policy and model-free.
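As a quick sketch of that bootstrapped update in code (the toy state and action counts, learning rate and discount here are illustrative, not values from the paper):

```python
import numpy as np

# A single tabular Q-learning update, the bootstrapped target DQN builds on.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.99  # toy learning rate and discount

def q_update(s, a, r, s_prime, done):
    # bootstrapped target: r + gamma * max_a' Q(s', a'), or just r if terminal
    target = r + (0.0 if done else gamma * Q[s_prime].max())
    # move Q(s, a) a step towards the target
    Q[s, a] += alpha * (target - Q[s, a])

q_update(s=0, a=1, r=1.0, s_prime=2, done=False)
```

Note that the update uses the current Q-estimate of the next state rather than a full rollout, which is exactly what lets us learn mid-episode.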

By using a neural network as our approximator, we need not construct a full table containing all the states and their respective Q-values. Our neural network will output the Q-value for being in a given state and performing a certain action. From this point on, we refer to the approximator as the Q-network.

Figure 4: DQN architecture. Note that the size of the last layer must equal the number of possible actions for the given game; in the case of Pong this is 6.

Since our states are defined by images, using a basic feed-forward network (FFN) would incur a large computational overhead. For this specific reason, we employ the use of a convolutional network, which is much better able to learn the distinct features of each state. The CNNs are able to distill the images down to a representation (this is the idea of representation learning), which is then fed to a FFN. The neural network architecture can be seen above. Instead of returning one value for:

we return an array with each value corresponding to a possible action in the given state (for Pong we can perform 6 actions, so we return 6 values).

Figure 5: MSE loss function, often used for regression tasks.

Recall that to train a neural network we need to define a loss function that captures our goals. DQN uses the MSE loss function. For the predicted values, we use the output of our Q-network. For the true values, we use the bootstrapped values. Hence, our loss function becomes the following:

If we differentiate the loss function with respect to the weights we arrive at the following equation.

Plugging this into the stochastic gradient descent (SGD) equation, we arrive at Q-learning [4]. 

By performing SGD updates using the MSE loss function, we perform Q-learning. However, this is an approximation of Q-learning, as we don’t update on a single move but instead on a batch of moves. The expectation is simplified for expedience, though the message remains the same.

From another perspective, you can also think of the MSE loss function as nudging the predicted Q-values as close to the bootstrapped Q-values (after all this is what the MSE loss intends). This inadvertently mimics Q-learning, and slowly converges to the optimal Q-function.
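To make the pieces concrete, here is a sketch of computing the bootstrapped targets and the MSE loss on a batch in PyTorch; the tiny linear networks and 10-dimensional states are stand-ins for illustration, not the actual DQN architecture:

```python
import torch
import torch.nn as nn

batch, state_dim, n_actions = 4, 10, 6
q_net = nn.Linear(state_dim, n_actions)       # stand-in for the Q-network
target_net = nn.Linear(state_dim, n_actions)  # stand-in for the target network
gamma = 0.99

# a dummy minibatch of transitions (s, a, r, s', done)
s = torch.randn(batch, state_dim)
a = torch.randint(0, n_actions, (batch, 1))
r = torch.randn(batch, 1)
s_prime = torch.randn(batch, state_dim)
done = torch.zeros(batch, 1)

with torch.no_grad():
    # bootstrapped targets: r + gamma * max_a' Q(s', a'), zeroed for terminal states
    targets = r + gamma * target_net(s_prime).max(dim=1, keepdim=True).values * (1 - done)

pred = q_net(s).gather(1, a)  # Q-values of the actions actually taken
loss = nn.MSELoss()(pred, targets)
loss.backward()  # gradients flow only through the online network
```

The `torch.no_grad()` block matters: the targets are treated as fixed labels, so no gradients flow through the bootstrap.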

By employing a function approximator, we become subject to the conditions of supervised learning, namely that the data is i.i.d. But in the case of Atari games (or MDPs) this condition is often not upheld. Samples from the environment are sequential in nature, making them dependent on each other. Similarly, as the agent improves the value function and updates its policy, the distribution from which we sample also changes, violating the condition of sampling from an identical distribution.

To solve this the authors of DQN capitalize on the idea of an experience replay. This concept is core to keep the training of DQN stable and convergent. An experience replay is a buffer which stores the tuple (s, a, r, s’, d) where s, a, r, s’ are returned after performing an action in an MDP, and d is a boolean representing whether the episode has finished or not. The replay has a maximum capacity which is defined beforehand. It might be simpler to think of the replay as a queue or a FIFO data structure; old samples are removed to make room for new samples. The experience replay is used to sample a random batch of tuples which are then used for training.

The experience replay helps with the alleviation of two major challenges when using neural network function approximators with RL problems. The first deals with the independence of the samples. By randomly sampling a batch of moves and then using those for training we decouple the training process from the sequential nature of Atari games. Each batch may have actions from different timesteps (or even different episodes), giving a stronger semblance of independence. 

Secondly, the experience replay addresses the issue of non-stationarity. As the agent learns, changes in its behaviour are reflected in the data. This is the idea of non-stationarity; the distribution of data changes over time. By reusing samples in the replay and using a FIFO structure, we limit the adverse effects of non-stationarity on training. The distribution of the data still changes, but slowly and its effects are less impactful. Since Q-learning is an off-policy algorithm, we still end up learning the optimal policy, making this a viable solution. These changes allow for a more stable training procedure.

As a serendipitous side effect, the experience replay also allows for better data efficiency. Previously, training examples were discarded after being used for a single update step. However, through the use of an experience replay, we can reuse moves that we have made in the past for updates.

A change made in the 2015 Nature version of DQN was the introduction of a target network. Neural networks are fickle; slight changes in the weights can introduce drastic changes in the output. This is unfavourable for us, as we use the outputs of the Q-network to bootstrap our targets. If the targets are prone to large changes, it will destabilize training, which naturally we want to avoid. To alleviate this issue, the authors introduce a target network, which copies the weights of the Q-network every set amount of timesteps. By using the target network for bootstrapping, our bootstrapped targets are less unstable, making training more efficient.
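A minimal sketch of the hard update, with tiny linear layers standing in for the real Q-network and target network:

```python
import torch
import torch.nn as nn

q_net = nn.Linear(4, 2)       # stand-in for the online Q-network
target_net = nn.Linear(4, 2)  # stand-in for the target network

C = 10_000  # sync interval in timesteps
for step in range(1, 20_001):
    # ... gradient updates to q_net would happen here ...
    if step % C == 0:
        # copy the online weights into the target wholesale;
        # between syncs the bootstrap targets stay frozen
        target_net.load_state_dict(q_net.state_dict())
```

Between syncs the targets come from a fixed network, which is what damps the feedback loop of the network chasing its own moving outputs.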

Lastly, the DQN authors stack four consecutive frames after executing an action. This is done to ensure the Markov property holds [9]. A single frame omits many details of the game state, such as the velocity and direction of the ball. A stacked representation is able to overcome these obstacles, providing a holistic view of the game at any given timestep.

With this, we have covered most of the major techniques used for training a DQN agent. Let’s go over the training procedure. The procedure will be more of an overview, and we’ll iron out the details in the implementation section.

Figure 6: Training procedure to train a DQN agent.

One important clarification arises from step 2. In this step, we perform a process called ε-greedy action selection. In ε-greedy, we randomly choose an action with probability ε, and otherwise choose the best possible action (according to our learned Q-network). Choosing an appropriate ε allows for sufficient exploration of actions, which is crucial to converging to a reliable Q-function. We often start with a high ε and slowly decay this value over time.
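A linear decay schedule can be sketched as follows; the start, end and decay-step values here are illustrative:

```python
# Linear epsilon decay: fully random at first, settling at a small floor.
def epsilon_at(step, start=1.0, end=0.1, decay_steps=1_000_000):
    # fraction of the decay window still remaining, clamped at 0
    frac = max(0.0, (decay_steps - step) / decay_steps)
    return end + (start - end) * frac

epsilon_at(0)          # 1.0: fully random at the start
epsilon_at(500_000)    # 0.55: halfway through the decay
epsilon_at(2_000_000)  # 0.1: stays at the floor afterwards
```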

Implementation

If you want to follow along with my implementation of DQN then you will need the following libraries (apart from Numpy and PyTorch). I provide a concise explanation of their use.

  • Arcade Learning Environment → ALE is a framework that allows us to interact with Atari 2600 environments. Technically, we interface with ALE through gymnasium, an API for RL environments and benchmarking.
  • StableBaselines3 → SB3 is a deep reinforcement learning framework with a backend designed in PyTorch. We will only need this for some preprocessing wrappers.

Let’s import all of the necessary libraries.

import numpy as np
import time
import torch
import torch.nn as nn
import gymnasium as gym
import ale_py

from collections import deque  # FIFO queue data structure
from tqdm import tqdm  # progress bars
from gymnasium.wrappers import FrameStack
from gymnasium.wrappers.frame_stack import LazyFrames
from stable_baselines3.common.atari_wrappers import (
  AtariWrapper,
  FireResetEnv,
)

gym.register_envs(ale_py) # we need to register ALE with gym

# use cuda if you have it otherwise cpu
device = 'cuda' if torch.cuda.is_available() else 'cpu'
device

First, we construct an environment using the ALE framework. Since we are working with Pong, we create an environment with the name PongNoFrameskip-v4 using the following code:

env = gym.make('PongNoFrameskip-v4', render_mode='rgb_array')

The render_mode='rgb_array' argument tells ALE to return pixel values instead of RAM codes (which is the default). The code to interact with the Atari environment becomes extremely simple with gym. The following excerpt encapsulates most of the utilities that we will need from gym.

# this code restarts/starts an environment at the beginning of an episode
observation, _ = env.reset()
for _ in range(100):  # number of timesteps
  # randomly get an action from possible actions
  action = env.action_space.sample()
  # take a step using the given action
  # observation_prime refers to s', terminated and truncated refer to
  # whether an episode has finished or been cut short
  observation_prime, reward, terminated, truncated, _ = env.step(action)
  observation = observation_prime

With this, we are given states (we name them observations) with the shape (210, 160, 3). Hence, the states are RGB images of size 210×160. An example can be seen in Figure 2. When training our DQN agent, an image of this size adds unnecessary computational overhead. A similar observation can be made about the fact that the frames are RGB (3 channels).
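A quick back-of-the-envelope comparison of the per-frame footprint, assuming one stored value per pixel:

```python
# Values per frame before and after the preprocessing described below.
raw_values = 210 * 160 * 3   # full RGB frame
processed_values = 84 * 84   # grayscale, downsampled
reduction = raw_values / processed_values  # roughly 14x fewer values
```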

To solve this, we downsample the frame to 84×84 and transform it into grayscale. We can do this by employing a wrapper from SB3, which does this for us. Now every time we perform an action, our output will be in grayscale (with 1 channel) and of size 84×84.

env = AtariWrapper(env, terminal_on_life_loss=False, frame_skip=4)

The wrapper above does more than downsample and turn our frame into grayscale. Let’s go over some other changes the wrapper introduces.

  • Noop Reset → The start state of each Atari game is deterministic, i.e. you start at the same state each time the game restarts. With this, the agent may learn to memorize a sequence of actions from the starting state, resulting in a sub-optimal policy. To prevent this, we perform a random number of no-op actions at the beginning of each episode.
  • Frame Skipping → In the ALE environment each frame needs an action. Instead of choosing an action at each frame, we select an action and repeat it for a set number of timesteps. This is the idea of frame skipping and allows for smoother transitions.
  • Max-pooling → Due to the manner in which ALE/Atari renders its frames and the downsampling, it is possible that we encounter flickering. To solve this we take the max over two consecutive frames.
  • Terminal on Life Loss → Many Atari games do not end when the player loses a life. Consider Pong: no player wins until the score hits 21. However, by default agents might consider the loss of a life as the end of an episode, which is undesirable. Setting terminal_on_life_loss=False counteracts this, so an episode only ends when the game is truly over.
  • Clip Reward → The gradients are highly sensitive to the magnitude of the rewards. To avoid unstable updates, we clip the rewards to lie in {-1, 0, 1}.

Apart from these, we also introduce an additional frame stack wrapper (FrameStack). This performs what was discussed above, stacking 4 frames on top of each other to keep the states Markovian. The ALE environment returns LazyFrames, which are designed to be more memory efficient, as the same frame might occur multiple times. However, they are not compatible with many of the operations that we perform throughout the training procedure. To convert LazyFrames into usable objects, we apply a custom wrapper which converts an observation to Numpy before returning it to us. The code is shown below.

class LazyFramesToNumpyWrapper(gym.ObservationWrapper): # subclass obswrapper
  def __init__(self, env):
      super().__init__(env)
      self.env = env # the environment that we want to convert

  def observation(self, observation):
      # if its a LazyFrames object then turn it into a numpy array
      if isinstance(observation, LazyFrames):
          return np.array(observation)
      return observation

Let’s combine all of the wrappers into one function that returns an environment that does all of the above.

def make_env(game, render='rgb_array'):
  env = gym.make(game, render_mode=render)
  env = AtariWrapper(env, terminal_on_life_loss=False, frame_skip=4)
  env = FrameStack(env, num_stack=4)
  env = LazyFramesToNumpyWrapper(env)
  # sometimes an environment requires that the fire button be
  # pressed to start the game; this makes sure the game is started when needed
  if "FIRE" in env.unwrapped.get_action_meanings():
      env = FireResetEnv(env)
  return env

These changes are derived from the 2015 Nature paper and help to stabilize training [3]. The interfacing with gym remains the same as shown above. An example of the preprocessed states can be seen in Figure 7.

Figure 7: Preprocessed successive Atari frames; each frame is preprocessed by turning the image from RGB to grayscale, and downsampling the size of the image from 210×160 pixels to 84×84 pixels.

Now that we have an appropriate environment let’s move on to create the replay buffer.

class ReplayBuffer:

  def __init__(self, capacity, device):
      self.capacity = capacity
      self._buffer =  np.zeros((capacity,), dtype=object) # stores the tuples
      self._position = 0 # keep track of where we are
      self._size = 0
      self.device = device

  def store(self, experience):
      """Adds a new experience to the buffer,
        overwriting old entries when full."""
      idx = self._position % self.capacity # get the index to replace
      self._buffer[idx] = experience
      self._position += 1
      self._size = min(self._size + 1, self.capacity) # max size is the capacity

  def sample(self, batch_size):
      """ Sample a batch of tuples and load it onto the device
      """
      # if the buffer is not yet full, only sample from the filled portion
      buffer = self._buffer[:self._size]
      # minibatch of tuples
      batch = np.random.choice(buffer, size=[batch_size], replace=True)

      # we need to return the objects as torch tensors, hence we delegate
      # this task to the transform function
      return (
          self.transform(batch, 0, shape=(batch_size, 4, 84, 84), dtype=torch.float32),
          self.transform(batch, 1, shape=(batch_size, 1), dtype=torch.int64),
          self.transform(batch, 2, shape=(batch_size, 1), dtype=torch.float32),
          self.transform(batch, 3, shape=(batch_size, 4, 84, 84), dtype=torch.float32),
          self.transform(batch, 4, shape=(batch_size, 1), dtype=torch.bool)
      )
     
  def transform(self, batch, index, shape, dtype):
      """ Transform a passed batch into a torch tensor for a given axis.
      E.g. if index 0 of a tuple means the state then we return all states
      as a torch tensor. We also return a specified shape.
      """
      # reshape the tensors as needed
      batched_values = np.array([val[index] for val in batch]).reshape(shape)
      # convert to torch tensors
      batched_values = torch.as_tensor(batched_values, dtype=dtype, device=self.device)
      return batched_values

  # below are some magic methods I used for debugging, not very important
  # they just turn the object into an arraylike object
  def __len__(self):
      return self._size

  def __getitem__(self, index):
      return self._buffer[index]

  def __setitem__(self, index, value: tuple):
      self._buffer[index] = value

The replay buffer works by allocating space in the memory for the given capacity. We maintain a pointer that keeps track of the number of objects added. Every time a new tuple is added we replace the oldest tuples with the new ones. To sample a minibatch, we first randomly sample a minibatch in numpy and then convert it into torch tensors, also loading it to the appropriate device.

Some of the aspects of the replay buffer are inspired by [8]. The replay buffer proved to be the biggest bottleneck in training the agent, and thus small speed-ups in the code proved to be monumentally important. An alternative strategy, which uses a deque object to hold the tuples, can also be used. If you are creating your own buffer, I would emphasize spending a little more time to ensure its efficiency.
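A minimal sketch of such a deque-based buffer (simpler to write, though indexing into a large deque makes sampling slower than the preallocated array above; the class name and tuple layout are mine):

```python
import random
from collections import deque

class DequeReplay:
    """Toy replay buffer backed by a deque; old entries drop automatically."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def store(self, experience):
        # appending past maxlen silently evicts the oldest tuple
        self.buffer.append(experience)

    def sample(self, batch_size):
        # uniform sampling without replacement
        return random.sample(self.buffer, batch_size)

replay = DequeReplay(capacity=3)
for i in range(5):
    replay.store((i, 0, 0.0, i + 1, False))
# capacity 3 keeps only the newest tuples: states 2, 3, 4
```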

We can now use this to create a function that creates a buffer and preloads a given number of tuples with a random policy.

def load_buffer(preload, capacity, game, *, device):
  # make the environment
  env = make_env(game)
  # create the buffer
  buffer = ReplayBuffer(capacity,device=device)
 
  # start the environment
  observation, _ = env.reset()
  # run for as long as the specified preload
  for _ in tqdm(range(preload)):
      # sample random action -> random policy 
      action = env.action_space.sample()
   
      observation_prime, reward, terminated, truncated, _ = env.step(action)
     
      # store the results from the action as a python tuple object
      buffer.store((
          observation.squeeze(), # squeeze will remove the unnecessary grayscale channel
          action,
          reward,
          observation_prime.squeeze(),
          terminated or truncated))
      # set old observation to be new observation_prime
      observation = observation_prime
     
      # if the episode is done, then restart the environment
      done = terminated or truncated
      if done:
          observation, _ = env.reset()
 
  # return the env AND the loaded buffer
  return buffer, env

The function is quite straightforward: we create a buffer and environment object and then preload the buffer using a random policy. Note that we squeeze the observations to remove the redundant color channel. Let’s move on to the next step and define the function approximator.

class DQN(nn.Module):

  def __init__(
      self,
      env,
      in_channels = 4, # number of stacked frames
      hidden_filters = [16, 32],
      start_epsilon = 0.99, # starting epsilon for epsilon-decay
      max_decay = 0.1, # end epsilon-decay
      decay_steps = 1000, # how long to reach max_decay
      *args,
      **kwargs
  ) -> None:
      super().__init__(*args, **kwargs)
     
      # instantiate instance vars
      self.start_epsilon = start_epsilon
      self.epsilon = start_epsilon
      self.max_decay = max_decay
      self.decay_steps = decay_steps
      self.env = env
      self.num_actions = env.action_space.n
   
      # nn.Sequential chains the layers so that the whole
      # forward pass can be performed in one call
      self.layers = nn.Sequential(
          nn.Conv2d(in_channels, hidden_filters[0], kernel_size=8, stride=4),
          nn.ReLU(),
          nn.Conv2d(hidden_filters[0], hidden_filters[1], kernel_size=4, stride=2),
          nn.ReLU(),
          nn.Flatten(start_dim=1),
          nn.Linear(hidden_filters[1] * 9 * 9, 512), # the final value is calculated by using the equation for CNNs
          nn.ReLU(),
          nn.Linear(512, self.num_actions)
      )
       
      # initialize weights using he initialization
      # (pytorch already does this for conv layers but not linear layers)
      # this is not necessary and nothing you need to worry about
      self.apply(self._init)

  def forward(self, x):
      """ Forward pass. """
      # the /255.0 performs normalization of pixel values to be in [0.0, 1.0]
      return self.layers(x / 255.0)

  def epsilon_greedy(self, state, dim=1):
      """Epsilon greedy. Randomly select value with prob e,
        else choose greedy action"""

      rng = np.random.random() # get random value between [0, 1]
     
      if rng < self.epsilon: # for prob under e
          # random sample and return as torch tensor
          action = self.env.action_space.sample()
          action = torch.tensor(action)
      else:
          # use torch no grad to make sure no gradients are accumulated for this
          # forward pass
          with torch.no_grad():
              q_values = self(state)
          # choose best action
          action = torch.argmax(q_values, dim=dim)

      return action
 
  def epsilon_decay(self, step):
      # linearly decrease epsilon
      self.epsilon = self.max_decay + (self.start_epsilon - self.max_decay) * max(0, (self.decay_steps - step) / self.decay_steps)
 
  def _init(self, m):
    # initialize layers using he init
    if isinstance(m, (nn.Linear, nn.Conv2d)):
      nn.init.kaiming_normal_(m.weight, nonlinearity='relu')
      if m.bias is not None:
        nn.init.zeros_(m.bias)

That covers the model architecture. I used a linear ε-decay scheme, but feel free to try another. We can also create an auxiliary class that keeps track of important metrics. The class keeps track of rewards received for the last few episodes along with the respective lengths of said episodes.

class MetricTracker:
  def __init__(self, window_size=100):
      # the size of the history we use to track stats
      self.window_size = window_size
      self.rewards = deque(maxlen=window_size)
      self.current_episode_reward = 0
     
  def add_step_reward(self, reward):
      # add received reward to the current reward
      self.current_episode_reward += reward
     
  def end_episode(self):
      # add reward for episode to history
      self.rewards.append(self.current_episode_reward)
      # reset metrics
      self.current_episode_reward = 0
 
  # property just makes it so that we can return this value without
  # having to call it as a function
  @property
  def avg_reward(self):
      return np.mean(self.rewards) if self.rewards else 0

Great! Now we have everything we need to start training our agent. Let’s define the training function and go over how it works. Before that, we need to create the necessary objects to pass into our training function along with some hyperparameters. A small note: in the paper the authors use RMSProp, but instead we’ll use Adam. Adam proved to work for me with the given parameters, but you are welcome to try RMSProp or other variations.

TIMESTEPS = 6000000 # total number of timesteps for training
LR = 2.5e-4 # learning rate
BATCH_SIZE = 64 # batch size, change based on your hardware
C = 10000 # the interval at which we update the target network
GAMMA = 0.99 # the discount value
TRAIN_FREQ = 4 # in the paper the SGD updates are made every 4 actions
DECAY_START = 0 # when to start e-decay
FINAL_ANNEAL = 1000000 # when to stop e-decay

# load the buffer
buffer_pong, env_pong = load_buffer(50000, 150000, game='PongNoFrameskip-v4')

# create the networks, push the weights of the q_network onto the target network
q_network_pong = DQN(env_pong, decay_steps=FINAL_ANNEAL).to(device)
target_network_pong = DQN(env_pong, decay_steps=FINAL_ANNEAL).to(device)
target_network_pong.load_state_dict(q_network_pong.state_dict())

# create the optimizer
optimizer_pong = torch.optim.Adam(q_network_pong.parameters(), lr=LR)

# metrics class instantiation
metrics = MetricTracker()
def train(
  env,
  name, # name of the agent, used to save the agent
  q_network,
  target_network,
  optimizer,
  timesteps,
  replay, # passed buffer
  metrics, # metrics class
  train_freq, # this parameter works complementary to frame skipping
  batch_size,
  gamma, # discount parameter
  decay_start,
  C,
  save_step=850000, # I recommend setting this one high or else a lot of models will be saved
):
  loss_func = nn.MSELoss() # create the loss object
  start_time = time.time() # to check speed of the training procedure
  episode_count = 0
  best_avg_reward = -float('inf')
 
  # reset the env
  obs, _ = env.reset()
 
 
  for step in range(1, timesteps+1): # start from 1 just for printing progress

      # we need to pass tensors of size (batch_size, ...) to torch
      # but the observation is just one so it doesn't have that dim
      # so we add it artificially (step 2 in procedure)
      batched_obs = np.expand_dims(obs.squeeze(), axis=0)
      # perform e-greedy on the observation and convert the tensor into numpy and send it to the cpu
      action = q_network.epsilon_greedy(torch.as_tensor(batched_obs, dtype=torch.float32, device=device)).cpu().item()
     
      # take an action
      obs_prime, reward, terminated, truncated, _ = env.step(action)

      # store the tuple (step 3 in the procedure)
      replay.store((obs.squeeze(), action, reward, obs_prime.squeeze(), terminated or truncated))
      metrics.add_step_reward(reward)
      obs = obs_prime
     
      # train every 4 steps as per the paper
      if step % train_freq == 0:
          # sample tuples from the replay (step 4 in the procedure)
          observations, actions, rewards, observation_primes, dones = replay.sample(batch_size)
         
          # we don't want to accumulate gradients for this operation so use no_grad
          # we don't want to accumulate gradients for this operation, so use no_grad
          with torch.no_grad():
              q_values_minus = target_network(observation_primes)
              # get the max over the target network's Q-values
              bootstrapped_values = torch.amax(q_values_minus, dim=1, keepdim=True)

          # for every sample in the minibatch where the episode is done,
          # the target is just the reward; otherwise it is the
          # bootstrapped reward (step 5 in the procedure)
          y_trues = torch.where(dones, rewards, rewards + gamma * bootstrapped_values)
          y_preds = q_network(observations)
         
          # compute the loss
          # the gather gets the values of the q_network corresponding to the
          # action taken
          loss = loss_func(y_preds.gather(1, actions), y_trues)
           
          # set the grads to 0, and perform the backward pass (step 6 in the procedure)
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()
     
      # start the e-decay
      if step > decay_start:
          q_network.epsilon_decay(step)
          target_network.epsilon_decay(step)
     
      # if the episode is finished then we print some metrics
      if terminated or truncated:
          # compute steps per sec
          elapsed_time = time.time() - start_time
          steps_per_sec = step / elapsed_time
          metrics.end_episode()
          episode_count += 1
         
          # reset the environment
          obs, _ = env.reset()
         
          # save a model if above save_step and if the average reward has improved
          # this is kind of like early-stopping, but we don't stop we just save a model
          if metrics.avg_reward > best_avg_reward and step > save_step:
              best_avg_reward = metrics.avg_reward
              torch.save({
                  'step': step,
                  'model_state_dict': q_network.state_dict(),
                  'optimizer_state_dict': optimizer.state_dict(),
                  'avg_reward': metrics.avg_reward,
              }, f"models/{name}_dqn_best_{step}.pth")

          # print some metrics
          print(f"\rStep: {step:,}/{timesteps:,} | "
                  f"Episodes: {episode_count} | "
                  f"Avg Reward: {metrics.avg_reward:.1f} | "
                  f"Epsilon: {q_network.epsilon:.3f} | "
                  f"Steps/sec: {steps_per_sec:.1f}", end="\r")

      # update the target network
      if step % C == 0:
          target_network.load_state_dict(q_network.state_dict())

The training procedure closely follows Figure 6 and the algorithm described in the paper [4]. We first create the necessary objects, such as the loss function, and reset the environment. Then we start the training loop by using the Q-network to give us an action based on the ε-greedy policy. We simulate the environment one step forward using that action and push the resultant tuple onto the replay buffer. If the update-frequency condition is met, we proceed with a training step. The motivation behind the update frequency is something I am not 100% confident about; the best explanation I can offer is computational efficiency: training every 4 steps instead of every step significantly speeds up the algorithm and still seems to work well. In the update step itself, we sample a minibatch of tuples and run the model forward to produce predicted Q-values. We then create the target values (the bootstrapped true labels) using the piecewise function in step 5 of Figure 6. Performing an SGD step is straightforward from this point, since we can rely on autograd to compute the gradients and the optimizer to update the parameters.
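
To make step 5 concrete, here is a small NumPy sketch of the target computation and of what the `gather` call does. NumPy stands in for the torch operations purely for illustration, and all Q-values below are made up:

```python
import numpy as np

gamma = 0.99
# toy minibatch of 3 transitions (values are made up for illustration)
rewards = np.array([1.0, 0.0, -1.0])
dones   = np.array([False, False, True])   # the last transition ended its episode
# target network's Q-values for the next states, 2 actions each
q_next  = np.array([[0.5, 2.0],
                    [1.5, 1.0],
                    [0.3, 0.4]])

# bootstrap with the max over the target network,
# but fall back to the bare reward on terminal transitions
bootstrapped = q_next.max(axis=1)          # [2.0, 1.5, 0.4]
y_true = np.where(dones, rewards, rewards + gamma * bootstrapped)
print(y_true)  # rewards get bootstrapped, except the terminal one

# gather(1, actions) picks the Q-value of the action actually taken;
# the NumPy equivalent is fancy indexing along the action axis
q_pred  = np.array([[0.9, 1.1],
                    [0.2, 0.7],
                    [0.0, 0.5]])
actions = np.array([1, 0, 1])
print(q_pred[np.arange(3), actions])  # predicted Q of the taken actions
```

Only the first two transitions get `reward + γ·max Q`; the terminal transition's target is just its reward, exactly as in the piecewise function.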

If you followed along until now, you can use the following test function to test your saved model.

def test(game, model, num_eps=2):
  # render human opens an instance of the game so you can see it
  env_test = make_env(game, render='human')
 
  # load the model
  q_network_trained = DQN(env_test)
  q_network_trained.load_state_dict(torch.load(model, weights_only=False)['model_state_dict'])
  q_network_trained.eval() # set the model to inference mode (no gradients etc)
  q_network_trained.epsilon = 0.05 # a small amount of stochasticity
 
 
  rewards_list = []
 
  # run for set amount of episodes
  for episode in range(num_eps):
      print(f'Episode {episode}', end='\r', flush=True)
     
      # reset the env
      obs, _ = env_test.reset()
      done = False
      total_reward = 0
     
      # until the episode is not done, perform the action from the q-network
      while not done:
          batched_obs = np.expand_dims(obs.squeeze(), axis=0)
          action = q_network_trained.epsilon_greedy(torch.as_tensor(batched_obs, dtype=torch.float32)).cpu().item()
             
          next_observation, reward, terminated, truncated, _ = env_test.step(action)
          total_reward += reward
          obs = next_observation

          done = terminated or truncated
         
      rewards_list.append(total_reward)
 
  # close the environment, since we use render human
  env_test.close()
  print(f'Average episode reward achieved: {np.mean(rewards_list)}')

Here’s how you can use it:

# make sure you use your latest model! I also renamed my model path so
# take that into account
test('PongNoFrameskip-v4', 'models/pong_dqn_best_6M.pth')

That’s everything for the code! You can see a trained agent below in Figure 8. It behaves quite similarly to how a human might play Pong, and is able to consistently beat the built-in AI on the easiest difficulty. This naturally invites the question: how well does it perform on higher difficulties? Try it out using your own agent or my trained one!

Figure 8: DQN agent playing Pong.

An additional agent was trained on Breakout as well; it can be seen in Figure 9. Once again, I used the default mode and difficulty. It might be interesting to see how well it performs in other modes or difficulties.

Figure 9: DQN agent playing Breakout.

Summary

DQN addresses the problem of training agents to play Atari games. By using a function approximator (a deep neural network), experience replay, and a target network, we can train an agent that matches or even surpasses human performance in Atari games [3]. Deep-RL agents can be finicky, and you might have noticed that we use a lot of techniques to ensure that training is stable. If things go wrong with your implementation, it might not hurt to look at the details again.

If you want to check out the code for my implementation you can use this link. The repo also contains code to train your own model on the game of your choice (as long as it’s in ALE), as well as the trained weights for both Pong and Breakout.

I hope this was a helpful introduction to training DQN agents. To take things to the next level, maybe you can try tweaking details to beat the higher difficulties. If you want to look further, there are many extensions to DQN you can explore, such as Dueling DQNs and Prioritized Experience Replay.
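
As a taste of one such extension: Dueling DQNs split the network head into a state-value stream and an advantage stream, then recombine them into Q-values. A minimal NumPy sketch of that aggregation, with made-up values (this is just the recombination formula, not a full network):

```python
import numpy as np

# Dueling aggregation: the network outputs a state value V(s) and
# per-action advantages A(s, a); Q is recombined as
#   Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
# (subtracting the mean keeps the V/A decomposition identifiable)
V = np.array([[1.0], [0.5]])             # shape (batch, 1), made-up values
A = np.array([[2.0, 0.0], [1.0, 3.0]])   # shape (batch, n_actions)
Q = V + A - A.mean(axis=1, keepdims=True)
print(Q)  # per-action Q-values recombined from the two streams
```

In a real Dueling DQN, `V` and `A` come from two separate linear heads on top of the shared convolutional trunk, and this recombination is done inside the forward pass.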

References

[1] A. L. Samuel, “Some Studies in Machine Learning Using the Game of Checkers,” IBM Journal of Research and Development, vol. 3, no. 3, pp. 210–229, 1959. doi:10.1147/rd.33.0210.

[2] Sammut, Claude; Webb, Geoffrey I., eds. (2010), “TD-Gammon”, Encyclopedia of Machine Learning, Boston, MA: Springer US, pp. 955–956, doi:10.1007/978-0-387-30164-8_813, ISBN 978-0-387-30164-8, retrieved 2023-12-25.

[3] Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, … and Demis Hassabis. “Human-Level Control through Deep Reinforcement Learning.” Nature 518, no. 7540 (2015): 529–533. https://doi.org/10.1038/nature14236

[4] Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, … and Demis Hassabis. “Playing Atari with Deep Reinforcement Learning.” arXiv preprint arXiv:1312.5602 (2013). https://arxiv.org/abs/1312.5602

[5] Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. 2nd ed., MIT Press, 2018.

[6] Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. 4th ed., Pearson, 2020.

[7] Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.

[8] Bailey, Jay. Deep Q-Networks Explained. 13 Sept. 2022, www.lesswrong.com/posts/kyvCNgx9oAwJCuevo/deep-q-networks-explained.

[9] Hausknecht, M., & Stone, P. (2015). Deep recurrent Q-learning for partially observable MDPs. arXiv preprint arXiv:1507.06527. https://arxiv.org/abs/1507.06527

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

AI-driven network management gains enterprise trust

The way the full process works is that the raw data feed comes in, and machine learning is used to identify an anomaly that could be a possible incident. That’s where the generative AI agents step up. In addition to the history of similar issues, the agents also look for

Read More »

Chinese cyberspies target VMware vSphere for long-term persistence

Designed to work in virtualized environments The CISA, NSA, and Canadian Cyber Center analysts note that some of the BRICKSTORM samples are virtualization-aware and they create a virtual socket (VSOCK) interface that enables inter-VM communication and data exfiltration. The malware also checks the environment upon execution to ensure it’s running

Read More »

IBM boosts DNS protection for multicloud operations

“In addition to this DNS synchronization, you can publish DNS configurations to your Amazon Simple Storage Service (S3) bucket. As you implement DNS changes, the S3 bucket will automatically update. The ability to store multiple configurations in your S3 bucket allows you to choose the most appropriate restore point if

Read More »

North America Adds Rigs Week on Week

North America added eight rigs week on week, according to Baker Hughes’ latest North America rotary rig count, which was published on December 5. The total U.S. rig count increased by five week on week and the total Canada rig count rose by three during the same period, taking the total North America rig count up to 740, comprising 549 rigs from the U.S. and 191 rigs from Canada, the count outlined. Of the total U.S. rig count of 549, 527 rigs are categorized as land rigs, 19 are categorized as offshore rigs, and three are categorized as inland water rigs. The total U.S. rig count is made up of 413 oil rigs, 129 gas rigs, and seven miscellaneous rigs, according to Baker Hughes’ count, which revealed that the U.S. total comprises 476 horizontal rigs, 58 directional rigs, and 15 vertical rigs. Week on week, the U.S. land rig count rose by three, and its offshore and inland water rig counts each increased by one, Baker Hughes highlighted. The U.S. oil rig count rose by six week on week, its gas rig count dropped by one by week on week, and its miscellaneous rig count remained unchanged week on week, the count showed. The U.S. horizontal rig count rose by one, its directional rig count remained unchanged, and its vertical rig count increased by four, week on week, the count revealed. A major state variances subcategory included in the rig count showed that, week on week, Louisiana added four rigs and New Mexico added one rig. A major basin variances subcategory included in Baker Hughes’ rig count showed that, week on week, the Eagle Ford basin dropped one rig. Canada’s total rig count of 191 is made up of 126 oil rigs and 65 gas rigs, Baker Hughes pointed out.

Read More »

Trump Admin Backs Potential American Buy of Lukoil Iraq Field

The US government is backing Iraq’s plan to transfer Lukoil PJSC’s stake in a giant oil field to an American company, days before a sanctions waiver on the Russian firm is set to expire. Iraq’s Oil Ministry last week said it’s approaching US companies to take over the majority holding in West Qurna 2, which pumps about 10 percent of the country’s crude. The Trump administration’s preference is for the Russian firm’s global assets to be taken over by a US entity, people familiar with the matter said last month. The ministry didn’t name any companies, but US firms including Exxon Mobil Corp. and Chevron Corp. have emerged as potential suitors for Lukoil’s assets. For West Qurna-2, Iraq would prefer Exxon, which had previously operated the neighboring West Qurna 1 oil field, one person said, asking not to be identified because the information is private. Exxon recently returned to Iraq after a two-year absence, signing an initial agreement in October that could pave the way for developing the Majnoon field in the country’s south. Chevron is in discussions to enter Iraq, Chief Executive Officer Mike Wirth said at the company’s Nov. 12 investor day. The company’s officials met with Iraq’s oil minister in Baghdad this week, according to a Iraqi statement. “We are encouraged by the Iraqi Ministry of Oil’s initial agreements with Exxon and Chevron, the recent commitment to transition West Qurna-2 to a US operator,” a State Department spokesperson said in answer to questions from Bloomberg. “The United States will continue to champion the interests of American companies in Iraq.” Exxon and Chevron declined to comment. A call to Lukoil’s press service went unanswered and the company didn’t respond to an email sent outside of normal business hours in Moscow on Monday. Iraq, the second-largest producer in the Organization of

Read More »

Noble to Sell 6 Jackups, Become Pureplay Deepwater Driller

Noble Corp said Monday it had signed separate deals to sell five jackup rigs to Borr Drilling Ltd for $360 million and one jackup to Ocean Oilfield Drilling for $64 million. After the completion of the transactions, expected next year, “Noble will be a pureplay deepwater and ultra-harsh environment jackup operator”, the offshore driller said in an online statement. Borr will acquire Noble Resilient (built 2009), Noble Resolute (built 2009), Noble Mick O’Brien (built 2013), Noble Regina Allen (built 2013) and Noble Tom Prosser (built 2014). The purchase price consists of $210 million in cash and $150 million in seller notes. “The $150 million in proposed seller notes to Borr are expected to have a six-year maturity and be secured by a first lien on three jackups (Noble Tom Prosser, Noble Regina Allen and Noble Resilient)”, Noble said. “Additionally, Noble intends to operate two rigs – Noble Mick O’Brien and Noble Resolute – under a bareboat charter agreement with Borr for one year from signing of the definitive agreement”, it said. Meanwhile Ocean Oilfield Drilling will buy Noble Resolve, built 2009, after the rig’s ongoing contract ends. Noble Resolve will be freed in the first quarter of 2026, Nobel says on its online fleet inventory. The rig is currently deployed in Spain for an unnamed operator, according to Noble’s latest fleet status report, published October 27. Ocean Oilfield Drilling will pay in cash. “These transactions are expected to be immediately accretive to our shareholders based on both trailing 2025 and anticipated 2026 EBITDA and free cash flow, while also bolstering our balance sheet and sharpening the focus on our established positions in the deepwater and ultra-harsh jackup segments”, said president and chief executive Robert W. Eifler. In its quarterly report October 27, Noble said the Noble Globetrotter II drillship, built 2013, was also being sold. During the third

Read More »

Equinor Scores 2 Gas, Condensate Discoveries in Sleipner

Equinor ASA and its partners have achieved two new natural gas and condensate discoveries in the Sleipner area on Norway’s side of the North Sea. Preliminary estimates for Lofn (well 15/5-8 S) and Langemann (15/5-8 A), in production license 1140, indicate 5-18 million standard cubic meters oil-equivalent recoverable resources, or 30-110 million barrels, according to the Norwegian majority state-owned company. “These are Equinor’s largest discoveries so far this year and can be developed for the European market through existing infrastructure”, it said in an online statement. The discoveries sit between the Gudrun and Eirin fields and about 40 kilometers (24.85 miles) northwest of the Sleipner A processing, drilling and living quarters platform, according to Equinor. The platform is one of several installations serving the Sleipner gas and condensate fields Sleipner East (which started production 1993), Gungne (started up 1996) and Sleipner West (also put onstream 1996). Sleipner infrastructure also serves tie-in fields Sigyn (online since 2002), Volve (started up 2008), Gudrun (started up 2014) and Gina Krog (started up 2017). Lofn and Langemann encountered gas and condensate in the Hugin Formation, which consists of sandstones with “good reservoir properties”, Equinor said. “The discoveries reduce uncertainty in several nearby prospects, which will now be further evaluated”, it said. Kjetil Hove, executive vice president for Norwegian exploration and production at Equinor, said, “This demonstrates the importance of maintaining exploration activity on the Norwegian continental shelf. There are still significant energy resources on the shelf, and Europe needs stable oil and gas deliveries”. “Discoveries near existing fields can be developed quickly through subsea facilities, with limited environmental impact, very low CO2 emissions from production and strong profitability”, Hove said. 
“Equinor plans to accelerate such developments on the Norwegian continental shelf”. Karl Johnny Hersvik, chief executive of license co-owner Aker BP ASA, said separately the

Read More »

Crude Settles Lower

Oil eased by the most in almost three weeks as traders monitored India’s buying of Russian crude and refined products markets slumped, leading the energy complex lower. West Texas Intermediate futures fell 2% to settle near $59 a barrel, weighed down by losses in US equities, and have now been trading in a range of less than $4 since the start of November. Russian President Vladimir Putin last week promised “uninterrupted shipments” of fuel to India even as Moscow faces steeper sanctions over its war in Ukraine. The shipments will likely be a key point for discussions as US negotiators arrive in the South Asian nation for trade talks. “Oversupply concerns will eventually be realized, especially as Russian oil and refined product flows eventually circumvent existing sanctions,” said Vivek Dhar, an analyst with Commonwealth Bank of Australia. That will see Brent futures fall toward $60 a barrel through 2026, he said. Among products, gasoline futures dropped 2% in New York, after hitting the lowest level since May 2021 last week. Diesel prices also weakened in a drag on energy commodities across the board. The focus on Moscow’s flows comes as a potential peace deal between Ukraine and Russia also remained in focus. US President Donald Trump said he was disappointed in Ukrainian President Volodymyr Zelenskiy’s handling of a US proposal to end the nearly four-year-old war. Those tensions will be weighed against glut concerns, with higher supply from OPEC+ and producers outside the group — including the US, Brazil and Guyana — set to overwhelm tepid demand growth. The US’s Energy Information Administration, the International Energy Agency and the Organization of the Petroleum Exporting Countries will publish monthly market outlooks this week that may provide further insights. Both WTI and Brent remain on their longest runs below their 100-day moving

Read More »

Energy Department Announces $11 Million in Awards to Develop HALEU Transportation Packages

IDAHO FALLS, ID. —The U.S. Department of Energy (DOE) today announced $11 million in awards to five U.S. companies to develop and license new or modified transportation packages for high-assay low-enriched uranium (HALEU). The announcement was made during U.S. Secretary of Energy Chris Wright’s visit to Idaho National Laboratory (INL), marking the final stop in his ongoing tour of all 17 DOE National Laboratories. These selections advance President Trump’s recent executive orders and commitment to rebuild the Nation’s nuclear fuel cycle, strengthen domestic enrichment and fabrication capabilities, and accelerate the deployment of advanced reactors to usher in a new American nuclear renaissance. “From critical minerals to nuclear fuel, the Trump administration is fully committed to restoring the supply chains needed to secure America’s future,” said Secretary Wright. “Thanks to President Trump, the Energy Department is operating at record speeds to unleash the next American Nuclear Renaissance and to deliver more affordable, reliable, and secure energy for American families and businesses.” DOE’s $11 million in awards will support industry-led efforts to design, modify, and license transportation packages through the U.S. Nuclear Regulatory Commission (NRC). These investments will help establish long-term, economical HALEU transport capabilities that better serve domestic reactor developers and strengthen the U.S. nuclear supply chain. 
The following companies were selected to develop long-term economic solutions for the safe transport of HALEU through two topic areas: Topic Area 1: Develop new package designs that can be licensed by the NRC NAC International Westinghouse Electric Company Container Technologies Industries, LLC American Centrifuge Operating Paragon D&E Topic Area 2: Modify existing design packages for NRC certification NAC International Projects under Topic Area 1 will have performance periods of up to three years; the Topic Area 2 project will have a performance period of up to two years. Funding is provided through DOE’s

Read More »

What does Arm need to do to gain enterprise acceptance?

But in 2017, AMD released the Zen architecture, which was equal if not superior to the Intel architecture. Zen made AMD competitive, and it fueled an explosive rebirth for a company that was near death a few years prior. AMD now has about 30% market share, while Intel suffers from a loss of technology as well as corporate leadership. Now, customers have a choice of Intel or AMD, and they don’t have to worry about porting their applications to a new platform like they would have to do if they switched to Arm. Analysts weigh in on Arm Tim Crawford sees no demand for Arm in the data center. Crawford is president of AVOA, a CIO consultancy. In his role, he talks to IT professionals all the time, but he’s not hearing much interest in Arm. “I don’t see Arm really making a dent, ever, into the general-purpose processor space,” Crawford said. “I think the opportunity for Arm is special applications and special silicon. If you look at the major cloud providers, their custom silicon is specifically built to do training or optimized to do inference. Arm is kind of in the same situation in the sense that it has to be optimized.” “The problem [for Arm] is that there’s not necessarily a need to fulfill at this point in time,” said Rob Enderle, principal analyst with The Enderle Group. “Obviously, there’s always room for other solutions, but Arm is still going to face the challenge of software compatibility.” And therein lies what may be Arm’s greatest challenge: software compatibility. Software doesn’t care (usually) if it’s on Intel or AMD, because both use the x86 architecture, with some differences in extensions. But Arm is a whole new platform, and that requires porting and testing. Enterprises generally don’t like disruption —

Read More »

Intel decides to keep networking business after all

That doesn’t explain why Intel made the decision to pursue spin-off in the first place. In July, NEX chief Sachin Katti issued a memo that outlined plans to establish key elements of the Networking and Communications business as a stand-alone company. It looked like a done deal, experts said. Jim Hines, research director for enabling technologies and semiconductors at IDC, declined to speculate on whether Intel could get a decent offer but noted NEX is losing ground. IDC estimates Intel’s market share in overall semiconductors at 6.8% in Q3 2025, which is down from 7.4% for the full year 2024 and 9.2% for the full year 2023. Intel’s course reversal “is a positive for Intel in the long term, and recent improvements in its financial situation may have contributed to the decision to keep NEX in house,” he said. When Tan took over as CEO earlier this year, prioritized strengthening the balance sheet and bringing a greater focus on execution. Divest NEX was aligned with these priorities, but since then, Intel has secured investments from the US Government, Nvidia and SoftBank that have reduced the need to raise cash through other means, Hines notes. “The NEX business will prove to be a strategic asset for Intel as it looks to protect and expand its position in the AI datacenter market. Success in this market now requires processor suppliers to offer a full-stack solution, not just silicon. Scale-up and scale-out networking solutions are a key piece of the package, and Intel will be able to leverage its NEX technologies and software, including silicon photonics, to develop differentiated product offerings in this space,” Hines said.

Read More »

At the Crossroads of AI and the Edge: Inside 1623 Farnam’s Rising Role as a Midwest Interconnection Powerhouse

That was the thread that carried through our recent conversation for the DCF Show podcast, where Severn walked through the role Farnam now plays in AI-driven networking, multi-cloud connectivity, and the resurgence of regional interconnection as a core part of U.S. digital infrastructure. Aggregation, Not Proximity: The Practical Edge Severn is clear-eyed about what makes the edge work and what doesn’t. The idea that real content delivery could aggregate at the base of cell towers, he noted, has never been realistic. The traffic simply isn’t there. Content goes where the network already concentrates, and the network concentrates where carriers, broadband providers, cloud onramps, and CDNs have amassed critical mass. In Farnam’s case, that density has grown steadily since the building changed hands in 2018. At the time an “underappreciated asset,” the facility has since become a meeting point for more than 40 broadband providers and over 60 carriers, with major content operators and hyperscale platforms routing traffic directly through its MMRs. That aggregation effect feeds on itself; as more carrier and content traffic converges, more participants anchor themselves to the hub, increasing its gravitational pull. Geography only reinforces that position. Located on the 41st parallel, the building sits at the historical shortest-distance path for early transcontinental fiber routes. It also lies at the crossroads of major east–west and north–south paths that have made Omaha a natural meeting point for backhaul routes and hyperscale expansions across the Midwest. AI and the New Interconnection Economy Perhaps the clearest sign of Farnam’s changing role is the sheer volume of fiber entering the building. More than 5,000 new strands are being brought into the property, with another 5,000 strands being added internally within the Meet-Me Rooms in 2025 alone. These are not incremental upgrades—they are hyperscale-grade expansions driven by the demands of AI traffic,

Read More »

Schneider Electric’s $2.3 Billion in AI Power and Cooling Deals Sends Message to Data Center Sector

When Schneider Electric emerged from its 2025 North American Innovation Summit in Las Vegas last week with nearly $2.3 billion in fresh U.S. data center commitments, it didn’t just notch a big sales win. It arguably put a stake in the ground about who controls the AI power-and-cooling stack over the rest of this decade. Within a single news cycle, Schneider announced: Together, the deals total about $2.27 billion in U.S. data center infrastructure, a number Schneider confirmed in background with multiple outlets and which Reuters highlighted as a bellwether for AI-driven demand.  For the AI data center ecosystem, these contracts function like early-stage fuel supply deals for the power and cooling systems that underpin the “AI factory.” Supply Capacity Agreements: Locking in the AI Supply Chain Significantly, both deals are structured as supply capacity agreements, not traditional one-off equipment purchase orders. Under the SCA model, Schneider is committing dedicated manufacturing lines and inventory to these customers, guaranteeing output of power and cooling systems over a multi-year horizon. In return, Switch and Digital Realty are providing Schneider with forecastable volume and visibility at the scale of gigawatt-class campus build-outs.  A Schneider spokesperson told Reuters that the two contracts are phased across 2025 and 2026, underscoring that this arrangement is about pipeline, as opposed to a one-time backlog spike.  That structure does three important things for the market: Signals confidence that AI demand is durable.You don’t ring-fence billions of dollars of factory output for two customers unless you’re highly confident the AI load curve runs beyond the current GPU cycle. Pre-allocates power & cooling the way the industry pre-allocated GPUs.Hyperscalers and neoclouds have already spent two years locking up Nvidia and AMD capacity. These SCAs suggest power trains and thermal systems are joining chips on the list of constrained strategic resources.

Read More »

The Data Center Power Squeeze: Mapping the Real Limits of AI-Scale Growth

As we all know, the data center industry is at a crossroads. As artificial intelligence reshapes the already insatiable digital landscape, the demand for computing power is surging at a pace that outstrips the growth of the US electric grid. As engines of the AI economy, an estimated 1,000 new data centers1 are needed to process, store, and analyze the vast datasets that run everything from generative models to autonomous systems. But this transformation comes with a steep price and the new defining criteria for real estate: power. Our appetite for electricity is now the single greatest constraint on our expansion, threatening to stall the very innovation we enable. In 2024, US data centers consumed roughly 4% of the nation’s total electricity, a figure that is projected to triple by 2030, reaching 12% or more.2 For AI-driven hyperscale facilities, the numbers are even more staggering. With the largest planned data centers requiring gigawatts of power, enough to supply entire cities, the cumulative demand from all data centers is expected to reach 134 gigawatts by 2030, nearly three times the current load.​3 This presents a systemic challenge. The U.S. power grid, built for a different era, is struggling to keep pace. Utilities are reporting record interconnection requests, with some regions seeing demand projections that exceed their total system capacity by fivefold.4 In Virginia and Texas, the epicenters of data center expansion, grid operators are warning of tight supply-demand balances and the risk of blackouts during peak periods.5 The problem is not just the sheer volume of power needed, but the speed at which it must be delivered. Data center operators are racing to secure power for projects that could be online in as little as 18 months, but grid upgrades and new generation can take years, if not decades. The result

Read More »
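The projections cited in the piece above are easy to sanity-check. The sketch below is illustrative arithmetic only, with the article’s figures (4% share in 2024, 12% by 2030, 134 GW of cumulative 2030 demand) hard-coded as assumptions; it simply backs out the current load implied by the “nearly three times” claim.

```python
# Illustrative back-of-envelope check of the article's power projections.
# All inputs are assumptions taken from the article, not independent data.
share_2024 = 0.04        # US data centers' share of national electricity, 2024
share_2030 = 0.12        # projected share by 2030 ("projected to triple")
demand_2030_gw = 134.0   # projected cumulative data center demand by 2030, in GW

# "nearly three times the current load" implies today's load is roughly:
implied_current_load_gw = demand_2030_gw / 3

print(f"Implied current data center load: ~{implied_current_load_gw:.0f} GW")
print(f"Electricity-share growth factor: {share_2030 / share_2024:.0f}x")
```

Both figures are mutually consistent: a roughly 45 GW base tripling to 134 GW tracks the projected jump from a 4% to a 12% share of national consumption, assuming total grid output grows far more slowly than data center demand.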

The Future of Hyperscale: Neoverse Joins NVLink Fusion as SC25 Accelerates Rack-Scale AI Architectures

Neoverse’s Expanding Footprint and the Power-Efficiency Imperative

With Neoverse deployments now approaching roughly 50% of all compute shipped into top hyperscalers in 2025 (representing more than a billion Arm cores), and with nation-scale AI campuses such as the Stargate project already anchored on Arm compute, the addition of NVLink Fusion becomes a pivotal extension of the Neoverse roadmap. Partners can now connect custom Arm CPUs to their preferred NVIDIA accelerators across a coherent, high-bandwidth, rack-scale fabric. Arm characterized the shift as a generational inflection point in data-center architecture, noting that “power—not FLOPs—is the bottleneck,” and that future design priorities hinge on maximizing “intelligence per watt.” Ian Buck, vice president and general manager of accelerated computing at NVIDIA, underscored the practical impact: “Folks building their own Arm CPU, or using an Arm IP, can actually have access to NVLink Fusion—be able to connect that Arm CPU to an NVIDIA GPU or to the rest of the NVLink ecosystem—and that’s happening at the racks and scale-up infrastructure.” Despite the expanded design flexibility, this is not being positioned as an open interconnect ecosystem. NVIDIA continues to control the NVLink Fusion fabric, and all connections ultimately run through NVIDIA’s architecture.

For data-center planners, the SC25 announcement translates into several concrete implications:

1. NVIDIA “Grace-style” Racks Without Buying Grace

With NVLink Fusion now baked into Neoverse, hyperscalers and sovereign operators can design their own Arm-based control-plane or pre-processing CPUs that attach coherently to NVIDIA GPU domains—such as NVL72 racks or HGX B200/B300 systems—without relying on Grace CPUs.
A rack-level architecture might now resemble:

- Custom Neoverse SoC for ingest, orchestration, agent logic, and pre/post-processing
- NVLink Fusion fabric
- Blackwell GPU islands and/or NVLink-attached custom accelerators (Marvell, MediaTek, others)

This decouples CPU choice from NVIDIA’s GPU roadmap while retaining the full NVLink fabric. In practice, it also opens

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has been a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model, because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »