Learning How to Play Atari Games Through Deep Neural Networks

In July 1959, Arthur Samuel developed one of the first agents to play the game of checkers. What constitutes an agent that plays checkers can be best described in Samuel’s own words, “…a computer [that] can be programmed so that it will learn to play a better game of checkers than can be played by the person who wrote the program” [1]. The checkers agent tries to follow the idea of simulating every possible move given the current situation and selecting the most advantageous one, i.e. the one that brings the player closer to winning. The move’s “advantageousness” is determined by an evaluation function, which the agent improves through experience. Naturally, the concept of an agent is not restricted to the game of checkers, and many practitioners have sought to match or surpass human performance in popular games. Notable examples include IBM’s Deep Blue (which managed to defeat Garry Kasparov, a chess world champion at the time), and Tesauro’s TD-Gammon, a temporal-difference approach in which the evaluation function was modelled using a neural network. In fact, TD-Gammon’s playing style was so uncommon that some experts even adopted strategies it conjured up [2].

Unsurprisingly, research into creating such ‘agents’ only skyrocketed, with novel approaches able to reach peak human performance in complex games. In this post, we explore one such approach: DQN, introduced in 2013 by Mnih et al., in which playing Atari games is approached through a synthesis of deep neural networks and TD-learning (NB: the original paper came out in 2013, but we will focus on the 2015 version, which comes with some technical improvements) [3, 4]. Before we continue, you should note that in the ever-expanding space of new approaches, DQN has been superseded by faster and more refined state-of-the-art methods. Yet, it remains an ideal stepping stone in the field of deep reinforcement learning, widely recognized for combining deep learning with reinforcement learning. Hence, readers aiming to dive into Deep-RL are encouraged to begin with DQN.

This post is structured as follows: first, I define the problem of playing Atari games and explain why some traditional methods are intractable. Then, I present the specifics of the DQN approach and dive into the technical implementation.

The Problem At Hand

For the remainder of the post, I’ll assume that you know the basics of supervised learning, neural networks (basic FFNs and CNNs) and basic reinforcement learning concepts (Bellman equations, TD-learning, Q-learning, etc.). If some of these RL concepts are foreign to you, then this playlist is a good introduction.

Figure 2: Pong as shown in the ALE environment. [All media hereafter is created by the author unless otherwise noted]

Atari is a nostalgia-laden term, featuring iconic games such as Pong, Breakout, Asteroids and many more. In this post, we restrict ourselves to Pong. Pong is a 2-player game, where each player controls a paddle and can use said paddle to hit the incoming ball. Points are scored when the opponent is unable to return the ball, in other words, the ball goes past them. A player wins when they reach 21 points. 

Considering the sequential nature of the game, it is natural to frame the problem as an RL problem and then apply one of the standard solution methods. We can frame the game as an MDP $(\mathcal{S}, \mathcal{A}, P, R, \gamma)$:

The states $\mathcal{S}$ represent the current game state (where the ball or player paddle is, etc., analogous to the idea of a search state), the rewards $R$ encapsulate our idea of winning, and the actions $\mathcal{A}$ correspond to the buttons on the Atari 2600 console. Our goal now becomes finding a policy that maximizes the expected return,

$$\pi^* = \arg\max_{\pi} \mathbb{E}_{\pi}\left[G_t\right],$$

also known as the optimal policy. Let’s see what might happen if we try to train an agent using some classical RL algorithms.

A straightforward solution might entail a tabular approach. We could enumerate all states (and actions) and associate each state with a corresponding state or state-action value. We could then apply one of the classical RL methods (Monte Carlo, TD-learning, value iteration, etc.), taking a dynamic programming approach. However, this approach runs into major pitfalls rather quickly. What do we consider as states? How many states do we have to enumerate?

It quickly becomes quite difficult to answer these questions. Defining a state is hard because many requirements are in play (e.g. the states need to be Markovian, encapsulate a search state, etc.). What about using the visual output (frames) to represent a state? After all, this is how we as humans interact with Atari games: we see frames, deduce information regarding the game state and then choose the appropriate action. However, there are astronomically many states under this representation (a 210×160 RGB frame with 256 intensity values per channel admits up to $256^{210 \times 160 \times 3}$ distinct states), which makes the tabular approach intractable, memory-wise.

Now, for the sake of argument, imagine that we have enough memory to hold a table of this size. Even then, we would need to visit each state (or state-action pair) a good number of times to obtain useful approximations of the value function. Herein lies the runtime hurdle: with such an enormous number of states, it is infeasible for the values in the table to converge in any reasonable amount of time.

Perhaps instead of framing it as a reinforcement learning problem, we could rephrase it as a supervised learning problem: a formulation in which the states are samples and the labels are the actions performed. Even this perspective brings forth new problems. Atari games are inherently sequential: each state is sampled based on the previous one. This breaks the i.i.d. assumption underlying supervised learning, negatively affecting supervised learning-based solutions. Similarly, we would need to create a hand-labelled dataset, perhaps employing a human expert to label the correct action for each frame. This would be expensive and laborious, and might still yield insufficient results.

Solely relying on either supervised learning or RL may lead to inefficient learning, whether due to computational constraints or suboptimal policies. This calls for a more efficient approach to solving Atari games.

DQN: Intuition & Implementation

I assume you have some basic knowledge of PyTorch, NumPy and Python, though I’ll try to be as articulate as possible. For those unfamiliar, I recommend consulting the PyTorch and NumPy documentation.

Deep-Q Networks aim to overcome the aforementioned barriers through a variety of techniques. Let’s go through each of the problems step-by-step and address how DQN mitigates or solves these challenges.

It’s quite hard to come up with a formal state definition for Atari games due to their diversity. DQN is designed to work for most Atari games, and as a result, we need a state formalization that is compatible with all of them. To this end, the visual representation (pixel values) of a game at any given moment is used to fashion a state. Naturally, this entails a continuous state space. This connects to our previous discussion on potential ways to represent states.

Figure 3: The function approximation visualized. Image from [3].

The challenge of continuous states is solved through function approximation (FA), which aims to approximate the state-action value function directly with a parameterized function rather than a table. Let’s go through the steps to understand what the FA does.

Imagine that we have a network that, given a state, outputs the value of being in said state and performing a certain action. We could then select actions based on the highest output. However, this network would be short-sighted, only taking into account one timestep. Can we incorporate possible rewards from further down the line? Yes we can! This is the idea of the expected return. From this view, the FA becomes quite simple to understand: we aim to find a function

$$Q(s, a) = \mathbb{E}\left[G_t \mid S_t = s, A_t = a\right], \qquad G_t = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}.$$

In other words, a function which outputs the expected return of being in a given state after performing a given action.

This idea of approximation becomes crucial due to the continuous nature of the state space. By using a FA, we can exploit the idea of generalization. States close to each other (similar pixel values) will have similar Q-values, meaning that we don’t need to cover the entire (infinite) state space, greatly lowering our computational overhead. 

DQN employs FA in tandem with Q-learning. As a small refresher, Q-learning aims to find the expected return for being in a state and performing a certain action using bootstrapping. Bootstrapping estimates the expected return using the current Q-function itself, which ensures that we don’t need to wait until the end of an episode to update our Q-function. Q-learning is also off-policy, meaning that the data we use to learn the Q-function comes from a different policy than the one being learned. The resulting Q-function corresponds to the optimal Q-function and can be used to find the optimal policy (just find the action that maximizes the Q-value in a given state). Moreover, Q-learning is a model-free method: we don’t need to know the dynamics of the environment (transition functions, etc.) to learn an optimal policy, unlike in value iteration. Thus, DQN is also off-policy and model-free.
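As a reminder, the tabular Q-learning update that DQN generalizes takes the form

$$Q(s, a) \leftarrow Q(s, a) + \alpha\left[r + \gamma \max_{a'} Q(s', a') - Q(s, a)\right],$$

where $\alpha$ is the learning rate and $(s, a, r, s')$ is an observed transition.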

By using a neural network as our approximator, we need not construct a full table containing all the states and their respective Q-values. Our neural network will output the Q-value for being in a given state and performing a certain action. From this point on, we refer to the approximator as the Q-network.

Figure 4: DQN architecture. Note that the size of the last layer must equal the number of possible actions for the given game; in the case of Pong this is 6.

Since our states are defined by images, using a basic feed-forward network (FFN) would incur a large computational overhead. For this specific reason, we employ a convolutional network, which is much better able to learn the distinct features of each state. The CNN distills the images down to a compact representation (this is the idea of representation learning), which is then fed to an FFN. The neural network architecture can be seen above. Instead of returning one value for a single state-action pair $Q(s, a)$, we return an array with each value corresponding to a possible action in the given state (for Pong we can perform 6 actions, so we return 6 values).

Figure 5: MSE loss function, often used for regression tasks.

Recall that to train a neural network we need to define a loss function that captures our goals. DQN uses the MSE loss function: the predicted values are the outputs of our Q-network, and the true values are the bootstrapped targets. Hence, our loss function becomes the following:

$$L(\theta) = \mathbb{E}\left[\left(r + \gamma \max_{a'} Q(s', a'; \theta^-) - Q(s, a; \theta)\right)^2\right],$$

where $\theta$ are the Q-network weights and $\theta^-$ are the weights of the target network (introduced further below).

If we differentiate the loss function with respect to the weights, we arrive at the following equation (the constant factor of 2 is absorbed into the learning rate):

$$\nabla_\theta L(\theta) = -\,\mathbb{E}\left[\left(r + \gamma \max_{a'} Q(s', a'; \theta^-) - Q(s, a; \theta)\right)\nabla_\theta Q(s, a; \theta)\right].$$

Plugging this into the stochastic gradient descent (SGD) equation, we arrive at Q-learning [4]. 
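Written out, the resulting update on the weights is exactly the Q-learning update applied to the network parameters:

$$\theta \leftarrow \theta + \alpha\left(r + \gamma \max_{a'} Q(s', a'; \theta^-) - Q(s, a; \theta)\right)\nabla_\theta Q(s, a; \theta).$$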

By performing SGD updates using the MSE loss function, we perform Q-learning. However, this is an approximation of Q-learning, as we don’t update on a single move but on a batch of moves, and the expectation is replaced by a sample average for expedience. The message remains the same.

From another perspective, you can also think of the MSE loss function as nudging the predicted Q-values as close as possible to the bootstrapped Q-values (after all, this is what the MSE loss intends). This effectively mimics Q-learning and slowly converges to the optimal Q-function.

By employing a function approximator, we become subject to the conditions of supervised learning, namely that the data is i.i.d. But in the case of Atari games (or MDPs in general) this condition is often not upheld. Samples from the environment are sequential in nature, making them dependent on each other. Similarly, as the agent improves the value function and updates its policy, the distribution from which we sample also changes, violating the requirement of sampling from an identical distribution.

To solve this, the authors of DQN capitalize on the idea of an experience replay. This concept is core to keeping the training of DQN stable and convergent. An experience replay is a buffer which stores tuples (s, a, r, s’, d), where s, a, r, s’ are returned after performing an action in the MDP, and d is a boolean indicating whether the episode has finished. The replay has a maximum capacity which is defined beforehand. It might be simplest to think of the replay as a FIFO queue: old samples are removed to make room for new ones. The experience replay is used to sample a random batch of tuples which are then used for training.

The experience replay helps with the alleviation of two major challenges when using neural network function approximators with RL problems. The first deals with the independence of the samples. By randomly sampling a batch of moves and then using those for training we decouple the training process from the sequential nature of Atari games. Each batch may have actions from different timesteps (or even different episodes), giving a stronger semblance of independence. 

Secondly, the experience replay addresses the issue of non-stationarity. As the agent learns, changes in its behaviour are reflected in the data. This is the idea of non-stationarity: the distribution of data changes over time. By reusing samples in the replay and using a FIFO structure, we limit the adverse effects of non-stationarity on training. The distribution of the data still changes, but slowly, and its effects are less impactful. Since Q-learning is an off-policy algorithm, we still end up learning the optimal policy, making this a viable solution. These changes allow for a more stable training procedure.

As a serendipitous side effect, the experience replay also allows for better data efficiency. Previously, training examples were discarded after being used for a single update step. With an experience replay, we can reuse moves that we have made in the past for multiple updates.

A change made in the 2015 Nature version of DQN was the introduction of a target network. Neural networks are fickle: slight changes in the weights can introduce drastic changes in the output. This is unfavourable for us, as we use the outputs of the Q-network to bootstrap our targets. If the targets are prone to large changes, training destabilizes, which naturally we want to avoid. To alleviate this issue, the authors introduce a target network, which copies the weights of the Q-network every set number of timesteps. By using the target network for bootstrapping, our bootstrapped targets change far less abruptly, making training more stable.

Lastly, the DQN authors stack four consecutive frames after executing an action. This is done to ensure the Markov property holds [9]. A single frame omits important details of the game state, such as the velocity and direction of the ball. A stacked representation overcomes these omissions, providing a holistic view of the game at any given timestep.

With this, we have covered most of the major techniques used for training a DQN agent. Let’s go over the training procedure. The procedure will be more of an overview, and we’ll iron out the details in the implementation section.

Figure 6: Training procedure to train a DQN agent.

One important clarification arises from step 2. In this step, we perform a process called ε-greedy action selection. In ε-greedy, we choose a random action with probability ε, and otherwise choose the best possible action (according to our learned Q-network). Choosing an appropriate ε allows for sufficient exploration of actions, which is crucial for converging to a reliable Q-function. We often start with a high ε and slowly decay this value over time.
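The implementation below uses a linear decay schedule: starting from $\varepsilon_{\text{start}}$, epsilon is annealed toward $\varepsilon_{\text{end}}$ over $N$ decay steps and held constant afterwards:

$$\varepsilon_t = \varepsilon_{\text{end}} + (\varepsilon_{\text{start}} - \varepsilon_{\text{end}}) \cdot \max\left(0, \frac{N - t}{N}\right).$$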

Implementation

If you want to follow along with my implementation of DQN then you will need the following libraries (apart from Numpy and PyTorch). I provide a concise explanation of their use.

  • Arcade Learning Environment → ALE is a framework that allows us to interact with Atari 2600 environments. Technically, we interface with ALE through gymnasium, an API for RL environments and benchmarking.
  • StableBaselines3 → SB3 is a deep reinforcement learning framework with a PyTorch backend. We will only need it for some preprocessing wrappers.

Let’s import all of the necessary libraries.

import numpy as np
import time
import torch
import torch.nn as nn
import gymnasium as gym
import ale_py

from collections import deque  # FIFO queue data structure
from tqdm import tqdm  # progress bars
from gymnasium.wrappers import FrameStack
from gymnasium.wrappers.frame_stack import LazyFrames
from stable_baselines3.common.atari_wrappers import (
  AtariWrapper,
  FireResetEnv,
)

gym.register_envs(ale_py) # we need to register ALE with gym

# use cuda if you have it otherwise cpu
device = 'cuda' if torch.cuda.is_available() else 'cpu'
device

First, we construct an environment using the ALE framework. Since we are working with Pong, we create an environment with the name PongNoFrameskip-v4 using the following code:

env = gym.make('PongNoFrameskip-v4', render_mode='rgb_array')

The render_mode='rgb_array' argument tells the environment to render frames as arrays of pixel values. Interacting with the Atari becomes extremely simple with gym. The following excerpt encapsulates most of the utilities that we will need from gym.

# this code resets the environment to the beginning of an episode
observation, _ = env.reset()
for _ in range(100):  # number of timesteps
  # randomly get an action from possible actions
  action = env.action_space.sample()
  # take a step using the given action
  # observation_prime refers to s', terminated and truncated refer to
  # whether an episode has finished or been cut short
  observation_prime, reward, terminated, truncated, _ = env.step(action)
  observation = observation_prime

With this, we are given states (we name them observations) with the shape (210, 160, 3), i.e. RGB images of size 210×160. An example can be seen in Figure 2. When training our DQN agent, images of this size add unnecessary computational overhead. A similar observation can be made about the fact that the frames are RGB (3 channels).
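As a quick sanity check, the observation space of the raw environment should report this shape directly:

print(env.observation_space.shape)  # (210, 160, 3)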

To solve this, we downsample the frame to 84×84 and transform it into grayscale. We can do this by employing a wrapper from SB3, which does this for us. Now, every time we perform an action, our output will be in grayscale (with 1 channel) and of size 84×84.

env = AtariWrapper(env, terminal_on_life_loss=False, frame_skip=4)

The wrapper above does more than downsample and turn our frame into grayscale. Let’s go over some other changes the wrapper introduces.

  • Noop Reset → The start state of each Atari game is deterministic, i.e. you start in the same state each time the game begins. With this, the agent may learn to memorize a sequence of actions from the starting state, resulting in a sub-optimal policy. To prevent this, we perform no-op actions for a random number of timesteps at the beginning.
  • Frame Skipping → In the ALE environment, each frame needs an action. Instead of choosing an action at every frame, we select an action and repeat it for a set number of frames. This is the idea of frame skipping, and it allows for smoother transitions and faster training.
  • Max-pooling → Due to the manner in which ALE/Atari renders its frames and the downsampling, it is possible that we encounter flickering. To solve this, we take the pixel-wise max over two consecutive frames.
  • Terminal on Life Loss → Many Atari games do not end when the player loses a life. Consider Pong: no player wins until the score hits 21. However, by default, agents might consider the loss of a life as the end of an episode, which is undesirable. This wrapper counteracts that and ends the episode only when the game is truly over.
  • Clip Reward → The gradients are highly sensitive to the magnitude of the rewards. To avoid unstable updates, we clip each reward to its sign, i.e. one of {-1, 0, 1} (see the sketch below this list).
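For intuition, reward clipping amounts to nothing more than taking the sign of the reward (a minimal sketch, assuming behaviour equivalent to SB3’s clipping wrapper):

import numpy as np

def clip_reward(reward: float) -> float:
  # reduce the reward to its sign: -1, 0 or 1
  return float(np.sign(reward))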

Apart from these, we also introduce an additional frame-stacking wrapper (FrameStack). This performs what was discussed above: stacking 4 frames on top of each other to keep the states Markovian. The wrapped environment returns LazyFrames, which are designed to be more memory efficient, as the same frame might occur multiple times. However, they are not compatible with many of the operations that we perform throughout the training procedure. To convert LazyFrames into usable objects, we apply a custom wrapper which converts an observation to NumPy before returning it to us. The code is shown below.

class LazyFramesToNumpyWrapper(gym.ObservationWrapper): # subclass of ObservationWrapper
  def __init__(self, env):
      super().__init__(env)
      self.env = env # the environment that we want to convert

  def observation(self, observation):
      # if its a LazyFrames object then turn it into a numpy array
      if isinstance(observation, LazyFrames):
          return np.array(observation)
      return observation

Let’s combine all of the wrappers into one function that returns an environment that does all of the above.

def make_env(game, render='rgb_array'):
  env = gym.make(game, render_mode=render)
  env = AtariWrapper(env, terminal_on_life_loss=False, frame_skip=4)
  env = FrameStack(env, num_stack=4)
  env = LazyFramesToNumpyWrapper(env)
  # some environments require the fire button to be pressed
  # to start the game; this makes sure the game starts when needed
  if "FIRE" in env.unwrapped.get_action_meanings():
      env = FireResetEnv(env)
  return env

These changes are derived from the 2015 Nature paper and help to stabilize training [3]. The interfacing with gym remains the same as shown above. An example of the preprocessed states can be seen in Figure 7.

Figure 7: Preprocessed successive Atari frames; each frame is preprocessed by turning the image from RGB to grayscale, and downsampling the size of the image from 210×160 pixels to 84×84 pixels.
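If you inspect a reset observation from the fully wrapped environment, you should see the stacked shape (this is also why the training code further below calls squeeze() to drop the trailing channel axis):

obs, _ = make_env('PongNoFrameskip-v4').reset()
print(obs.shape)  # (4, 84, 84, 1): 4 stacked 84x84 grayscale frames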

Now that we have an appropriate environment let’s move on to create the replay buffer.

class ReplayBuffer:

  def __init__(self, capacity, device):
      self.capacity = capacity
      self._buffer =  np.zeros((capacity,), dtype=object) # stores the tuples
      self._position = 0 # keep track of where we are
      self._size = 0
      self.device = device

  def store(self, experience):
      """Adds a new experience to the buffer,
        overwriting old entries when full."""
      idx = self._position % self.capacity # get the index to replace
      self._buffer[idx] = experience
      self._position += 1
      self._size = min(self._size + 1, self.capacity) # max size is the capacity

  def sample(self, batch_size):
      """ Sample a batch of tuples and load it onto the device
      """
      # if the buffer is not yet at full capacity, only sample from what we have
      buffer = self._buffer[0:self._size]
      # minibatch of tuples
      batch = np.random.choice(buffer, size=[batch_size], replace=True)

      # we need to return the objects as torch tensors, hence we delegate
      # this task to the transform function
      return (
          self.transform(batch, 0, shape=(batch_size, 4, 84, 84), dtype=torch.float32),
          self.transform(batch, 1, shape=(batch_size, 1), dtype=torch.int64),
          self.transform(batch, 2, shape=(batch_size, 1), dtype=torch.float32),
          self.transform(batch, 3, shape=(batch_size, 4, 84, 84), dtype=torch.float32),
          self.transform(batch, 4, shape=(batch_size, 1), dtype=torch.bool)
      )
     
  def transform(self, batch, index, shape, dtype):
      """ Transform a passed batch into a torch tensor for a given axis.
      E.g. if index 0 of a tuple means the state then we return all states
      as a torch tensor. We also return a specified shape.
      """
      # reshape the tensors as needed
      batched_values = np.array([val[index] for val in batch]).reshape(shape)
      # convert to torch tensors
      batched_values = torch.as_tensor(batched_values, dtype=dtype, device=self.device)
      return batched_values

  # below are some magic methods I used for debugging, not very important
  # they just turn the object into an arraylike object
  def __len__(self):
      return self._size

  def __getitem__(self, index):
      return self._buffer[index]

  def __setitem__(self, index, value: tuple):
      self._buffer[index] = value

The replay buffer works by allocating space in memory for the given capacity. We maintain a pointer that keeps track of the position of the next insertion; once the buffer is full, each new tuple overwrites the oldest one. To sample a minibatch, we first randomly sample a batch in NumPy and then convert it into torch tensors, loading it onto the appropriate device.

Some aspects of the replay buffer are inspired by [8]. The replay buffer proved to be the biggest bottleneck in training the agent, so small speed-ups in this code proved to be monumentally important. An alternative strategy, which uses a deque object to hold the tuples, can also be used. If you are creating your own buffer, I would encourage you to spend a little extra time ensuring its efficiency.
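For illustration, a minimal sketch of the deque-based alternative could look like the following (DequeReplayBuffer is a hypothetical name, and the tensor conversion done by the buffer above is omitted):

import random
from collections import deque

class DequeReplayBuffer:
  def __init__(self, capacity):
      # a deque with maxlen automatically discards the oldest tuples
      self._buffer = deque(maxlen=capacity)

  def store(self, experience):
      self._buffer.append(experience)

  def sample(self, batch_size):
      # sample a minibatch without replacement
      return random.sample(self._buffer, batch_size)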

We can now use the ReplayBuffer class in a function that creates a buffer and preloads a given number of tuples with a random policy.

def load_buffer(preload, capacity, game, *, device):
  # make the environment
  env = make_env(game)
  # create the buffer
  buffer = ReplayBuffer(capacity,device=device)
 
  # start the environment
  observation, _ = env.reset()
  # run for as long as the specified preload
  for _ in tqdm(range(preload)):
      # sample random action -> random policy 
      action = env.action_space.sample()
   
      observation_prime, reward, terminated, truncated, _ = env.step(action)
     
      # store the results from the action as a python tuple object
      buffer.store((
          observation.squeeze(), # squeeze will remove the unnecessary grayscale channel
          action,
          reward,
          observation_prime.squeeze(),
          terminated or truncated))
      # set old observation to be new observation_prime
      observation = observation_prime
     
      # if the episode is done, then restart the environment
      done = terminated or truncated
      if done:
          observation, _ = env.reset()
 
  # return the env AND the loaded buffer
  return buffer, env

The function is quite straightforward: we create a buffer and an environment object, then preload the buffer using a random policy. Note that we squeeze the observations to remove the redundant color channel. Let’s move on to the next step and define the function approximator.

class DQN(nn.Module):

  def __init__(
      self,
      env,
      in_channels = 4, # number of stacked frames
      hidden_filters = [16, 32],
      start_epsilon = 0.99, # starting epsilon for epsilon-decay
      max_decay = 0.1, # end epsilon-decay
      decay_steps = 1000, # how long to reach max_decay
      *args,
      **kwargs
  ) -> None:
      super().__init__(*args, **kwargs)
     
      # instantiate instance vars
      self.start_epsilon = start_epsilon
      self.epsilon = start_epsilon
      self.max_decay = max_decay
      self.decay_steps = decay_steps
      self.env = env
      self.num_actions = env.action_space.n
   
      # nn.Sequential is a container module that lets us
      # perform the forward pass in one line
      self.layers = nn.Sequential(
          nn.Conv2d(in_channels, hidden_filters[0], kernel_size=8, stride=4),
          nn.ReLU(),
          nn.Conv2d(hidden_filters[0], hidden_filters[1], kernel_size=4, stride=2),
          nn.ReLU(),
          nn.Flatten(start_dim=1),
          nn.Linear(hidden_filters[1] * 9 * 9, 512), # the final value is calculated by using the equation for CNNs
          nn.ReLU(),
          nn.Linear(512, self.num_actions)
      )
       
      # initialize weights using he initialization
      # (pytorch already does this for conv layers but not linear layers)
      # this is not necessary and nothing you need to worry about
      self.apply(self._init)

  def forward(self, x):
      """ Forward pass. """
      # the /255.0 performs normalization of pixel values to be in [0.0, 1.0]
      return self.layers(x / 255.0)

  def epsilon_greedy(self, state, dim=1):
      """Epsilon greedy. Randomly select value with prob e,
        else choose greedy action"""

      rng = np.random.random() # get random value between [0, 1]
     
      if rng < self.epsilon: # for prob under e
          # random sample and return as torch tensor
          action = self.env.action_space.sample()
          action = torch.tensor(action)
      else:
          # use torch no grad to make sure no gradients are accumulated for this
          # forward pass
          with torch.no_grad():
              q_values = self(state)
          # choose best action
          action = torch.argmax(q_values, dim=dim)

      return action
 
  def epsilon_decay(self, step):
      # linearly decrease epsilon
      self.epsilon = self.max_decay + (self.start_epsilon - self.max_decay) * max(0, (self.decay_steps - step) / self.decay_steps)
 
  def _init(self, m):
    # initialize layers using he init
    if isinstance(m, (nn.Linear, nn.Conv2d)):
      nn.init.kaiming_normal_(m.weight, nonlinearity='relu')
      if m.bias is not None:
        nn.init.zeros_(m.bias)

That covers the model architecture. I used a linear ε-decay scheme, but feel free to try another. We can also create an auxiliary class that keeps track of important metrics, namely the rewards received over the last few episodes.

class MetricTracker:
  def __init__(self, window_size=100):
      # the size of the history we use to track stats
      self.window_size = window_size
      self.rewards = deque(maxlen=window_size)
      self.current_episode_reward = 0
     
  def add_step_reward(self, reward):
      # add received reward to the current reward
      self.current_episode_reward += reward
     
  def end_episode(self):
      # add reward for episode to history
      self.rewards.append(self.current_episode_reward)
      # reset metrics
      self.current_episode_reward = 0
 
  # property just makes it so that we can return this value without
  # having to call it as a function
  @property
  def avg_reward(self):
      return np.mean(self.rewards) if self.rewards else 0

Great! Now we have everything we need to start training our agent. Let’s define the training function and go over how it works. Before that, we need to create the necessary objects to pass into the training function, along with some hyperparameters. A small note: in the paper, the authors use RMSProp, but we’ll use Adam instead. Adam worked for me with the given parameters, but you are welcome to try RMSProp or other variations.

TIMESTEPS = 6000000 # total number of timesteps for training
LR = 2.5e-4 # learning rate
BATCH_SIZE = 64 # batch size, change based on your hardware
C = 10000 # the interval at which we update the target network
GAMMA = 0.99 # the discount value
TRAIN_FREQ = 4 # in the paper the SGD updates are made every 4 actions
DECAY_START = 0 # when to start e-decay
FINAL_ANNEAL = 1000000 # when to stop e-decay

# load the buffer
buffer_pong, env_pong = load_buffer(50000, 150000, game='PongNoFrameskip-v4')

# create the networks, push the weights of the q_network onto the target network
q_network_pong = DQN(env_pong, decay_steps=FINAL_ANNEAL).to(device)
target_network_pong = DQN(env_pong, decay_steps=FINAL_ANNEAL).to(device)
target_network_pong.load_state_dict(q_network_pong.state_dict())

# create the optimizer
optimizer_pong = torch.optim.Adam(q_network_pong.parameters(), lr=LR)

# metrics class instantiation
metrics = MetricTracker()
def train(
  env,
  name, # name of the agent, used to save the agent
  q_network,
  target_network,
  optimizer,
  timesteps,
  replay, # passed buffer
  metrics, # metrics class
  train_freq, # this parameter works complementary to frame skipping
  batch_size,
  gamma, # discount parameter
  decay_start,
  C,
  save_step=850000, # I recommend setting this one high or else a lot of models will be saved
):
  loss_func = nn.MSELoss() # create the loss object
  start_time = time.time() # to check speed of the training procedure
  episode_count = 0
  best_avg_reward = -float('inf')
 
  # reset the env
  obs, _ = env.reset()
 
 
  for step in range(1, timesteps+1): # start from 1 just for printing progress

      # we need to pass tensors of size (batch_size, ...) to torch
      # but the observation is just one so it doesn't have that dim
      # so we add it artificially (step 2 in procedure)
      batched_obs = np.expand_dims(obs.squeeze(), axis=0)
      # perform e-greedy on the observation and convert the tensor into numpy and send it to the cpu
      action = q_network.epsilon_greedy(torch.as_tensor(batched_obs, dtype=torch.float32, device=device)).cpu().item()
     
      # take an action
      obs_prime, reward, terminated, truncated, _ = env.step(action)

      # store the tuple (step 3 in the procedure)
      replay.store((obs.squeeze(), action, reward, obs_prime.squeeze(), terminated or truncated))
      metrics.add_step_reward(reward)
      obs = obs_prime
     
      # train every 4 steps as per the paper
      if step % train_freq == 0:
          # sample tuples from the replay (step 4 in the procedure)
          observations, actions, rewards, observation_primes, dones = replay.sample(batch_size)
         
          # we don't want to accumulate gradients for this operation so use no_grad
          with torch.no_grad():
              q_values_minus = target_network(observation_primes)
              # get the max over the target network
              bootstrapped_values = torch.amax(q_values_minus, dim=1, keepdim=True)

          # for every sample in the minibatch: if the episode is done,
          # the target is just the reward; otherwise it is the
          # bootstrapped reward (step 5 in the procedure)
          y_trues = torch.where(dones, rewards, rewards + gamma * bootstrapped_values)
          y_preds = q_network(observations)
         
          # compute the loss
          # the gather gets the values of the q_network corresponding to the
          # action taken
          loss = loss_func(y_preds.gather(1, actions), y_trues)
           
          # set the grads to 0, and perform the backward pass (step 6 in the procedure)
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()
     
      # start the e-decay
      if step > decay_start:
          q_network.epsilon_decay(step)
          target_network.epsilon_decay(step)
     
      # if the episode is finished then we print some metrics
      if terminated or truncated:
          # compute steps per sec
          elapsed_time = time.time() - start_time
          steps_per_sec = step / elapsed_time
          metrics.end_episode()
          episode_count += 1
         
          # reset the environment
          obs, _ = env.reset()
         
          # save a model if above save_step and if the average reward has improved
          # this is kind of like early-stopping, but we don't stop we just save a model
          if metrics.avg_reward > best_avg_reward and step > save_step:
              best_avg_reward = metrics.avg_reward
              torch.save({
                  'step': step,
                  'model_state_dict': q_network.state_dict(),
                  'optimizer_state_dict': optimizer.state_dict(),
                  'avg_reward': metrics.avg_reward,
              }, f"models/{name}_dqn_best_{step}.pth")

          # print some metrics
          print(f"rStep: {step:,}/{timesteps:,} | "
                  f"Episodes: {episode_count} | "
                  f"Avg Reward: {metrics.avg_reward:.1f} | "
                  f"Epsilon: {q_network.epsilon:.3f} | "
                  f"Steps/sec: {steps_per_sec:.1f}", end="r")

      # update the target network
      if step % C == 0:
          target_network.load_state_dict(q_network.state_dict())

The training procedure closely follows Figure 6 and the algorithm described in the paper [4]. We first create the necessary objects, such as the loss function, and reset the environment. Then we start the training loop by using the Q-network to give us an action based on the ε-greedy policy. We simulate the environment one step forward using the action and push the resultant tuple onto the replay. If the update frequency condition is met, we proceed with a training step. The motivation behind the update frequency element is something I am not 100% confident in. Currently, the explanation I can provide revolves around computational efficiency: training every 4 steps instead of every step majorly speeds up the algorithm and seems to work relatively well. In the update step itself, we sample a minibatch of tuples and run the model forward to produce predicted Q-values. We then create the target values (the bootstrapped true labels) using the piecewise function in step 5 in Figure 6. Performing an SGD step becomes quite straightforward from this point, since we can rely on autograd to compute the gradients and the optimizer to update the parameters.
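Putting everything together, a training run using the objects and hyperparameters created above can be started like this (a sketch; adjust the names if you changed them):

train(
  env=env_pong,
  name='pong',
  q_network=q_network_pong,
  target_network=target_network_pong,
  optimizer=optimizer_pong,
  timesteps=TIMESTEPS,
  replay=buffer_pong,
  metrics=metrics,
  train_freq=TRAIN_FREQ,
  batch_size=BATCH_SIZE,
  gamma=GAMMA,
  decay_start=DECAY_START,
  C=C,
)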

If you followed along until now, you can use the following test function to test your saved model.

def test(game, model, num_eps=2):
  # render human opens an instance of the game so you can see it
  env_test = make_env(game, render='human')
 
  # load the model
  q_network_trained = DQN(env_test)
  q_network_trained.load_state_dict(torch.load(model, weights_only=False)['model_state_dict'])
  q_network_trained.eval() # set the model to inference mode (no gradients etc)
  q_network_trained.epsilon = 0.05 # a small amount of stochasticity
 
 
  rewards_list = []
 
  # run for set amount of episodes
  for episode in range(num_eps):
      print(f'Episode {episode}', end='\r', flush=True)
     
      # reset the env
      obs, _ = env_test.reset()
      done = False
      total_reward = 0
     
      # until the episode is not done, perform the action from the q-network
      while not done:
          batched_obs = np.expand_dims(obs.squeeze(), axis=0)
          action = q_network_trained.epsilon_greedy(torch.as_tensor(batched_obs, dtype=torch.float32)).cpu().item()
             
          next_observation, reward, terminated, truncated, _ = env_test.step(action)
          total_reward += reward
          obs = next_observation

          done = terminated or truncated
         
      rewards_list.append(total_reward)
 
  # close the environment, since we use render human
  env_test.close()
  print(f'Average episode reward achieved: {np.mean(rewards_list)}')

Here’s how you can use it:

# make sure you use your latest model! I also renamed my model path so
# take that into account
test('PongNoFrameskip-v4', 'models/pong_dqn_best_6M.pth')

That’s everything for the code! You can see a trained agent below in Figure 8. It behaves quite similarly to how a human might play Pong and is able to (consistently) beat the AI on the easiest difficulty. This naturally invites the question: how well does it perform on higher difficulties? Try it out using your own agent or my trained one!

Figure 8: DQN agent playing Pong.

An additional agent was trained on the game Breakout as well; it can be seen in Figure 9. Once again, I used the default mode and difficulty. It might be interesting to see how well it performs in different modes or difficulties.

Figure 9: DQN agent playing Breakout.

Summary

DQN addresses the problem of training agents to play Atari games. By using function approximation, an experience replay, a target network and frame stacking, we are able to train an agent that matches or even surpasses human performance in Atari games [3]. Deep-RL agents can be finicky, and you might have noticed that we use a lot of techniques to ensure that training is stable. If things go wrong with your implementation, it might not hurt to look at the details again.

If you want to check out the code for my implementation you can use this link. The repo also contains code to train your own model on the game of your choice (as long as it’s in ALE), as well as the trained weights for both Pong and Breakout.

I hope this was a helpful introduction to training DQN agents. To take things to the next level maybe you can try to tweak details to beat the higher difficulties. If you want to look further, there are many extensions to DQN you can explore, such as Dueling DQNs, Prioritized Replay etc. 

References

[1] A. L. Samuel, “Some Studies in Machine Learning Using the Game of Checkers,” IBM Journal of Research and Development, vol. 3, no. 3, pp. 210–229, 1959. doi:10.1147/rd.33.0210.

[2] Sammut, Claude; Webb, Geoffrey I., eds. (2010), “TD-Gammon”, Encyclopedia of Machine Learning, Boston, MA: Springer US, pp. 955-956, doi:10.1007/978-0-387-30164-8_813, ISBN 978-0-387-30164-8, retrieved 2023-12-25.

[3] Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, … and Demis Hassabis. “Human-Level Control through Deep Reinforcement Learning.” Nature 518, no. 7540 (2015): 529–533. https://doi.org/10.1038/nature14236

[4] Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, … and Demis Hassabis. “Playing Atari with Deep Reinforcement Learning.” arXiv preprint arXiv:1312.5602 (2013). https://arxiv.org/abs/1312.5602

[5] Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. 2nd ed., MIT Press, 2018.

[6] Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. 4th ed., Pearson, 2020.

[7] Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.

[8] Bailey, Jay. Deep Q-Networks Explained. 13 Sept. 2022, www.lesswrong.com/posts/kyvCNgx9oAwJCuevo/deep-q-networks-explained.

[9] Hausknecht, M., & Stone, P. (2015). Deep recurrent Q-learning for partially observable MDPs. arXiv preprint arXiv:1507.06527. https://arxiv.org/abs/1507.06527


Read More »

Data center capacity continues to shift to hyperscalers

However, even though colocation and on-premises data centers will continue to lose share, they will still continue to grow. They just won’t be growing as fast as hyperscalers. So, it creates the illusion of shrinkage when it’s actually just slower growth. In fact, after a sustained period of essentially no growth, on-premises data center capacity is receiving a boost thanks to genAI applications and GPU infrastructure. “While most enterprise workloads are gravitating towards cloud providers or to off-premise colo facilities, a substantial subset are staying on-premise, driving a substantial increase in enterprise GPU servers,” said John Dinsdale, a chief analyst at Synergy Research Group.

Read More »

Oracle inks $30 billion cloud deal, continuing its strong push into AI infrastructure.

He pointed out that, in addition to its continued growth, OCI has a remaining performance obligation (RPO) — total future revenue expected from contracts not yet reported as revenue — of $138 billion, a 41% increase, year over year. The company is benefiting from the immense demand for cloud computing largely driven by AI models. While traditionally an enterprise resource planning (ERP) company, Oracle launched OCI in 2016 and has been strategically investing in AI and data center infrastructure that can support gigawatts of capacity. Notably, it is a partner in the $500 billion SoftBank-backed Stargate project, along with OpenAI, Arm, Microsoft, and Nvidia, that will build out data center infrastructure in the US. Along with that, the company is reportedly spending about $40 billion on Nvidia chips for a massive new data center in Abilene, Texas, that will serve as Stargate’s first location in the country. Further, the company has signaled its plans to significantly increase its investment in Abu Dhabi to grow out its cloud and AI offerings in the UAE; has partnered with IBM to advance agentic AI; has launched more than 50 genAI use cases with Cohere; and is a key provider for ByteDance, which has said it plans to invest $20 billion in global cloud infrastructure this year, notably in Johor, Malaysia. Ellison’s plan: dominate the cloud world CTO and co-founder Larry Ellison announced in a recent earnings call Oracle’s intent to become No. 1 in cloud databases, cloud applications, and the construction and operation of cloud data centers. He said Oracle is uniquely positioned because it has so much enterprise data stored in its databases. He also highlighted the company’s flexible multi-cloud strategy and said that the latest version of its database, Oracle 23ai, is specifically tailored to the needs of AI workloads. Oracle

Read More »

Datacenter industry calls for investment after EU issues water consumption warning

CISPE’s response to the European Commission’s report warns that the resulting regulatory uncertainty could hurt the region’s economy. “Imposing new, standalone water regulations could increase costs, create regulatory fragmentation, and deter investment. This risks shifting infrastructure outside the EU, undermining both sustainability and sovereignty goals,” CISPE said in its latest policy recommendation, Advancing water resilience through digital innovation and responsible stewardship. “Such regulatory uncertainty could also reduce Europe’s attractiveness for climate-neutral infrastructure investment at a time when other regions offer clear and stable frameworks for green data growth,” it added. CISPE’s recommendations are a mix of regulatory harmonization, increased investment, and technological improvement. Currently, water reuse regulation is directed towards agriculture. Updated regulation across the bloc would encourage more efficient use of water in industrial settings such as datacenters, the asosciation said. At the same time, countries struggling with limited public sector budgets are not investing enough in water infrastructure. This could only be addressed by tapping new investment by encouraging formal public-private partnerships (PPPs), it suggested: “Such a framework would enable the development of sustainable financing models that harness private sector innovation and capital, while ensuring robust public oversight and accountability.” Nevertheless, better water management would also require real-time data gathered through networks of IoT sensors coupled to AI analytics and prediction systems. To that end, cloud datacenters were less a drain on water resources than part of the answer: “A cloud-based approach would allow water utilities and industrial users to centralize data collection, automate operational processes, and leverage machine learning algorithms for improved decision-making,” argued CISPE.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »