Introduction to Minimum Cost Flow Optimization in Python

Minimum cost flow optimization minimizes the cost of moving flow through a network of nodes and edges. Nodes include sources (supply) and sinks (demand), with different costs and capacity limits. The aim is to find the least costly way to move volume from sources to sinks while adhering to all capacity limitations.

Applications

Applications of minimum cost flow optimization are vast and varied, spanning multiple industries and sectors. This approach is crucial in logistics and supply chain management, where it is used to minimize transportation costs while ensuring timely delivery of goods. In telecommunications, it helps in optimizing the routing of data through networks to reduce latency and improve bandwidth utilization. The energy sector leverages minimum cost flow optimization to efficiently distribute electricity through power grids, reducing losses and operational costs. Urban planning and infrastructure development also benefit from this optimization technique, as it assists in designing efficient public transportation systems and water distribution networks.

Example

Below is a simple flow optimization example:

The image above illustrates a minimum cost flow optimization problem with six nodes and eight edges. Nodes A and B serve as sources, each with a supply of 50 units, while nodes E and F act as sinks, each with a demand of 40 units. Every edge has a maximum capacity of 25 units, with variable costs indicated in the image. The objective of the optimization is to allocate flow on each edge to move the required units from nodes A and B to nodes E and F, respecting the edge capacities at the lowest possible cost.

Node F can only receive supply from node B. There are two paths: directly or through node D. The direct path has a cost of 2, while the indirect path via D has a combined cost of 3. Thus, 25 units (the maximum edge capacity) are moved directly from B to F. The remaining 15 units are routed via B-D-F to meet the demand.

Currently, 40 out of 50 units have been transferred from node B, leaving a remaining supply of 10 units that can be moved to node E. The available pathways for supplying node E include: A-E and B-E with a cost of 3, A-C-E with a cost of 4, and B-C-E with a cost of 5. Consequently, 25 units are transported from A-E (limited by the edge capacity) and 10 units from B-E (limited by the remaining supply at node B). To meet the demand of 40 units at node E, an additional 5 units are moved via A-C-E, resulting in no flow being allocated to the B-C pathway.
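This hand-derived allocation can be cross-checked with NetworkX's built-in min_cost_flow solver. The sketch below makes two assumptions: the individual costs of the edges A-C, C-E, B-C, B-D, and D-F are chosen so that the path costs match the ones above (A-C-E = 4, B-C-E = 5, B-D-F = 3), and a dummy sink absorbs the 20 units of excess supply, since NetworkX requires demands to balance:

import networkx as nx

G = nx.DiGraph()
# negative demand = supply
G.add_node('A', demand=-50)
G.add_node('B', demand=-50)
G.add_node('E', demand=40)
G.add_node('F', demand=40)
# dummy sink absorbs the unused 20 units of supply at zero cost
G.add_node('dummy', demand=20)
for u, v, cost in [('A', 'E', 3), ('B', 'E', 3), ('B', 'F', 2), ('B', 'D', 1),
                   ('D', 'F', 2), ('A', 'C', 2), ('C', 'E', 2), ('B', 'C', 3)]:
    G.add_edge(u, v, capacity=25, weight=cost)
for source in ('A', 'B'):
    G.add_edge(source, 'dummy', capacity=50, weight=0)

flow = nx.min_cost_flow(G)
print(flow['B']['F'], flow['B']['D'])  # 25 15, matching the reasoning above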

Mathematical formulation

I introduce two mathematical formulations of minimum cost flow optimization:

1. LP (linear program) with continuous variables only

2. MILP (mixed integer linear program) with continuous and discrete variables

I am using the following definitions:

Definitions
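In the formulas below I use this notation (symbol names are mine, chosen to match the prose):

V: set of nodes; E: set of directed edges (u, v)
x(u, v) ≥ 0: flow on edge (u, v) (continuous decision variable)
c(u, v): variable cost per unit of flow on edge (u, v)
cap(u, v): capacity of edge (u, v)
s(v): supply of node v (positive for sources, negative for sinks, zero otherwise)
y(u, v) ∈ {0, 1}: edge selection variable (MILP only)
z(v) ∈ {0, 1}: node selection variable (MILP only)
f(u, v), f(v): fixed cost of selecting edge (u, v) or node v (MILP only)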

LP formulation

This formulation only contains decision variables that are continuous, meaning they can take any value as long as all constraints are fulfilled. The decision variables in this case are the flow variables x(u, v) of all edges.

The objective function describes how the costs to be minimized are calculated. In this case it is defined as the flow multiplied by the variable cost, summed over all edges:
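\min \sum_{(u,v) \in E} c(u,v) \, x(u,v)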

Constraints are conditions that must be satisfied for the solution to be valid, ensuring that the flow does not exceed capacity limitations.

First, all flows must be non-negative and must not exceed the edge capacities:
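0 \le x(u,v) \le cap(u,v) \quad \forall (u,v) \in E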

Flow conservation constraints ensure that the amount of flow entering a node equals the amount leaving it. These constraints are applied to all nodes that are neither sources nor sinks:
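\sum_{u:(u,v) \in E} x(u,v) = \sum_{w:(v,w) \in E} x(v,w) \quad \forall v \text{ with } s(v) = 0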

For source and sink nodes, the difference between outflow and inflow must be less than or equal to the node's supply:
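\sum_{w:(v,w) \in E} x(v,w) - \sum_{u:(u,v) \in E} x(u,v) \le s(v) \quad \forall v \text{ with } s(v) \ne 0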

If v is a source, the outflow minus the inflow must not exceed its supply s(v). If v is a sink, s(v) is negative, so the constraint states that at most -s(v) more units may flow into the node than out of it.

MILP

In addition to the continuous variables of the LP formulation, the MILP formulation also contains discrete variables that can only take specific values. Discrete variables make it possible to restrict the number of used nodes or edges, or to introduce fixed costs for using nodes or edges. In this article I show how to add fixed costs. It is important to note that adding discrete decision variables makes it much more difficult to find an optimal solution; hence this formulation should only be used if an LP formulation is not possible.

The objective function is defined as:
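\min \sum_{(u,v) \in E} c(u,v) \, x(u,v) + \sum_{(u,v) \in E} f(u,v) \, y(u,v) + \sum_{v \in V} f(v) \, z(v)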

It consists of three terms: the variable costs of all edges, the fixed costs of all edges, and the fixed costs of all nodes.

The maximum flow that can be allocated to an edge depends on the edge’s capacity, the edge selection variable, and the origin node selection variable:
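x(u,v) \le cap(u,v) \, y(u,v) \quad \text{and} \quad x(u,v) \le cap(u,v) \, z(u) \qquad \forall (u,v) \in E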

These constraints ensure that flow can only be assigned to an edge if both the edge selection variable and the origin node selection variable are 1.

The flow conservation constraints are equivalent to the LP problem.

Implementation

In this section I explain how to implement a MILP optimization in Python. You can find the code in this repo.

Libraries

To build the flow network, I used NetworkX (https://networkx.org/), an excellent library for working with graphs. There are many interesting articles that demonstrate how powerful and easy to use NetworkX is, e.g. "Customizing NetworkX Graphs", "NetworkX: Code Demo for Manipulating Subgraphs", and "Social Network Analysis with NetworkX: A Gentle Introduction".

One important aspect when building an optimization is to make sure that the input is correctly defined. Even one small error can make the problem infeasible or lead to an unexpected solution. To avoid this, I used Pydantic to validate the user input and raise any issues at the earliest possible stage. This article gives an easy-to-understand introduction to Pydantic.

To transform the defined network into a mathematical optimization problem I used PuLP, which allows defining all variables and constraints in an intuitive way. This library also has the advantage that it can use many different solvers in a simple plug-and-play fashion. This article provides a good introduction to the library.
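To illustrate the PuLP syntax used throughout this post, here is a minimal standalone sketch (the toy problem itself is made up):

from pulp import LpProblem, LpVariable, LpMinimize, LpStatus

# minimize 2x + 3y subject to x + y >= 10 and x <= 6
prob = LpProblem('pulp_demo', LpMinimize)
x = LpVariable('x', lowBound=0)
y = LpVariable('y', lowBound=0)
prob += 2 * x + 3 * y, 'objective'
prob += x + y >= 10, 'total'
prob += x <= 6, 'limit_x'
prob.solve()
print(LpStatus[prob.status], x.varValue, y.varValue)  # Optimal 6.0 4.0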

Defining nodes and edges

The code below shows how nodes are defined:

from pydantic import BaseModel, model_validator
from typing import Optional

# node and edge definitions
class Node(BaseModel, frozen=True):
    """
    class of network node with attributes:
    name: str - name of node
    demand: float - demand of node (if node is sink)
    supply: float - supply of node (if node is source)
    capacity: float - maximum flow out of node
    type: str - type of node
    x: float - x-coordinate of node
    y: float - y-coordinate of node
    fixed_cost: float - cost of selecting node
    """
    name: str
    demand: Optional[float] = 0.0
    supply: Optional[float] = 0.0
    capacity: Optional[float] = float('inf')
    type: Optional[str] = None
    x: Optional[float] = 0.0
    y: Optional[float] = 0.0
    fixed_cost: Optional[float] = 0.0

    @model_validator(mode='after')
    def validate(self):
        """
        validate if node definition are correct
        """
        # check that demand is non-negative
        if self.demand < 0 or self.demand == float('inf'): raise ValueError('demand must be non-negative and finite')
        # check that supply is non-negative
        if self.supply < 0: raise ValueError('supply must be non-negative')
        # check that capacity is non-negative
        if self.capacity < 0: raise ValueError('capacity must be non-negative')
        # check that fixed_cost is non-negative
        if self.fixed_cost < 0: raise ValueError('fixed_cost must be non-negative')
        return self

Nodes are defined through the Node class, which inherits from Pydantic's BaseModel. This enables automatic validation ensuring that all properties have the correct datatype whenever a new object is created. In this case only the name is a required input; all other properties are optional, and if they are not provided, the specified default value is assigned. By setting the "frozen" parameter to True I made all properties immutable, meaning they cannot be changed after the object has been initialized.

The validate method is executed after the object has been initialized and applies further checks to ensure the provided values are as expected. Specifically, it checks that demand, supply, capacity, and fixed cost are non-negative. Furthermore, it does not allow infinite demand, as this would lead to an infeasible optimization problem.

These checks look trivial, but their main benefit is that they trigger an error at the earliest possible stage when an input is incorrect. Thus, they prevent building an incorrect optimization model in the first place. Exploring why a model cannot be solved is much more time consuming, as many factors would need to be analyzed, and such a "trivial" input error may not be the first aspect to investigate.
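A quick, hypothetical example of the validation at work (Pydantic's ValidationError is a subclass of ValueError):

# valid node: omitted properties fall back to their defaults
factory = Node(name='Factory 0', supply=700.0)

# invalid node: rejected at creation time
try:
    Node(name='bad node', demand=-5.0)
except ValueError as error:
    print(error)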

Edges are implemented as follows:

class Edge(BaseModel, frozen=True):
    """
    class of edge between two nodes with attributes:
    origin: 'Node' - origin node of edge
    destination: 'Node' - destination node of edge
    capacity: float - maximum flow through edge
    variable_cost: float - cost per unit flow through edge
    fixed_cost: float - cost of selecting edge
    """
    origin: Node
    destination: Node
    capacity: Optional[float] = float('inf')
    variable_cost: Optional[float] = 0.0
    fixed_cost: Optional[float] = 0.0

    @model_validator(mode='after')
    def validate(self):
        """
        validate if edge definition is correct
        """
        # check that node names are different
        if self.origin.name == self.destination.name: raise ValueError('origin and destination names must be different')
        # check that capacity is non-negative
        if self.capacity < 0: raise ValueError('capacity must be non-negative')
        # check that variable_cost is non-negative
        if self.variable_cost < 0: raise ValueError('variable_cost must be non-negative')
        # check that fixed_cost is non-negative
        if self.fixed_cost < 0: raise ValueError('fixed_cost must be non-negative')
        return self

The required inputs are an origin node and a destination node object. Additionally, capacity, variable cost, and fixed cost can be provided. The default value for capacity is infinity, which means that if no capacity value is provided, the edge is assumed to have no capacity limitation. The validation ensures that the provided values are non-negative and that the origin and destination node names are different.
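Again a small hypothetical example, one valid edge and one that fails validation:

a = Node(name='A', supply=50.0)
b = Node(name='B', demand=50.0)
# valid edge with limited capacity and a variable cost
edge = Edge(origin=a, destination=b, capacity=25.0, variable_cost=2.0)

# invalid edge: origin and destination must differ
try:
    Edge(origin=a, destination=a)
except ValueError as error:
    print(error)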

Initialization of the FlowGraph object

To define the flow graph and optimize the flow I created a new class called FlowGraph that inherits from NetworkX's DiGraph class. This way I can add my own methods that are specific to the flow optimization while still using all the methods DiGraph provides:

import time

from networkx import DiGraph
from pulp import LpProblem, LpVariable, LpMinimize, LpStatus

class FlowGraph(DiGraph):
    """
    class to define and solve minimum cost flow problems
    """
    def __init__(self, nodes=[], edges=[]):
        """
        initialize FlowGraph object
        :param nodes: list of nodes
        :param edges: list of edges
        """
        # initialize digraph
        super().__init__(None)

        # add nodes and edges
        for node in nodes: self.add_node(node)
        for edge in edges: self.add_edge(edge)


    def add_node(self, node):
        """
        add node to graph
        :param node: Node object
        """
        # check if node is a Node object
        if not isinstance(node, Node): raise ValueError('node must be a Node object')
        # add node to graph
        super().add_node(node.name, demand=node.demand, supply=node.supply, capacity=node.capacity, type=node.type, 
                         fixed_cost=node.fixed_cost, x=node.x, y=node.y)
        
    
    def add_edge(self, edge):    
        """
        add edge to graph
        @param edge: Edge object
        """   
        # check if edge is an Edge object
        if not isinstance(edge, Edge): raise ValueError('edge must be an Edge object')
        # check if nodes exist
        if not edge.origin.name in super().nodes: self.add_node(edge.origin)
        if not edge.destination.name in super().nodes: self.add_node(edge.destination)

        # add edge to graph
        super().add_edge(edge.origin.name, edge.destination.name, capacity=edge.capacity, 
                         variable_cost=edge.variable_cost, fixed_cost=edge.fixed_cost)

FlowGraph is initialized by providing a list of nodes and edges. The first step is to initialize the parent class as an empty graph. Next, nodes and edges are added via the methods add_node and add_edge. These methods first check if the provided element is a Node or Edge object; if not, an error is raised. This ensures that all elements added to the graph have passed the validation of the previous section. Next, the values of these objects are added to the DiGraph object. Note that the DiGraph class also uses add_node and add_edge methods to do so. By using the same method names I am overriding these methods, which ensures that whenever a new element is added to the graph it must go through the FlowGraph methods that validate the object type. Thus, it is not possible to build a graph with any element that has not passed the validation tests.
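For example (hypothetical usage), adding anything that has not passed validation fails immediately:

g = FlowGraph()
g.add_node(Node(name='A', supply=10.0))

# plain strings or tuples are rejected, unlike in a vanilla DiGraph
try:
    g.add_node('B')
except ValueError as error:
    print(error)  # node must be a Node object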

Initializing the optimization problem

The method below converts the network into an optimization model, solves it, and retrieves the optimized values.

    def min_cost_flow(self, verbose=True):
        """
        run minimum cost flow optimization
        @param verbose: bool - print optimization status (default: True)
        @return: status of optimization
        """
        self.verbose = verbose

        # get maximum flow
        self.max_flow = sum(node['demand'] for _, node in super().nodes.data() if node['demand'] > 0)

        start_time = time.time()
        # create LP problem
        self.prob = LpProblem("FlowGraph.min_cost_flow", LpMinimize)
        # assign decision variables
        self._assign_decision_variables()
        # assign objective function
        self._assign_objective_function()
        # assign constraints
        self._assign_constraints()
        if self.verbose: print(f"Model creation time: {time.time() - start_time:.2f} s")

        start_time = time.time()
        # solve LP problem
        self.prob.solve()
        solve_time = time.time() - start_time

        # get status
        status = LpStatus[self.prob.status]

        if verbose:
            # print optimization status
            if status == 'Optimal':
                # get objective value
                objective = self.prob.objective.value()
                print(f"Optimal solution found: {objective:.2f} in {solve_time:.2f} s")
            else:
                print(f"Optimization status: {status} in {solve_time:.2f} s")
        
        # assign variable values
        self._assign_variable_values(status=='Optimal')

        return status

PuLP's LpProblem is initialized with the constant LpMinimize, which defines it as a minimization problem, meaning the value of the objective function is to be minimized. In the following lines all decision variables are initialized, and the objective function as well as all constraints are defined. These methods are explained in the following sections.

Next, the problem is solved; in this step the optimal value of all decision variables is determined. Then the status of the optimization is retrieved. The status "Optimal" means an optimal solution was found; other statuses are "Infeasible" (it is not possible to fulfill all constraints), "Unbounded" (the objective function can take arbitrarily low values), and "Undefined" (the problem definition is incomplete). If no optimal solution was found, the problem definition needs to be reviewed.

Finally, the optimized values of all variables are retrieved and assigned to the respective nodes and edges.

Defining decision variables

All decision variables are initialized in the method below (sketched here to match the description that follows; the full implementation is in the repo):

    def _assign_decision_variables(self):
        """
        assign continuous flow variables to edges and binary selection
        variables to edges and nodes with fixed costs
        """
        var_count = bin_count = 0

        # assign edge variables
        for origin_name, destination_name, edge in super().edges.data():
            edge['flow_var'] = None
            edge['selection_var'] = None
            # continuous flow variable if the edge can carry flow
            if edge['capacity'] > 0:
                edge['flow_var'] = LpVariable(f"flow_{origin_name}-{destination_name}", lowBound=0)
                var_count += 1
            # binary selection variable if the edge has fixed costs
            if edge['fixed_cost'] > 0:
                edge['selection_var'] = LpVariable(f"select_{origin_name}-{destination_name}", cat='Binary')
                bin_count += 1

        # assign node variables
        for node_name, node in super().nodes.data():
            node['selection_var'] = None
            # binary selection variable if the node has fixed costs
            if node['fixed_cost'] > 0:
                node['selection_var'] = LpVariable(f"select_{node_name}", cat='Binary')
                bin_count += 1

        if self.verbose: print(f"Variable types: {var_count} continuous, {bin_count} binary")

First it iterates through all edges and assigns continuous decision variables if the edge capacity is greater than 0. Furthermore, if fixed costs of the edge are greater than 0 a binary decision variable is defined as well. Next, it iterates through all nodes and assigns binary decision variables to nodes with fixed costs. The total number of continuous and binary decision variables is counted and printed at the end of the method.

Defining objective

After all decision variables have been initialized the objective function can be defined:

    def _assign_objective_function(self):
        """
        define objective function
        """
        objective = 0
 
        # add edge costs
        for _, _, edge in super().edges.data():
            if edge['selection_var'] is not None: objective += edge['selection_var'] * edge['fixed_cost']
            if edge['flow_var'] is not None: objective += edge['flow_var'] * edge['variable_cost']
        
        # add node costs
        for _, node in super().nodes.data():
            # add node selection costs
            if node['selection_var'] is not None: objective += node['selection_var'] * node['fixed_cost']

        self.prob += objective, 'Objective',

The objective is initialized as 0. Then for each edge fixed costs are added if the edge has a selection variable, and variable costs are added if the edge has a flow variable. For all nodes with selection variables fixed costs are added to the objective as well. At the end of the method the objective is added to the LP object.

Defining constraints

All constraints are defined in the method below:

    def _assign_constraints(self):
        """
        define constraints
        """
        # count of constraints
        constr_count = 0
        # add capacity constraints for edges with fixed costs
        for origin_name, destination_name, edge in super().edges.data():
            # get capacity
            capacity = edge['capacity'] if edge['capacity'] < float('inf') else self.max_flow
            rhs = capacity
            if edge['selection_var'] is not None: rhs *= edge['selection_var']
            self.prob += edge['flow_var'] <= rhs, f"capacity_{origin_name}-{destination_name}",
            constr_count += 1
            
            # get origin node
            origin_node = super().nodes[origin_name]
            # check if origin node has a selection variable
            if origin_node['selection_var'] is not None:
                rhs = capacity * origin_node['selection_var'] 
                self.prob += (edge['flow_var'] <= rhs, f"node_selection_{origin_name}-{destination_name}",)
                constr_count += 1

        total_demand = total_supply = 0
        # add flow conservation constraints
        for node_name, node in super().nodes.data():
            # aggregate in and out flows
            in_flow = 0
            for _, _, edge in super().in_edges(node_name, data=True):
                if edge['flow_var'] is not None: in_flow += edge['flow_var']
            
            out_flow = 0
            for _, _, edge in super().out_edges(node_name, data=True):
                if edge['flow_var'] is not None: out_flow += edge['flow_var']

            # add node capacity constraint
            if node['capacity'] < float('inf'):
                self.prob += out_flow <= node['capacity'], f"node_capacity_{node_name}",
                constr_count += 1

            # add flow conservation constraint
            if node['demand'] != node['supply']:
                # source or sink node: inflow - outflow >= demand - supply
                rhs = node['demand'] - node['supply']
                self.prob += in_flow - out_flow >= rhs, f"flow_balance_{node_name}",
            else:
                # transshipment node (demand equals supply): inflow equals outflow
                self.prob += in_flow - out_flow == 0, f"flow_balance_{node_name}",
            constr_count += 1

            # update total demand and supply
            total_demand += node['demand']
            total_supply += node['supply']

        if self.verbose:
            print(f"Constraints: {constr_count}")
            print(f"Total supply: {total_supply}, Total demand: {total_demand}")

First, capacity constraints are defined for each edge. If the edge has a selection variable, the capacity is multiplied by this variable. In case there is no capacity limitation (capacity is set to infinity) but there is a selection variable, the selection variable is multiplied by the maximum flow, which has been calculated by aggregating the demand of all nodes. An additional constraint is added in case the edge's origin node has a selection variable. This constraint means that flow can only leave the node if its selection variable is set to 1.

Next, the flow conservation constraints for all nodes are defined. To do so, the total inflow and outflow of each node are calculated; all incoming and outgoing edges can easily be retrieved with the in_edges and out_edges methods of the DiGraph class. If the node has a capacity limitation, the maximum outflow is constrained by that value. For flow conservation it is necessary to check whether the node is a source or sink node, or a transshipment node (demand equals supply). In the first case the difference between inflow and outflow must be greater than or equal to the difference between demand and supply, while in the latter case inflow and outflow must be equal.

The total number of constraints is counted and printed at the end of the method.

Retrieving optimized values

After running the optimization, the optimized variable values can be retrieved with the following method:

    def _assign_variable_values(self, opt_found):
        """
        assign decision variable values if optimal solution found, otherwise set to None
        @param opt_found: bool - if optimal solution was found
        """
        # assign edge values        
        for _, _, edge in super().edges.data():
            # initialize values
            edge['flow'] = None
            edge['selected'] = None
            # check if optimal solution found
            if opt_found and edge['flow_var'] is not None:                    
                edge['flow'] = edge['flow_var'].varValue                    

                if edge['selection_var'] is not None: 
                    edge['selected'] = edge['selection_var'].varValue

        # assign node values
        for _, node in super().nodes.data():
            # initialize values
            node['selected'] = None
            if opt_found:                
                # check if node has selection variable
                if node['selection_var'] is not None: 
                    node['selected'] = node['selection_var'].varValue 

This method iterates through all edges and nodes, checks if decision variables have been assigned and adds the decision variable value via varValue to the respective edge or node.

Demo

To demonstrate how to apply the flow optimization I created a supply chain network consisting of 2 factories, 4 distribution centers (DC), and 15 markets. All goods produced by the factories have to flow through one distribution center before they can be delivered to the markets.

Supply chain problem

Node properties were defined:

Node definitions

Ranges mean that uniformly distributed random numbers were generated to assign these properties. Since Factories and DCs have fixed costs the optimization also needs to decide which of these entities should be selected.

Edges are generated between all Factories and DCs, as well as between all DCs and Markets. The variable cost of an edge is calculated as the Euclidean distance between its origin and destination node. Capacities of edges from Factories to DCs are set to 350, while those from DCs to Markets are set to 100.

The code below shows how the network is defined and how the optimization is run:

import random

# Define nodes
factories = [Node(name=f'Factory {i}', supply=700, type='Factory', fixed_cost=100, x=random.uniform(0, 2),
                  y=random.uniform(0, 1)) for i in range(2)]
dcs = [Node(name=f'DC {i}', fixed_cost=25, capacity=500, type='DC', x=random.uniform(0, 2), 
            y=random.uniform(0, 1)) for i in range(4)]
markets = [Node(name=f'Market {i}', demand=random.randint(1, 100), type='Market', x=random.uniform(0, 2), 
                y=random.uniform(0, 1)) for i in range(15)]

# Define edges
edges = []
# Factories to DCs
for factory in factories:
    for dc in dcs:
        distance = ((factory.x - dc.x)**2 + (factory.y - dc.y)**2)**0.5
        edges.append(Edge(origin=factory, destination=dc, capacity=350, variable_cost=distance))

# DCs to Markets
for dc in dcs:
    for market in markets:
        distance = ((dc.x - market.x)**2 + (dc.y - market.y)**2)**0.5
        edges.append(Edge(origin=dc, destination=market, capacity=100, variable_cost=distance))

# Create FlowGraph
G = FlowGraph(edges=edges)

G.min_cost_flow()

The output of flow optimization is as follows:

Variable types: 68 continuous, 6 binary
Constraints: 161
Total supply: 1400.0, Total demand: 909.0
Model creation time: 0.00 s
Optimal solution found: 1334.88 in 0.23 s

The problem consists of 68 continuous variables which are the edges’ flow variables and 6 binary decision variables which are the selection variables of the Factories and DCs. There are 161 constraints in total which consist of edge and node capacity constraints, node selection constraints (edges can only have flow if the origin node is selected), and flow conservation constraints. The next line shows that the total supply is 1400 which is higher than the total demand of 909 (if the demand was higher than the supply the problem would be infeasible). Since this is a small optimization problem, the time to define the optimization model was less than 0.01 seconds. The last line shows that an optimal solution with an objective value of 1335 could be found in 0.23 seconds.

In addition to the code described in this post, I also added two methods that visualize the optimized solution. The code of these methods can also be found in the repo.

Flow graph

All nodes are located by their respective x and y coordinates. Node and edge sizes are relative to the total volume flowing through them. The edge color refers to its utilization (flow divided by capacity). Dashed lines show edges without flow allocation.

In the optimal solution both Factories were selected which is inevitable as the maximum supply of one Factory is 700 and the total demand is 909. However, only 3 of the 4 DCs are used (DC 0 has not been selected).

In general, the plot shows that Factories supply the nearest DCs and DCs the nearest Markets. However, there are a few exceptions: Factory 0 also supplies DC 3 although Factory 1 is nearer. This is due to the capacity constraints of the edges, which allow at most 350 units per edge. The closest Markets to DC 3 have a slightly higher combined demand, hence Factory 0 moves additional units to DC 3 to meet it. Although Market 9 is closest to DC 3, it is supplied by DC 2. This is because DC 3 would require additional supply from Factory 0 to serve this market, and since the total distance from Factory 0 via DC 3 is longer than the distance via DC 2, Market 9 is supplied through the latter route.

Another way to visualize the results is via a Sankey diagram which focuses on visualizing the flows of the edges:

Sankey flow diagram

The colors represent the edges' utilizations, with the lowest utilizations in green, changing to yellow and then red for the highest. This diagram shows very well how much flow goes through each node and edge. It highlights the flow from Factory 0 to DC 3, and also that Market 13 is supplied by both DC 2 and DC 1.

Summary

Minimum cost flow optimization can be a very helpful tool in many domains like logistics, transportation, telecommunications, and the energy sector. To apply this optimization it is important to translate the physical system into a mathematical graph consisting of nodes and edges. This should be done with as few discrete (e.g. binary) decision variables as necessary, as those make it significantly more difficult to find an optimal solution. By combining Python's NetworkX, PuLP, and Pydantic libraries I built a flow optimization class that is intuitive to initialize and at the same time follows a generalized formulation which allows it to be applied in many different use cases. Graph and flow diagrams are very helpful for understanding the solution found by the optimizer.

If not otherwise stated all images were created by the author.
