
Introduction to Minimum Cost Flow Optimization in Python


Minimum cost flow optimization minimizes the cost of moving flow through a network of nodes and edges. Nodes include sources (supply) and sinks (demand), with different costs and capacity limits. The aim is to find the least costly way to move volume from sources to sinks while adhering to all capacity limitations.

Applications

Applications of minimum cost flow optimization are vast and varied, spanning multiple industries and sectors. This approach is crucial in logistics and supply chain management, where it is used to minimize transportation costs while ensuring timely delivery of goods. In telecommunications, it helps in optimizing the routing of data through networks to reduce latency and improve bandwidth utilization. The energy sector leverages minimum cost flow optimization to efficiently distribute electricity through power grids, reducing losses and operational costs. Urban planning and infrastructure development also benefit from this optimization technique, as it assists in designing efficient public transportation systems and water distribution networks.

Example

Below is a simple flow optimization example:

The image above illustrates a minimum cost flow optimization problem with six nodes and eight edges. Nodes A and B serve as sources, each with a supply of 50 units, while nodes E and F act as sinks, each with a demand of 40 units. Every edge has a maximum capacity of 25 units, with variable costs indicated in the image. The objective of the optimization is to allocate flow on each edge to move the required units from nodes A and B to nodes E and F, respecting the edge capacities at the lowest possible cost.

Node F can only receive supply from node B. There are two paths: directly, or through node D. The direct path has a cost of 2, while the indirect path via D has a combined cost of 3. Thus, 25 units (the maximum edge capacity) are moved directly from B to F. The remaining 15 units are routed via B-D-F to meet the demand.

At this point, 40 of the 50 units have been transferred from node B, leaving a remaining supply of 10 units that can be moved to node E. The available pathways for supplying node E are: A-E and B-E with a cost of 3, A-C-E with a cost of 4, and B-C-E with a cost of 5. Consequently, 25 units are transported via A-E (limited by the edge capacity) and 10 units via B-E (limited by the remaining supply at node B). To meet the demand of 40 units at node E, an additional 5 units are moved via A-C-E, so no flow is allocated to the B-C pathway.
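Summing up, the optimal allocation is: B-F: 25 units, B-D and D-F: 15 units, A-E: 25 units, B-E: 10 units, and A-C and C-E: 5 units. This meets the demand of 40 units at both E and F, uses all 50 units of B's supply, and 30 of A's 50 units.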

Mathematical formulation

I introduce two mathematical formulations of minimum cost flow optimization:

1. LP (linear program) with continuous variables only

2. MILP (mixed integer linear program) with continuous and discrete variables

I am using the following definitions:

Definitions

LP formulation

This formulation only contains decision variables that are continuous, meaning they can have any value as long as all constraints are fulfilled. Decision variables are in this case the flow variables x(u, v) of all edges.

The objective function describes how the costs that are supposed to be minimized are calculated. In this case it is defined as the flow multiplied with the variable cost summed up over all edges:
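Since the original equation images are not reproduced here, the following is a sketch in standard notation, writing E for the set of edges, x(u, v) for the flow on edge (u, v), and c(u, v) for its variable cost:

$$\min \sum_{(u,v) \in E} c(u,v)\, x(u,v)$$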

Constraints are conditions that must be satisfied for the solution to be valid, ensuring that the flow does not exceed capacity limitations.

First, all flows must be non-negative and must not exceed the edge capacities:
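In the same notation, with cap(u, v) denoting the capacity of edge (u, v), this reads:

$$0 \le x(u,v) \le \mathrm{cap}(u,v) \qquad \forall\, (u,v) \in E$$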

Flow conservation constraints ensure that the same amount of flow that goes into a node has to come out of the node. These constraints are applied to all nodes that are neither sources nor sinks:
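A sketch of this constraint, where the left sum runs over all edges into node v and the right sum over all edges out of v, and s(v) denotes the supply of v (zero for nodes that are neither sources nor sinks):

$$\sum_{(u,v) \in E} x(u,v) = \sum_{(v,w) \in E} x(v,w) \qquad \forall\, v \in V \text{ with } s(v) = 0$$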

For source and sink nodes, the difference between outflow and inflow must be less than or equal to the supply of the node:
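In the same notation:

$$\sum_{(v,w) \in E} x(v,w) - \sum_{(u,v) \in E} x(u,v) \le s(v) \qquad \forall\, v \in V \text{ with } s(v) \neq 0$$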

If v is a source, the difference of outflow minus inflow must not exceed the supply s(v). If v is a sink, s(v) is negative, so the constraint requires that at least -s(v) more flow enters the node than leaves it.

MILP

In addition to the continuous variables of the LP formulation, the MILP formulation also contains discrete variables that can only take specific values. Discrete variables make it possible to restrict the number of used nodes or edges, and can also be used to introduce fixed costs for using nodes or edges. In this article I show how to add fixed costs. It is important to note that adding discrete decision variables makes it much more difficult to find an optimal solution; hence this formulation should only be used if an LP formulation is not possible.

The objective function is defined as:
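As a sketch, with binary edge selection variables y(u, v), binary node selection variables z(v), and fixed costs f(u, v) and f(v) (these symbol names are my own choice, since the original definitions image is not reproduced):

$$\min \sum_{(u,v) \in E} c(u,v)\, x(u,v) + \sum_{(u,v) \in E} f(u,v)\, y(u,v) + \sum_{v \in V} f(v)\, z(v)$$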

With three terms: variable cost of all edges, fixed cost of all edges, and fixed cost of all nodes.

The maximum flow that can be allocated to an edge depends on the edge’s capacity, the edge selection variable, and the origin node selection variable:
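In the notation introduced above, this can be written as:

$$x(u,v) \le \mathrm{cap}(u,v)\, y(u,v) \qquad \text{and} \qquad x(u,v) \le \mathrm{cap}(u,v)\, z(u) \qquad \forall\, (u,v) \in E$$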

These constraints ensure that flow can only be assigned to an edge if both the edge selection variable and the origin node selection variable are 1.

The flow conservation constraints are equivalent to the LP problem.

Implementation

In this section I explain how to implement a MILP optimization in Python. You can find the code in this repo.

Libraries

To build the flow network, I used NetworkX which is an excellent library (https://networkx.org/) for working with graphs. There are many interesting articles that demonstrate how powerful and easy to use NetworkX is, e.g. Customizing NetworkX Graphs, NetworkX: Code Demo for Manipulating Subgraphs, and Social Network Analysis with NetworkX: A Gentle Introduction.

One important aspect when building an optimization is to make sure that the input is correctly defined. Even one small error can make the problem infeasible or can lead to an unexpected solution. To avoid this, I used Pydantic to validate the user input and raise any issues at the earliest possible stage. This article gives an easy to understand introduction to Pydantic.

To transform the defined network into a mathematical optimization problem I used PuLP, which allows all variables and constraints to be defined in an intuitive way. This library also has the advantage that it can use many different solvers in a simple plug-and-play fashion. This article provides a good introduction to this library.
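As a minimal illustration of PuLP's interface (a toy problem, not part of the flow model built later):

from pulp import LpProblem, LpVariable, LpMinimize, LpStatus

# toy problem: minimize 2x + 3y subject to x + y >= 10, x, y >= 0
prob = LpProblem("toy_example", LpMinimize)
x = LpVariable("x", lowBound=0)
y = LpVariable("y", lowBound=0)
prob += 2 * x + 3 * y, "objective"
prob += x + y >= 10, "min_total"
prob.solve()
# x takes the full 10 units since it has the lower cost coefficient
print(LpStatus[prob.status], x.varValue, y.varValue)  # Optimal 10.0 0.0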

Defining nodes and edges

The code below shows how nodes are defined:

from pydantic import BaseModel, model_validator
from typing import Optional

# node and edge definitions
class Node(BaseModel, frozen=True):
    """
    class of network node with attributes:
    name: str - name of node
    demand: float - demand of node (if node is sink)
    supply: float - supply of node (if node is source)
    capacity: float - maximum flow out of node
    type: str - type of node
    x: float - x-coordinate of node
    y: float - y-coordinate of node
    fixed_cost: float - cost of selecting node
    """
    name: str
    demand: Optional[float] = 0.0
    supply: Optional[float] = 0.0
    capacity: Optional[float] = float('inf')
    type: Optional[str] = None
    x: Optional[float] = 0.0
    y: Optional[float] = 0.0
    fixed_cost: Optional[float] = 0.0

    @model_validator(mode='after')
    def validate(self):
        """
        validate if node definition are correct
        """
        # check that demand is non-negative
        if self.demand < 0 or self.demand == float('inf'): raise ValueError('demand must be non-negative and finite')
        # check that supply is non-negative
        if self.supply < 0: raise ValueError('supply must be non-negative')
        # check that capacity is non-negative
        if self.capacity < 0: raise ValueError('capacity must be non-negative')
        # check that fixed_cost is non-negative
        if self.fixed_cost < 0: raise ValueError('fixed_cost must be non-negative')
        return self

Nodes are defined through the Node class, which inherits from Pydantic’s BaseModel. This enables automatic validation ensuring that all properties have the correct datatype whenever a new object is created. In this case only the name is a required input; all other properties are optional, and if they are not provided, the specified default value is assigned to them. By setting the “frozen” parameter to True I made all properties immutable, meaning they cannot be changed after the object has been initialized.

The validate method is executed after the object has been initialized and applies further checks to ensure the provided values are as expected. Specifically, it checks that demand, supply, capacity, and fixed cost are not negative. Furthermore, it does not allow infinite demand, as this would lead to an infeasible optimization problem.

These checks look trivial, however their main benefit is that they trigger an error at the earliest possible stage when an input is incorrect. Thus, they prevent creating an optimization model that is incorrect. Exploring why a model cannot be solved would be much more time consuming, as there are many factors that would need to be analyzed, while such a “trivial” input error may not be the first aspect to investigate.
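As a small sketch of how this early validation plays out (node names here are hypothetical):

# a valid source node
source = Node(name='A', supply=50.0, type='Source')

# an invalid node: the negative demand raises an error immediately
try:
    Node(name='B', demand=-10.0)
except ValueError as err:  # Pydantic's ValidationError subclasses ValueError
    print(err)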

Edges are implemented as follows:

class Edge(BaseModel, frozen=True):
    """
    class of edge between two nodes with attributes:
    origin: 'Node' - origin node of edge
    destination: 'Node' - destination node of edge
    capacity: float - maximum flow through edge
    variable_cost: float - cost per unit flow through edge
    fixed_cost: float - cost of selecting edge
    """
    origin: Node
    destination: Node
    capacity: Optional[float] = float('inf')
    variable_cost: Optional[float] = 0.0
    fixed_cost: Optional[float] = 0.0

    @model_validator(mode='after')
    def validate(self):
        """
        validate if edge definition is correct
        """
        # check that node names are different
        if self.origin.name == self.destination.name: raise ValueError('origin and destination names must be different')
        # check that capacity is non-negative
        if self.capacity < 0: raise ValueError('capacity must be non-negative')
        # check that variable_cost is non-negative
        if self.variable_cost < 0: raise ValueError('variable_cost must be non-negative')
        # check that fixed_cost is non-negative
        if self.fixed_cost < 0: raise ValueError('fixed_cost must be non-negative')
        return self

The required inputs are an origin node and a destination node object. Additionally, capacity, variable cost and fixed cost can be provided. The default value for capacity is infinity which means if no capacity value is provided it is assumed the edge does not have a capacity limitation. The validation ensures that the provided values are non-negative and that origin node name and the destination node name are different.
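A quick sketch of the edge validation (again with hypothetical nodes):

a, b = Node(name='A'), Node(name='B')
edge = Edge(origin=a, destination=b, capacity=25.0, variable_cost=2.0)

# a self-loop is rejected at creation time
try:
    Edge(origin=a, destination=a)
except ValueError as err:
    print(err)  # origin and destination names must be different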

Initialization of flowgraph object

To define the flowgraph and optimize the flow I created a new class called FlowGraph that is inherited from NetworkX’s DiGraph class. By doing this I can add my own methods that are specific to the flow optimization and at the same time use all methods DiGraph provides:

import time

from networkx import DiGraph
from pulp import LpProblem, LpVariable, LpMinimize, LpStatus

class FlowGraph(DiGraph):
    """
    class to define and solve minimum cost flow problems
    """
    def __init__(self, nodes=[], edges=[]):
        """
        initialize FlowGraph object
        :param nodes: list of nodes
        :param edges: list of edges
        """
        # initialize digraph
        super().__init__(None)

        # add nodes and edges
        for node in nodes: self.add_node(node)
        for edge in edges: self.add_edge(edge)


    def add_node(self, node):
        """
        add node to graph
        :param node: Node object
        """
        # check if node is a Node object
        if not isinstance(node, Node): raise ValueError('node must be a Node object')
        # add node to graph
        super().add_node(node.name, demand=node.demand, supply=node.supply, capacity=node.capacity, type=node.type, 
                         fixed_cost=node.fixed_cost, x=node.x, y=node.y)
        
    
    def add_edge(self, edge):    
        """
        add edge to graph
        @param edge: Edge object
        """   
        # check if edge is an Edge object
        if not isinstance(edge, Edge): raise ValueError('edge must be an Edge object')
        # check if nodes exist
        if not edge.origin.name in super().nodes: self.add_node(edge.origin)
        if not edge.destination.name in super().nodes: self.add_node(edge.destination)

        # add edge to graph
        super().add_edge(edge.origin.name, edge.destination.name, capacity=edge.capacity, 
                         variable_cost=edge.variable_cost, fixed_cost=edge.fixed_cost)

FlowGraph is initialized by providing a list of nodes and edges. The first step is to initialize the parent class as an empty graph. Next, nodes and edges are added via the methods add_node and add_edge. These methods first check if the provided element is a Node or Edge object. If this is not the case an error is raised. This ensures that all elements added to the graph have passed the validation of the previous section. Next, the values of these objects are added to the DiGraph object. Note that the DiGraph class also uses add_node and add_edge methods to do so. By using the same method names I am overriding these methods, ensuring that whenever a new element is added to the graph it must pass through the FlowGraph methods which validate the object type. Thus, it is not possible to build a graph with any element that has not passed the validation tests.
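A brief sketch of this type check in action (hypothetical input):

graph = FlowGraph()
graph.add_node(Node(name='A', supply=50.0))

# anything that is not a Node object is rejected
try:
    graph.add_node('B')
except ValueError as err:
    print(err)  # node must be a Node object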

Initializing the optimization problem

The method below converts the network into an optimization model, solves it, and retrieves the optimized values.

  def min_cost_flow(self, verbose=True):
        """
        run minimum cost flow optimization
        @param verbose: bool - print optimization status (default: True)
        @return: status of optimization
        """
        self.verbose = verbose

        # get maximum flow
        self.max_flow = sum(node['demand'] for _, node in super().nodes.data() if node['demand'] > 0)

        start_time = time.time()
        # create LP problem
        self.prob = LpProblem("FlowGraph.min_cost_flow", LpMinimize)
        # assign decision variables
        self._assign_decision_variables()
        # assign objective function
        self._assign_objective_function()
        # assign constraints
        self._assign_constraints()
        if self.verbose: print(f"Model creation time: {time.time() - start_time:.2f} s")

        start_time = time.time()
        # solve LP problem
        self.prob.solve()
        solve_time = time.time() - start_time

        # get status
        status = LpStatus[self.prob.status]

        if verbose:
            # print optimization status
            if status == 'Optimal':
                # get objective value
                objective = self.prob.objective.value()
                print(f"Optimal solution found: {objective:.2f} in {solve_time:.2f} s")
            else:
                print(f"Optimization status: {status} in {solve_time:.2f} s")
        
        # assign variable values
        self._assign_variable_values(status=='Optimal')

        return status

PuLP’s LpProblem is initialized; the constant LpMinimize defines it as a minimization problem, meaning it is supposed to minimize the value of the objective function. In the following lines all decision variables are initialized, and the objective function as well as all constraints are defined. These methods will be explained in the following sections.

Next, the problem is solved; in this step the optimal values of all decision variables are determined. Then the status of the optimization is retrieved. A status of “Optimal” means an optimal solution was found; other statuses are “Infeasible” (it is not possible to fulfill all constraints), “Unbounded” (the objective function can take arbitrarily low values), and “Undefined” (the problem definition is not complete). If no optimal solution was found, the problem definition needs to be reviewed.

Finally, the optimized values of all variables are retrieved and assigned to the respective nodes and edges.

Defining decision variables

All decision variables are initialized in the method below:

    def _assign_decision_variables(self):
        """
        initialize decision variables
        """
        # count of continuous and binary variables
        continuous_count = binary_count = 0

        # assign variables to edges
        for origin_name, destination_name, edge in super().edges.data():
            edge['flow_var'] = None
            edge['selection_var'] = None
            # continuous flow variable if edge has capacity
            if edge['capacity'] > 0:
                edge['flow_var'] = LpVariable(f"flow_{origin_name}-{destination_name}", lowBound=0)
                continuous_count += 1
            # binary selection variable if edge has fixed costs
            if edge['fixed_cost'] > 0:
                edge['selection_var'] = LpVariable(f"selection_{origin_name}-{destination_name}", cat='Binary')
                binary_count += 1

        # assign variables to nodes
        for node_name, node in super().nodes.data():
            node['selection_var'] = None
            # binary selection variable if node has fixed costs
            if node['fixed_cost'] > 0:
                node['selection_var'] = LpVariable(f"selection_{node_name}", cat='Binary')
                binary_count += 1

        if self.verbose: print(f"Variable types: {continuous_count} continuous, {binary_count} binary")

First it iterates through all edges and assigns continuous decision variables if the edge capacity is greater than 0. Furthermore, if fixed costs of the edge are greater than 0 a binary decision variable is defined as well. Next, it iterates through all nodes and assigns binary decision variables to nodes with fixed costs. The total number of continuous and binary decision variables is counted and printed at the end of the method.

Defining objective

After all decision variables have been initialized the objective function can be defined:

    def _assign_objective_function(self):
        """
        define objective function
        """
        objective = 0
 
        # add edge costs
        for _, _, edge in super().edges.data():
            if edge['selection_var'] is not None: objective += edge['selection_var'] * edge['fixed_cost']
            if edge['flow_var'] is not None: objective += edge['flow_var'] * edge['variable_cost']
        
        # add node costs
        for _, node in super().nodes.data():
            # add node selection costs
            if node['selection_var'] is not None: objective += node['selection_var'] * node['fixed_cost']

        self.prob += objective, 'Objective',

The objective is initialized as 0. Then for each edge fixed costs are added if the edge has a selection variable, and variable costs are added if the edge has a flow variable. For all nodes with selection variables fixed costs are added to the objective as well. At the end of the method the objective is added to the LP object.

Defining constraints

All constraints are defined in the method below:

  def _assign_constraints(self):
        """
        define constraints
        """
        # count of constraints
        constr_count = 0
        # add capacity constraints for edges with fixed costs
        for origin_name, destination_name, edge in super().edges.data():
            # get capacity
            capacity = edge['capacity'] if edge['capacity'] < float('inf') else self.max_flow
            rhs = capacity
            if edge['selection_var'] is not None: rhs *= edge['selection_var']
            self.prob += edge['flow_var'] <= rhs, f"capacity_{origin_name}-{destination_name}",
            constr_count += 1
            
            # get origin node
            origin_node = super().nodes[origin_name]
            # check if origin node has a selection variable
            if origin_node['selection_var'] is not None:
                rhs = capacity * origin_node['selection_var'] 
                self.prob += (edge['flow_var'] <= rhs, f"node_selection_{origin_name}-{destination_name}",)
                constr_count += 1

        total_demand = total_supply = 0
        # add flow conservation constraints
        for node_name, node in super().nodes.data():
            # aggregate in and out flows
            in_flow = 0
            for _, _, edge in super().in_edges(node_name, data=True):
                if edge['flow_var'] is not None: in_flow += edge['flow_var']
            
            out_flow = 0
            for _, _, edge in super().out_edges(node_name, data=True):
                if edge['flow_var'] is not None: out_flow += edge['flow_var']

            # add node capacity constraint
            if node['capacity'] < float('inf'):
                self.prob += out_flow <= node['capacity'], f"node_capacity_{node_name}",
                constr_count += 1

            # add flow conservation constraint
            if node['demand'] == node['supply']:
                # transshipment node: inflow must equal outflow
                self.prob += in_flow - out_flow == 0, f"flow_balance_{node_name}",
            else:
                # source or sink node: inflow minus outflow must cover demand minus supply
                rhs = node['demand'] - node['supply']
                self.prob += in_flow - out_flow >= rhs, f"flow_balance_{node_name}",
            constr_count += 1

            # update total demand and supply
            total_demand += node['demand']
            total_supply += node['supply']

        if self.verbose:
            print(f"Constraints: {constr_count}")
            print(f"Total supply: {total_supply}, Total demand: {total_demand}")

First, capacity constraints are defined for each edge. If the edge has a selection variable the capacity is multiplied with this variable. In case there is no capacity limitation (capacity is set to infinity) but there is a selection variable, the selection variable is multiplied with the maximum flow that has been calculated by aggregating the demand of all nodes. An additional constraint is added in case the edge’s origin node has a selection variable. This constraint means that flow can only come out of this node if the selection variable is set to 1.

Next, the flow conservation constraints for all nodes are defined. To do so, the total inflow and outflow of each node are calculated. Getting all incoming and outgoing edges can easily be done using the in_edges and out_edges methods of the DiGraph class. If the node has a capacity limitation, the maximum outflow is constrained by that value. For the flow conservation it is necessary to check whether the node is a source or sink node, or a transshipment node (demand equals supply). In the first case the difference between inflow and outflow must be greater than or equal to the difference between demand and supply, while in the latter case inflow and outflow must be equal.

The total number of constraints is counted and printed at the end of the method.

Retrieving optimized values

After running the optimization, the optimized variable values can be retrieved with the following method:

    def _assign_variable_values(self, opt_found):
        """
        assign decision variable values if optimal solution found, otherwise set to None
        @param opt_found: bool - if optimal solution was found
        """
        # assign edge values        
        for _, _, edge in super().edges.data():
            # initialize values
            edge['flow'] = None
            edge['selected'] = None
            # check if optimal solution found
            if opt_found and edge['flow_var'] is not None:                    
                edge['flow'] = edge['flow_var'].varValue                    

                if edge['selection_var'] is not None: 
                    edge['selected'] = edge['selection_var'].varValue

        # assign node values
        for _, node in super().nodes.data():
            # initialize values
            node['selected'] = None
            if opt_found:                
                # check if node has selection variable
                if node['selection_var'] is not None: 
                    node['selected'] = node['selection_var'].varValue 

This method iterates through all edges and nodes, checks if decision variables have been assigned and adds the decision variable value via varValue to the respective edge or node.

Demo

To demonstrate how to apply the flow optimization, I created a supply chain network consisting of 2 factories, 4 distribution centers (DCs), and 15 markets. All goods produced by the factories have to flow through a distribution center before they can be delivered to the markets.

Supply chain problem

Node properties were defined:

Node definitions

Ranges mean that uniformly distributed random numbers were generated to assign these properties. Since Factories and DCs have fixed costs the optimization also needs to decide which of these entities should be selected.

Edges are generated between all Factories and DCs, as well as between all DCs and Markets. The variable cost of an edge is calculated as the Euclidean distance between its origin and destination nodes. Capacities of edges from Factories to DCs are set to 350, while those from DCs to Markets are set to 100.

The code below shows how the network is defined and how the optimization is run:

import random

# Define nodes
factories = [Node(name=f'Factory {i}', supply=700, type='Factory', fixed_cost=100, x=random.uniform(0, 2),
                  y=random.uniform(0, 1)) for i in range(2)]
dcs = [Node(name=f'DC {i}', fixed_cost=25, capacity=500, type='DC', x=random.uniform(0, 2), 
            y=random.uniform(0, 1)) for i in range(4)]
markets = [Node(name=f'Market {i}', demand=random.randint(1, 100), type='Market', x=random.uniform(0, 2), 
                y=random.uniform(0, 1)) for i in range(15)]

# Define edges
edges = []
# Factories to DCs
for factory in factories:
    for dc in dcs:
        distance = ((factory.x - dc.x)**2 + (factory.y - dc.y)**2)**0.5
        edges.append(Edge(origin=factory, destination=dc, capacity=350, variable_cost=distance))

# DCs to Markets
for dc in dcs:
    for market in markets:
        distance = ((dc.x - market.x)**2 + (dc.y - market.y)**2)**0.5
        edges.append(Edge(origin=dc, destination=market, capacity=100, variable_cost=distance))

# Create FlowGraph
G = FlowGraph(edges=edges)

G.min_cost_flow()

The output of flow optimization is as follows:

Variable types: 68 continuous, 6 binary
Constraints: 161
Total supply: 1400.0, Total demand: 909.0
Model creation time: 0.00 s
Optimal solution found: 1334.88 in 0.23 s

The problem consists of 68 continuous variables, which are the edges’ flow variables (2 Factories × 4 DCs plus 4 DCs × 15 Markets = 68 edges), and 6 binary decision variables, which are the selection variables of the Factories and DCs. There are 161 constraints in total, consisting of edge and node capacity constraints, node selection constraints (edges can only carry flow if the origin node is selected), and flow conservation constraints. The next line shows that the total supply is 1400, which is higher than the total demand of 909 (if the demand were higher than the supply, the problem would be infeasible). Since this is a small optimization problem, the time to define the optimization model was less than 0.01 seconds. The last line shows that an optimal solution with an objective value of 1335 was found in 0.23 seconds.

In addition to the code I described in this post, I also added two methods that visualize the optimized solution. The code for these methods can also be found in the repo.

Flow graph

All nodes are placed at their respective x and y coordinates. Node and edge sizes are relative to the total volume flowing through them. The edge color refers to its utilization (flow over capacity). Dashed lines show edges without flow allocation.

In the optimal solution both Factories were selected which is inevitable as the maximum supply of one Factory is 700 and the total demand is 909. However, only 3 of the 4 DCs are used (DC 0 has not been selected).

In general the plot shows that Factories supply the nearest DCs, and DCs the nearest Markets. However, there are a few exceptions to this observation: Factory 0 also supplies DC 3 although Factory 1 is nearer. This is due to the capacity constraints of the edges, which allow at most 350 units to be moved per edge. The Markets closest to DC 3 have a slightly higher combined demand, hence Factory 0 moves additional units to DC 3 to meet that demand. Although Market 9 is closest to DC 3, it is supplied by DC 2. This is because DC 3 would require additional supply from Factory 0 to serve this market, and since the total distance from Factory 0 via DC 3 is longer than the distance from Factory 0 via DC 2, Market 9 is supplied through the latter route.

Another way to visualize the results is via a Sankey diagram which focuses on visualizing the flows of the edges:

Sankey flow diagram

The colors represent the edges’ utilizations with lowest utilizations in green changing to yellow and red for the highest utilizations. This diagram shows very well how much flow goes through each node and edge. It highlights the flow from Factory 0 to DC 3 and also that Market 13 is supplied by DC 2 and DC 1.

Summary

Minimum cost flow optimization can be a very helpful tool in many domains like logistics, transportation, telecommunications, and energy. To apply it, it is important to translate a physical system into a mathematical graph consisting of nodes and edges. This should be done with as few discrete (e.g. binary) decision variables as necessary, as those make it significantly more difficult to find an optimal solution. By combining Python’s NetworkX, PuLP, and Pydantic libraries, I built a flow optimization class that is intuitive to initialize and at the same time follows a generalized formulation that allows it to be applied to many different use cases. Graph and flow diagrams are very helpful for understanding the solution found by the optimizer.

If not otherwise stated all images were created by the author.
