Supercharge Your RAG with Multi-Agent Self-RAG

Introduction

Many of us might have tried to build a RAG application and noticed it falls significantly short of addressing real-life needs. Why is that? It’s because many real-world problems require multiple steps of information retrieval and reasoning. We need our agent to perform those as humans normally do, yet most RAG applications fall short of this.

This article explores how to supercharge your RAG application by making its data retrieval and reasoning process similar to how a human would, under a multi-agent framework. The framework presented here is based on the Self-RAG strategy but has been significantly modified to enhance its capabilities. Prior knowledge of the original strategy is not necessary for reading this article.

Real-life Case

Consider this: I was going to fly from Delhi to Munich (let's assume I am flying with an EU airline), but I was denied boarding somehow. Now I want to know what the compensation should be.

These two webpages contain relevant information, so I add them to my vector store and try to have my agent answer the question for me by retrieving the right information.

Now, I pass this question to the vector store: “how much can I receive if I am denied boarding, for flights from Delhi to Munich?”.

– – – – – – – – – – – – – – – – – – – – – – – – –
Overview of US Flight Compensation Policies To get compensation for delayed flights, you should contact your airline via their customer service or go to the customer service desk. At the same time, you should bear in mind that you will only receive compensation if the delay is not weather-related and is within the carrier's control. According to the US Department of Transportation, US airlines are not required to compensate you if a flight is cancelled or delayed. You can be compensated if you are bumped or moved from an overbooked flight. If your provider cancels your flight less than two weeks before departure and you decide to cancel your trip entirely, you can receive a refund of both pre-paid baggage fees and your plane ticket. There will be no refund if you choose to continue your journey. In the case of a delayed flight, the airline will rebook you on a different flight. According to federal law, you will not be provided with money or other compensation. Comparative Analysis of EU vs. US Flight Compensation Policies
– – – – – – – – – – – – – – – – – – – – – – – – –
(AUTHOR-ADDED NOTE: IMPORTANT, PAY ATTENTION TO THIS)
Short-distance flight delays – if it is up to 1,500 km, you are due 250 Euro compensation.
Medium distance flight delays – for all the flights between 1,500 and 3,500 km, the compensation should be 400 Euro.
Long-distance flight delays – if it is over 3,500 km, you are due 600 Euro compensation. To receive this kind of compensation, the following conditions must be met: Your flight starts in a non-EU member state or in an EU member state and finishes in an EU member state and is organised by an EU airline. Your flight reaches the final destination with a delay that exceeds three hours. There is no force majeure.
– – – – – – – – – – – – – – – – – – – – – – – – –
Compensation policies in the EU and US are not the same, which implies that it is worth knowing more about them. While you can always count on Skycop flight cancellation compensation, you should still get acquainted with the information below.
– – – – – – – – – – – – – – – – – – – – – – – – –
Compensation for flight regulations EU: The EU does regulate flight delay compensation, which is known as EU261. US: According to the US Department of Transportation, every airline has its own policies about what should be done for delayed passengers. Compensation for flight delays EU: Just like in the United States, compensation is not provided when the flight is delayed due to uncontrollable reasons. However, there is a clear approach to compensation calculation based on distance. For example, if your flight was up to 1,500 km, you can receive 250 euros. US: There are no federal requirements. That is why every airline sets its own limits for compensation in terms of length. However, it is usually set at three hours. Overbooking EU: In the EU, they call for volunteers if the flight is overbooked. These people are entitled to a choice of: Re-routing to their final destination at the earliest opportunity. Refund of their ticket cost within a week if not travelling. Re-routing at a later date at the person's convenience.

Unfortunately, they contain only generic flight compensation policies, without telling me how much I can expect when denied boarding from Delhi to Munich specifically. If the RAG agent takes these as the sole context, it can only provide a generic answer about flight compensation policy, without giving the answer we want.

However, while the documents are not immediately useful, there is an important insight contained in the 2nd piece of context: compensation varies according to flight distance. If the RAG agent thinks more like a human, it should follow these steps to provide an answer:

  1. Based on the retrieved context, reason that compensation varies with flight distance
  2. Next, retrieve the flight distance between Delhi and Munich
  3. Given the distance (which is around 5900km), classify the flight as a long-distance one
  4. Combined with the previously retrieved context, figure out I am due 600 EUR, assuming other conditions are fulfilled
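The human-like procedure above can be sketched as a simple retrieval loop. This is a minimal illustration, not the article's actual implementation: `retrieve` and `reason` are hypothetical stand-ins for a vector-store lookup and an LLM call.

```python
# Minimal sketch of the multi-step retrieval loop described above.
# `retrieve` fetches documents for a query; `reason` either produces a
# final answer from the gathered context or proposes follow-up queries
# (like "What is the flight distance?").
def answer_with_followups(question, retrieve, reason, max_iterations=3):
    context = []
    queries = [question]
    for _ in range(max_iterations):
        for q in queries:
            context.extend(retrieve(q))
        answer, followups = reason(question, context)
        if answer is not None:
            return answer  # enough context gathered, e.g. "600 EUR"
        queries = followups  # retrieve again with the new questions
    return None  # gave up after max_iterations
```

A single-retrieval RAG is the special case `max_iterations=1` with no follow-ups; the loop is what lets the agent first learn that compensation depends on distance, then go fetch the distance.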

This example demonstrates how a simple RAG, in which a single retrieval is made, falls short for several reasons:

  1. Complex Queries: Users often have questions that a simple search can’t fully address. For example, “What’s the best smartphone for gaming under $500?” requires consideration of multiple factors like performance, price, and features, which a single retrieval step might miss.
  2. Deep Information: Some information is spread across documents. For example, research papers, medical records, or legal documents often include references that need to be made sense of before one can fully understand the content of a given article. Multiple retrieval steps help dig deeper into the content.

Multiple retrievals supplemented with human-like reasoning allow for a more nuanced, comprehensive, and accurate response, adapting to the complexity and depth of user queries.

Multi-Agent Self-RAG

Here I explain the reasoning process behind this strategy; afterwards, I will provide the code to show you how to achieve it!

Note: For readers interested in knowing how my approach differs from the original Self-RAG, I will describe the discrepancies in quotation boxes like this. But general readers who are unfamiliar with the original Self-RAG can skip them.

In the below graphs, each circle represents a step (aka Node), which is performed by a dedicated agent working on the specific problem. We orchestrate them to form a multi-agent RAG application.

1st iteration: Simple RAG

A simple RAG chain

This is just the vanilla RAG approach I described in “Real-life Case”, represented as a graph. After Retrieve documents, the new_documents will be used as input for Generate Answer. Nothing special, but it serves as our starting point.

2nd iteration: Digest documents with “Grade documents”

Reasoning like humans do

Remember I said in the “Real-life Case” section, that as a next step, the agent should “reason that compensation varies with flight distance”? The Grade documents step is exactly for this purpose.

Given the new_documents, the agent will try to output two items:

  1. useful_documents: Comparing the documents against the question asked, it determines whether they are useful, and retains a memory of those deemed useful for future reference. As an example, since our question does not concern US compensation policies, documents describing those are discarded, leaving only those for the EU
  2. hypothesis: Based on the documents, the agent forms a hypothesis about how the question can be answered, namely that the flight distance needs to be identified

Notice how the above reasoning resembles human thinking! But still, while these outputs are useful, we need to instruct the agent to use them as input for performing the next document retrieval. Without this, the answer provided in Generate answer is still not useful.

useful_documents are appended to on each document retrieval loop, instead of being overwritten, to keep a memory of documents previously deemed useful. hypothesis is formed from useful_documents and new_documents to provide an "abstract reasoning" that informs how the query is to be transformed subsequently.

The hypothesis is especially useful when no useful documents can be identified initially, as the agent can still form a hypothesis from documents that are not immediately useful, or that bear only an indirect relationship to the question at hand, to inform what questions to ask next.
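A minimal sketch of how this grading step might produce its two outputs. The `llm` and `is_useful` arguments are hypothetical stand-ins for the actual model calls, and the prompt wording is illustrative only:

```python
def grade_documents(question, new_documents, llm, is_useful):
    # Keep only documents judged useful for the question...
    useful = [d for d in new_documents if is_useful(d, question)]
    # ...but form the hypothesis from ALL retrieved documents, so that
    # indirectly relevant ones (e.g. "compensation varies with distance")
    # can still inform what to ask next, even when `useful` is empty.
    hypothesis = llm(
        f"Question: {question}\n"
        f"Context: {new_documents}\n"
        "What extra information is needed to answer the question?"
    )
    return {"useful_documents": useful, "hypothesis": hypothesis}
```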

3rd iteration: Brainstorm new questions to ask

Suggest questions for additional information retrieval

We have the agent reflect upon whether the answer is useful and grounded in context. If not, it should proceed to Transform query to ask further questions.

The output new_queries will be a list of new questions that the agent considers useful for obtaining extra information. Given the useful_documents (compensation policies for the EU), and hypothesis (need to identify flight distance between Delhi and Munich), it asks questions like "What is the distance between Delhi and Munich?"

Now we are ready to use the new_queries for further retrieval!

The transform_query node will use useful_documents (which are accumulated per iteration, instead of being overwritten) and hypothesis as input for providing the agent directions to ask new questions.

The new questions will be a list of questions (instead of a single question) kept separate from the original question, which remains in the state; otherwise, the agent could lose track of the original question after multiple iterations.
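A sketch of what such a node could look like. Here propose_queries is a hypothetical stand-in for the LLM call; the real node's prompt and output parsing differ:

```python
def transform_query(state, propose_queries):
    # The original question stays untouched in state["question"]; we only
    # emit a batch of new, separate questions. With the append_to_list
    # reducer on new_queries, each batch is appended to the state rather
    # than overwriting earlier batches.
    batch = propose_queries(
        state["question"],
        state.get("useful_documents", []),
        state.get("hypothesis", ""),
    )
    return {"new_queries": batch}
```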

Final iteration: Further retrieval with new questions

Issuing new queries to retrieve extra documents

The output new_queries from Transform query will be passed to the Retrieve documents step, forming a retrieval loop.

Since the question "What is the distance between Delhi and Munich?" is asked, we can expect that the flight distance is retrieved as new_documents, subsequently graded into useful_documents, and further used as an input for Generate answer.

The grade_documents node will compare the documents against both the original question and the new_queries list, so that documents considered useful for new_queries, even if not for the original question, are kept.

This is because those documents might help answer the original question indirectly, by being relevant to new_queries (like "What is the distance between Delhi and Munich?").
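This keep-if-relevant-to-any-question behaviour can be sketched as follows, where is_relevant is a hypothetical stand-in for the LLM grader and new_queries is the accumulated list of question batches:

```python
def keep_relevant(new_documents, question, new_queries, is_relevant):
    # A document survives grading if it helps the original question OR
    # any of the generated follow-up questions in any batch.
    all_questions = [question] + [q for batch in new_queries for q in batch]
    return [d for d in new_documents
            if any(is_relevant(d, q) for q in all_questions)]
```

With this rule, the Delhi-Munich distance document is kept because it answers a follow-up question, even though a grader looking only at the original compensation question might discard it.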

Final answer!

Equipped with this new context about flight distance, the agent is now ready to provide the right answer: 600 EUR!

Next, let us dive into the code to see how this multi-agent RAG application is created.

Implementation

The source code can be found here. Our multi-agent RAG application involves iterations and loops, and LangGraph is a great library for building such complex multi-agent applications. If you are not familiar with LangGraph, I strongly suggest having a look at LangGraph’s Quickstart guide to understand more about it!

To keep this article concise, I will focus on the key code snippets only.

Important note: I am using OpenRouter as the LLM interface, but the code can be easily adapted for other LLM interfaces. Also, while in my code I am using Claude 3.5 Sonnet as the model, you can use any LLM as long as it supports tools as a parameter (check the list here), so you can also run this with other models, like DeepSeek V3 and OpenAI o1!

State definition

In the previous section, I defined various elements, e.g. new_documents and hypothesis, that are passed to each step (aka Node); in LangGraph’s terminology, these elements are called State.

We define the State formally with the following snippet.

from typing import List, Annotated
from typing_extensions import TypedDict

def append_to_list(original: list, new: list) -> list:
    original.append(new)
    return original

def combine_list(original: list, new: list) -> list:
    return original + new

class GraphState(TypedDict):
    """
    Represents the state of our graph.

    Attributes:
        question: question
        generation: LLM generation
        new_documents: newly retrieved documents for the current iteration
        useful_documents: documents that are considered useful
        graded_documents: documents that have been graded
        new_queries: newly generated questions
        hypothesis: hypothesis
    """

    question: str
    generation: str
    new_documents: List[str]
    useful_documents: Annotated[List[str], combine_list]
    graded_documents: List[str]
    new_queries: Annotated[List[str], append_to_list]
    hypothesis: str

Graph definition

This is where we combine the different steps to form a “Graph”, which is a representation of our multi-agent application. The definitions of various steps (e.g. grade_documents) are represented by their respective functions.

from langgraph.graph import END, StateGraph, START
from langgraph.checkpoint.memory import MemorySaver
from IPython.display import Image, display

workflow = StateGraph(GraphState)

# Define the nodes
workflow.add_node("retrieve", retrieve)  # retrieve
workflow.add_node("grade_documents", grade_documents)  # grade documents
workflow.add_node("generate", generate)  # generate
workflow.add_node("transform_query", transform_query)  # transform query

# Build graph
workflow.add_edge(START, "retrieve")
workflow.add_edge("retrieve", "grade_documents")
workflow.add_conditional_edges(
    "grade_documents",
    decide_to_generate,
    {
        "transform_query": "transform_query",
        "generate": "generate",
    },
)
workflow.add_edge("transform_query", "retrieve")
workflow.add_conditional_edges(
    "generate",
    grade_generation_v_documents_and_question,
    {
        "useful": END,
        "not supported": "transform_query",
        "not useful": "transform_query",
    },
)

# Compile
memory = MemorySaver()
app = workflow.compile(checkpointer=memory)
display(Image(app.get_graph(xray=True).draw_mermaid_png()))

Running the above code, you should see this graphical representation of our RAG application. Notice how it is essentially equivalent to the graph I have shown in the final iteration of “Multi-Agent Self-RAG”!

Visualizing the multi-agent RAG graph

After generate, if the answer is considered “not supported”, the agent will proceed to transform_query instead of to generate again, so that the agent looks for additional information rather than trying to regenerate an answer based on existing context, which might not suffice for a “supported” answer
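For reference, the routing contract of the two conditional-edge functions referenced by the graph can be sketched like this. The real versions call LLM graders; here grounded and addresses are hypothetical stand-in callables, so this is an illustration of the routing logic only:

```python
def decide_to_generate(state):
    # With at least one useful graded document, attempt an answer;
    # otherwise go straight to asking new questions.
    return "generate" if state["graded_documents"] else "transform_query"

def grade_generation_v_documents_and_question(state, grounded, addresses):
    if not grounded(state["generation"], state["useful_documents"]):
        return "not supported"  # hallucination -> gather more context
    if not addresses(state["generation"], state["question"]):
        return "not useful"     # grounded but off-target -> new queries
    return "useful"             # done, route to END
```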

Now we are ready to put the multi-agent application to the test! With the below code snippet, we ask this question: “how much can I receive if I am denied boarding, for flights from Delhi to Munich?”

from pprint import pprint
from uuid import uuid4

config = {"configurable": {"thread_id": str(uuid4())}}

# Run
inputs = {
    "question": "how much can I receive if I am denied boarding, for flights from Delhi to Munich?",
}
for output in app.stream(inputs, config):
    for key, value in output.items():
        # Node
        pprint(f"Node '{key}':")
        # Optional: print full state at each node
        # print(app.get_state(config).values)
    pprint("\n---\n")

# Final generation
pprint(value["generation"])

While the output might vary (sometimes the application provides the answer without any iterations, because it “guessed” the distance between Delhi and Munich), it should look something like this, which shows the application went through multiple rounds of data retrieval for RAG.

---RETRIEVE---
"Node 'retrieve':"
'\n---\n'
---CHECK DOCUMENT RELEVANCE TO QUESTION---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---ASSESS GRADED DOCUMENTS---
---DECISION: GENERATE---
"Node 'grade_documents':"
'\n---\n'
---GENERATE---
---CHECK HALLUCINATIONS---
'---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---'
"Node 'generate':"
'\n---\n'
---TRANSFORM QUERY---
"Node 'transform_query':"
'\n---\n'
---RETRIEVE---
"Node 'retrieve':"
'\n---\n'
---CHECK DOCUMENT RELEVANCE TO QUESTION---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---ASSESS GRADED DOCUMENTS---
---DECISION: GENERATE---
"Node 'grade_documents':"
'\n---\n'
---GENERATE---
---CHECK HALLUCINATIONS---
---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---
---GRADE GENERATION vs QUESTION---
---DECISION: GENERATION ADDRESSES QUESTION---
"Node 'generate':"
'\n---\n'
('Based on the context provided, the flight distance from Munich to Delhi is '
 '5,931 km, which falls into the long-distance category (over 3,500 km). '
 'Therefore, if you are denied boarding on a flight from Delhi to Munich '
 'operated by an EU airline, you would be eligible for 600 Euro compensation, '
 'provided that:\n'
 '1. The flight is operated by an EU airline\n'
 '2. There is no force majeure\n'
 '3. Other applicable conditions are met\n'
 '\n'
 "However, it's important to note that this compensation amount is only valid "
 'if all the required conditions are met as specified in the regulations.')

And the final answer is what we aimed for!

Based on the context provided, the flight distance from Munich to Delhi is
5,931 km, which falls into the long-distance category (over 3,500 km).
Therefore, if you are denied boarding on a flight from Delhi to Munich
operated by an EU airline, you would be eligible for 600 Euro compensation,
provided that:
1. The flight is operated by an EU airline
2. There is no force majeure
3. Other applicable conditions are met

However, it's important to note that this compensation amount is only valid
if all the required conditions are met as specified in the regulations.

Inspecting the State, we see how hypothesis and new_queries enhance the effectiveness of our multi-agent RAG application by mimicking the human thinking process.

Hypothesis

print(app.get_state(config).values.get('hypothesis',""))
--- Output ---
To answer this question accurately, I need to determine:

1. Is this flight operated by an EU airline? (Since Delhi is non-EU and Munich is EU)
2. What is the flight distance between Delhi and Munich? (To determine compensation amount)
3. Are we dealing with a denied boarding situation due to overbooking? (As opposed to delay/cancellation)

From the context, I can find information about compensation amounts based on distance, but I need to verify:
- If the flight meets EU compensation eligibility criteria
- The exact distance between Delhi and Munich to determine which compensation tier applies (250€, 400€, or 600€)
- If denied boarding compensation follows the same amounts as delay compensation

The context doesn't explicitly state compensation amounts specifically for denied boarding, though it mentions overbooking situations in the EU require offering volunteers re-routing or refund options.

Would you like me to proceed with the information available, or would you need additional context about denied boarding compensation specifically?

New Queries

for questions_batch in app.get_state(config).values.get('new_queries',""):
    for q in questions_batch:
        print(q)
--- Output ---
What is the flight distance between Delhi and Munich?
Does EU denied boarding compensation follow the same amounts as flight delay compensation?
Are there specific compensation rules for denied boarding versus flight delays for flights from non-EU to EU destinations?
What are the compensation rules when flying with non-EU airlines from Delhi to Munich?
What are the specific conditions that qualify as denied boarding under EU regulations?

Conclusion

Simple RAG, while easy to build, might fall short in tackling real-life questions. By incorporating a human-like thinking process into a multi-agent RAG framework, we make RAG applications much more practical.

*Unless otherwise noted, all images are by the author



Read More »

From NIMBY to YIMBY: A Playbook for Data Center Community Acceptance

Across many conversations at the start of this year, at PTC and other conferences alike, the word on everyone’s lips seems to be “community.” For the data center industry, that single word now captures a turning point from just a few short years ago: we are no longer a niche, back‑of‑house utility, but a front‑page presence in local politics, school board budgets, and town hall debates. That visibility is forcing a choice in how we tell our story—either accept a permanent NIMBY-reactive framework, or actively build a YIMBY narrative that portrays the real value digital infrastructure brings to the markets and surrounding communities that host it. Speaking regularly with Ilissa Miller, CEO of iMiller Public Relations about this topic, there is work to be done across the ecosystem to build communications. Miller recently reflected: “What we’re seeing in communities isn’t a rejection of digital infrastructure, it’s a rejection of uncertainty driven by anxiety and fear. Most local leaders have never been given a framework to evaluate digital infrastructure developments the way they evaluate roads, water systems, or industrial parks. When there’s no shared planning language, ‘no’ becomes the safest answer.” A Brief History of “No” Community pushback against data centers is no longer episodic; it has become organized, media‑savvy, and politically influential in key markets. In Northern Virginia, resident groups and environmental organizations have mobilized against large‑scale campuses, pressing counties like Loudoun and Prince William to tighten zoning, question incentives, and delay or reshape projects.1 Loudoun County’s move in 2025 to end by‑right approvals for new facilities, requiring public hearings and board votes, marked a watershed moment as the world’s densest data center market signaled that communities now expect more say over where and how these campuses are built. Prince William County’s decision to sharply increase its tax rate on

Read More »

Nomads at the Frontier: PTC 2026 Signals the Digital Infrastructure Industry’s Moment of Execution

Each January, the Pacific Telecommunications Council conference serves as a barometer for where digital infrastructure is headed next. And according to Nomad Futurist founders Nabeel Mahmood and Phillip Koblence, the message from PTC 2026 was unmistakable: The industry has moved beyond hype. The hard work has begun. In the latest episode of The DCF Show Podcast, part of our ongoing ‘Nomads at the Frontier’ series, Mahmood and Koblence joined Data Center Frontier to unpack the tone shift emerging across the AI and data center ecosystem. Attendance continues to grow year over year. Conversations remain energetic. But the character of those conversations has changed. As Mahmood put it: “The hype that the market started to see is actually resulting a bit more into actions now, and those conversations are resulting into some good progress.” The difference from prior years? Less speculation. More execution. From Data Center Cowboys to Real Deployments Koblence offered perhaps the sharpest contrast between PTC conversations in 2024 and those in 2026. Two years ago, many projects felt speculative. Today, developers are arriving with secured power, customers, and construction underway. “If 2024’s PTC was data center cowboys — sites that in someone’s mind could be a data center — this year was: show me the money, show me the power, give me accurate timelines.” In other words, the market is no longer rewarding hypothetical capacity. It is demanding delivered capacity. Operators now speak in terms of deployments already underway, not aspirational campuses still waiting on permits and power commitments. And behind nearly every conversation sits the same gating factor. Power. Power Has Become the Industry’s Defining Constraint Whether discussions centered on AI factories, investment capital, or campus expansion, Mahmood and Koblence noted that every conversation eventually returned to energy availability. “All of those questions are power,” Koblence said.

Read More »

Cooling Consolidation Hits AI Scale: LiquidStack, Submer, and the Future of Data Center Thermal Strategy

As AI infrastructure scales toward ever-higher rack densities and gigawatt-class campuses, cooling has moved from a technical subsystem to a defining strategic issue for the data center industry. A trio of announcements in early February highlights how rapidly the cooling and AI infrastructure stack is consolidating and evolving: Trane Technologies’ acquisition of LiquidStack; Submer’s acquisition of Radian Arc, extending its reach from core data centers into telco edge environments; and Submer’s partnership with Anant Raj to accelerate sovereign AI infrastructure deployment across India. Layered atop these developments is fresh guidance from Oracle Cloud Infrastructure explaining why closed-loop, direct-to-chip cooling is becoming central to next-generation facility design, particularly in regions where water use has become a flashpoint in community discussions around data center growth. Taken together, these developments show how the industry is moving beyond point solutions toward integrated, scalable AI infrastructure ecosystems, where cooling, compute, and deployment models must work together across hyperscale campuses and distributed edge environments alike. Trane Moves to Own the Cooling Stack The most consequential development comes from Trane Technologies, which on February 10 announced it has entered into a definitive agreement to acquire LiquidStack, one of the pioneers and leading innovators in data center liquid cooling. The acquisition significantly strengthens Trane’s ambition to become a full-service thermal partner for data center operators, extending its reach from plant-level systems all the way down to the chip itself. LiquidStack, headquartered in Carrollton, Texas, built its reputation on immersion cooling and advanced direct-to-chip liquid solutions supporting high-density deployments across hyperscale, enterprise, colocation, edge, and blockchain environments. 
Under Trane, those technologies will now be scaled globally and integrated into a broader thermal portfolio. In practical terms, Trane is positioning itself to deliver cooling across the full thermal chain, including: • Central plant equipment and chillers.• Heat rejection and controls

Read More »

Infrastructure Maturity Defines the Next Phase of AI Deployment

The State of Data Infrastructure Global Report 2025 from Hitachi Vantara arrives at a moment when the data center industry is undergoing one of the most profound structural shifts in its history. The transition from enterprise IT to AI-first infrastructure has moved from aspiration to inevitability, forcing operators, developers, and investors to confront uncomfortable truths about readiness, resilience, and risk. Although framed around “AI readiness,” the report ultimately tells an infrastructure story: one that maps directly onto how data centers are designed, operated, secured, and justified economically. Drawing on a global survey of more than 1,200 IT leaders, the report introduces a proprietary maturity model that evaluates organizations across six dimensions: scalability, reliability, security, governance, sovereignty, and sustainability. Respondents are then grouped into three categories—Emerging, Defined, and Optimized—revealing a stark conclusion: most organizations are not constrained by access to AI models or capital, but by the fragility of the infrastructure supporting their data pipelines. For the data center industry, the implications are immediate, shaping everything from availability design and automation strategies to sustainability planning and evolving customer expectations. In short, extracting value from AI now depends less on experimentation and more on the strength and resilience of the underlying infrastructure. The Focus of the Survey: Infrastructure, Not Algorithms Although the report is positioned as a study of AI readiness, its primary focus is not models, training approaches, or application development, but rather the infrastructure foundations required to operate AI reliably at scale. 
Drawing on responses from more than 1,200 organizations, Hitachi Vantara evaluates how enterprises are positioned to support production AI workloads across six dimensions as stated above: scalability, reliability, security, governance, sovereignty, and sustainability. These factors closely reflect the operational realities shaping modern data center design and management. The survey’s central argument is that AI success is no longer

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. 
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »