Supercharge Your RAG with Multi-Agent Self-RAG

Introduction

Many of us might have tried to build a RAG application and noticed it falls significantly short of addressing real-life needs. Why is that? It’s because many real-world problems require multiple steps of information retrieval and reasoning. We need our agent to perform those as humans normally do, yet most RAG applications fall short of this.

This article explores how to supercharge your RAG application by making its data retrieval and reasoning process similar to how a human would, under a multi-agent framework. The framework presented here is based on the Self-RAG strategy but has been significantly modified to enhance its capabilities. Prior knowledge of the original strategy is not necessary for reading this article.

Real-life Case

Consider this: I was going to fly from Delhi to Munich (let’s assume I am flying with an EU airline), but I was denied boarding. Now I want to know what compensation I am due.

These two webpages contain relevant information, so I add them to my vector store and try to have my agent answer the question by retrieving the right information.

Now, I pass this question to the vector store: “how much can I receive if I am denied boarding, for flights from Delhi to Munich?”.

– – – – – – – – – – – – – – – – – – – – – – – – –
Overview of US Flight Compensation Policies To get compensation for delayed flights, you should contact your airline via their customer service or go to the customer service desk. At the same time, you should bear in mind that you will only receive compensation if the delay is not weather-related and is within the carrier’s control. According to the US Department of Transportation, US airlines are not required to compensate you if a flight is cancelled or delayed. You can be compensated if you are bumped or moved from an overbooked flight. If your provider cancels your flight less than two weeks before departure and you decide to cancel your trip entirely, you can receive a refund of both pre-paid baggage fees and your plane ticket. There will be no refund if you choose to continue your journey. In the case of a delayed flight, the airline will rebook you on a different flight. According to federal law, you will not be provided with money or other compensation. Comparative Analysis of EU vs. US Flight Compensation Policies
– – – – – – – – – – – – – – – – – – – – – – – – –
(AUTHOR-ADDED NOTE: IMPORTANT, PAY ATTENTION TO THIS)
Short-distance flight delays – if it is up to 1,500 km, you are due 250 Euro compensation.
Medium distance flight delays – for all the flights between 1,500 and 3,500 km, the compensation should be 400 Euro.
Long-distance flight delays – if it is over 3,500 km, you are due 600 Euro compensation. To receive this kind of compensation, the following conditions must be met; Your flight starts in a non-EU member state or in an EU member state and finishes in an EU member state and is organised by an EU airline. Your flight reaches the final destination with a delay that exceeds three hours. There is no force majeure.
– – – – – – – – – – – – – – – – – – – – – – – – –
Compensation policies in the EU and US are not the same, which implies that it is worth knowing more about them. While you can always count on Skycop flight cancellation compensation, you should still get acquainted with the information below.
– – – – – – – – – – – – – – – – – – – – – – – – –
Compensation for flight regulations EU: The EU does regulate flight delay compensation, which is known as EU261. US: According to the US Department of Transportation, every airline has its own policies about what should be done for delayed passengers. Compensation for flight delays EU: Just like in the United States, compensation is not provided when the flight is delayed due to uncontrollable reasons. However, there is a clear approach to compensation calculation based on distance. For example, if your flight was up to 1,500 km, you can receive 250 euros. US: There are no federal requirements. That is why every airline sets its own limits for compensation in terms of length. However, it is usually set at three hours. Overbooking EU: In the EU, they call for volunteers if the flight is overbooked. These people are entitled to a choice of: Re-routing to their final destination at the earliest opportunity. Refund of their ticket cost within a week if not travelling. Re-routing at a later date at the person’s convenience.

Unfortunately, they contain only generic flight compensation policies, without telling me how much I can expect when denied boarding from Delhi to Munich specifically. If the RAG agent takes these as the sole context, it can only provide a generic answer about flight compensation policy, without giving the answer we want.

However, while the documents are not immediately useful, there is an important insight contained in the 2nd piece of context: compensation varies according to flight distance. If the RAG agent thinks more like a human, it should follow these steps to provide an answer:

  1. Based on the retrieved context, reason that compensation varies with flight distance
  2. Next, retrieve the flight distance between Delhi and Munich
  3. Given the distance (which is around 5900km), classify the flight as a long-distance one
  4. Combined with the previously retrieved context, figure out that I am due 600 EUR, assuming other conditions are fulfilled

This example demonstrates how a simple RAG, in which a single retrieval is made, falls short for several reasons:

  1. Complex Queries: Users often have questions that a simple search can’t fully address. For example, “What’s the best smartphone for gaming under $500?” requires consideration of multiple factors like performance, price, and features, which a single retrieval step might miss.
  2. Deep Information: Some information spans multiple documents. For example, research papers, medical records, or legal documents often include references that must be followed before one can fully understand the content of a given article. Multiple retrieval steps help dig deeper into the content.

Multiple retrievals supplemented with human-like reasoning allow for a more nuanced, comprehensive, and accurate response, adapting to the complexity and depth of user queries.
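In code terms, the human-like loop we are after looks roughly like this. This is only a conceptual sketch with hypothetical helper names (retrieve, grade, generate, rewrite_queries, is_grounded_and_useful); the rest of this article builds the real thing as a multi-agent graph.

def answer_with_iterative_rag(question: str, max_iterations: int = 3) -> str:
    # Conceptual sketch: each helper below becomes a dedicated agent (node) later
    useful_documents = []
    queries = [question]
    for _ in range(max_iterations):
        new_documents = retrieve(queries)  # fetch candidate context
        useful, hypothesis = grade(question, new_documents)  # keep what helps, reason about gaps
        useful_documents += useful
        answer = generate(question, useful_documents)
        if is_grounded_and_useful(answer, useful_documents, question):
            return answer
        queries = rewrite_queries(question, hypothesis, useful_documents)
    return answer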

Multi-Agent Self-RAG

Here I explain the reasoning process behind this strategy; afterwards, I will provide the code to show you how to achieve it!

Note: For readers interested in knowing how my approach differs from the original Self-RAG, I will describe the discrepancies in quotation boxes like this. But general readers who are unfamiliar with the original Self-RAG can skip them.

In the graphs below, each circle represents a step (aka Node), which is performed by a dedicated agent working on the specific problem. We orchestrate them to form a multi-agent RAG application.

1st iteration: Simple RAG

A simple RAG chain

This is just the vanilla RAG approach I described in “Real-life Case”, represented as a graph. After Retrieve documents, the new_documents will be used as input for Generate Answer. Nothing special, but it serves as our starting point.
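As a minimal sketch, assuming a vector store retriever named retriever and a chat model named llm (the full implementations are in the source code linked in the Implementation section), these two nodes might look like:

def retrieve(state):
    print("---RETRIEVE---")
    # Query the vector store with the current question
    documents = retriever.invoke(state["question"])
    return {"new_documents": [doc.page_content for doc in documents]}

def generate(state):
    print("---GENERATE---")
    # Stuff the retrieved documents into the prompt as context
    context = "\n\n".join(state["new_documents"])
    response = llm.invoke(
        "Answer the question using only this context:\n" + context
        + "\n\nQuestion: " + state["question"]
    )
    return {"generation": response.content}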

2nd iteration: Digest documents with “Grade documents”

Reasoning like humans do

Remember I said in the “Real-life Case” section, that as a next step, the agent should “reason that compensation varies with flight distance”? The Grade documents step is exactly for this purpose.

Given the new_documents, the agent will try to output two items:

  1. useful_documents: Comparing the documents against the question asked, the agent determines which are useful and retains them in memory for future reference. For example, since our question does not concern US compensation policies, documents describing those are discarded, leaving only the ones about the EU
  2. hypothesis: Based on the documents, the agent forms a hypothesis about how the question can be answered; in this case, that the flight distance needs to be identified

Notice how the above reasoning resembles human thinking! But still, while these outputs are useful, we need to instruct the agent to use them as input for performing the next document retrieval. Without this, the answer provided in Generate answer is still not useful.

useful_documents are appended to on each document retrieval loop, instead of being overwritten, to keep a memory of documents previously deemed useful. hypothesis is formed from useful_documents and new_documents to provide an “abstract reasoning” that informs how the query is to be transformed subsequently.

The hypothesis is especially useful when no useful documents can be identified initially: the agent can still form a hypothesis from documents that are not immediately useful, or that bear only an indirect relationship to the question at hand, informing what questions to ask next.
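As a minimal sketch of what this grading step might look like, assuming the llm chat model from before (the schema and prompt wording here are illustrative, not the exact ones from the source code):

from pydantic import BaseModel

class GradeResult(BaseModel):
    useful_documents: list[str]  # documents worth keeping for future reference
    hypothesis: str  # reasoning about what extra information is still needed

def grade_documents(state):
    print("---CHECK DOCUMENT RELEVANCE TO QUESTION---")
    grader = llm.with_structured_output(GradeResult)
    result = grader.invoke(
        "Question: " + state["question"] + "\n\n"
        "Documents:\n" + "\n---\n".join(state["new_documents"]) + "\n\n"
        "Keep only the documents useful for answering the question, and state "
        "a hypothesis about what extra information is still needed."
    )
    # useful_documents is accumulated across iterations by its reducer,
    # so returning only the new batch here appends rather than overwrites
    return {
        "useful_documents": result.useful_documents,
        "hypothesis": result.hypothesis,
    }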

3rd iteration: Brainstorm new questions to ask

Suggest questions for additional information retrieval

We have the agent reflect upon whether the answer is useful and grounded in context. If not, it should proceed to Transform query to ask further questions.

The output new_queries will be a list of new questions that the agent considers useful for obtaining extra information. Given the useful_documents (compensation policies for the EU) and the hypothesis (need to identify flight distance between Delhi and Munich), it asks questions like “What is the distance between Delhi and Munich?”

Now we are ready to use the new_queries for further retrieval!

The transform_query node will use useful_documents (which are accumulated per iteration, instead of being overwritten) and hypothesis as input, giving the agent direction for asking new questions.

The new questions are kept as a list (instead of a single question), separate from the original question, so that the original question remains in state; otherwise, the agent could lose track of it after multiple iterations.
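A sketch of how this node might be written, again with an illustrative prompt and schema rather than the exact ones from the source code:

from pydantic import BaseModel

class NewQueries(BaseModel):
    queries: list[str]  # follow-up questions, kept separate from the original one

def transform_query(state):
    print("---TRANSFORM QUERY---")
    rewriter = llm.with_structured_output(NewQueries)
    result = rewriter.invoke(
        "Original question: " + state["question"] + "\n"
        "Hypothesis: " + state["hypothesis"] + "\n"
        "Useful documents so far:\n" + "\n---\n".join(state["useful_documents"]) + "\n\n"
        "Suggest follow-up questions whose answers would provide the missing "
        "information. Do not restate the original question."
    )
    # Appended as one batch per iteration by the append_to_list reducer
    return {"new_queries": result.queries}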

Final iteration: Further retrieval with new questions

Issuing new queries to retrieve extra documents

The output new_queries from Transform query will be passed to the Retrieve documents step, forming a retrieval loop.

Since the question “What is the distance between Delhi and Munich?” is asked, we can expect that the flight distance is then retrieved as new_documents, subsequently graded as useful_documents, and further used as input for Generate answer.

The grade_documents node will compare the documents against both the original question and the new_queries list, so that documents considered useful for new_queries, even if not for the original question, are kept.

This is because those documents might help answer the original question indirectly, by being relevant to new_queries (like “What is the distance between Delhi and Munich?”).
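Inside grade_documents, this combined check might look like the following fragment, where is_relevant is a hypothetical helper wrapping an LLM relevance grader:

# Flatten the accumulated query batches and grade against every question
useful = []
all_queries = [state["question"]] + [
    q for batch in state.get("new_queries", []) for q in batch
]
for doc in state["new_documents"]:
    if any(is_relevant(doc, query) for query in all_queries):
        print("---GRADE: DOCUMENT RELEVANT---")
        useful.append(doc)
    else:
        print("---GRADE: DOCUMENT NOT RELEVANT---")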

Final answer!

Equipped with this new context about flight distance, the agent is now ready to provide the right answer: 600 EUR!

Next, let us dive into the code to see how this multi-agent RAG application is created.

Implementation

The source code can be found here. Our multi-agent RAG application involves iterations and loops, and LangGraph is a great library for building such complex multi-agent applications. If you are not familiar with LangGraph, I strongly suggest having a look at LangGraph’s Quickstart guide to understand more about it!

To keep this article concise, I will focus on the key code snippets only.

Important note: I am using OpenRouter as the LLM interface, but the code can be easily adapted for other LLM interfaces. Also, while in my code I am using Claude 3.5 Sonnet as the model, you can use any LLM as long as it supports tools as a parameter (check this list here), so you can also run this with other models, like DeepSeek V3 and OpenAI o1!
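For reference, a minimal sketch of wiring OpenRouter up through LangChain’s OpenAI-compatible client might look like this (the environment variable name is my assumption, not necessarily what the source code uses):

import os
from langchain_openai import ChatOpenAI

# OpenRouter exposes an OpenAI-compatible API, so the standard client works;
# the environment variable name is an assumption for this sketch
llm = ChatOpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
    model="anthropic/claude-3.5-sonnet",
)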

State definition

In the previous section, I defined various elements, e.g. new_documents and hypothesis, that are passed between steps (aka Nodes); in LangGraph’s terminology, these elements are called State.

We define the State formally with the following snippet.

from typing import List, Annotated
from typing_extensions import TypedDict

def append_to_list(original: list, new: list) -> list:
    original.append(new)
    return original

def combine_list(original: list, new: list) -> list:
    return original + new

class GraphState(TypedDict):
    """
    Represents the state of our graph.

    Attributes:
        question: question
        generation: LLM generation
        new_documents: newly retrieved documents for the current iteration
        useful_documents: documents that are considered useful
        graded_documents: documents that have been graded
        new_queries: newly generated questions
        hypothesis: hypothesis
    """

    question: str
    generation: str
    new_documents: List[str]
    useful_documents: Annotated[List[str], combine_list]
    graded_documents: List[str]
    new_queries: Annotated[List[str], append_to_list]
    hypothesis: str
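To see what these reducers do: when a node returns an update for an Annotated field, LangGraph merges it with the existing value through the reducer instead of overwriting it. A quick illustration of the two behaviors, with illustrative values:

existing = ["EU compensation rules"]
update = ["Delhi-Munich flight distance: 5,931 km"]
combine_list(existing, update)
# -> ['EU compensation rules', 'Delhi-Munich flight distance: 5,931 km']

append_to_list([["q1", "q2"]], ["q3", "q4"])
# -> [['q1', 'q2'], ['q3', 'q4']]  (one batch of new queries per iteration)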

Graph definition

This is where we combine the different steps to form a “Graph”, which is a representation of our multi-agent application. The definitions of various steps (e.g. grade_documents) are represented by their respective functions.

from langgraph.graph import END, StateGraph, START
from langgraph.checkpoint.memory import MemorySaver
from IPython.display import Image, display

workflow = StateGraph(GraphState)

# Define the nodes
workflow.add_node("retrieve", retrieve)  # retrieve documents
workflow.add_node("grade_documents", grade_documents)  # grade documents
workflow.add_node("generate", generate)  # generate answer
workflow.add_node("transform_query", transform_query)  # transform query

# Build graph
workflow.add_edge(START, "retrieve")
workflow.add_edge("retrieve", "grade_documents")
workflow.add_conditional_edges(
    "grade_documents",
    decide_to_generate,
    {
        "transform_query": "transform_query",
        "generate": "generate",
    },
)
workflow.add_edge("transform_query", "retrieve")
workflow.add_conditional_edges(
    "generate",
    grade_generation_v_documents_and_question,
    {
        "useful": END,
        "not supported": "transform_query",
        "not useful": "transform_query",
    },
)

# Compile
memory = MemorySaver()
app = workflow.compile(checkpointer=memory)
display(Image(app.get_graph(xray=True).draw_mermaid_png()))

Running the above code, you should see this graphical representation of our RAG application. Notice how it is essentially equivalent to the graph I have shown in the final iteration of “Multi-Agent Self-RAG”!

Visualizing the multi-agent RAG graph

After generate, if the answer is considered “not supported”, the agent will proceed to transform_query instead of to generate again, so that it looks for additional information rather than trying to regenerate an answer from the existing context, which might not suffice for providing a “supported” answer.
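A sketch of the conditional function behind these edges (the grader prompts, and the ask_yes_no helper wrapping an LLM yes/no grader, are illustrative stand-ins for the ones in the source code):

def grade_generation_v_documents_and_question(state):
    print("---CHECK HALLUCINATIONS---")
    # First check: is the answer grounded in the accumulated documents?
    grounded = ask_yes_no(
        "Is this answer fully supported by the documents?\n"
        "Documents: " + "\n---\n".join(state["useful_documents"]) + "\n"
        "Answer: " + state["generation"]
    )
    if not grounded:
        return "not supported"  # route to transform_query for more context
    print("---GRADE GENERATION vs QUESTION---")
    # Second check: does the grounded answer actually address the question?
    addresses = ask_yes_no(
        "Does this answer address the question?\n"
        "Question: " + state["question"] + "\n"
        "Answer: " + state["generation"]
    )
    return "useful" if addresses else "not useful"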

Now we are ready to put the multi-agent application to the test! With the code snippet below, we ask this question: how much can I receive if I am denied boarding, for flights from Delhi to Munich?

from pprint import pprint
from uuid import uuid4

config = {"configurable": {"thread_id": str(uuid4())}}

# Run
inputs = {
    "question": "how much can I receive if I am denied boarding, for flights from Delhi to Munich?",
}
for output in app.stream(inputs, config):
    for key, value in output.items():
        # Node
        pprint(f"Node '{key}':")
        # Optional: print full state at each node
        # print(app.get_state(config).values)
    pprint("\n---\n")

# Final generation
pprint(value["generation"])

While the output might vary (sometimes the application provides the answer without any iterations, because it “guessed” the distance between Delhi and Munich), it should look something like this, which shows the application went through multiple rounds of data retrieval for RAG.

---RETRIEVE---
"Node 'retrieve':"
'\n---\n'
---CHECK DOCUMENT RELEVANCE TO QUESTION---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---ASSESS GRADED DOCUMENTS---
---DECISION: GENERATE---
"Node 'grade_documents':"
'\n---\n'
---GENERATE---
---CHECK HALLUCINATIONS---
'---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---'
"Node 'generate':"
'\n---\n'
---TRANSFORM QUERY---
"Node 'transform_query':"
'\n---\n'
---RETRIEVE---
"Node 'retrieve':"
'\n---\n'
---CHECK DOCUMENT RELEVANCE TO QUESTION---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---ASSESS GRADED DOCUMENTS---
---DECISION: GENERATE---
"Node 'grade_documents':"
'\n---\n'
---GENERATE---
---CHECK HALLUCINATIONS---
---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---
---GRADE GENERATION vs QUESTION---
---DECISION: GENERATION ADDRESSES QUESTION---
"Node 'generate':"
'\n---\n'
('Based on the context provided, the flight distance from Munich to Delhi is '
 '5,931 km, which falls into the long-distance category (over 3,500 km). '
 'Therefore, if you are denied boarding on a flight from Delhi to Munich '
 'operated by an EU airline, you would be eligible for 600 Euro compensation, '
 'provided that:\n'
 '1. The flight is operated by an EU airline\n'
 '2. There is no force majeure\n'
 '3. Other applicable conditions are met\n'
 '\n'
 "However, it's important to note that this compensation amount is only valid "
 'if all the required conditions are met as specified in the regulations.')

And the final answer is what we aimed for!

Based on the context provided, the flight distance from Munich to Delhi is
5,931 km, which falls into the long-distance category (over 3,500 km).
Therefore, if you are denied boarding on a flight from Delhi to Munich
operated by an EU airline, you would be eligible for 600 Euro compensation,
provided that:
1. The flight is operated by an EU airline
2. There is no force majeure
3. Other applicable conditions are met

However, it's important to note that this compensation amount is only valid
if all the required conditions are met as specified in the regulations.

Inspecting the State, we see how the hypothesis and new_queries enhance the effectiveness of our multi-agent RAG application by mimicking the human thinking process.

Hypothesis

print(app.get_state(config).values.get('hypothesis',""))
--- Output ---
To answer this question accurately, I need to determine:

1. Is this flight operated by an EU airline? (Since Delhi is non-EU and Munich is EU)
2. What is the flight distance between Delhi and Munich? (To determine compensation amount)
3. Are we dealing with a denied boarding situation due to overbooking? (As opposed to delay/cancellation)

From the context, I can find information about compensation amounts based on distance, but I need to verify:
- If the flight meets EU compensation eligibility criteria
- The exact distance between Delhi and Munich to determine which compensation tier applies (250€, 400€, or 600€)
- If denied boarding compensation follows the same amounts as delay compensation

The context doesn't explicitly state compensation amounts specifically for denied boarding, though it mentions overbooking situations in the EU require offering volunteers re-routing or refund options.

Would you like me to proceed with the information available, or would you need additional context about denied boarding compensation specifically?

New Queries

for questions_batch in app.get_state(config).values.get('new_queries', []):
    for q in questions_batch:
        print(q)
--- Output ---
What is the flight distance between Delhi and Munich?
Does EU denied boarding compensation follow the same amounts as flight delay compensation?
Are there specific compensation rules for denied boarding versus flight delays for flights from non-EU to EU destinations?
What are the compensation rules when flying with non-EU airlines from Delhi to Munich?
What are the specific conditions that qualify as denied boarding under EU regulations?

Conclusion

Simple RAG, while easy to build, might fall short in tackling real-life questions. By incorporating a human-like thinking process into a multi-agent RAG framework, we make RAG applications much more practical.

*Unless otherwise noted, all images are by the author

