Supercharge Your RAG with Multi-Agent Self-RAG

Introduction

Many of us might have tried to build a RAG application and noticed it falls significantly short of addressing real-life needs. Why is that? It’s because many real-world problems require multiple steps of information retrieval and reasoning. We need our agent to perform those steps as a human would, yet most RAG applications fall short of this.

This article explores how to supercharge your RAG application by making its data retrieval and reasoning process similar to how a human would, under a multi-agent framework. The framework presented here is based on the Self-RAG strategy but has been significantly modified to enhance its capabilities. Prior knowledge of the original strategy is not necessary for reading this article.

Real-life Case

Consider this: I was going to fly from Delhi to Munich (let’s assume I am taking the flight with an EU airline), but I was denied boarding. Now I want to know what the compensation should be.

These two webpages contain relevant information, so I add them to my vector store, trying to have my agent answer this for me by retrieving the right information.

Now, I pass this question to the vector store: “how much can I receive if I am denied boarding, for flights from Delhi to Munich?”. The retrieved chunks are shown below, separated by dashed lines.

– – – – – – – – – – – – – – – – – – – – – – – – –
Overview of US Flight Compensation Policies To get compensation for delayed flights, you should contact your airline via their customer service or go to the customer service desk. At the same time, you should bear in mind that you will only receive compensation if the delay is not weather-related and is within the carrier's control. According to the US Department of Transportation, US airlines are not required to compensate you if a flight is cancelled or delayed. You can be compensated if you are bumped or moved from an overbooked flight. If your provider cancels your flight less than two weeks before departure and you decide to cancel your trip entirely, you can receive a refund of both pre-paid baggage fees and your plane ticket. There will be no refund if you choose to continue your journey. In the case of a delayed flight, the airline will rebook you on a different flight. According to federal law, you will not be provided with money or other compensation. Comparative Analysis of EU vs. US Flight Compensation Policies
– – – – – – – – – – – – – – – – – – – – – – – – –
(AUTHOR-ADDED NOTE: IMPORTANT, PAY ATTENTION TO THIS)
Short-distance flight delays – if it is up to 1,500 km, you are due 250 Euro compensation.
Medium distance flight delays – for all the flights between 1,500 and 3,500 km, the compensation should be 400 Euro.
Long-distance flight delays – if it is over 3,500 km, you are due 600 Euro compensation. To receive this kind of compensation, the following conditions must be met; Your flight starts in a non-EU member state or in an EU member state and finishes in an EU member state and is organised by an EU airline. Your flight reaches the final destination with a delay that exceeds three hours. There is no force majeure.
– – – – – – – – – – – – – – – – – – – – – – – – –
Compensation policies in the EU and US are not the same, which implies that it is worth knowing more about them. While you can always count on Skycop flight cancellation compensation, you should still get acquainted with the information below.
– – – – – – – – – – – – – – – – – – – – – – – – –
Compensation for flight regulations EU: The EU does regulate flight delay compensation, which is known as EU261. US: According to the US Department of Transportation, every airline has its own policies about what should be done for delayed passengers. Compensation for flight delays EU: Just like in the United States, compensation is not provided when the flight is delayed due to uncontrollable reasons. However, there is a clear approach to compensation calculation based on distance. For example, if your flight was up to 1,500 km, you can receive 250 euros. US: There are no federal requirements. That is why every airline sets its own limits for compensation in terms of length. However, it is usually set at three hours. Overbooking EU: In the EU, they call for volunteers if the flight is overbooked. These people are entitled to a choice of: Re-routing to their final destination at the earliest opportunity. Refund of their ticket cost within a week if not travelling. Re-routing at a later date at the person's convenience.

Unfortunately, they contain only generic flight compensation policies, without telling me how much I can expect when denied boarding on a flight from Delhi to Munich specifically. If the RAG agent takes these as the sole context, it can only provide a generic answer about flight compensation policy, without giving the answer we want.

However, while the documents are not immediately useful, there is an important insight contained in the 2nd piece of context: compensation varies according to flight distance. If the RAG agent thinks more like a human, it should follow these steps to provide an answer:

  1. Based on the retrieved context, reason that compensation varies with flight distance
  2. Next, retrieve the flight distance between Delhi and Munich
  3. Given the distance (around 5,900 km), classify the flight as a long-distance one
  4. Combined with the previously retrieved context, figure out that I am due 600 EUR, assuming the other conditions are fulfilled
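The distance-to-compensation rule in steps 3 and 4 can be captured in a few lines of Python. This is only an illustration of the reasoning target: the function name is hypothetical, and the thresholds come from the EU261 tiers quoted in the retrieved context above:

```python
def eu261_denied_boarding_compensation(distance_km: float) -> int:
    """Return the EU261 compensation tier in euros for a given flight distance."""
    if distance_km <= 1500:   # short-distance flight
        return 250
    if distance_km <= 3500:   # medium-distance flight
        return 400
    return 600                # long-distance flight

# Delhi-Munich is around 5,900 km, so the long-distance tier applies
print(eu261_denied_boarding_compensation(5900))  # prints 600
```

Of course, the hard part for the agent is obtaining the distance in the first place, which is exactly what the multi-step retrieval below addresses.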

This example demonstrates how a simple RAG, in which a single retrieval is made, falls short for several reasons:

  1. Complex Queries: Users often have questions that a simple search can’t fully address. For example, “What’s the best smartphone for gaming under $500?” requires consideration of multiple factors like performance, price, and features, which a single retrieval step might miss.
  2. Deep Information: Some information spans multiple documents. For example, research papers, medical records, and legal documents often include references that must be understood before one can fully grasp the content of a given article. Multiple retrieval steps help dig deeper into the content.

Multiple retrievals supplemented with human-like reasoning allow for a more nuanced, comprehensive, and accurate response, adapting to the complexity and depth of user queries.

Multi-Agent Self-RAG

Here I explain the reasoning process behind this strategy; afterwards, I will provide the code to show you how to achieve it!

Note: For readers interested in knowing how my approach differs from the original Self-RAG, I will describe the discrepancies in quotation boxes like this. But general readers who are unfamiliar with the original Self-RAG can skip them.

In the graphs below, each circle represents a step (aka a Node), which is performed by a dedicated agent working on that specific problem. We orchestrate them to form a multi-agent RAG application.

1st iteration: Simple RAG

A simple RAG chain

This is just the vanilla RAG approach I described in “Real-life Case”, represented as a graph. After Retrieve documents, the new_documents will be used as input for Generate Answer. Nothing special, but it serves as our starting point.

2nd iteration: Digest documents with “Grade documents”

Reasoning like humans do

Remember I said in the “Real-life Case” section, that as a next step, the agent should “reason that compensation varies with flight distance”? The Grade documents step is exactly for this purpose.

Given the new_documents, the agent will try to output two items:

  1. useful_documents: Comparing against the question asked, the agent determines whether each document is useful, and retains those deemed useful for future reference. For example, since our question does not concern US compensation policies, documents describing them are discarded, leaving only those for the EU
  2. hypothesis: Based on the documents, the agent forms a hypothesis about how the question can be answered: in this case, that the flight distance needs to be identified

Notice how the above reasoning resembles human thinking! Still, while these outputs are useful, we need to instruct the agent to use them as input for the next document retrieval. Without this, the answer provided in Generate answer would still not be useful.

useful_documents are appended on each document-retrieval loop, instead of being overwritten, to keep a memory of documents previously deemed useful. hypothesis is formed from both useful_documents and new_documents to provide an “abstract reasoning” that informs how the query is transformed subsequently.

The hypothesis is especially useful when no useful documents can be identified initially: the agent can still form a hypothesis from documents that are not immediately useful, or that bear only an indirect relationship to the question at hand, to inform which questions to ask next.
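As a sketch, a grading node along these lines could look as follows. The llm_is_relevant and form_hypothesis helpers are purely illustrative stand-ins for real LLM calls (a naive keyword check and a templated sentence), not the actual implementation:

```python
def llm_is_relevant(question: str, doc: str) -> bool:
    # Stand-in for the LLM relevance judgment: naive keyword overlap
    keywords = {w for w in question.lower().split() if len(w) > 3}
    return any(w in doc.lower() for w in keywords)

def form_hypothesis(question: str, docs: list) -> str:
    # Stand-in for the LLM reasoning step over the (possibly only indirectly useful) context
    return f"To answer {question!r}, identify the missing facts hinted at by {len(docs)} document(s)."

def grade_documents(state: dict) -> dict:
    """Keep documents judged useful and form a hypothesis for the next retrieval."""
    question = state["question"]
    useful = [d for d in state["new_documents"] if llm_is_relevant(question, d)]
    # A hypothesis is formed even when no document is directly useful
    hypothesis = form_hypothesis(question, useful or state["new_documents"])
    # useful_documents is accumulated by the graph's reducer, not overwritten
    return {"useful_documents": useful, "hypothesis": hypothesis}

state = {
    "question": "how much compensation for denied boarding from Delhi to Munich?",
    "new_documents": [
        "EU long-distance flights: 600 Euro compensation",
        "US refund policy for baggage fees",
    ],
}
result = grade_documents(state)
print(result["useful_documents"])
```

Note how the US-policy document is filtered out while a hypothesis is still produced, mirroring the two outputs described above.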

3rd iteration: Brainstorm new questions to ask

Suggest questions for additional information retrieval

We have the agent reflect upon whether the answer is useful and grounded in context. If not, it should proceed to Transform query to ask further questions.

The output new_queries will be a list of new questions that the agent considers useful for obtaining extra information. Given the useful_documents (compensation policies for the EU) and the hypothesis (the flight distance between Delhi and Munich needs to be identified), it asks questions like “What is the distance between Delhi and Munich?”

Now we are ready to use the new_queries for further retrieval!

The transform_query node will use useful_documents (which are accumulated per iteration, instead of being overwritten) and hypothesis as input for providing the agent directions to ask new questions.

The new questions form a list (instead of a single question) kept separate from the original question, so that the original question stays in state; otherwise the agent could lose track of it after multiple iterations.
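A minimal sketch of such a transform_query node is shown below. llm_generate_questions is a hypothetical stand-in for the real LLM call and simply returns a canned batch here; the point is the shape of the prompt and of the state update:

```python
def llm_generate_questions(prompt: str) -> list:
    # Hypothetical stand-in for the real LLM call that brainstorms follow-up questions
    return ["What is the distance between Delhi and Munich?"]

def transform_query(state: dict) -> dict:
    """Brainstorm new questions; the original question stays untouched in state."""
    prompt = (
        f"Original question: {state['question']}\n"
        f"Hypothesis: {state['hypothesis']}\n"
        f"Useful context so far: {state['useful_documents']}\n"
        "List the follow-up questions needed to fill the information gaps."
    )
    # Returned as one batch: the append_to_list reducer stores one batch per iteration,
    # keeping new questions separate from the original question
    return {"new_queries": llm_generate_questions(prompt)}

state = {
    "question": "how much can I receive if I am denied boarding?",
    "hypothesis": "Compensation depends on flight distance; the Delhi-Munich distance is unknown.",
    "useful_documents": ["EU compensation tiers by distance"],
}
print(transform_query(state)["new_queries"])
```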

Final iteration: Further retrieval with new questions

Issuing new queries to retrieve extra documents

The output new_queries from Transform query will be passed to the Retrieve documents step, forming a retrieval loop.

Since the question “What is the distance between Delhi and Munich?” is asked, we can expect the flight distance to be retrieved as new_documents, subsequently graded into useful_documents, and further used as input for Generate answer.

The grade_documents node will compare the documents against both the original question and new_questions list, so that documents that are considered useful for new_questions, even if not so for the original question, are kept.

This is because those documents might help answer the original question indirectly, by being relevant to new_questions (like “What is the distance between Delhi and Munich?”)
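This “keep if relevant to either question set” rule boils down to a single disjunction. In the sketch below, judge is a naive keyword stand-in for the LLM relevance grader, and keep_document is a hypothetical helper name:

```python
def keep_document(doc: str, question: str, new_questions: list, judge) -> bool:
    # A document survives grading if it is relevant to the original question
    # OR to any of the newly brainstormed questions
    return judge(question, doc) or any(judge(q, doc) for q in new_questions)

# Naive keyword stand-in for the LLM relevance grader
judge = lambda q, d: any(w in d.lower() for w in q.lower().split() if len(w) > 4)

doc = "The flight distance between Delhi and Munich is 5,931 km."
original_q = "how much can I receive if I am denied boarding?"
new_qs = ["What is the distance between Delhi and Munich?"]
print(keep_document(doc, original_q, new_qs, judge))  # kept thanks to the new question
```

The distance document is irrelevant to the original question on its own, yet it is kept because it answers one of the new questions.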

Final answer!

Equipped with this new context about flight distance, the agent is now ready to provide the right answer: 600 EUR!

Next, let us now dive into the code to see how this multi-agent RAG application is created.

Implementation

The source code can be found here. Our multi-agent RAG application involves iterations and loops, and LangGraph is a great library for building such complex multi-agent applications. If you are not familiar with LangGraph, you are strongly encouraged to have a look at LangGraph’s Quickstart guide to understand more about it!

To keep this article concise, I will focus on the key code snippets only.

Important note: I am using OpenRouter as the LLM interface, but the code can be easily adapted for other LLM interfaces. Also, while my code uses Claude 3.5 Sonnet as the model, you can use any LLM as long as it supports tools as a parameter (check this list here), so you can also run this with other models, like DeepSeek V3 and OpenAI o1!

State definition

In the previous section, I defined various elements, e.g. new_documents and hypothesis, that are passed between steps (aka Nodes); in LangGraph’s terminology, these elements are called State.

We define the State formally with the following snippet.

from typing import List, Annotated
from typing_extensions import TypedDict

def append_to_list(original: list, new: list) -> list:
    original.append(new)
    return original

def combine_list(original: list, new: list) -> list:
    return original + new

class GraphState(TypedDict):
    """
    Represents the state of our graph.

    Attributes:
        question: question
        generation: LLM generation
        new_documents: newly retrieved documents for the current iteration
        useful_documents: documents that are considered useful
        graded_documents: documents that have been graded
        new_queries: newly generated questions
        hypothesis: hypothesis
    """

    question: str
    generation: str
    new_documents: List[str]
    useful_documents: Annotated[List[str], combine_list]
    graded_documents: List[str]
    new_queries: Annotated[List[str], append_to_list]
    hypothesis: str
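To see what the two reducers do, here is how LangGraph would apply them when nodes return values for the annotated keys, demonstrated in plain Python with the same functions. The example document strings are illustrative only:

```python
# The two reducers, as defined in GraphState above
def append_to_list(original: list, new: list) -> list:
    original.append(new)
    return original

def combine_list(original: list, new: list) -> list:
    return original + new

# useful_documents (combine_list): documents accumulate as a flat list across iterations
docs = combine_list(
    ["EU compensation tiers by distance"],
    ["Delhi-Munich flight distance: 5,931 km"],
)
print(len(docs))  # 2 flat documents

# new_queries (append_to_list): each transform_query call contributes one *batch*,
# so the state becomes a list of batches rather than a flat list of questions
queries = append_to_list(
    [["What is the distance between Delhi and Munich?"]],
    ["Does denied boarding follow delay compensation amounts?"],
)
print(len(queries))  # 2 batches
```

This batching is why, later in the article, new_queries is printed with a nested loop over batches.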

Graph definition

This is where we combine the different steps to form a “Graph”, which is a representation of our multi-agent application. The definitions of various steps (e.g. grade_documents) are represented by their respective functions.

from langgraph.graph import END, StateGraph, START
from langgraph.checkpoint.memory import MemorySaver
from IPython.display import Image, display

workflow = StateGraph(GraphState)

# Define the nodes
workflow.add_node("retrieve", retrieve)  # retrieve
workflow.add_node("grade_documents", grade_documents)  # grade documents
workflow.add_node("generate", generate)  # generate
workflow.add_node("transform_query", transform_query)  # transform_query

# Build graph
workflow.add_edge(START, "retrieve")
workflow.add_edge("retrieve", "grade_documents")
workflow.add_conditional_edges(
    "grade_documents",
    decide_to_generate,
    {
        "transform_query": "transform_query",
        "generate": "generate",
    },
)
workflow.add_edge("transform_query", "retrieve")
workflow.add_conditional_edges(
    "generate",
    grade_generation_v_documents_and_question,
    {
        "useful": END,
        "not supported": "transform_query",
        "not useful": "transform_query",
    },
)

# Compile
memory = MemorySaver()
app = workflow.compile(checkpointer=memory)
display(Image(app.get_graph(xray=True).draw_mermaid_png()))

Running the above code, you should see this graphical representation of our RAG application. Notice how it is essentially equivalent to the graph shown in the final iteration of the “Multi-Agent Self-RAG” section!

Visualizing the multi-agent RAG graph

After generate, if the answer is considered “not supported”, the agent will proceed to transform_query instead of generating again, so that it looks for additional information rather than trying to regenerate an answer from the existing context, which might not suffice for a “supported” answer.
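The two routing functions wired into the conditional edges (decide_to_generate and grade_generation_v_documents_and_question) are not shown above. A minimal sketch might look like this, with is_grounded and answers_question as naive stand-ins for the LLM graders used in the real implementation:

```python
def decide_to_generate(state: dict) -> str:
    # No useful documents this round: brainstorm new questions instead of generating
    return "generate" if state.get("graded_documents") else "transform_query"

def is_grounded(generation: str, documents: list) -> bool:
    # Stand-in for the LLM hallucination grader
    return any(doc in generation for doc in documents)

def answers_question(generation: str, question: str) -> bool:
    # Stand-in for the LLM answer-usefulness grader
    return len(generation) > 0

def grade_generation_v_documents_and_question(state: dict) -> str:
    if not is_grounded(state["generation"], state["useful_documents"]):
        return "not supported"  # routed back to transform_query
    if not answers_question(state["generation"], state["question"]):
        return "not useful"     # also routed back to transform_query
    return "useful"             # routed to END
```

Routing “not supported” back to transform_query (rather than to generate) is what makes the agent gather more context instead of re-rolling the same generation.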

Now we are ready to put the multi-agent application to the test! With the code snippet below, we ask the question: “how much can I receive if I am denied boarding, for flights from Delhi to Munich?”

from uuid import uuid4
from pprint import pprint

config = {"configurable": {"thread_id": str(uuid4())}}

# Run
inputs = {
    "question": "how much can I receive if I am denied boarding, for flights from Delhi to Munich?",
}
for output in app.stream(inputs, config):
    for key, value in output.items():
        # Node
        pprint(f"Node '{key}':")
        # Optional: print full state at each node
        # print(app.get_state(config).values)
    pprint("\n---\n")

# Final generation
pprint(value["generation"])

While output might vary (sometimes the application provides the answer without any iterations, because it “guessed” the distance between Delhi and Munich), it should look something like this, which shows the application went through multiple rounds of data retrieval for RAG.

---RETRIEVE---
"Node 'retrieve':"
'\n---\n'
---CHECK DOCUMENT RELEVANCE TO QUESTION---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---ASSESS GRADED DOCUMENTS---
---DECISION: GENERATE---
"Node 'grade_documents':"
'\n---\n'
---GENERATE---
---CHECK HALLUCINATIONS---
'---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---'
"Node 'generate':"
'\n---\n'
---TRANSFORM QUERY---
"Node 'transform_query':"
'\n---\n'
---RETRIEVE---
"Node 'retrieve':"
'\n---\n'
---CHECK DOCUMENT RELEVANCE TO QUESTION---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---ASSESS GRADED DOCUMENTS---
---DECISION: GENERATE---
"Node 'grade_documents':"
'\n---\n'
---GENERATE---
---CHECK HALLUCINATIONS---
---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---
---GRADE GENERATION vs QUESTION---
---DECISION: GENERATION ADDRESSES QUESTION---
"Node 'generate':"
'\n---\n'
('Based on the context provided, the flight distance from Munich to Delhi is '
 '5,931 km, which falls into the long-distance category (over 3,500 km). '
 'Therefore, if you are denied boarding on a flight from Delhi to Munich '
 'operated by an EU airline, you would be eligible for 600 Euro compensation, '
 'provided that:\n'
 '1. The flight is operated by an EU airline\n'
 '2. There is no force majeure\n'
 '3. Other applicable conditions are met\n'
 '\n'
 "However, it's important to note that this compensation amount is only valid "
 'if all the required conditions are met as specified in the regulations.')

And the final answer is what we aimed for!

Based on the context provided, the flight distance from Munich to Delhi is
5,931 km, which falls into the long-distance category (over 3,500 km).
Therefore, if you are denied boarding on a flight from Delhi to Munich
operated by an EU airline, you would be eligible for 600 Euro compensation,
provided that:
1. The flight is operated by an EU airline
2. There is no force majeure
3. Other applicable conditions are met

However, it's important to note that this compensation amount is only valid
if all the required conditions are met as specified in the regulations.

Inspecting the State, we see how hypothesis and new_queries enhance the effectiveness of our multi-agent RAG application by mimicking a human thinking process.

Hypothesis

print(app.get_state(config).values.get('hypothesis',""))
--- Output ---
To answer this question accurately, I need to determine:

1. Is this flight operated by an EU airline? (Since Delhi is non-EU and Munich is EU)
2. What is the flight distance between Delhi and Munich? (To determine compensation amount)
3. Are we dealing with a denied boarding situation due to overbooking? (As opposed to delay/cancellation)

From the context, I can find information about compensation amounts based on distance, but I need to verify:
- If the flight meets EU compensation eligibility criteria
- The exact distance between Delhi and Munich to determine which compensation tier applies (250€, 400€, or 600€)
- If denied boarding compensation follows the same amounts as delay compensation

The context doesn't explicitly state compensation amounts specifically for denied boarding, though it mentions overbooking situations in the EU require offering volunteers re-routing or refund options.

Would you like me to proceed with the information available, or would you need additional context about denied boarding compensation specifically?

New Queries

for questions_batch in app.get_state(config).values.get('new_queries', []):
    for q in questions_batch:
        print(q)
--- Output ---
What is the flight distance between Delhi and Munich?
Does EU denied boarding compensation follow the same amounts as flight delay compensation?
Are there specific compensation rules for denied boarding versus flight delays for flights from non-EU to EU destinations?
What are the compensation rules when flying with non-EU airlines from Delhi to Munich?
What are the specific conditions that qualify as denied boarding under EU regulations?

Conclusion

Simple RAG, while easy to build, can fall short when tackling real-life questions. By incorporating a human-like thinking process into a multi-agent RAG framework, we make RAG applications much more practical.

*Unless otherwise noted, all images are by the author


Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

CompTIA training targets workplace AI use

CompTIA AI Essentials (V2) delivers training to help employees, students, and other professionals strengthen the skills they need for effective business use of AI tools such as ChatGPT, Copilot, and Gemini. In its first iteration, CompTIA’s AI Essentials focused on AI fundamentals to help professionals learn how to apply AI technology

Read More »

OPEC Receives Updated Compensation Plans

A statement posted on OPEC’s website this week announced that the OPEC Secretariat has received updated compensation plans from Iraq, the United Arab Emirates (UAE), Kazakhstan, and Oman. A table accompanying this statement showed that these compensation plans amount to a total of 221,000 barrels per day in November, 272,000

Read More »

LogicMonitor closes Catchpoint buy, targets AI observability

The acquisition combines LogicMonitor’s observability platform with Catchpoint’s internet-level intelligence, which monitors performance from thousands of global vantage points. Once integrated, Catchpoint’s synthetic monitoring, network data, and real-user monitoring will feed directly into Edwin AI, LogicMonitor’s intelligence engine. The goal is to let enterprise customers shift from reactive alerting to

Read More »

Akamai acquires Fermyon for edge computing as WebAssembly comes of age

Spin handles compilation from source to WebAssembly bytecode and manages execution on target platforms. The runtime abstracts the underlying technology while preserving WebAssembly’s performance and security characteristics. This bet on WebAssembly standards has paid off as the technology matured.  WebAssembly has evolved significantly beyond its initial browser-focused design to support

Read More »

Var Energi Hits New Oil Discovery Near Goliat

Var Energi ASA on Thursday confirmed a new oil discovery on Norway’s side of the Barents Sea, the second discovery in weeks in its ongoing Goliat Ridge drilling campaign. Located five kilometers (3.11 miles) north of the producing Goliat field, operated by Var Energi, the Goliat North exploration well encountered hydrocarbons in the Realgrunnen and Kobbe formations, the Norwegian company said in an online statement. Estimated gross recoverable resources are up to five million barrels of oil equivalent (MMboe). “Including the latest discovery, the Goliat Ridge is estimated to contain gross discovered resources of 39-108 MMboe and with additional prospective resources taking the total gross potential to up to 200 MMboe”, Var Energi said. “A tie-back of the Goliat Ridge discoveries to the nearby Goliat FPSO [floating production, storage and offloading vessel] is being planned”. Goliat Nord, or well 7122/7-8, aimed to prove hydrocarbons in Lower Jurassic/Upper Triassic and Middle Triassic rocks in the Realgrunnen Subgroup and the Kobbe Formation respectively, according to the Norwegian Offshore Directorate (NOD). “Well 7122/7-8 S encountered an eight-meter [26.25 feet] gas/oil column in the Tubaen Formation in the Realgrunnen Subgroup in reservoir rocks totaling 6.5 meters, with good reservoir quality”, the NOD reported separately. “The gas/oil contact was encountered 1,255 meters below sea level. The oil/water contact was not encountered. “The well also encountered a six-meter gas/oil column in the Fruholmen Formation in the Realgrunnen Subgroup in reservoir rocks with good reservoir quality. The gas/oil contact was encountered 1,285 meters below sea level. The oil/water contact was encountered 1,290 meters below sea level. “In the Kobbe Formation, the well encountered a 17-meter oil column in reservoir rocks totaling 12 meters, with good reservoir quality. The oil/water contact was encountered 2,048 meters below sea level”. 
Goliat North was drilled to a vertical depth of 2,197 meters below

Read More »

PTTEP Eyes 8 Percent Growth in Sales Volume Next Year

Thailand’s state-owned PTT Exploration and Production Public Company Ltd (PTTEP) on Thursday announced a spending budget of around $7.73 billion for 2026, targeting an eight percent sales volume increase to 556,000 barrels of oil equivalent a day (boed). “This growth reflects strong momentum from our operational expansion in Thailand and overseas this year, which has already translated into higher sales volume and revenue, and will continue to support our performance into 2026 and beyond”, PTTEP chief executive Montri Rawanchaikul said in an online statement. Of the 2026 budget, $5.16 billion is for capital expenditure and $2.56 billion is for operating expenditure. PTTEP said it aims to maximize volumes from current producing assets to strengthen the Southeast Asian country’s energy security. “Main producing projects include G1/61 (Erawan, Platong, Satun and Funan fields), G2/61 (Bongkot field), Arthit, S1, Contract 4 projects and projects in the Malaysia-Thailand Joint Development Area”, PTTEP said. “This plan also includes other overseas projects in Malaysia, Oman and Algeria. The capex budget of USD 3,605 million (equivalent to THB 118,064 million) is allocated to support these activities”. It said it has allotted $118 million for emission reduction activities including a carbon capture and storage (CCS) project in the Arthit field in the Gulf of Thailand. It announced a positive final investment decision on the CCS project on September 8, earmarking a five-year investment of $320 million. Expected to start operations 2028, the project is designed to capture and store up to one million metric tons of carbon dioxide a year. 
The 2026 plan also involves “accelerating the activities of key projects under the development phase, including Ghasha Concession, Abu Dhabi Offshore 2, Mozambique Area 1, Malaysia greenfields such as Malaysia SK405B, Malaysia SK417 and Malaysia SK438 Projects, to achieve production start-up timelines as planned, with the allocated capex budget

Read More »

U.S. Department of Energy Announces New Research, Technology, and Economic Security Framework

The U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy announced the release of a memo by the Deputy Secretary of Energy that describes a framework designed to minimize foreign risks to the scientific enterprise of DOE and the National Nuclear Security Administration (NNSA). The newly published Research, Technology & Economic Security (RTES) Framework highlights DOE’s goals, process, high level risk factors, and commitment to mitigation when assessing RTES risk. This framework outlines a harmonized approach across all DOE/NNSA funding offices that undertakes to protect DOE’s early-stage research and development (R&D) in academic settings, applied R&D stage projects, and demonstration and deployment stage projects while maintaining an open collaborative, and world leading scientific enterprise. The framework also highlights DOE’s commitment to mitigation when assessing RTES risk and outlines its goals and processes. Join an RTES Informational Webinar To Learn More To assist the applicant and recipient community in understanding and adapting to the recently published framework, DOE will host a webinar to introduce the approach and answer questions. Funding awardees and prospective applicants are encouraged to review the framework and attend Monday, December 16, 2024. Register today. About DOE’s RTES Office DOE’s Office of Research, Technology & Economic Security (RTES), situated in DOE’s Office of International Affairs, undertakes several risk mitigation activities that support DOE’s responsibility to protect federal funding from undue foreign influence and to accomplish its mission in ways that protect and further energy security and technological advancement of the United States. 
Specifically, RTES identifies and addresses potential security risks that threaten the scientific enterprise; establishes best practices for programs; conducts outreach activities for stakeholders; educates Department programs on potential security risks; and conducts or facilitates risk assessments of DOE proposals, loans, and awards. More information about RTES’s mission, activities, events, and ways to get involved

Read More »

@H2Spotlight: Fall 2024

Spotlight on Success: First Megawatt-Scale Demonstration of Hydrogen Fuel Cells for Data Center Backup Power Earlier this year, Caterpillar Inc. announced successful completion of a first-of-a-kind collaboration with Microsoft and Ballard Power Systems to demonstrate the viability of using large-format hydrogen fuel cells to supply reliable backup power for data centers.  The demonstration, hosted at Microsoft’s Cheyenne, Wyoming, data center, simulated a 48-hour power outage, providing critical insights into the capabilities of fuel cell systems to power multi-megawatt data centers, ensuring uninterrupted power supply to meet 99.999% uptime requirements. Caterpillar served as project lead, providing overall system integration, power electronics, and microgrid controls that form the central structure of the hydrogen power solution. Hardware for the demonstration included two Caterpillar power grid stabilization storage systems alongside a 1.5-MW hydrogen fuel cell supplied by Ballard Power Systems. Over the course of the project, researchers evaluated the cost and performance of the fuel cell system including analysis of key performance characteristics such as power transfer time and load acceptance. Launched in 2020 and completed this year, the project was supported and partially funded by DOE under the H2@Scale initiative, which brings stakeholders together to advance affordable hydrogen production, transport, storage, and utilization in multiple energy sectors. During the demonstration, researchers at DOE’s National Renewable Energy Laboratory (NREL) analyzed safety, techno-economics, and greenhouse gas impacts.

Read More »

U.S. Department of Energy Releases Request for Information on Defining Sustainable Maritime Fuels in the United States

To support and advance future maritime fuel technology and investment, the U.S. Department of Energy (DOE) released a Request for Information (RFI) to establish a consistent and reliable definition for sustainable maritime fuel (SMF) that informs and aligns community, industry, governments, and other maritime stakeholders. The Action Plan for Maritime Energy and Emissions Innovation (Action Plan), a summary of which was released in December 2024, builds on the 2023 U.S. National Blueprint for Transportation Decarbonization to define actions that aim to achieve a clean, safe, accessible and affordable U.S. maritime transportation system. The Action Plan calls for the federal government to define “Sustainable Maritime Fuel,” which is critical to evaluating and determining future SMF production volume goals in the Action Plan and alternative fuels that align with the U.S. 2050 net emission goals. “The global maritime sector is pursuing sustainable maritime fuels. The United States is well positioned to be a global leader in producing, distributing, and selling these sustainable fuels that can provide more affordable options to the market,” said Michael Berube, deputy assistant secretary for sustainable transportation and fuels, Office of Energy Efficiency and Renewable Energy. “This Request for Information will help align the industry around common definitions, enabling broader adoption across the economy.” The U.S. maritime sector connects virtually every aspect of American life—from our clothes and food, to our cars, and the oil and natural gas used to heat and cool homes. About 99% of U.S. overseas trade enters or leaves the United States by ship. This waterborne cargo and associated activity contribute more than $500 billion to the U.S. gross domestic product and sustain over 10 million U.S. jobs. 
However, the Action Plan estimates the total amount of greenhouse gas (GHG) emissions from fuel sold in the United States for use in maritime applications accounts for 4% of the U.S. transportation sector’s GHG

Read More »

Hydrogen-Powered Heavy-Duty Truck Establishes New Threshold by Traveling 1,800 Miles on a Single Fill

The U.S. Department of Energy’s (DOE’s) Hydrogen and Fuel Cell Technologies Office (HFTO) today highlighted a recent groundbreaking achievement in hydrogen-powered transportation: a prototype H2Rescue truck, built and powered by Accelera with funding support from DOE and other federal agency partners, last month established a new world record by traveling 1,806 miles on a single fill of hydrogen fuel. The truck completed its record-setting journey in California and was closely monitored and validated by an adjudicator from Guinness World Records who confirmed the truck’s hydrogen tank was sealed before the journey began. Powered by a Cummins Accelera fuel cell engine and a 250-kilowatt traction motor, the truck carried 175 kilograms of hydrogen and consumed 168 kilograms while navigating rush hour traffic, at between 50 and 55 mph, on public roads, operating in temperatures varying from 60 to 80 degrees Fahrenheit. Accelera researchers confirmed that over the 1,800-mile trip, the hydrogen-filled truck emitted zero pounds of carbon dioxide (CO2), a stark contrast to the 664 pounds of CO2 a standard internal combustion engine vehicle would have emitted over the same distance. Using hydrogen in this type of truck—which is typically used in emergency response, military, and utility applications—can displace approximately 1,825 gallons of fuel and reduce greenhouse gas emissions by 2.5 metric tons annually. This demonstration vehicle, weighing approximately 33,000 pounds, is the result of an innovative collaboration between Accelera, HFTO, DOE’s Vehicle Technologies Office, the U.S. Department of Homeland Security’s Science and Technology Directorate, the Federal Emergency Management Agency, and the U.S. Department of Defense.
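The quoted figures allow a rough fuel-economy check. None of the derived numbers below appear in the excerpt; they follow directly from the stated 1,806 miles, 168 kg of hydrogen consumed, and 664 lb of avoided CO2:

```python
# Back-of-envelope check on the H2Rescue record run described above.
miles = 1806            # distance covered on a single fill
h2_used_kg = 168        # hydrogen consumed (175 kg was carried)
co2_avoided_lb = 664    # CO2 a comparable combustion truck would emit

fuel_economy = miles / h2_used_kg          # ~10.8 miles per kg of H2
co2_per_mile = co2_avoided_lb / miles      # ~0.37 lb of CO2 avoided per mile
reserve_kg = 175 - h2_used_kg              # ~7 kg of hydrogen left in the tank

print(f"{fuel_economy:.1f} mi/kg, {co2_per_mile:.2f} lb CO2/mi, {reserve_kg} kg reserve")
```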

Read More »

With AI Factories, AWS aims to help enterprises scale AI while respecting data sovereignty

“The AWS AI Factory seeks to resolve the tension between cloud-native innovation velocity and sovereign control. Historically, these objectives lived in opposition. CIOs faced an unsustainable dilemma: choose between on-premises security or public cloud cost and speed benefits,” he said. “This is arguably AWS’s most significant move in the sovereign AI landscape.” On premises GPUs are already a thing AI Factories isn’t the first attempt to put cloud-managed AI accelerators in customers’ data centers. Oracle introduced Nvidia processors to its Cloud@Customer managed on-premises offering in March, while Microsoft announced last month that it will add Nvidia processors to its Azure Local service. Google Distributed Cloud also includes a GPU offering, and even AWS offers lower-powered Nvidia processors in its AWS Outposts. AWS’ AI Factories is also likely to square off against a range of similar products, such as Nvidia’s AI Factory, Dell’s AI Factory stack, and HPE’s Private Cloud for AI — each tightly coupled with Nvidia GPUs, networking, or software, and all vying to become the default on-premises AI platform. But, said Sopko, AWS will have an advantage over rivals due to its hardware-software integration and operational maturity: “The secret sauce is the software, not the infrastructure,” he said. Omdia principal analyst Alexander Harrowell expects AWS’s AI Factories to combine the on-premises control of Outposts with the flexibility and ability to run a wider variety of services offered by AWS Local Zones, which puts small data centers close to large population centers to reduce service latency. Sopko cautioned that enterprises are likely to face high commitment costs, drawing a parallel with Oracle’s OCI Dedicated Region, one of its Cloud@Customer offerings.

Read More »

HPE loads up AI networking portfolio, strengthens Nvidia, AMD partnerships

On the hardware front, HPE is targeting the AI data center edge with a new MX router and scale-out networking with a new QFX switch. Juniper’s MX series is its flagship routing family aimed at carriers, large-scale enterprise data center and WAN customers, while the QFX line services data center customers anchoring spine/leaf networks as well as top-of-rack systems. The new 1U, 1.6Tbps MX301 multiservice edge router, available now, is aimed at bringing AI inferencing closer to the source of data generation and can be positioned in metro, mobile backhaul, and enterprise routing applications, Rahim said. It includes high-density support for 16 x 10/25/50GbE, 10 x 100Gb and 4 x 400Gb interfaces. “The MX301 is essentially the on-ramp to provide high speed, secure connections from distributed inference cluster users, devices and agents from the edge all the way to the AI data center,” Rahim said. “The requirements here are typically around high performance, but also very high logical scale and integrated security.” In the QFX arena, the new QFX5250 switch, available in 1Q 2026, is a fully liquid-cooled box aimed at tying together Nvidia Rubin and/or AMD MI400 GPUs for AI consumption across the data center. It is built on Broadcom Tomahawk 6 silicon and supports up to 102.4Tbps Ethernet bandwidth, Rahim said. “The QFX5250 combines HPE liquid cooling technology with Juniper networking software (Junos) and integrated AIops intelligence to deliver high-performance, power-efficient and simplified operations for next-generation AI inference,” Rahim said. Partnership expansions Also key to HPE/Juniper’s AI networking plans are its partnerships with Nvidia and AMD. The company announced its relationship with Nvidia now includes HPE Juniper edge onramp and long-haul data center interconnect (DCI) support in its Nvidia AI Computing by HPE portfolio. This extension uses the MX and Juniper’s PTX hyperscaler routers to support high-scale, secure

Read More »

What is co-packaged optics? A solution for surging capacity in AI data center networks

When it announced its CPO-capable switches, Nvidia said they would improve resiliency by 10 times at scale compared to previous switch generations. Several factors contribute to this claim, including the fact that the optical switches require four times fewer lasers, Shainer says. Whereas the laser source was previously part of the transceiver, the optical engine is now incorporated onto the ASIC, allowing multiple optical channels to share a single laser. Additionally, in Nvidia’s implementation, the laser source is located outside of the switch. “We want to keep the ability to replace a laser source in case it has failed and needs to be replaced,” he says. “They are completely hot-swappable, so you don’t need to shut down the switch.” Nonetheless, you may often hear that when something fails in a CPO box, you need to replace the entire box. That may be true if it’s the photonics engine embedded in silicon inside the box. “But they shouldn’t fail that often. There are not a lot of moving parts in there,” Wilkinson says. While he understands the argument around failures, he doesn’t expect it to pan out as CPO gets deployed. “It’s a fallacy,” he says. There’s also a simple workaround to the resiliency issue, which hyperscalers are already talking about, Karavalas says: overbuild. “Have 10% more ports than you need or 5%,” he says. “If you lose a port because the optic goes bad, you just move it and plug it in somewhere else.” Which vendors are backing co-packaged optics? In terms of vendors that have or plan to have CPO offerings, the list is not long, unless you include various component players like TSMC. But in terms of major switch vendors, here’s a rundown: Broadcom has been making steady progress on CPO since 2021. It is now shipping “to

Read More »

Nvidia’s $2B Synopsys stake tests independence of open AI interconnect standard

But the concern for enterprise IT leaders is whether Nvidia’s financial stakes in UALink consortium members could influence the development of an open standard specifically designed to compete with Nvidia’s proprietary technology and to give enterprises more choices in the datacenter. Organizations planning major AI infrastructure investments view such open standards as critical to avoiding vendor lock-in and maintaining competitive pricing. “This does put more pressure on UALink since Intel is also a member and also took investment from Nvidia,” Sag said. UALink and Synopsys’s critical role UALink represents the industry’s most significant effort to prevent vendor lock-in for AI infrastructure. The consortium ratified its UALink 200G 1.0 Specification in April, defining an open standard for connecting up to 1,024 AI accelerators within computing pods at 200 Gbps per lane — directly competing with Nvidia’s NVLink for scale-up applications. Synopsys plays a critical role. The company joined UALink’s board in January and in December announced the industry’s first UALink design components, enabling chip designers to build UALink-compatible accelerators. Analysts flag governance concerns Gaurav Gupta, VP analyst at Gartner, acknowledged the tension. “The Nvidia-Synopsys deal does raise questions around the future of UALink as Synopsys is a key partner of the consortium and holds critical IP for UALink, which competes with Nvidia’s proprietary NVLink,” he said. Sanchit Vir Gogia, chief analyst at Greyhound Research, sees deeper structural concerns. “Synopsys is not a peripheral player in this standard; it is the primary supplier of UALink IP and a board member within the UALink Consortium,” he said. “Nvidia’s entry into Synopsys’ shareholder structure risks contaminating that neutrality.”

Read More »

Cooling crisis at CME: A wakeup call for modern infrastructure governance

Organizations should reassess redundancy However, he pointed out, “the deeper concern is that CME had a secondary data center ready to take the load, yet the failover threshold was set too high, and the activation sequence remained manually gated. The decision to wait for the cooling issue to self-correct rather than trigger the backup site immediately revealed a governance model that had not evolved to keep pace with the operational tempo of modern markets.” Thermal failures, he said, “do not unfold on the timelines assumed in traditional disaster recovery playbooks. They escalate within minutes and demand automated responses that do not depend on human certainty about whether a facility will recover in time.” Matt Kimball, VP and principal analyst at Moor Insights & Strategy, said that to some degree what happened in Aurora highlights an issue that may arise on occasion: “the communications gap that can exist between IT executives and data center operators. Think of ‘rack in versus rack out’ mindsets.” Often, he said, the operational elements of that data center environment, such as cooling, power, fire hazards, physical security, and so forth, fall outside the realm of an IT executive focused on delivering IT services to the business. “And even if they don’t fall outside the realm, these elements are certainly not a primary focus,” he noted. “This was certainly true when I was living in the IT world.” Additionally, said Kimball, “this highlights the need for organizations to reassess redundancy and resilience in a new light. Again, in IT, we tend to focus on resilience and redundancy at the app, server, and workload layers. Maybe even cluster level. But as we continue to place more and more of a premium on data, and the terms ‘business critical’ or ‘mission critical’ have real relevance, we have to zoom out

Read More »

Microsoft loses two senior AI infrastructure leaders as data center pressures mount

Microsoft did not immediately respond to a request for comment. Microsoft’s constraints Analysts say the twin departures mark a significant setback for Microsoft at a critical moment in the AI data center race, with pressure mounting from both OpenAI’s model demands and Google’s infrastructure scale. “Losing some of the best professionals working on this challenge could set Microsoft back,” said Neil Shah, partner and co-founder at Counterpoint Research. “Solving the energy wall is not trivial, and there may have been friction or strategic differences that contributed to their decision to move on, especially if they saw an opportunity to make a broader impact and do so more lucratively at a company like Nvidia.” Even so, Microsoft has the depth and ecosystem strength to continue doubling down on AI data centers, said Prabhu Ram, VP for industry research at Cybermedia Research. According to Sanchit Gogia, chief analyst at Greyhound Research, the departures come at a sensitive moment because Microsoft is trying to expand its AI infrastructure faster than physical constraints allow. “The executives who have left were central to GPU cluster design, data center engineering, energy procurement, and the experimental power and cooling approaches Microsoft has been pursuing to support dense AI workloads,” Gogia said. “Their exit coincides with pressures the company has already acknowledged publicly. GPUs are arriving faster than the company can energize the facilities that will house them, and power availability has overtaken chip availability as the real bottleneck.”

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.
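The growth rates are not spelled out in the article; the quick arithmetic below derives them from the figures quoted above (the Bloomberg Intelligence combined estimates and Microsoft's own spend):

```python
# Growth implied by the capex figures quoted above (derived, not stated).
combined_2023_bn = 110.0   # Bloomberg Intelligence combined estimate, 2023
combined_2025_bn = 200.0   # Bloomberg Intelligence combined estimate, 2025

growth = (combined_2025_bn - combined_2023_bn) / combined_2023_bn
print(f"Combined big-tech AI capex growth, 2023 -> 2025: {growth:.0%}")  # ~82%

# Microsoft alone: Bloomberg's calendar-2025 estimate vs. its 2020 capex.
msft_2020_bn, msft_2025_bn = 17.6, 62.4
print(f"Microsoft's multiple vs. 2020: {msft_2025_bn / msft_2020_bn:.1f}x")  # ~3.5x
```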

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has been a regular among the non-tech companies showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »