
Overcome Failing Document Ingestion & RAG Strategies with Agentic Knowledge Distillation


Introduction

Many generative AI use cases still revolve around Retrieval Augmented Generation (RAG), yet consistently fall short of user expectations. Despite the growing body of research on RAG improvements and even adding Agents into the process, many solutions still fail to return exhaustive results, miss information that is critical but infrequently mentioned in the documents, require multiple search iterations, and generally struggle to reconcile key themes across multiple documents. To top it all off, many implementations still rely on cramming as much “relevant” information as possible into the model’s context window alongside detailed system and user prompts. Reconciling all this information often exceeds the model’s cognitive capacity and compromises response quality and consistency.

This is where our Agentic Knowledge Distillation + Pyramid Search Approach comes into play. Instead of chasing the best chunking strategy, retrieval algorithm, or inference-time reasoning method, my team (Jim Brown, Mason Sawtell, Sandi Besen, and I) takes an agentic approach to document ingestion.

We leverage the full capability of the model at ingestion time to focus exclusively on distilling and preserving the most meaningful information from the document dataset. This fundamentally simplifies the RAG process by allowing the model to direct its reasoning abilities toward addressing the user/system instructions rather than struggling to understand formatting and disparate information across document chunks. 

We specifically target high-value questions that are often difficult to evaluate because they have multiple correct answers or solution paths. These cases are where traditional RAG solutions struggle most and existing RAG evaluation datasets are largely insufficient for testing this problem space. For our research implementation, we downloaded annual and quarterly reports from the last year for the 30 companies in the Dow Jones Industrial Average. These documents can be found through the SEC EDGAR website. The information on EDGAR is freely accessible and downloadable, or it can be queried through EDGAR public searches. See the SEC privacy policy for additional details; information on the SEC website is “considered public information and may be copied or further distributed by users of the web site without the SEC’s permission”. We selected this dataset for two key reasons: first, it falls outside the knowledge cutoff for the models evaluated, ensuring that the models cannot respond to questions based on their knowledge from pre-training; second, it’s a close approximation for real-world business problems while allowing us to discuss and share our findings using publicly available data.

While typical RAG solutions excel at factual retrieval where the answer is easily identified in the document dataset (e.g., “When did Apple’s annual shareholders’ meeting occur?”), they struggle with nuanced questions that require a deeper understanding of concepts across documents (e.g., “Which of the Dow companies has the most promising AI strategy?”). Our Agentic Knowledge Distillation + Pyramid Search Approach addresses these types of questions with much greater success compared to the other standard approaches we tested and overcomes limitations associated with using knowledge graphs in RAG systems.

In this article, we’ll cover how our knowledge distillation process works, key benefits of this approach, examples, and an open discussion on the best way to evaluate these types of systems where, in many cases, there is no singular “right” answer.

Building the pyramid: How Agentic Knowledge Distillation works

AI-generated image showing a pyramid structure for document ingestion with labelled sections.
Image by author and team depicting pyramid structure for document ingestion. Robots meant to represent agents building the pyramid.

Overview

Our knowledge distillation process creates a multi-tiered pyramid of information from the raw source documents. Our approach is inspired by the image pyramids used in deep-learning computer vision tasks, which allow a model to analyze an image at multiple scales. We take the contents of the raw document, convert it to Markdown, and distill the content into a list of atomic insights, related concepts, document abstracts, and general recollections/memories. During retrieval, it’s possible to access any or all levels of the pyramid to respond to the user request.

How to distill documents and build the pyramid: 

  1. Convert documents to Markdown: Convert all raw source documents to Markdown. We’ve found models process Markdown best for this task compared to other formats like JSON, and it is more token-efficient. We used Azure Document Intelligence to generate the Markdown for each page of the document, but there are many other open-source libraries like MarkItDown which do the same thing. Our dataset included 331 documents and 16,601 pages.
  2. Extract atomic insights from each page: We process documents using a two-page sliding window, which allows each page to be analyzed twice. This gives the agent the opportunity to correct any potential mistakes when processing the page initially. We instruct the model to create a numbered list of insights that grows as it processes the pages in the document. The agent can overwrite insights from the previous page if they were incorrect since it sees each page twice. We instruct the model to extract insights in simple sentences following the subject-verb-object (SVO) format and to write sentences as if English is the second language of the user. This significantly improves performance by encouraging clarity and precision. Rolling over each page multiple times and using the SVO format also solves the disambiguation problem, which is a huge challenge for knowledge graphs. The insight generation step is also particularly helpful for extracting information from tables since the model captures the facts from the table in clear, succinct sentences. Our dataset produced 216,931 total insights, about 13 insights per page and 655 insights per document.
  3. Distilling concepts from insights: From the detailed list of insights, we identify higher-level concepts that connect related information about the document. This step significantly reduces noise and redundant information in the document while preserving essential information and themes. Our dataset produced 14,824 total concepts, about 1 concept per page and 45 concepts per document. 
  4. Creating abstracts from concepts: Given the insights and concepts in the document, the LLM writes an abstract that reads better than any abstract a human would write and is more information-dense than any abstract present in the original document. The LLM-generated abstract provides incredibly comprehensive knowledge about the document in a small token footprint that carries a significant amount of information. We produce one abstract per document, 331 total.
  5. Storing recollections/memories across documents: At the top of the pyramid we store critical information that is useful across all tasks. This can be information that the user shares about the task or information the agent learns about the dataset over time by researching and responding to tasks. For example, we can store the current 30 companies in the Dow as a recollection since this list is different from the 30 companies in the Dow at the time of the model’s knowledge cutoff. As we conduct more research tasks, we can continuously improve our recollections and maintain an audit trail of which documents these recollections originated from. For example, we can keep track of AI strategies across companies, where companies are making major investments, etc. These high-level connections are especially important since they reveal relationships and information that are not apparent in a single page or document.
Sample subset of insights extracted from IBM 10Q, Q3 2024
Sample subset of insights extracted from IBM 10Q, Q3 2024 (page 4)
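The two-page sliding window from step 2 can be sketched in a few lines. This is a minimal illustration under our own assumptions, not the authors' implementation; the `llm` callable is a hypothetical stand-in for the model call that returns the revised insight list:

```python
from typing import Callable

def extract_insights(pages: list[str],
                     llm: Callable[[str, list[str]], list[str]]) -> list[str]:
    """Walk a document with a two-page sliding window.

    `llm` stands in for a model call: it receives the current window of
    text plus the running insight list and returns the revised list, so
    every page (except the first) is seen twice and insights extracted
    on the first pass can be corrected on the second.
    """
    insights: list[str] = []
    for i in range(len(pages)):
        window = "\n\n".join(pages[i:i + 2])  # page i together with page i + 1
        insights = llm(window, insights)
    return insights
```

Passing the full numbered list back into each call is what lets the model overwrite an earlier insight rather than only append new ones.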

We store the text and embeddings for each layer of the pyramid (pages and up) in Azure PostgreSQL. We originally used Azure AI Search, but switched to PostgreSQL for cost reasons. This required us to write our own hybrid search function since PostgreSQL doesn’t yet natively support this feature. This implementation would work with any vector database or vector index of your choosing. The key requirement is to store and efficiently retrieve both text and vector embeddings at any level of the pyramid. 
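We do not show the custom hybrid search function here. One common way to combine a vector-similarity ranking with a keyword (full-text) ranking is Reciprocal Rank Fusion; the sketch below assumes the two ranked ID lists are fetched with separate queries, and the function name and constant are our own choices rather than the production implementation:

```python
def hybrid_rank(vector_hits: list[str],
                keyword_hits: list[str],
                k: int = 60) -> list[str]:
    """Fuse two best-first result lists with Reciprocal Rank Fusion.

    A document's fused score is the sum of 1 / (k + rank) over every
    list it appears in; k = 60 is a commonly used default constant.
    """
    scores: dict[str, float] = {}
    for hits in (vector_hits, keyword_hits):
        for rank, doc_id in enumerate(hits, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Documents near the top of both lists rise above documents that rank highly in only one, which is usually the behavior wanted from hybrid search.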

This approach captures the essence of a knowledge graph but stores the information in natural language, the way an LLM natively wants to interact with it, and is more token-efficient at retrieval. We also let the LLM pick the terms used to categorize each level of the pyramid; this seemed to let the model decide for itself the best way to describe and differentiate between the information stored at each level. For example, the LLM preferred “insights” to “facts” as the label for the first level of distilled knowledge. Our goal in doing this was to better understand how an LLM thinks about the process by letting it decide how to store and group related information.

Using the pyramid: How it works with RAG & Agents

At inference time, both traditional RAG and agentic approaches benefit from the pre-processed, distilled information ingested into our knowledge pyramid. The pyramid structure allows for efficient retrieval in both the traditional RAG case, where only the top X related pieces of information are retrieved, and in the agentic case, where the agent iteratively plans, retrieves, and evaluates information before returning a final response.

The benefit of the pyramid approach is that information at any and all levels of the pyramid can be used during inference. For our implementation, we used PydanticAI to create a search agent that takes in the user request, generates search terms, explores ideas related to the request, and keeps track of information relevant to the request. Once the search agent determines there’s sufficient information to address the user request, the results are re-ranked and sent back to the LLM to generate a final reply. Our implementation allows a search agent to traverse the information in the pyramid as it gathers details about a concept/search term. This is similar to walking a knowledge graph, but in a way that’s more natural for the LLM since all the information in the pyramid is stored in natural language.
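Stripped of the model specifics, the search agent's control flow is a plan/retrieve/evaluate loop. Our implementation uses PydanticAI; the sketch below is plain Python with three hypothetical callables standing in for the agent's model and tool calls:

```python
from typing import Callable

def research(request: str,
             plan: Callable[[str, list[str]], list[str]],
             search: Callable[[str], list[str]],
             is_sufficient: Callable[[list[str]], bool],
             max_rounds: int = 5) -> list[str]:
    """Iteratively plan searches, retrieve from the pyramid, and evaluate.

    `plan` proposes search terms given the request and findings so far,
    `search` queries one or more pyramid levels for a term, and
    `is_sufficient` decides when to stop and hand the findings to the
    model for re-ranking and the final reply.
    """
    findings: list[str] = []
    for _ in range(max_rounds):
        for term in plan(request, findings):
            findings.extend(search(term))
        if is_sufficient(findings):
            break
    return findings
```

The `max_rounds` cap is one simple way to bound token usage when the sufficiency check never fires.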

Depending on the use case, the Agent could access information at all levels of the pyramid or only at specific levels (e.g. only retrieve information from the concepts). For our experiments, we did not retrieve raw page-level data since we wanted to focus on token efficiency and found the LLM-generated information for the insights, concepts, abstracts, and recollections was sufficient for completing our tasks. In theory, the Agent could also have access to the page data; this would provide additional opportunities for the agent to re-examine the original document text; however, it would also significantly increase the total tokens used. 

Here is a high-level visualization of our Agentic approach to responding to user requests:

Overview of the agentic research & response process
Image created by author and team providing an overview of the agentic research & response process

Results from the pyramid: Real-world examples

To evaluate the effectiveness of our approach, we tested it against a variety of question categories, including typical fact-finding questions and complex cross-document research and analysis tasks. 

Fact-finding (spear fishing): 

These tasks require identifying specific information or facts that are buried in a document. These are the types of questions typical RAG solutions target but often require many searches and consume lots of tokens to answer correctly. 

Example task: “What was IBM’s total revenue in the latest financial reporting?”

Example response using pyramid approach: “IBM’s total revenue for the third quarter of 2024 was $14.968 billion [ibm-10q-q3-2024.pdf, pg. 4].”

Total tokens used to research and generate response

This result is correct (human-validated) and was generated using only 9,994 total tokens, with 1,240 tokens in the generated final response. 

Complex research and analysis: 

These tasks involve researching and understanding multiple concepts to gain a broader understanding of the documents and make inferences and informed assumptions based on the gathered facts.

Example task: “Analyze the investments Microsoft and NVIDIA are making in AI and how they are positioning themselves in the market. The report should be clearly formatted.”

Example response:

Response generated by the agent analyzing AI investments and positioning for Microsoft and NVIDIA.

The result is a comprehensive report that was generated quickly and contains detailed information about each company. The task used 26,802 total tokens, of which 2,893 (~11%) went to the final response. These results were also reviewed by a human to verify their validity.

Snippet indicating total token usage for the task

Example task: “Create a report on analyzing the risks disclosed by the various financial companies in the DOW. Indicate which risks are shared and unique.”

Example response:

Part 1 of response generated by the agent on disclosed risks.
Part 2 of response generated by the agent on disclosed risks.

Similarly, this task was completed in 42.7 seconds and used 31,685 total tokens, with 3,116 tokens used to generate the final report. 

Snippet indicating total token usage for the task

These results for both fact-finding and complex analysis tasks demonstrate that the pyramid approach efficiently creates detailed reports with low latency using a minimal number of tokens. The tokens used for the tasks carry dense meaning with little noise, allowing for high-quality, thorough responses across tasks.

Benefits of the pyramid: Why use it?

Overall, we found that our pyramid approach provided a significant boost in response quality and overall performance for high-value questions. 

Some of the key benefits we observed include: 

  • Reduced model’s cognitive load: When the agent receives the user task, it retrieves pre-processed, distilled information rather than the raw, inconsistently formatted, disparate document chunks. This fundamentally improves the retrieval process since the model doesn’t waste its cognitive capacity on trying to break down the page/chunk text for the first time. 
  • Superior table processing: By breaking down table information and storing it in concise but descriptive sentences, the pyramid approach makes it easier to retrieve relevant information at inference time through natural language queries. This was particularly important for our dataset since financial reports contain lots of critical information in tables. 
  • Improved response quality for many types of requests: The pyramid enables more comprehensive, context-aware responses to both precise, fact-finding questions and broad, analysis-based tasks that involve many themes across numerous documents. 
  • Preservation of critical context: Because the distillation process identifies and keeps track of key facts, important information that might appear only once in a document is more likely to be retained, for example, a note that all tables are reported in millions of dollars or in a particular currency. Traditional chunking methods often let this type of information slip through the cracks. 
  • Optimized token usage, memory, and speed: By distilling information at ingestion time, we significantly reduce the number of tokens required during inference, are able to maximize the value of information put in the context window, and improve memory use. 
  • Scalability: Many solutions struggle to perform as the size of the document dataset grows. This approach provides a much more efficient way to manage a large volume of text by preserving only critical information. It also allows for more efficient use of the LLM's context window by sending it only useful, clear information.
  • Efficient concept exploration: The pyramid enables the agent to explore related information similar to navigating a knowledge graph, but does not require ever generating or maintaining relationships in the graph. The agent can use natural language exclusively and keep track of important facts related to the concepts it’s exploring in a highly token-efficient and fluid way. 
  • Emergent dataset understanding: An unexpected benefit of this approach emerged during our testing. When asking questions like “what can you tell me about this dataset?” or “what types of questions can I ask?”, the system is able to respond and suggest productive search topics because it has a more robust understanding of the dataset context by accessing higher levels in the pyramid like the abstracts and recollections. 
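The table-processing idea above can be illustrated with a short sketch: each table row is rewritten as a standalone natural-language sentence that carries its caption and units, so that context survives retrieval. The function name, data layout, and figures are hypothetical, not taken from the actual reports:

```python
def table_to_sentences(caption, headers, rows, units="USD millions"):
    """Distill a financial table into standalone natural-language sentences,
    preserving context (caption, units) that chunking tends to lose."""
    sentences = []
    for row in rows:
        label, *values = row
        # Pair each value with its column header so the sentence is self-describing.
        parts = [f"{header} was {value} {units}" for header, value in zip(headers[1:], values)]
        sentences.append(f"In the table '{caption}', {label}: " + "; ".join(parts) + ".")
    return sentences


# Illustrative rows (not actual reported figures):
sentences = table_to_sentences(
    caption="Consolidated revenue by segment",
    headers=["Segment", "Q3 2024", "Q3 2023"],
    rows=[["Software", "6,524", "6,272"], ["Consulting", "5,154", "4,963"]],
)
```

Because every sentence repeats the caption and units, a natural-language query like "software segment revenue" can match a single sentence without needing the rest of the table in context.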

Beyond the pyramid: Evaluation challenges & future directions

Challenges

While the results we’ve observed when using the pyramid search approach have been nothing short of amazing, finding ways to establish meaningful metrics to evaluate the entire system both at ingestion time and during information retrieval is challenging. Traditional RAG and Agent evaluation frameworks often fail to address nuanced questions and analytical responses where many different responses are valid.

Our team plans to write a research paper on this approach in the future, and we are open to any thoughts and feedback from the community, especially when it comes to evaluation metrics. Many of the existing datasets we found were focused on evaluating RAG use cases within one document or precise information retrieval across multiple documents rather than robust concept and theme analysis across documents and domains. 

The main use cases we are interested in relate to broader questions that are representative of how businesses actually want to interact with GenAI systems. For example, “tell me everything I need to know about customer X” or “how do the behaviors of Customer A and B differ? Which am I more likely to have a successful meeting with?”. These types of questions require a deep understanding of information across many sources. The answers to these questions typically require a person to synthesize data from multiple areas of the business and think critically about it. As a result, the answers to these questions are rarely written or saved anywhere which makes it impossible to simply store and retrieve them through a vector index in a typical RAG process. 

Another consideration is that many real-world use cases involve dynamic datasets where documents are consistently being added, edited, and deleted. This makes it difficult to evaluate and track what a “correct” response is since the answer will evolve as the available information changes. 

Future directions

In the future, we believe that the pyramid approach can address some of these challenges by enabling more effective processing of dense documents and storing learned information as recollections. However, tracking and evaluating the validity of the recollections over time will be critical to the system’s overall success and remains a key focus area for our ongoing work. 

When applying this approach to organizational data, the pyramid process could also be used to identify and assess discrepancies across areas of the business. For example, uploading all of a company’s sales pitch decks could surface where certain products or services are being positioned inconsistently. It could also be used to compare insights extracted from various line of business data to help understand if and where teams have developed conflicting understandings of topics or different priorities. This application goes beyond pure information retrieval use cases and would allow the pyramid to serve as an organizational alignment tool that helps identify divergences in messaging, terminology, and overall communication. 

Conclusion: Key takeaways and why the pyramid approach matters

The knowledge distillation pyramid approach is significant because it leverages the full power of the LLM at both ingestion and retrieval time. Our approach lets you store dense information in fewer tokens, which has the added benefit of reducing noise in the dataset at inference. Our approach also runs very quickly and is incredibly token efficient: we are able to generate responses within seconds and explore potentially hundreds of searches while keeping total token usage low (and that includes all the search iterations!). 

We find that the LLM is much better at writing atomic insights as sentences and that these insights effectively distill information from both text-based and tabular data. This distilled information, written in natural language, is very easy for the LLM to understand and navigate at inference time since it does not have to expend unnecessary energy reasoning about document formatting or filtering through noise.

The ability to retrieve and aggregate information at any level of the pyramid also provides significant flexibility to address a variety of query types. This approach offers promising performance for large datasets and enables high-value use cases that require nuanced information retrieval and analysis. 


Note: The opinions expressed in this article are solely my own and do not necessarily reflect the views or policies of my employer.

Interested in discussing further or collaborating? Reach out on LinkedIn!

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Observability platforms gain AI capabilities

LogicMonitor also announced Oracle Infrastructure (OCI) Monitoring to expand its multi-cloud coverage, provide visibility across AWS, Azure, GCP, and OCI, and offer observability capabilities across several cloud platforms. The company also made its LM Uptime and Dynamic Service Insights capabilities generally available to help enterprise IT organizations find issues sooner

Read More »

Cisco strengthens integrated IT/OT network and security controls

Another significant move that will help IT/OT integration is the planned integration of the management console for Cisco’s Catalyst and Meraki networks. That combination will allow IT and OT teams to see the same dashboard for industrial OT and IT enterprise/campus networks. Cyber Vision will feeds into the dashboard along

Read More »

Liberia Awards First Exploration Leases in Years to TotalEnergies

TotalEnergies SE has signed production sharing contracts (PSCs) for four adjoining exploration blocks spanning about 12,700 square kilometers (4,903.49 square miles) in Liberian waters. LB-6, LB-11, LB-17 and LB-29, awarded under Liberia’s 2024 Direct Negotiation Licensing Round, mark the first upstream hydrocarbon agreements signed by the West African country in over a decade, according to the Liberian Petroleum Regulatory Authority (LPRA). “The work program includes acquiring one firm 3D seismic survey”, the French global energy giant said in a statement on its website. “TotalEnergies is enthusiastic to be part of the resumption of exploration activities in offshore Liberia”, commented Kevin McLachlan, senior vice president for exploration at TotalEnergies. “Entering these blocks aligns with our strategy of diversifying our exploration portfolio in high-potential new oil-prone basins. “These areas hold significant potential for prospects that have the potential for large-scale discoveries that lead to cost-effective, low-emission developments, leveraging the company’s proven expertise in deepwater operations”. The LPRA said in a separate statement online, “The signing of the PSCs represents one of the most important foreign investments in Liberia’s oil and gas sector in more than a decade”. “As one of the world’s largest integrated energy companies, with a proven track record in deepwater exploration across Africa and globally, TotalEnergies’ entry into Liberia signals renewed international confidence in the country’s hydrocarbon potential”, the LPRA added. LPRA director-general Marilyn T. Logan said the contract signing in Paris “stands as a vote of confidence in the reforms we have undertaken to attract responsible operators”. The LPRA added, “The contracts incorporate environmental and social safeguards, transparent revenue management provisions and robust local content requirements, ensuring Liberians benefit directly from sector growth”. 
Earlier this month TotalEnergies announced new offshore exploration licenses in two other African countries: two in Nigeria and one in the Republic of the Congo. The Nzombo

Read More »

ICYMI: Secretary Wright Advances President Trump’s Energy Dominance Agenda in Europe

Secretary Wright participated in the 2025 GasTech Conference in Milan, met with EU leaders in Brussels, and delivered the U.S. National Statement at the International Atomic Energy Agency’s 69th General Conference in Vienna  WASHINGTON— This week, U.S. Secretary of Energy Chris Wright concluded a 10-day trip across Europe with stops in Milan, Brussels, and Vienna, where he built upon President Trump’s bold energy agenda, strengthened long-term partnerships with European allies, and encouraged nations to join the United States in building a secure and prosperous energy future. The trip highlighted progress made in President Trump’s recent historic trade deal with the EU, which included an agreement from the EU to purchase $750 billion in U.S. energy and invest $600 billion in the United States by 2028.  Watch: Secretary Wright Joins Brian Sullivan for GasTech 2025 Fireside Chat — September 10, 2025  Secretary Wright participated in a keynote fireside chat and press conference with energy officials and natural gas providers at the 2025 GasTech Conference in Milan, Italy. He highlighted President Trump’s commitment to growing gas exports and how U.S. gas strengthens global stability, lowers prices, and provides a reliable alternative to adversarial energy sources. Thanks to President Trump’s reversal of the Biden administration’s reckless pause on LNG exports, the United States has already approved more LNG export capacity than the volume exported by the world’s second-largest LNG supplier.  In Brussels, Belgium, Secretary Wright met with members of the European Parliament and Commission, stressing the benefits of U.S.-E.U. energy partnerships, ending Europe’s reliance on Russian oil and gas, and the need to shift away from policies that lead to more expensive energy and inhibit long-term energy agreements in the EU.  In Vienna, Austria, Secretary Wright delivered the U.S. 
National Statement at the International Atomic Energy Agency’s (IAEA) 69th General Conference, where he

Read More »

Oil Drops as Trump Says Low Prices Will End RUS-UKR War

Oil edged down in a choppy session after US President Donald Trump implied that he favored low prices over sanctions as a means of pressuring Russia to end its war in Ukraine.  West Texas Intermediate fell 0.7% to trade below $64 a barrel after swinging in a roughly $1 range as Trump reiterated a commitment to low oil prices, limiting investors’ conviction that global efforts to squeeze Russian flows will pan out. Washington has signaled that the US wouldn’t follow through with threats to penalize Moscow’s crude unless Europe also acts.  Futures slid further after Trump told reporters that “if we get oil down, the war ends,” a sign of his preferred strategy to halt the flow of petrodollars that fund Russia’s war effort. He also repeated his calls for countries to stop buying Russian oil.  The commodity also followed fluctuations in US Treasury yields, with the optimism over monetary loosening after Wednesday’s quarter-point reduction in US interest rates tempered by the Fed’s cautious tone.  After the Fed’s cut, “we are back focusing on sanctions and geopolitics versus weak fundamentals,” said Arne Lohmann Rasmussen, chief analyst at A/S Global Risk Management.  Traders have honed in on Russian flows over recent weeks amid intensifying Ukrainian attacks on the country’s energy infrastructure and as the European Union unveils a fresh package of sanctions on Moscow. Two more Russian oil refineries were attacked on Thursday as Ukraine stepped up strikes, and further closures threaten to tighten global oil balances and dent the Kremlin’s war chest.  As a result of the repeated Ukrainian strikes, Russian refining runs have now dropped below 5 million barrels a day, the lowest since April 2022, according to estimates from JPMorgan Chase & Co.  In the US, meanwhile, inventories of distillates — a group of fuels that includes diesel — reached

Read More »

Octopus Energy Plans to Spin Off Technology Arm

Octopus Energy Group Ltd. plans to spin off Kraken Technologies Ltd., a software platform that helps utilities manage the transition to cleaner energy.  Kraken has been key to Octopus Energy’s growth into the UK’s largest electricity supplier, leapfrogging industry incumbents to serve more than 7 million customers in the country. The software allows it to balance out power flows to households as energy-transition technologies like electric vehicles, home batteries, solar panels and heat pumps become more widespread. The software platform is already being licensed to other energy providers such as Electricite de France SA, serving more than 70 million household and business accounts worldwide. Committed annual revenue has increased fourfold to $500 million in just three years and the spinoff will accelerate the expansion, Octopus said in a statement on Thursday. “Kraken is now a globally successful business in its own right,” Chief Executive Officer Amir Orad said in the statement. “Completing our journey to full independence is a strategic and inevitable next step.” Tim Wan has joined Kraken as its chief financial officer, the same role he previously held at US software firm Asana Inc., according to the statement. He was involved in Asana’s US listing in New York in 2020.  Kraken could be valued at as much as $14 billion, Sky News reported in July, citing a person familiar with the matter, who also said the spinoff could be part of plans for Octopus Energy to sell a stake in Kraken to external investors. The demerger and any stake sale could “bring transparency to the value of Kraken,” said Martin Young, founder of consulting firm Aquaicity Ltd. He said that could be a precursor to further sales in the future, and possibly an initial public offering. “Separation offers a cleaner structure and puts to bed the question: ‘Is

Read More »

Ukraine Hits 2 Russian Oil Refineries

Two Russian oil refineries were attacked on Thursday as Ukraine stepped up strikes on its enemy’s energy infrastructure. Gazprom’s Neftekhim Salavat petrochemical facility in the Bashkortostan region was set on fire after being hit by drones, local governor Radiy Khabirov said. The site is more than 1,300 kilometers (800 miles) from territory under Ukraine’s control, making it one of Kyiv’s deepest strikes inside Russian territory. Ukraine’s Special Operations Forces also claimed an attack on Lukoil PJSC’s major Volgograd refinery in the Volga region. As a result of the attack, the facility, which has a capacity of around 300,000 barrels a day, halted operations, Ukraine’s Special Operations Forces said. Bloomberg couldn’t independently verify the claim, and Lukoil didn’t immediately respond to an emailed request for comment.  Since last month, Ukrainian military forces have intensified drone attacks on Russian energy infrastructure, including oil refineries, aiming to curb fuel supplies to the front lines. In August, at least 13 strikes were made, the largest monthly number since the start of the invasion in Ukraine. So far in September there have been at least six attacks. Last week, drones also hit Russia’s largest Baltic oil terminal in Primorsk, and Ukraine claimed strikes on pumping stations feeding another Baltic hub, the Ust-Luga terminal.  Ukrainian drones hit one of the primary processing units at the Salavat facility, according to a person familiar with the matter. The unit has a design capacity to process 4 million tons of condensate per year, which is equivalent to about 80,000 barrels a day, according to the website of the refinery. The entire facility is designed to have a crude-oil-processing capacity of around 200,000 barrels a day. Meanwhile, the press service for governor Khabirov said in a separate statement that the Salavat refinery continues normal operations and that the fire has been localized. Neither claim could be independently verified. 
As a

Read More »

Energy Department Launches Speed to Power Initiative, Accelerating Large-Scale Grid Infrastructure Projects

WASHINGTON—The U.S. Department of Energy (DOE) announced today the Speed to Power initiative, to accelerate the speed of large-scale grid infrastructure project development for both transmission and generation. The Speed to Power initiative will help ensure the United States has the power needed to win the global artificial intelligence (AI) race while continuing to meet growing demand for affordable, reliable and secure energy. DOE analysis shows that the current rate of project development is inadequate to support the country’s rapidly expanding manufacturing needs and the reindustrialization of the U.S. economy. DOE is committed to collaborating with stakeholders to identify large-scale grid infrastructure projects that can bring speed to power and overcome the complex challenges facing the grid.   “In the coming years, Americans will require more energy to power their homes and businesses – and with President Trump’s leadership, the Department of Energy is ensuring we can meet this growing demand while fueling AI and data center development with affordable, reliable and secure sources,” said Energy Secretary Chris Wright. “With the Speed to Power initiative, we’re leveraging the expertise of the private sector to harness all forms of energy that are affordable, reliable and secure to ensure the United States is able to win the AI race.”   To kickstart the Speed to Power initiative, DOE is issuing a Request for Information focused on large-scale grid infrastructure projects, both transmission and generation, that can accelerate the United States speed to power. This includes input on near-term investment opportunities, project readiness, load growth expectations, and infrastructure constraints that DOE can address. The DOE is requesting stakeholder input on how to best leverage its funding programs and authorities to rapidly expand energy generation and transmission grid capacity.  
President Trump’s Executive Order, Declaring a National Energy Emergency, signed on his first day in office asserted that the integrity

Read More »

OpenAI and Oracle’s $300B Stargate Deal: Building AI’s National-Scale Infrastructure

Oracle’s ‘Astonishing’ Quarter Stuns Wall Street, Targeting Cloud Growth and Global Data Center Expansion Oracle’s FY Q1 2026 earnings report on September 9 — along with its massive cloud backlog — stunned Wall Street with its blow-out Q1 earnings. The market reacted positively to the huge growth in infrastructure revenue and performance obligations (RPO), a measure of future revenue from customer contracts, which indicates significant growth potential and Oracle’s increasing role in AI technology—even as earnings and revenue missed estimates. After the earnings announcement, Oracle stock soared more than 36%, marking its biggest daily gain since December 1992 and adding more than $250 billion in market value to the company. The company’s stock surge came even as the software giant’s earnings and lower-than-expected revenue. Leaders reported company’s RPO jumped about 360% in the quarter to $455 billion, indicating its potential growth and demand for its cloud services and infrastructure. As a result, Oracle CEO Safra Catz projects that its GPU‑heavy Oracle Cloud Infrastructure (OCI) business will grow 77% to $18 billion in its current fiscal year (2026) and soar to $144 billion in 2030. The earnings announcement also made Oracle’s Co-Founder, Chairman and CTO Larry Ellison the richest person in the world briefly, with shares of Oracle surging as much as 43%. By the end of the trading day, his wealth increased nearly $90 billion to $383 billion, just shy of Tesla CEO Elon Musk’s $384 billion fortune. Also on the earnings call, Ellison announced that in October at the Oracle AI World event, the company will introduce the Oracle AI Database OCI for customers to use the Large Language Model (LLM) of their choice—including Google’s Gemini, OpenAI’s ChatGPT, xAI’s Grok, etc.—directly on top of the Oracle Database to easily access and analyze all existing database data. Capital Expenditure Strategy These astonishing numbers are due

Read More »

Ethernet, InfiniBand, and Omni-Path battle for the AI-optimized data center

IEEE 802.3df-2024. The IEEE 802.3df-2024 standard, completed in February 2024 marked a watershed moment for AI data center networking. The 800 Gigabit Ethernet specification provides the foundation for next-generation AI clusters. It uan 8-lane parallel structure that enables flexible port configurations from a single 800GbE port: 2×400GbE, 4×200GbE or 8×100GbE depending on workload requirements. The standard maintains backward compatibility with existing 100Gb/s electrical and optical signaling. This protects existing infrastructure investments while enabling seamless migration paths. UEC 1.0. The Ultra Ethernet Consortium represents the industry’s most ambitious attempt to optimize Ethernet for AI workloads. The consortium released its UEC 1.0 specification in 2025, marking a critical milestone for AI networking. The specification introduces modern RDMA implementations, enhanced transport protocols and advanced congestion control mechanisms that eliminate the need for traditional lossless networks. UEC 1.0 enables packet spraying at the switch level with reordering at the NIC, delivering capabilities previously available only in proprietary systems The UEC specification also includes Link Level Retry (LLR) for lossless transmission without traditional Priority Flow Control, addressing one of Ethernet’s historical weaknesses versus InfiniBand.LLR operates at the link layer to detect and retransmit lost packets locally, avoiding expensive recovery mechanisms at higher layers. Packet Rate Improvement (PRI) with header compression reduces protocol overhead, while network probes provide real-time congestion visibility. InfiniBand extends architectural advantages to 800Gb/s InfiniBand emerged in the late 1990s as a high-performance interconnect designed specifically for server-to-server communication in data centers. Unlike Ethernet, which evolved from local area networking,InfiniBand was purpose-built for the demanding requirements of clustered computing. 
The technology provides lossless, ultra-low latency communication through hardware-based flow control and specialized network adapters. The technology’s key advantage lies in its credit-based flow control. Unlike Ethernet’s packet-based approach, InfiniBand prevents packet loss by ensuring receiving buffers have space before transmission begins. This eliminates

Read More »

Land and Expand: CleanArc Data Centers, Google, Duke Energy, Aligned’s ODATA, Fermi America

Land and Expand is a monthly feature at Data Center Frontier highlighting the latest data center development news, including new sites, land acquisitions and campus expansions. Here are some of the new and notable developments from hyperscale and colocation data center operators about which we’ve been reading lately. Caroline County, VA, Approves 650-Acre Data Center Campus from CleanArc Caroline County, Virginia, has approved redevelopment of the former Virginia Bazaar property in Ruther Glen into a 650-acre data center campus in partnership with CleanArc Data Centers Operating, LLC. On September 9, 2025, the Caroline County Board of Supervisors unanimously approved an economic development performance agreement with CleanArc to transform the long-vacant flea market site just off I-95. The agreement allows for the phased construction of three initial data center buildings, each measuring roughly 500,000 square feet, which CleanArc plans to lease to major operators. The project represents one of the county’s largest-ever private investments. While CleanArc has not released a final capital cost, county filings suggest the development could reach into the multi-billion-dollar range over its full buildout. Key provisions include: Local hiring: At least 50 permanent jobs at no less than 150% of the prevailing county wage. Revenue sharing: Caroline County will provide annual incentive grants equal to 25% of incremental tax revenue generated by the campus. Water stewardship: CleanArc is prohibited from using potable county water for data center cooling, requiring the developer to pursue alternative technologies such as non-potable sources, recycled water, or advanced liquid cooling systems. Local officials have emphasized the deal’s importance for diversifying the county’s tax base, while community observers will be watching closely to see which cooling strategies CleanArc adopts in order to comply with the water-use restrictions. 
Google to Build $10 Billion Data Center Campus in Arkansas Moses Tucker Partners, one of Arkansas’

Read More »

Hyperion and Alice & Bob Call on HPC Centers to Prepare Now for Early Fault-Tolerant Quantum Computing

As the data center industry continues to chase greater performance for AI and scientific workloads, a new joint report from Hyperion Research and Alice & Bob is urging high performance computing (HPC) centers to take immediate steps toward integrating early fault-tolerant quantum computing (eFTQC) into their infrastructure. The report, “Seizing Quantum’s Edge: Why and How HPC Should Prepare for eFTQC,” paints a clear picture: the next five years will demand hybrid HPC-quantum workflows if institutions want to stay at the forefront of computational science. According to the analysis, up to half of current HPC workloads at U.S. government research labs—Los Alamos National Laboratory, the National Energy Research Scientific Computing Center, and Department of Energy leadership computing facilities among them—could benefit from the speedups and efficiency gains of eFTQC. “Quantum technologies are a pivotal opportunity for the HPC community, offering the potential to significantly accelerate a wide range of critical science and engineering applications in the near-term,” said Bob Sorensen, Senior VP and Chief Analyst for Quantum Computing at Hyperion Research. “However, these machines won’t be plug-and-play, so HPC centers should begin preparing for integration now, ensuring they can influence system design and gain early operational expertise.” The HPC Bottleneck: Why Quantum is Urgent The report underscores a familiar challenge for the HPC community: classical performance gains have slowed as transistor sizes approach physical limits and energy efficiency becomes increasingly difficult to scale. Meanwhile, the threshold for useful quantum applications is drawing nearer. Advances in qubit stability and error correction, particularly Alice & Bob’s cat qubit technology, have compressed the resource requirements for algorithms like Shor’s by an estimated factor of 1,000. 
Within the next five years, the report projects that quantum computers with 100–1,000 logical qubits and logical error rates between 10⁻⁶ and 10⁻¹⁰ will accelerate applications across materials science, quantum
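To get a feel for why that 10⁻⁶ to 10⁻¹⁰ logical error-rate range matters, a standard back-of-envelope check multiplies an algorithm's total count of logical operations by the per-operation logical error rate and compares it to an acceptable fault budget. The operation counts and budget below are illustrative assumptions, not figures from the report:

```python
def feasible(logical_ops: float, logical_error_rate: float, budget: float = 0.01) -> bool:
    """Back-of-envelope check: a run with `logical_ops` total logical gate
    operations is plausible only if the expected number of uncorrected
    logical faults (ops x error rate) stays under `budget`."""
    return logical_ops * logical_error_rate < budget

# A hypothetical 10^7-operation workload at the two error-rate endpoints:
print(feasible(1e7, 1e-6))   # False: ~10 expected faults, run almost surely fails
print(feasible(1e7, 1e-10))  # True: ~0.001 expected faults
```

The same arithmetic explains why compressing resource requirements (as with cat qubits) and lowering logical error rates both widen the set of workloads that fit inside the budget.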

Read More »

Google Partners With Utilities to Ease AI Data Center Grid Strain

Transmission and Power Strategy

These agreements build on Google’s growing set of strategies to manage electricity needs. In June 2025, Google announced a deal with CTC Global to upgrade transmission lines with high-capacity composite conductors that increase throughput without requiring new towers. In July 2025, Google and Brookfield Asset Management unveiled a hydropower framework agreement worth up to $3 billion, designed to secure firm clean energy for data centers in PJM and Eastern markets. Alongside renewable deals, Google has signed nuclear supply agreements as well, most notably a landmark contract with Kairos Power for small modular reactor capacity. Each of these moves reflects Google’s effort to create more headroom on the grid while securing firm, carbon-free power.

Workload Flexibility and Grid Innovation

The demand-response strategy is uniquely suited to AI data centers because of workload diversity. Machine learning training runs can sometimes be paused or rescheduled, unlike latency-sensitive workloads. This flexibility allows Google to throttle certain compute-heavy processes in coordination with utilities. In practice, Google can preemptively pause or shift workloads when notified of peak events, ensuring critical services remain uninterrupted while still creating significant grid relief.

Local Utility Impact

For utilities like I&M and TVA, partnering with hyperscale customers has a dual benefit: stabilizing the grid while keeping large customers satisfied and growing within their service territories. It also signals to regulators and ratepayers that data centers, often criticized for their heavy energy footprint, can actively contribute to reliability. These agreements may help avoid contentious rate cases or delays in permitting new power plants.

Policy, Interconnection Queues, and the Economics of Speed

One of the biggest hurdles for data center development today is the long wait in interconnection queues.
In regions like PJM Interconnection, developers often face waits of three to five years before new projects can connect
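The pause-and-shift pattern behind the demand-response strategy can be sketched in a few lines. The job names and the `deferrable` flag below are purely illustrative assumptions, not anything Google has published about its scheduler:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    deferrable: bool  # e.g., checkpointable ML training, unlike serving traffic
    running: bool = True

def handle_peak_event(jobs: list[Job]) -> list[str]:
    """On a utility peak-event notification, pause only the deferrable jobs,
    leaving latency-sensitive services untouched. Returns the paused names."""
    paused = []
    for job in jobs:
        if job.deferrable and job.running:
            job.running = False
            paused.append(job.name)
    return paused

jobs = [
    Job("llm-training", deferrable=True),
    Job("search-serving", deferrable=False),
    Job("batch-embedding", deferrable=True),
]
print(handle_peak_event(jobs))  # ['llm-training', 'batch-embedding']
```

The key design point is the same one the article makes: grid relief comes from classifying workloads up front, so that a peak event triggers an automatic, selective curtailment rather than an all-or-nothing shutdown.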

Read More »

Generators, Gas, and Grid Strategy: Inside Generac’s Data Center Play

A Strategic Leap

Generac’s entry represents a strategic leap. Long established as a leader in residential, commercial, and industrial generation, particularly in the sub-2 megawatt range, the company has now expanded into mission-critical applications with new products spanning 2.2 to 3.5 megawatts. Navarro said the timing was deliberate, citing market constraints that have slowed hyperscale and colocation growth. “The current OEMs serving this market are actually limiting the ability to produce and to grow the data center market,” he noted. “Having another player … with enough capacity to compensate those shortfalls has been received very, very well.”

While Generac isn’t seeking to reinvent the wheel, it is intent on differentiation. Customers, Navarro explained, want a good quality product, uneventful deployment, and a responsive support network. On top of those essentials, Generac is leveraging its ongoing transformation from generator manufacturer to energy technology company, a shift accelerated by a series of acquisitions in areas like telemetry, monitoring, and energy management. “We’ve made several acquisitions to move away from being just a generator manufacturer to actually being an energy technology company,” Navarro said. “So we are entering this space of energy efficiency, energy management—monitoring, telemetrics, everything that improves the experience and improves the usage of those generators and the energy management at sites.” That foundation positions Generac to meet the newest challenge reshaping backup generation: the rise of AI-centric workloads.

Natural Gas Interest—and the Race to Shorter Lead Times

As the industry looks beyond diesel, customer interest in natural gas generation is rising. Navarro acknowledged the shift, but noted that diesel still retains an edge. “We’ve seen an increase on gas requests,” he said. “But the power density of diesel is more convenient than gas today.” That tradeoff, however, could narrow.
Navarro pointed to innovations such as industrial storage paired with gas units, which

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet it has become a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas, and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will arrive this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year saw rapid innovation, and this year will see the same. That makes it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize in their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
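The multi-model idea the excerpt ends on can be sketched as a simple ensemble: query several models, then aggregate (here by majority vote; a separate judge model could grade or break ties instead). Everything below is hypothetical, including the `call_model` stub, the model names, and the canned answers; a real system would call actual LLM APIs:

```python
from collections import Counter

def call_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    canned = {"model-a": "Paris", "model-b": "Paris", "model-c": "Lyon"}
    return canned[model]

def ensemble_answer(models: list[str], prompt: str) -> str:
    """Query several models and return the majority answer."""
    answers = [call_model(m, prompt) for m in models]
    winner, _ = Counter(answers).most_common(1)[0]
    return winner

print(ensemble_answer(["model-a", "model-b", "model-c"], "Capital of France?"))  # Paris
```

The economics in the article are what make this viable: once inference is cheap enough, paying for three model calls instead of one buys a meaningful reliability improvement on tasks where the models' errors are uncorrelated.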

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models with these techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model, because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see whether knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »