Overcome Failing Document Ingestion & RAG Strategies with Agentic Knowledge Distillation

Introduction

Many generative AI use cases still revolve around Retrieval Augmented Generation (RAG), yet consistently fall short of user expectations. Despite the growing body of research on RAG improvements and even adding Agents into the process, many solutions still fail to return exhaustive results, miss information that is critical but infrequently mentioned in the documents, require multiple search iterations, and generally struggle to reconcile key themes across multiple documents. To top it all off, many implementations still rely on cramming as much “relevant” information as possible into the model’s context window alongside detailed system and user prompts. Reconciling all this information often exceeds the model’s cognitive capacity and compromises response quality and consistency.

This is where our Agentic Knowledge Distillation + Pyramid Search Approach comes into play. Instead of chasing the best chunking strategy, retrieval algorithm, or inference-time reasoning method, my teammates (Jim Brown, Mason Sawtell, and Sandi Besen) and I take an agentic approach to document ingestion.

We leverage the full capability of the model at ingestion time to focus exclusively on distilling and preserving the most meaningful information from the document dataset. This fundamentally simplifies the RAG process by allowing the model to direct its reasoning abilities toward addressing the user/system instructions rather than struggling to understand formatting and disparate information across document chunks. 

We specifically target high-value questions that are often difficult to evaluate because they have multiple correct answers or solution paths. These cases are where traditional RAG solutions struggle most, and existing RAG evaluation datasets are largely insufficient for testing this problem space. For our research implementation, we downloaded annual and quarterly reports from the last year for the 30 companies in the Dow Jones Industrial Average. These documents can be found through the SEC EDGAR website. The information on EDGAR is freely accessible and downloadable, and it can also be queried through EDGAR public searches. See the SEC privacy policy for additional details: information on the SEC website is “considered public information and may be copied or further distributed by users of the web site without the SEC’s permission”. We selected this dataset for two key reasons: first, it falls outside the knowledge cutoff for the models evaluated, ensuring that the models cannot respond to questions based on their knowledge from pre-training; second, it’s a close approximation for real-world business problems while allowing us to discuss and share our findings using publicly available data. 
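
For readers who want to reproduce the dataset, here is a minimal sketch of pulling a company’s recent 10-K and 10-Q filings from EDGAR’s public submissions API. The helper name, the example CIK, and the contact address in the User-Agent header are illustrative; the SEC only requires that the header identify the requester.

```python
import requests

# EDGAR's free JSON API requires a descriptive User-Agent header.
HEADERS = {"User-Agent": "research-demo contact@example.com"}  # illustrative

def list_recent_filings(cik: str, forms=("10-K", "10-Q")) -> list[dict]:
    """List a company's recent annual and quarterly filings by CIK."""
    url = f"https://data.sec.gov/submissions/CIK{int(cik):010d}.json"
    recent = requests.get(url, headers=HEADERS, timeout=30).json()["filings"]["recent"]
    return [
        {
            "form": form,
            "date": date,
            "url": (
                f"https://www.sec.gov/Archives/edgar/data/{int(cik)}/"
                f"{accession.replace('-', '')}/{doc}"
            ),
        }
        for form, date, accession, doc in zip(
            recent["form"], recent["filingDate"],
            recent["accessionNumber"], recent["primaryDocument"],
        )
        if form in forms
    ]

# Example: Apple Inc. (CIK 0000320193)
for filing in list_recent_filings("320193")[:5]:
    print(filing["form"], filing["date"], filing["url"])
```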

While typical RAG solutions excel at factual retrieval where the answer is easily identified in the document dataset (e.g., “When did Apple’s annual shareholders’ meeting occur?”), they struggle with nuanced questions that require a deeper understanding of concepts across documents (e.g., “Which of the Dow companies has the most promising AI strategy?”). Our Agentic Knowledge Distillation + Pyramid Search Approach addresses these types of questions with much greater success compared to other standard approaches we tested and overcomes limitations associated with using knowledge graphs in RAG systems. 

In this article, we’ll cover how our knowledge distillation process works, key benefits of this approach, examples, and an open discussion on the best way to evaluate these types of systems where, in many cases, there is no singular “right” answer.

Building the pyramid: How Agentic Knowledge Distillation works

AI-generated image showing a pyramid structure for document ingestion with labelled sections.
Image by author and team depicting pyramid structure for document ingestion. Robots meant to represent agents building the pyramid.

Overview

Our knowledge distillation process creates a multi-tiered pyramid of information from the raw source documents. Our approach is inspired by the image pyramids used in deep learning computer-vision tasks, which allow a model to analyze an image at multiple scales. We take the contents of the raw document, convert it to Markdown, and distill the content into a list of atomic insights, related concepts, document abstracts, and general recollections/memories. During retrieval, it’s possible to access any or all levels of the pyramid to respond to the user request. 
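
To make the tiers concrete before walking through the steps, here is a minimal sketch of how a single pyramid record might be modeled. The level names mirror the tiers described above, but the field names and types are our own illustration, not the team’s actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class PyramidLevel(str, Enum):
    # Bottom to top: raw pages, then increasingly distilled layers.
    PAGE = "page"
    INSIGHT = "insight"            # atomic SVO sentences per page
    CONCEPT = "concept"            # higher-level themes linking insights
    ABSTRACT = "abstract"          # one dense summary per document
    RECOLLECTION = "recollection"  # cross-document memories

@dataclass
class PyramidEntry:
    level: PyramidLevel
    text: str                   # natural-language content
    embedding: list[float]      # vector used for hybrid search
    source_document: str        # audit trail back to the source filing
    page: int | None = None     # set for page- and insight-level entries
```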

How to distill documents and build the pyramid: 

  1. Convert documents to Markdown: Convert all raw source documents to Markdown. We’ve found that models process Markdown best for this task compared with other formats like JSON, and it is more token-efficient. We used Azure Document Intelligence to generate the Markdown for each page of the document, but there are many other open-source libraries, like MarkItDown, that do the same thing. Our dataset included 331 documents and 16,601 pages. 
  2. Extract atomic insights from each page: We process documents using a two-page sliding window, which allows each page to be analyzed twice. This gives the agent the opportunity to correct any potential mistakes from its initial pass over a page. We instruct the model to create a numbered list of insights that grows as it processes the pages in the document. Because it sees each page twice, the agent can overwrite insights from the previous page if they were incorrect. We instruct the model to extract insights as simple sentences in subject-verb-object (SVO) format and to write them as if English is the second language of the user. This significantly improves performance by encouraging clarity and precision. Passing over each page multiple times and using the SVO format also helps solve the disambiguation problem, which is a huge challenge for knowledge graphs. The insight generation step is also particularly helpful for extracting information from tables, since the model captures the facts from the table in clear, succinct sentences. Our dataset produced 216,931 total insights, about 13 insights per page and 655 insights per document (see the sliding-window sketch after this list).
  3. Distilling concepts from insights: From the detailed list of insights, we identify higher-level concepts that connect related information about the document. This step significantly reduces noise and redundant information in the document while preserving essential information and themes. Our dataset produced 14,824 total concepts, about 1 concept per page and 45 concepts per document. 
  4. Creating abstracts from concepts: Given the insights and concepts in the document, the LLM writes an abstract that is both better than any abstract a human would write and more information-dense than any abstract present in the original document. The LLM-generated abstract provides comprehensive knowledge about the document in a compact form, carrying a significant amount of information in relatively few tokens. We produce one abstract per document, 331 total.
  5. Storing recollections/memories across documents: At the top of the pyramid we store critical information that is useful across all tasks. This can be information that the user shares about the task or information the agent learns about the dataset over time by researching and responding to tasks. For example, we can store the current 30 companies in the Dow as a recollection, since this list differs from the 30 companies in the Dow at the time of the model’s knowledge cutoff. As we conduct more research tasks, we can continuously improve our recollections and maintain an audit trail of which documents they originated from. For example, we can keep track of AI strategies across companies, where companies are making major investments, etc. These high-level connections are super important since they reveal relationships and information that are not apparent in a single page or document.
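
Here is the sliding-window sketch referenced in step 2, assuming a hypothetical `llm_extract` call that prompts the model with the current numbered insight list plus two pages of Markdown and returns the revised list. The function names and signature are illustrative, not the team’s actual code.

```python
def build_insights(pages: list[str], llm_extract) -> list[str]:
    """Two-page sliding window: interior pages appear in two windows,
    so the model gets a second chance to correct earlier insights.

    llm_extract(existing, window) is assumed to return the full,
    revised numbered list of SVO insights (hypothetical signature).
    """
    insights: list[str] = []
    for i in range(len(pages) - 1):
        window = pages[i] + "\n\n" + pages[i + 1]
        # The model may extend the list or overwrite entries it now
        # believes were wrong when it first saw the earlier page.
        insights = llm_extract(existing=insights, window=window)
    return insights
```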
Sample subset of insights extracted from IBM 10Q, Q3 2024 (page 4)

We store the text and embeddings for each layer of the pyramid (pages and up) in Azure PostgreSQL. We originally used Azure AI Search, but switched to PostgreSQL for cost reasons. This required us to write our own hybrid search function since PostgreSQL doesn’t yet natively support this feature. This implementation would work with any vector database or vector index of your choosing. The key requirement is to store and efficiently retrieve both text and vector embeddings at any level of the pyramid. 
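
As a rough illustration of what such a hybrid search function can look like, here is a minimal sketch over a pgvector-backed table, blending PostgreSQL full-text rank with cosine similarity. The table and column names and the equal 50/50 score weighting are assumptions for illustration, not the team’s actual implementation.

```python
import psycopg  # psycopg 3; assumes the pgvector extension is installed

HYBRID_SQL = """
WITH scored AS (
    SELECT id, level, text,
           ts_rank_cd(to_tsvector('english', text),
                      plainto_tsquery('english', %(q)s)) AS kw_score,
           1 - (embedding <=> %(qvec)s::vector) AS vec_score
    FROM pyramid_entries          -- hypothetical table name
    WHERE level = ANY(%(levels)s)
)
SELECT id, level, text, 0.5 * kw_score + 0.5 * vec_score AS score
FROM scored
ORDER BY score DESC
LIMIT %(k)s;
"""

def hybrid_search(conn, query: str, query_embedding: list[float],
                  levels: list[str], k: int = 10):
    """Blend keyword and vector scores; the weights here are illustrative."""
    with conn.cursor() as cur:
        cur.execute(HYBRID_SQL, {
            "q": query,
            "qvec": str(query_embedding),  # pgvector accepts '[0.1, ...]'
            "levels": levels,
            "k": k,
        })
        return cur.fetchall()
```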

This approach captures the essence of a knowledge graph but stores the information in natural language, the way an LLM natively wants to interact with it, and is more token-efficient at retrieval. We also let the LLM pick the terms used to categorize each level of the pyramid; this seemed to let the model decide for itself the best way to describe and differentiate between the information stored at each level. For example, the LLM preferred “insights” to “facts” as the label for the first level of distilled knowledge. Our goal in doing this was to better understand how an LLM thinks about the process by letting it decide how to store and group related information. 

Using the pyramid: How it works with RAG & Agents

At inference time, both traditional RAG and agentic approaches benefit from the pre-processed, distilled information ingested in our knowledge pyramid. The pyramid structure allows for efficient retrieval in both the traditional RAG case, where only the top X related pieces of information are retrieved, and the agentic case, where the Agent iteratively plans, retrieves, and evaluates information before returning a final response. 

The benefit of the pyramid approach is that information at any and all levels of the pyramid can be used during inference. For our implementation, we used PydanticAI to create a search agent that takes in the user request, generates search terms, explores ideas related to the request, and keeps track of information relevant to the request. Once the search agent determines there’s sufficient information to address the user request, the results are re-ranked and sent back to the LLM to generate a final reply. Our implementation allows a search agent to traverse the information in the pyramid as it gathers details about a concept/search term. This is similar to walking a knowledge graph, but in a way that’s more natural for the LLM since all the information in the pyramid is stored in natural language.
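
Below is a minimal sketch of how such a search agent can be wired up with PydanticAI, reusing the hypothetical `hybrid_search` helper above along with an assumed `embed()` function and database connection `conn`. The model choice, prompt, and single-tool design are illustrative; our actual agent also tracks gathered information and re-ranks results before the final reply.

```python
from pydantic_ai import Agent, RunContext

search_agent = Agent(
    "openai:gpt-4o",  # illustrative model choice
    system_prompt=(
        "Plan searches over a knowledge pyramid. Call search_pyramid "
        "with focused terms until you have enough information, then "
        "answer the user's request with citations to source documents."
    ),
)

@search_agent.tool
def search_pyramid(ctx: RunContext[None], terms: str) -> list[str]:
    """Retrieve distilled pyramid entries matching the search terms."""
    # conn, embed(), and hybrid_search() are the assumed helpers above.
    hits = hybrid_search(conn, terms, embed(terms),
                         levels=["insight", "concept",
                                 "abstract", "recollection"])
    return [text for _id, _level, text, _score in hits]

result = search_agent.run_sync(
    "What was IBM's total revenue in the latest financial reporting?"
)
print(result.output)
```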

Depending on the use case, the Agent could access information at all levels of the pyramid or only at specific levels (e.g., retrieving only information from the concepts level). For our experiments, we did not retrieve raw page-level data since we wanted to focus on token efficiency and found the LLM-generated information for the insights, concepts, abstracts, and recollections was sufficient for completing our tasks. In theory, the Agent could also have access to the page data; this would provide additional opportunities for the agent to re-examine the original document text; however, it would also significantly increase the total tokens used. 
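
With the hypothetical helper above, restricting the agent to specific tiers is just a matter of narrowing the `levels` argument, for example:

```python
# Concept-level search only; page-level text is deliberately excluded
# to keep token usage down (level names are illustrative).
hits = hybrid_search(conn, "AI strategy", embed("AI strategy"),
                     levels=["concept"])
```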

Here is a high-level visualization of our Agentic approach to responding to user requests:

Overview of the agentic research & response process
Image created by author and team providing an overview of the agentic research & response process

Results from the pyramid: Real-world examples

To evaluate the effectiveness of our approach, we tested it against a variety of question categories, including typical fact-finding questions and complex cross-document research and analysis tasks. 

Fact-finding (spear fishing): 

These tasks require identifying specific information or facts that are buried in a document. These are the types of questions typical RAG solutions target but often require many searches and consume lots of tokens to answer correctly. 

Example task: “What was IBM’s total revenue in the latest financial reporting?”

Example response using pyramid approach: “IBM’s total revenue for the third quarter of 2024 was $14.968 billion” [ibm-10q-q3-2024.pdf, pg. 4].

Screenshot of total tokens used to research and generate response
Total tokens used to research and generate response

This result is correct (human-validated) and was generated using only 9,994 total tokens, with 1,240 tokens in the generated final response. 

Complex research and analysis: 

These tasks involve researching and understanding multiple concepts to gain a broader understanding of the documents and make inferences and informed assumptions based on the gathered facts.

Example task: “Analyze the investments Microsoft and NVIDIA are making in AI and how they are positioning themselves in the market. The report should be clearly formatted.”

Example response:

Screenshot of the response generated by the agent analyzing AI investments and positioning for Microsoft and NVIDIA.
Response generated by the agent analyzing AI investments and positioning for Microsoft and NVIDIA.

The result is a comprehensive report that was generated quickly and contains detailed information about each of the companies. 26,802 total tokens were used to research and respond to the request, with a significant percentage of them used for the final response (2,893 tokens, or ~11%). These results were also reviewed by a human to verify their validity.

Screenshot of snippet indicating total token usage for the task
Snippet indicating total token usage for the task

Example task: “Create a report on analyzing the risks disclosed by the various financial companies in the DOW. Indicate which risks are shared and unique.”

Example response:

Screenshot of part 1 of a response generated by the agent on disclosed risks.
Part 1 of response generated by the agent on disclosed risks.
Screenshot of part 2 of a response generated by the agent on disclosed risks.
Part 2 of response generated by the agent on disclosed risks.

Similarly, this task was completed in 42.7 seconds and used 31,685 total tokens, with 3,116 tokens used to generate the final report. 

Screenshot of a snippet indicating total token usage for the task
Snippet indicating total token usage for the task

These results for both fact-finding and complex analysis tasks demonstrate that the pyramid approach efficiently creates detailed reports with low latency using a minimal number of tokens. The tokens used for the tasks carry dense meaning with little noise, allowing for high-quality, thorough responses across tasks.

Benefits of the pyramid: Why use it?

Overall, we found that our pyramid approach provided a significant boost in response quality and overall performance for high-value questions. 

Some of the key benefits we observed include: 

  • Reduced cognitive load on the model: When the agent receives the user task, it retrieves pre-processed, distilled information rather than raw, inconsistently formatted, disparate document chunks. This fundamentally improves the retrieval process since the model doesn’t waste its cognitive capacity trying to break down the page/chunk text for the first time. 
  • Superior table processing: By breaking down table information and storing it in concise but descriptive sentences, the pyramid approach makes it easier to retrieve relevant information at inference time through natural language queries. This was particularly important for our dataset since financial reports contain lots of critical information in tables. 
  • Improved response quality for many types of requests: The pyramid enables more comprehensive, context-aware responses to both precise, fact-finding questions and broad, analysis-based tasks that involve many themes across numerous documents. 
  • Preservation of critical context: Since the distillation process identifies and keeps track of key facts, important information that might appear only once in a document is easier to preserve. For example, noting that all tables are reported in millions of dollars or in a particular currency. Traditional chunking methods often cause this type of information to slip through the cracks. 
  • Optimized token usage, memory, and speed: By distilling information at ingestion time, we significantly reduce the number of tokens required during inference, are able to maximize the value of information put in the context window, and improve memory use. 
  • Scalability: Many solutions struggle to perform as the size of the document dataset grows. This approach provides a much more efficient way to manage a large volume of text by preserving only critical information. It also allows for more efficient use of the LLM’s context window by sending it only useful, clear information.
  • Efficient concept exploration: The pyramid enables the agent to explore related information similar to navigating a knowledge graph, but does not require ever generating or maintaining relationships in the graph. The agent can use natural language exclusively and keep track of important facts related to the concepts it’s exploring in a highly token-efficient and fluid way. 
  • Emergent dataset understanding: An unexpected benefit of this approach emerged during our testing. When asking questions like “what can you tell me about this dataset?” or “what types of questions can I ask?”, the system is able to respond and suggest productive search topics because it has a more robust understanding of the dataset context by accessing higher levels in the pyramid like the abstracts and recollections. 

Beyond the pyramid: Evaluation challenges & future directions

Challenges

While the results we’ve observed when using the pyramid search approach have been nothing short of amazing, finding ways to establish meaningful metrics to evaluate the entire system both at ingestion time and during information retrieval is challenging. Traditional RAG and Agent evaluation frameworks often fail to address nuanced questions and analytical responses where many different responses are valid.

Our team plans to write a research paper on this approach in the future, and we are open to any thoughts and feedback from the community, especially when it comes to evaluation metrics. Many of the existing datasets we found were focused on evaluating RAG use cases within one document or precise information retrieval across multiple documents rather than robust concept and theme analysis across documents and domains. 

The main use cases we are interested in relate to broader questions that are representative of how businesses actually want to interact with GenAI systems. For example, “tell me everything I need to know about customer X” or “how do the behaviors of Customer A and B differ? Which am I more likely to have a successful meeting with?”. These types of questions require a deep understanding of information across many sources. The answers typically require a person to synthesize data from multiple areas of the business and think critically about it. As a result, the answers to these questions are rarely written down or saved anywhere, which makes it impossible to simply store and retrieve them through a vector index in a typical RAG process. 

Another consideration is that many real-world use cases involve dynamic datasets where documents are consistently being added, edited, and deleted. This makes it difficult to evaluate and track what a “correct” response is since the answer will evolve as the available information changes. 

Future directions

In the future, we believe that the pyramid approach can address some of these challenges by enabling more effective processing of dense documents and storing learned information as recollections. However, tracking and evaluating the validity of the recollections over time will be critical to the system’s overall success and remains a key focus area for our ongoing work. 

When applying this approach to organizational data, the pyramid process could also be used to identify and assess discrepancies across areas of the business. For example, uploading all of a company’s sales pitch decks could surface where certain products or services are being positioned inconsistently. It could also be used to compare insights extracted from various line of business data to help understand if and where teams have developed conflicting understandings of topics or different priorities. This application goes beyond pure information retrieval use cases and would allow the pyramid to serve as an organizational alignment tool that helps identify divergences in messaging, terminology, and overall communication. 

Conclusion: Key takeaways and why the pyramid approach matters

The knowledge distillation pyramid approach is significant because it leverages the full power of the LLM at both ingestion and retrieval time. Our approach allows you to store dense information in fewer tokens, which has the added benefit of reducing noise in the dataset at inference. Our approach also runs very quickly and is incredibly token-efficient: we are able to generate responses within seconds, explore potentially hundreds of searches, and keep average token usage low (this includes all the search iterations!). 

We find that the LLM is much better at writing atomic insights as sentences and that these insights effectively distill information from both text-based and tabular data. This distilled information, written in natural language, is very easy for the LLM to understand and navigate at inference since it does not have to expend unnecessary energy reasoning about and breaking down document formatting or filtering through noise.

The ability to retrieve and aggregate information at any level of the pyramid also provides significant flexibility to address a variety of query types. This approach offers promising performance for large datasets and enables high-value use cases that require nuanced information retrieval and analysis. 


Note: The opinions expressed in this article are solely my own and do not necessarily reflect the views or policies of my employer.

Interested in discussing further or collaborating? Reach out on LinkedIn!

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

IP Fabric 7.9 boosts visibility across hybrid environments

Multicloud and hybrid network viability has also been extended to include IPv6 path analysis, helping teams reason about connectivity in dual-stack and hybrid environments. This capability addresses a practical challenge for enterprises deploying IPv6 alongside existing IPv4 infrastructure. Network teams can now validate that applications can reach IPv6 endpoints and

Read More »

Veteran Gas Executive Leaving Mercuria

Steve Hill, who was hired by Mercuria Energy Group in 2024 to build out its liquefied natural gas business, is leaving the trading house. Hill was part of the company’s efforts to expand into the fast-growing global LNG market. Before joining, he was responsible for the vast LNG, gas and power marketing and trading business at energy giant Shell Plc. He was one of a trio of heavyweight hires Mercuria made after reaping bumper profits, setting off a renewed push into trading physical commodities, along with Kostas Bintas in metals and Nick O’Kane in gas and power. Known as one of the world’s biggest traders of oil and gas, the firm has been a relative latecomer behind other trading house rivals in building out a large-scale physical trading business for LNG. During Hill’s relatively brief tenure, Mercuria signed deals to offtake LNG from Oman, as well as supply Turkey and China. He also hired several of his former colleagues from Shell, though one — Singapore-based Dong Yuan — recently left the company. A spokesperson for Mercuria confirmed Hill is leaving the company. Hill didn’t immediately respond to a request for comment. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Read More »

Crude Settles Higher After Volatile Week

Oil edged higher at the end of a volatile week, as traders weighed tensions in Iran and positive sentiment in wider markets. West Texas Intermediate settled near $60 a barrel after plunging 4.6% on Thursday, the most since June. President Donald Trump said in a social media post that he “greatly” respects Iran’s decision to cancel scheduled hangings of protesters. His rhetoric over recent days has reduced expectations of an immediate US response to violent protests in the Islamic Republic, which could have led to disruptions to the country’s roughly 3.3 million barrel-per-day oil production, as well as shipping. Nevertheless, Washington is boosting its military presence in the Middle East. At least one aircraft carrier is moving into the region and other military assets are expected to be shifted there in the coming days and weeks, Fox News reported, citing military sources. Traders have in the past covered bearish wagers ahead of the weekend in periods of heightened geopolitical risks. “While the risk of imminent intervention from the US against Iran has subsided, it’s pretty clear that the risk is still present, which should keep the market on its toes in the short term,” said Warren Patterson, head of commodities strategy at ING Groep NV. “However, the longer this goes on without a US response, the risk premium will continue to evaporate, allowing more bearish fundamentals to take center stage.” Disruption to Kazakh exports from the Black Sea, short-term tightness in the North Sea and a host of financial flows from options markets to commodity index rebalancing have also helped lift an oil market coming off its biggest drop since 2020 on rising supplies. In a sign that lower prices are starting to bite, Harold Hamm, the billionaire wildcatter who helped kick off the US shale revolution, said his firm

Read More »

U.S. Energy Secretary and Slovakia’s Prime Minister Sign Agreement to Advance U.S.-Slovakia Civil Nuclear Program

WASHINGTON—U.S. Secretary of Energy Chris Wright and Slovak Prime Minister Robert Fico today signed an Intergovernmental Agreement (IGA) to advance cooperation on Slovakia’s civil nuclear power program. This landmark agreement includes the development of a new, state-owned American 1,200 MWe nuclear unit at the Jaslovské Bohunice Nuclear Power Plant, deepening the U.S.-Slovakia strategic partnership and strengthening European energy security. The agreement builds on President Trump’s commitment to advancing American energy leadership. A project of this scale is expected to create thousands of American jobs across engineering, advanced manufacturing, construction, nuclear fuel services, and project management, while reinforcing U.S. supply chains and expanding access to global markets for American-made nuclear technology. These efforts lay the foundation for sustained U.S. engagement in Slovakia’s nuclear energy program and support future civil nuclear projects across the region. It also supports Slovakia’s efforts to diversify its energy supply, strengthen long-term energy security, and integrate advanced American nuclear technology into Central Europe’s energy infrastructure. “The United States is proud to partner with Slovakia as a trusted ally as we expand cooperation across the energy sector,” said Energy Secretary Chris Wright. “Today’s civil nuclear agreement reflects our shared commitment to strengthening European energy security and sovereignty for decades to come. By deploying America’s leading nuclear technology, we are creating thousands of good-paying American jobs, expanding global markets for U.S. nuclear companies, and driving economic growth at home”. “I see this moment as a significant milestone in our bilateral relations, but also as a clear signal that Slovakia and the United States are united by a common strategic thinking about the future of energy – about its safety, sustainability, and technological maturity,” said the Prime Minister of the Slovak Republic Robert Fico. The planned nuclear unit represents a multibillion-dollar energy infrastructure investment and one of the largest in

Read More »

Valero to Cut 200+ Jobs as California Refinery Closes

Valero Energy Corp. plans to let go of 237 employees at its Benicia refinery as it winds down operations at one of California’s few remaining fuel-making plants. Valero expects the shutdown to be permanent and 237 jobs will be cut March 15 to July 1, the company said in a letter to California’s employment regulator and local officials. Those losing jobs are not represented by a union and represent the bulk of the plant’s 348-person staff.  “We do not plan to coordinate services with the local workforce development board or any other entity,” refinery manager Lauren Bird, whose position is being eliminated, said in the letter. The Texas-based oil company announced in 2025 plans to close the plant and last-ditch efforts by Governor Gavin Newsom, regulators and local officials to keep the gates open were unsuccessful. Multiple California refineries have closed or converted to making biofuels in recent years, dwindling fuel supply in a state where drivers regularly pay the highest gasoline prices in the nation. Last week, Newsom praised plans by Valero to continue supplying the state with gasoline amid the shutdown, saying the decision to import fuel to the region was a constructive development from an earlier possibility of a full-on exit. WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to editorial review. Off-topic, inappropriate or insulting comments will be removed.

Read More »

Trump Administration Calls for Emergency Power Auction to Build Big Power Plants Again

WASHINGTON—U.S. Secretary of Energy Chris Wright and Secretary of the Interior Doug Burgum, vice-chair and chair of the National Energy Dominance Council (NEDC) respectively, today joined Mid-Atlantic governors urging PJM Interconnection, L.L.C. (PJM) to temporarily overhaul its market rules to strengthen grid reliability and reduce electricity costs for American families and businesses by building more than $15 billion of reliable baseload power generation.  The initiative calls on PJM to conduct an emergency procurement auction to address escalating electricity prices and growing reliability risks across the mid-Atlantic region of the United States. The action follows a series of PJM policies over the years that have weakened the electric grid, including the premature shutdown of reliable power generation.  President Trump declared a National Energy Emergency on his first day in office, warning that the previous administrations energy subtraction agenda left the country vulnerable to blackouts and soaring electricity prices. During the Biden administration, PJM forced nearly 17 gigawatts of reliable baseload power generation offline. For the first time in history, PJM’s capacity auction failed to secure enough generation resources to meet basic reliability requirements. If not fixed, it will lead to further rising prices and blackouts.  “High electricity prices are a choice,” said Energy Secretary Chris Wright. “The Biden administration’s forceful closures of coal and natural gas plants without reliable replacements left the United States in an energy emergency. Perhaps no region in America is more at risk than in PJM. That’s why President Trump asked governors across the Mid-Atlantic to come together and call upon PJM to allow America to build big reliable power plants again. Our directives will restore affordable and reliable electricity so American families thrive and America’s manufacturing industries once again boom. President Trump promised to unleash American energy and put the American people first. This plan keeps

Read More »

Russian Oil and Gas Revenue Falls to Lowest in 5 Years

Russia’s revenues from its oil and gas industry, vital to financing its war in Ukraine, dropped to a five-year low in 2025 as crude prices slumped and gas exports declined. The nation’s budget received a total of 8.48 trillion rubles ($108 billion) in oil and gas taxes last year, Finance Ministry said on Thursday. That’s 24 percent less than in 2024 and the lowest level since the start of the decade, historic figures show.  Russia, a top-three global oil producer and home to the world’s largest gas reserves, heavily relies on tax revenues from the two industries to fill its state coffers. The decline, mainly driven by a combination of weaker global oil prices, stronger ruble and energy sanctions against Russia, comes as the Kremlin has boosted military spending significantly above what it planned to fund the war, which is about to enter a fifth year. To bridge the widening gap between revenues and spending, the government in Moscow has eaten into more than half of the country’s National Wellbeing Fund – a buffer against economic shocks – and turned to expensive borrowings that will take years to pay back.   Oil revenues dropped more than 22 percent year on year to 7.13 trillion rubles, reaching the lowest level since 2023, Bloomberg calculations show. Concerns about an oversupply in the global crude market, and discounts for Russian barrels in particular due to western sanctions, hit the flow of money into state coffers. The official data show that the average price of Urals, Russia’s main oil-export blend, for tax purposes was $57.65 a barrel in 2025, a 15 percent drop from a year earlier.   Starting from November, when the US blacklisted two major oil producers Rosneft PJSC and Lukoil PJSC, the discount of Urals to the Brent benchmark widened to about $27 a barrel at

Read More »

NVIDIA’s Rubin Redefines the AI Factory

The Architecture Shift: From “GPU Server” to “Rack-Scale Supercomputer” NVIDIA’s Rubin architecture is built around a single design thesis: “extreme co-design.” In practice, that means GPUs, CPUs, networking, security, software, power delivery, and cooling are architected together; treating the data center as the compute unit, not the individual server. That logic shows up most clearly in the NVL72 system. NVLink 6 serves as the scale-up spine, designed to let 72 GPUs communicate all-to-all with predictable latency, something NVIDIA argues is essential for mixture-of-experts routing and synchronization-heavy inference paths. NVIDIA is not vague about what this requires. Its technical materials describe the Rubin GPU as delivering 50 PFLOPS of NVFP4 inference and 35 PFLOPS of NVFP4 training, with 22 TB/s of HBM4 bandwidth and 3.6 TB/s of NVLink bandwidth per GPU. The point of that bandwidth is not headline-chasing. It is to prevent a rack from behaving like 72 loosely connected accelerators that stall on communication. NVIDIA wants the rack to function as a single engine because that is what it will take to drive down cost per token at scale. The New Idea NVIDIA Is Elevating: Inference Context Memory as Infrastructure If there is one genuinely new concept in the Rubin announcements, it is the elevation of context memory, and the admission that GPU memory alone will not carry the next wave of inference. NVIDIA describes a new tier called NVIDIA Inference Context Memory Storage, powered by BlueField-4, designed to persist and share inference state (such as KV caches) across requests and nodes for long-context and agentic workloads. NVIDIA says this AI-native context tier can boost tokens per second by up to 5× and improve power efficiency by up to 5× compared with traditional storage approaches. The implication is clear: the path to cheaper inference is not just faster GPUs.

Read More »

Power shortages, carbon capture, and AI automation: What’s ahead for data centers in 2026

“Despite a broader use of AI tools in enterprises and by consumers, that does not mean that AI compute, AI infrastructure in general, will be more evenly spread out,” said Daniel Bizo, research director at Uptime Institute, during the webinar. “The concentration of AI compute infrastructure is only increasing in the coming years.” For enterprises, the infrastructure investment remains relatively modest, Uptime Institute found. Enterprises will limit investment to inference and only some training, and inference workloads don’t require dramatic capacity increases. “Our prediction, our observation, was that the concentration of AI compute infrastructure is only increasing in the coming years by a couple of points. By the end of this year, 2026, we are projecting that around 10 gigawatts of new IT load will have been added to the global data center world, specifically to run generative AI workloads and adjacent workloads, but definitely centered on generative AI,” Bizo said. “This means these 10 gigawatts or so load, we are talking about anywhere between 13 to 15 million GPUs and accelerators deployed globally. We are anticipating that a majority of these are and will be deployed in supercomputing style.” 2. Developers will not outrun the power shortage The most pressing challenge facing the industry, according to Uptime, is that data centers can be built in less than three years, but power generation takes much longer. “It takes three to six years to deploy a solar or wind farm, around six years for a combined-cycle gas turbine plant, and even optimistically, it probably takes more than 10 years to deploy a conventional nuclear power plant,” said Max Smolaks, research analyst at Uptime Institute. This mismatch was manageable when data centers were smaller and growth was predictable, the report notes. But with projects now measured in tens and sometimes hundreds of

Read More »

Google warns transmission delays are now the biggest threat to data center expansion

The delays stem from aging transmission infrastructure unable to handle concentrated power demands. Building regional transmission lines currently takes seven to eleven years just for permitting, Hanna told the gathering. Southwest Power Pool has projected 115 days of potential loss of load if transmission infrastructure isn’t built to match demand growth, he added.

These systemic delays are forcing enterprises to reconsider fundamental assumptions about cloud capacity. Regions including Northern Virginia and Santa Clara that were prime locations for hyperscale builds are running out of power capacity. The infrastructure constraints are also reshaping cloud competition around power access rather than technical capabilities. “This is no longer about who gets to market with the most GPU instances,” Gogia said. “It’s about who gets to the grid first.”

Co-location emerges as a faster alternative to grid delays

Unable to wait years for traditional grid connections, hyperscalers are pursuing co-location arrangements that place data centers directly adjacent to power plants, bypassing the transmission system entirely. Pricing for these arrangements has jumped 20% in power-constrained markets as demand outstrips availability, with costs flowing through to cloud customers via regional pricing differences, Gogia said.

Google is exploring such arrangements, though Hanna said the company’s “strong preference is grid-connected load.” “This is a speed to power play for us,” he said, noting Google wants facilities to remain “front of the meter” to serve the broader grid rather than operating as isolated power sources. Other hyperscalers are negotiating directly with utilities, acquiring land near power plants, and exploring ownership stakes in power infrastructure from batteries to small modular nuclear reactors, Hanna said.

Read More »

OpenAI turns to Cerebras in a mega deal to scale AI inference infrastructure

Analysts expect AI workloads to grow more varied and more demanding in the coming years, driving the need for architectures tuned for inference performance and putting added pressure on data center networks. “This is prompting hyperscalers to diversify their computing systems, using Nvidia GPUs for general-purpose AI workloads, in-house AI accelerators for highly optimized tasks, and systems such as Cerebras for specialized low-latency workloads,” said Neil Shah, vice president for research at Counterpoint Research.

As a result, AI platforms operating at hyperscale are pushing infrastructure providers away from monolithic, general-purpose clusters toward more tiered and heterogeneous infrastructure strategies. “OpenAI’s move toward Cerebras inference capacity reflects a broader shift in how AI data centers are being designed,” said Prabhu Ram, VP of the industry research group at Cybermedia Research. “This move is less about replacing Nvidia and more about diversification as inference scales.” At this level, infrastructure begins to resemble an AI factory, where city-scale power delivery, dense east–west networking, and low-latency interconnects matter more than peak FLOPS, Ram added.

“At this magnitude, conventional rack density, cooling models, and hierarchical networks become impractical,” said Manish Rawat, semiconductor analyst at TechInsights. “Inference workloads generate continuous, latency-sensitive traffic rather than episodic training bursts, pushing architectures toward flatter network topologies, higher-radix switching, and tighter integration of compute, memory, and interconnect.”
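Operationally, a “tiered and heterogeneous” strategy amounts to a scheduler that routes each request to the right class of hardware. The sketch below is purely illustrative: the tier names, latency budget, and routing rules are hypothetical, not any vendor’s actual scheduler.

```python
# Illustrative sketch of tiered, heterogeneous inference routing as the
# analysts describe it. Tier names and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Request:
    tokens: int
    latency_budget_ms: int
    model_family: str  # e.g. "general" or "in_house_optimized"

def route(req: Request) -> str:
    # Latency-critical traffic goes to the specialized low-latency tier.
    if req.latency_budget_ms < 100:
        return "low_latency_tier"        # e.g. wafer-scale systems
    # Heavily optimized internal workloads go to in-house accelerators.
    if req.model_family == "in_house_optimized":
        return "custom_accelerator_tier"
    # Everything else lands on the general-purpose GPU fleet.
    return "general_gpu_tier"

print(route(Request(tokens=256, latency_budget_ms=50, model_family="general")))
# -> low_latency_tier
```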

Read More »

Cisco’s 2026 agenda prioritizes AI-ready infrastructure, connectivity

While most of the demand for AI data center capacity today comes from hyperscalers and neocloud providers, that will change as enterprise customers delve deeper into the AI networking world. “The other ecosystem members and enterprises themselves are becoming responsible for an increasing proportion of the AI infrastructure buildout as inferencing and agentic AI, sovereign cloud, and edge AI become more mainstream,” Katz wrote.

More enterprises will move to host AI on premises via the introduction of AI agents designed to inject intelligent insight into applications and help improve operations. That’s where the AI impact on enterprise network traffic will appear, suggests Nolle. “Enterprises need to host AI to create AI network impact. Just accessing it doesn’t do much to traffic. Having cloud agents access local data center resources (RAG etc.) creates a governance issue for most corporate data, so that won’t go too far either,” Nolle said.

“Enterprises are looking at AI agents, not the way hyperscalers tout agentic AI, but agents running on small models, often open-source, and are locally hosted. This is where real AI traffic will develop, and Cisco could be vulnerable if they don’t understand this point and at least raise it in dialogs where AI hosting comes up,” Nolle said. “I don’t expect they’d go too far, because the real market for enterprise AI networking is probably a couple years out.”

Meanwhile, observers expect Cisco to continue bolstering AI networking capabilities for enterprise branch, campus and data centers as well as hyperscalers, including through optical support and other gear.
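For a concrete picture of the pattern Nolle describes, here is a minimal sketch of an agent calling a small, locally hosted open-source model. The endpoint URL and model name are assumptions; many local runtimes expose an OpenAI-compatible HTTP API like the one assumed here, which is the point Nolle makes: the inference traffic stays on the enterprise network.

```python
# Minimal sketch of an agent calling a locally hosted small model.
# The endpoint URL and model name are hypothetical; many local runtimes
# expose an OpenAI-compatible HTTP API resembling the one assumed here.
# Requires a local model server to be running for the call to succeed.

import json
import urllib.request

LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumption

def ask_local_model(prompt: str) -> str:
    payload = {
        "model": "small-open-model",  # hypothetical locally hosted model
        "messages": [{"role": "user", "content": prompt}],
    }
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# All of this traffic is east-west within the enterprise network; no tokens
# leave the premises, which is the governance point Nolle raises.
print(ask_local_model("Summarize today's change tickets."))
```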

Read More »

Microsoft tells communities it will ‘pay its way’ as AI data center resource usage sparks backlash

It will work with utilities and public commissions to set the rates it pays high enough to cover data center electricity costs (including build-outs, additions, and active use). “Our goal is straightforward: To ensure that the electricity cost of serving our data centers is not passed on to residential customers,” Smith emphasized. For example, the company is supporting a new rate structure in Wisconsin that would charge a class of “very large customers,” including data centers, the true cost of the electricity required to serve them.

It will collaborate “early, closely, and transparently” with local utilities to add electricity and supporting infrastructure to existing grids when needed. For instance, Microsoft has contracted with the Midcontinent Independent System Operator (MISO) to add 7.9GW of new electricity generation to the grid, “more than double our current consumption,” Smith noted.

It will pursue ways to make data centers more efficient. For example, it is already experimenting with AI to improve planning, extract more electricity from existing infrastructure, improve system resilience, and speed development of new infrastructure and technologies (like nuclear energy).

It will advocate for state and national public policies that ensure electricity access that is affordable, reliable, and sustainable in neighboring communities. Microsoft previously established priorities for electricity policy advocacy, Smith noted, but “progress has been uneven. This needs to change.”

Microsoft is similarly committed when it comes to data center water use, promising four actions: Reducing the overall amount of water its data centers use, initially improving it by 40% by 2030. The company is exploring innovations in cooling, including closed-loop systems that recirculate cooling liquids. It will collaborate with local utilities to map out water, wastewater, and pressure needs, and will “fully fund” infrastructure required for growth. For instance, in Quincy, Washington, Microsoft helped construct a water reuse utility that recirculates
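The MISO and water figures above imply a couple of quick bounds, sketched below. These are inferences from Smith’s quotes, not numbers Microsoft has disclosed.

```python
# Quick arithmetic implied by the commitments quoted above. These are
# inferences from Smith's statements, not disclosed Microsoft figures.

new_generation_gw = 7.9  # contracted with MISO

# "More than double our current consumption" implies current consumption
# is below half of the new generation being added.
print(f"Implied current consumption: under {new_generation_gw / 2:.2f} GW")
# -> Implied current consumption: under 3.95 GW

# A 40% improvement in water use by 2030 means a data center would run on
# 60% of its baseline volume on the stated timeline.
baseline_water = 1.0  # normalized baseline usage
print(f"2030 target usage: {baseline_water * (1 - 0.40):.0%} of baseline")
# -> 2030 target usage: 60% of baseline
```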

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).

In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion.

The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.
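Put side by side, the figures in this passage show how steep the ramp is. A small sketch comparing them, using the numbers as reported above; the ratios are simple arithmetic, not analyst estimates.

```python
# Capex figures as quoted above, in billions of dollars. The growth ratios
# against Microsoft's 2020 baseline are simple arithmetic.

capex_billions = {
    "Microsoft 2020 (actual)": 17.6,
    "Microsoft CY2025 (Bloomberg est.)": 62.4,
    "Microsoft FY2025 (company figure)": 80.0,
}

baseline = capex_billions["Microsoft 2020 (actual)"]
for label, value in capex_billions.items():
    print(f"{label}: ${value:.1f}B ({value / baseline:.1f}x the 2020 level)")
# Microsoft FY2025 (company figure): $80.0B (4.5x the 2020 level)
```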

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and it’s back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. That makes it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize in their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
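The “LLM as a judge” idea Witteveen alludes to is straightforward to sketch: ask several cheaper models to grade the same output and take a majority vote. Everything below is a hypothetical illustration; judge_with_model() is a stub standing in for real API calls.

```python
# Hypothetical sketch of an LLM-as-judge ensemble with majority voting.
# judge_with_model() is a stub; in practice it would call each model's API.

from collections import Counter

def judge_with_model(model: str, answer: str) -> str:
    """Stand-in for asking `model` to grade `answer` as 'pass' or 'fail'."""
    # Stubbed so the sketch runs; replace with a real API call.
    return "pass" if answer.strip() else "fail"

def majority_verdict(answer: str, judges: list[str]) -> str:
    # Tally one vote per judge model and return the most common verdict.
    votes = Counter(judge_with_model(m, answer) for m in judges)
    return votes.most_common(1)[0][0]

print(majority_verdict("Refund issued per policy.", ["judge-a", "judge-b", "judge-c"]))
# -> pass
```

As models get cheaper, adding a third or fifth judge becomes mostly an API-cost decision rather than an architectural one, which is why the trend points toward multi-model evaluation.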

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models using these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and the U.S. National Institute of Standards and Technology (NIST), which had all released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle
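The loop the second paper describes can be sketched schematically: generate candidate attacks, score them with automated rewards that favor both success and novelty, and reinforce the generator. Every function below is a hypothetical placeholder meant to show the shape of the loop, not OpenAI’s implementation.

```python
# Schematic sketch of automated red teaming with auto-generated rewards and
# a multi-step loop. All functions are hypothetical placeholders.

import random

def generate_attack(history: list[str]) -> str:
    # Placeholder: an attacker model would propose a multi-step prompt here.
    return f"attack-variant-{random.randint(0, 9999)}"

def reward(attack: str, history: list[str]) -> float:
    # Auto-generated reward: did the attack succeed, and is it novel?
    success = random.random()                 # stand-in for a grader model
    novelty = 0.0 if attack in history else 1.0
    return success * novelty                  # repeats earn zero reward

history: list[str] = []
for step in range(5):                         # multi-step loop (schematic)
    attack = generate_attack(history)
    score = reward(attack, history)
    history.append(attack)
    print(f"step {step}: {attack} reward={score:.2f}")
    # A real system would update the attacker policy on `score` here.
```

The novelty term is what pushes the generator toward a broad spectrum of attacks rather than rediscovering the same exploit, which is the “diverse and effective” claim in the paper’s title.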

Read More »