Graphrag local search vs global search vs normal RAG #1877
Open
majidsh97 wants to merge 1 commit into microsoft:main
Conversation
Description
This pull request introduces a new Jupyter Notebook example that demonstrates how to compare the results from standard RAG, GraphRAG with local search, and GraphRAG with global search.
The goal is to offer users a clear, hands-on way to evaluate the effectiveness of different GraphRAG search strategies against a baseline RAG implementation for their own datasets and questions.
Proposed Changes
Input: The notebook takes a path to the GraphRAG root directory and a set of questions for comparison.
Generation: It generates answers for the provided questions using three distinct methods: standard RAG, GraphRAG local search, and GraphRAG global search (see the sketch below).
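A minimal sketch of the generation step, assuming an already-built index under the root directory and the `graphrag` CLI on the path; the flag names vary between GraphRAG releases and the root path `./ragtest` is a placeholder, so treat the invocation as illustrative rather than the notebook's exact code:

```python
import subprocess

def graphrag_answer(root: str, method: str, question: str) -> str:
    """Query an existing GraphRAG index via the CLI and return its raw output.

    `method` is "local" or "global"; exact flags may differ across
    GraphRAG versions, so adjust to the installed release.
    """
    result = subprocess.run(
        ["graphrag", "query", "--root", root, "--method", method, "--query", question],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

def baseline_rag_answer(question: str) -> str:
    """Placeholder for the standard-RAG baseline (hypothetical).

    In the notebook this would embed the question, retrieve top-k text
    chunks from a vector store, and ask the LLM to answer from them.
    """
    raise NotImplementedError

questions = ["What are the main themes of the dataset?"]
answers = {
    q: {
        "local": graphrag_answer("./ragtest", "local", q),
        "global": graphrag_answer("./ragtest", "global", q),
    }
    for q in questions
}
```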
Evaluation: The generated answers from all three methods are then passed to an LLM that is prompted to rate each answer on completeness, directness, empowerment, and diversity (see the judge sketch below).
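A sketch of the rating step, using the OpenAI chat client as a stand-in for whichever model the notebook configures; the prompt wording, model name, and JSON schema here are assumptions for illustration, not the notebook's actual prompt:

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CRITERIA = ["completeness", "directness", "empowerment", "diversity"]

JUDGE_PROMPT = """You are grading an answer to the question below.
Rate the answer from 1 (poor) to 10 (excellent) on each criterion:
{criteria}.
Respond with a JSON object mapping each criterion to an integer score.

Question: {question}
Answer: {answer}"""

def rate_answer(question: str, answer: str, model: str = "gpt-4o-mini") -> dict:
    """Ask the judge LLM for per-criterion scores and parse them as JSON."""
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(
                criteria=", ".join(CRITERIA), question=question, answer=answer
            ),
        }],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)
```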
Output: The notebook produces:
Four Parquet tables containing the questions, the generated answers from each method, and their corresponding LLM-generated scores.
A plot that visually summarizes the scores, allowing easy comparison of performance across the different RAG methods (a plotting sketch follows this list).
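Building on the earlier sketches (`answers`, `rate_answer`, `CRITERIA`), here is one way the Parquet output and summary plot could look with pandas and matplotlib; the table layout and column names are hypothetical, not the notebook's exact schema:

```python
import pandas as pd
import matplotlib.pyplot as plt

# One row per (question, method) with the judge's per-criterion scores.
scores = pd.DataFrame([
    {"question": q, "method": m, **rate_answer(q, a)}
    for q, per_method in answers.items()
    for m, a in per_method.items()
])

# Persist one Parquet table per method (requires pyarrow or fastparquet).
for method, group in scores.groupby("method"):
    group.to_parquet(f"scores_{method}.parquet", index=False)

# Bar chart of the mean score per criterion, grouped by method.
scores.groupby("method")[CRITERIA].mean().T.plot(kind="bar")
plt.ylabel("mean LLM rating")
plt.title("RAG method comparison")
plt.tight_layout()
plt.savefig("comparison.png")
```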
- [x] I have tested these changes locally.
- [x] I have reviewed the code changes.
- [x] I have updated the documentation (adding a mention of the new example).
- [ ] I have added appropriate unit tests (less common for example notebooks).