Fix MCP server documentation to reference correct entry point #1179
Open
andreibogdan wants to merge 1 commit into getzep:main from andreibogdan:main
+8 −8
Conversation
Member
All contributors have signed the CLA ✍️ ✅
Contributor (Author)
I have read the CLA Document and I hereby sign the CLA
Contributor (Author)
recheck
danielchalef added a commit that referenced this pull request (Jan 27, 2026)
danielchalef approved these changes (Jan 30, 2026)
Update all documentation references from graphiti_mcp_server.py to main.py. The old filename was causing "No such file or directory" errors when users tried to run the commands as documented. The actual entry point is main.py in the mcp_server directory.

Changes:
- Update 7 command examples in README.md
- Update example configuration file with correct path

Co-Authored-By: Claude (us.anthropic.claude-sonnet-4-5-20250929-v1:0) <noreply@anthropic.com>
Signed-off-by: Andrei Bogdan <166901+andreibogdan@users.noreply.github.com>
andreibogdan pushed a commit to andreibogdan/graphiti that referenced this pull request (Jan 31, 2026)
prasmussen15 added a commit that referenced this pull request (Feb 6, 2026)
* Fix MCP server documentation to reference correct entry point

  Update all documentation references from graphiti_mcp_server.py to main.py. The old filename was causing "No such file or directory" errors when users tried to run the commands as documented. The actual entry point is main.py in the mcp_server directory.
  - Update 7 command examples in README.md
  - Update example configuration file with correct path

* @andreibogdan has signed the CLA in #1179

* Add extracted edge facts to entity summaries (#1182)
  * Add extracted edge facts to entity summaries: update _extract_entity_summary to include facts from edges connected to each node. Edge facts are appended to the existing summary, and LLM summarization is only triggered if the combined content exceeds the character limit.
    - Add edges parameter to extract_attributes_from_nodes and related functions
    - Filter edges per node before passing to attribute extraction
    - Append edge facts (newline-separated) to node summary
    - Skip LLM call when combined summary is within length limits
  * Remove unused reflexion prompts and invalidate_edges v1
    - Remove reflexion prompts from extract_nodes.py and extract_edges.py
    - Remove extract_nodes_reflexion function from node_operations.py
    - Remove unused v1 function from invalidate_edges.py
  * Filter out None/empty edge facts when building summary
  * Remove unused MissedEntities import
  * Optimize edge filtering with pre-built lookup dictionary: replace O(N * E) per-node edge filtering with O(E + N) pre-built dictionary lookup. Edges are now indexed by node UUID once before the gather operation.
  * Handle empty summary edge case: return early if summary_with_edges is empty after stripping, avoiding storing empty summaries when node.summary and all edge facts are empty.
  * Update tests to reflect summary optimization behavior: tests now expect that short summaries are kept as-is without LLM calls. Added new test to verify LLM is called when summary exceeds character limit due to edge facts.
  * format
  * Bump version to 0.27.0
  * lock
  * change version

* Fix dependabot security vulnerabilities (#1184)

  Update lock files to address multiple security alerts:
  - pyasn1: 0.6.1 → 0.6.2 (CVE-2026-23490)
  - langchain-core: 0.3.74 → 0.3.83 (CVE-2025-68664)
  - mcp: 1.9.4 → 1.26.0 (DNS rebinding, DoS)
  - azure-core: 1.34.0 → 1.38.0 (deserialization)
  - starlette: 0.46.2/0.47.1 → 0.50.0/0.52.1 (DoS vulnerabilities)
  - python-multipart: 0.0.20 → 0.0.22 (arbitrary file write)
  - fastapi: 0.115.14 → 0.128.0 (for starlette compatibility)
  - nbconvert: 7.16.6 → 7.17.0
  - orjson: 3.11.5 → 3.11.6
  - protobuf: 6.33.4 → 6.33.5

* Revert "Fix dependabot security vulnerabilities" (#1185)

  Reverts "Fix dependabot security vulnerabilities (#1184)"; this reverts commit 30cd907.

* Pin mcp_server to graphiti-core 0.26.3 (#1186)
  * Fix dependabot security vulnerabilities in dependencies (the same lock-file updates as #1184 above)
  * Pin mcp_server to graphiti-core 0.26.3 from PyPI
    - Change dependency from >=0.23.1 to ==0.26.3
    - Remove editable source override to use published package
    - Addresses code review feedback about RC version usage
  * Fix remaining security vulnerabilities in mcp_server by updating vulnerable transitive dependencies:
    - aiohttp: 3.12.15 → 3.13.3 (High: zip bomb, DoS)
    - urllib3: 2.5.0 → 2.6.3 (High: decompression bomb bypass)
    - filelock: 3.19.1 → 3.20.3 (Medium: TOCTOU symlink)

* Update manual code review workflow to use claude-opus-4-5-20251101 (#1189)
  * Update manual code review workflow to use claude-opus-4-5-20251101
  * Update auto code review workflow to use claude-opus-4-5-20251101

* Fix Azure OpenAI integration for v1 API compatibility

  This commit addresses several issues with Azure OpenAI integration:

  1. Azure OpenAI Client (graphiti_core/llm_client/azure_openai_client.py):
     - Use AsyncOpenAI with v1 endpoint instead of AsyncAzureOpenAI
     - Implement separate handling for reasoning models (responses.parse) vs non-reasoning models (beta.chat.completions.parse)
     - Add custom response handler to parse both response formats correctly
     - Fix RefusalError import path from llm_client.errors
  2. MCP Server Factories (mcp_server/src/services/factories.py):
     - Update Azure OpenAI factory to use v1 compatibility endpoint
     - Use same deployment for both main and small models in Azure
     - Add support for custom embedder endpoints (Ollama compatibility)
     - Add support for custom embedding dimensions
     - Remove unused Azure AD authentication code (TODO for future)
     - Add reasoning model detection for OpenAI provider
  3. MCP Server Configuration (mcp_server/pyproject.toml):
     - Add local graphiti-core source dependency for development
  4. Tests (tests/llm_client/test_azure_openai_client.py):
     - Update test mocks to support beta.chat.completions.parse
     - Update test expectations for non-reasoning model path

  These changes enable Azure OpenAI to work correctly with both reasoning and non-reasoning models, support custom embedder endpoints like Ollama, and maintain compatibility with the OpenAI v1 API specification.

* fix: address code review feedback
  - Remove local development dependency from mcp_server/pyproject.toml that would break PyPI installations
  - Move json import to top of azure_openai_client.py
  - Add comments explaining why non-reasoning models use beta.chat.completions.parse instead of responses.parse (Azure v1 compatibility limitation)

* fix: address minor code review issues
  - Add noqa comment for unused response_model parameter (inherited from abstract method interface)
  - Fix misleading comment in factories.py that referenced Azure OpenAI in the regular OpenAI case

Co-authored-by: Claude (us.anthropic.claude-sonnet-4-5-20250929-v1:0) <noreply@anthropic.com>
Co-authored-by: Daniel Chalef <131175+danielchalef@users.noreply.github.com>
Co-authored-by: Preston Rasmussen <109292228+prasmussen15@users.noreply.github.com>
Co-authored-by: prestonrasmussen <prasmuss15@gmail.com>
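The pre-built lookup mentioned in that message is a standard indexing pattern; a minimal sketch follows (illustrative only: the function and attribute names are hypothetical stand-ins, not graphiti's actual code):

```python
from collections import defaultdict

def build_edges_by_node(edges):
    """One O(E) pass: index each edge under both node UUIDs it touches."""
    # Attribute names (source_node_uuid / target_node_uuid) are assumed
    # for illustration; the real edge model may name them differently.
    edges_by_node = defaultdict(list)
    for edge in edges:
        edges_by_node[edge.source_node_uuid].append(edge)
        edges_by_node[edge.target_node_uuid].append(edge)
    return edges_by_node

# Per-node retrieval then becomes a dict lookup instead of an O(E) scan:
# node_edges = edges_by_node.get(node.uuid, [])
```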
lehcode pushed a commit to lehcode/graphiti-litellm that referenced this pull request (Feb 8, 2026)
maskshell pushed three commits to maskshell/graphiti that referenced this pull request (Feb 9, 2026)
maskshell pushed two commits to maskshell/graphiti that referenced this pull request (Feb 10, 2026)
Description
This PR fixes incorrect references to the MCP server entry point in the documentation. Users following the current documentation encounter "No such file or directory" errors because the documented filename graphiti_mcp_server.py doesn't exist at the project root.

Problem
When users try to run commands from the README such as:
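(reconstructed example; the exact flags varied across the README's seven command examples)

```bash
uv run graphiti_mcp_server.py
```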
They receive:
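(illustrative of the failure; exact output depends on the tool and shell)

```text
No such file or directory: graphiti_mcp_server.py
```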
Solution
Updated all documentation references from graphiti_mcp_server.py to main.py, which is the actual entry point located in the mcp_server/ directory.

Changes
Files Modified
- mcp_server/README.md: updated 7 command examples across multiple sections
- mcp_server/config/mcp_config_stdio_example.json: updated the example configuration path (see the sketch below)
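For illustration, the corrected stdio configuration points its args at main.py. A minimal sketch in the common MCP client shape (the surrounding keys and the server name here are representative assumptions, not copied from the repo's example file):

```json
{
  "mcpServers": {
    "graphiti": {
      "command": "uv",
      "args": ["run", "main.py"]
    }
  }
}
```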
Total Changes

+8 −8 across the two files above.

Testing
Verified the fix by:
- Running uv run main.py --help successfully
- Confirming grep -r "graphiti_mcp_server\.py" mcp_server/README.md returns empty (both checks are sketched below)
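A reproduction of those checks as shell commands (assuming a checkout of the repo with uv installed):

```bash
cd mcp_server
uv run main.py --help   # should print the server's CLI help rather than a spawn error
# grep exits nonzero when nothing matches, so the echo fires only when the README is clean:
grep -r "graphiti_mcp_server\.py" README.md || echo "no stale references"
```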
Additional Context

The actual file structure is:
- mcp_server/main.py: entry point wrapper script
- mcp_server/src/graphiti_mcp_server.py: implementation

The main.py wrapper imports and runs the actual server, maintaining proper package organization while providing a convenient entry point.
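Such a wrapper is typically only a few lines; a minimal sketch (hypothetical: the repo's real main.py may differ in import mechanics and in whether the entry function is sync or async):

```python
# main.py: hypothetical sketch of an entry-point wrapper, not the repo's actual code.
import sys
from pathlib import Path

# Make the src/ package importable when the script is run directly, e.g. `uv run main.py`.
sys.path.insert(0, str(Path(__file__).parent / "src"))

from graphiti_mcp_server import main  # the implementation module shown above

if __name__ == "__main__":
    main()  # assumes a synchronous main(); an async entry point would use asyncio.run(...)
```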