Conversation

@andreibogdan
Contributor

Description

This PR fixes incorrect references to the MCP server entry point in the documentation. Users following the current documentation encounter "No such file or directory" errors because the documented filename graphiti_mcp_server.py doesn't exist at the project root.

Problem

When users try to run commands from the README such as:

uv run graphiti_mcp_server.py --database-provider neo4j

They receive:

error: Failed to spawn: `graphiti_mcp_server.py`
  Caused by: No such file or directory (os error 2)

Solution

Updated all documentation references from graphiti_mcp_server.py to main.py, which is the actual entry point located in the mcp_server/ directory.

Changes

Files Modified

  • mcp_server/README.md - Updated 7 command examples across multiple sections:

    • Running with Neo4j (2 occurrences)
    • Running with FalkorDB (2 occurrences)
    • MCP client stdio configuration example (1 occurrence)
    • Cursor IDE integration (1 occurrence)
    • Claude Desktop integration (1 occurrence)
  • mcp_server/config/mcp_config_stdio_example.json - Updated example configuration path
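    For reference, the corrected stdio client configuration takes roughly this shape (an illustrative sketch only; field values such as the server name and extra args are assumptions — see mcp_server/config/mcp_config_stdio_example.json for the actual file):

    ```json
    {
      "mcpServers": {
        "graphiti": {
          "command": "uv",
          "args": ["run", "main.py", "--transport", "stdio"]
        }
      }
    }
    ```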

Total Changes

  • 2 files changed, 8 insertions(+), 8 deletions(-)

Testing

Verified the fix by:

  1. ✅ Running uv run main.py --help successfully
  2. ✅ Confirming no remaining references to old filename: grep -r "graphiti_mcp_server\.py" mcp_server/README.md returns empty
  3. ✅ Running unit tests: 6/6 tests passed
    • Configuration loading tests
    • CLI argument override tests
    • stdio transport tests

Additional Context

The actual file structure is:

  • mcp_server/main.py - Entry point wrapper script
  • mcp_server/src/graphiti_mcp_server.py - Implementation

The main.py wrapper imports and runs the actual server, maintaining proper package organization while providing a convenient entry point.
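A minimal sketch of what such an entry-point wrapper typically looks like (hypothetical; the real mcp_server/main.py may differ — the module name graphiti_mcp_server is from the source, everything else here is illustrative):

```python
"""Sketch of an entry-point wrapper in the style of mcp_server/main.py.

The pattern: make the implementation package under src/ importable,
then delegate to it, so `uv run main.py` works from the directory.
"""
import importlib
import sys
from pathlib import Path


def load_server(module_name: str = "graphiti_mcp_server"):
    """Return the server module, adding src/ to sys.path first."""
    src = Path.cwd() / "src"
    if str(src) not in sys.path:
        sys.path.insert(0, str(src))
    return importlib.import_module(module_name)


if __name__ == "__main__":
    # In the real wrapper this would start the MCP server.
    load_server().main()
```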

@danielchalef
Member

danielchalef commented Jan 27, 2026

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

@andreibogdan
Contributor Author

I have read the CLA Document and I hereby sign the CLA

@andreibogdan
Contributor Author

recheck

danielchalef added a commit that referenced this pull request Jan 27, 2026
Update all documentation references from graphiti_mcp_server.py to main.py.
The old filename was causing "No such file or directory" errors when users
tried to run the commands as documented. The actual entry point is main.py
in the mcp_server directory.

Changes:
- Update 7 command examples in README.md
- Update example configuration file with correct path

Co-Authored-By: Claude (us.anthropic.claude-sonnet-4-5-20250929-v1:0) <noreply@anthropic.com>
Signed-off-by: Andrei Bogdan <166901+andreibogdan@users.noreply.github.com>
andreibogdan pushed a commit to andreibogdan/graphiti that referenced this pull request Jan 31, 2026
prasmussen15 added a commit that referenced this pull request Feb 6, 2026
* Fix MCP server documentation to reference correct entry point

Update all documentation references from graphiti_mcp_server.py to main.py.
The old filename was causing "No such file or directory" errors when users
tried to run the commands as documented. The actual entry point is main.py
in the mcp_server directory.

Changes:
- Update 7 command examples in README.md
- Update example configuration file with correct path

Co-Authored-By: Claude (us.anthropic.claude-sonnet-4-5-20250929-v1:0) <noreply@anthropic.com>

* @andreibogdan has signed the CLA in #1179

* Add extracted edge facts to entity summaries (#1182)

* Add extracted edge facts to entity summaries

Update _extract_entity_summary to include facts from edges connected to
each node. Edge facts are appended to the existing summary, and LLM
summarization is only triggered if the combined content exceeds the
character limit.

- Add edges parameter to extract_attributes_from_nodes and related functions
- Filter edges per node before passing to attribute extraction
- Append edge facts (newline-separated) to node summary
- Skip LLM call when combined summary is within length limits

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
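The summary logic this commit describes can be sketched as follows (a minimal illustration under stated assumptions: MAX_SUMMARY_CHARS and the function name are invented here, not the actual graphiti-core code):

```python
MAX_SUMMARY_CHARS = 500  # assumed limit; the real value lives in graphiti-core


def combine_summary(summary, edge_facts):
    """Append non-empty edge facts (newline-separated) to a node summary.

    Returns (combined, needs_llm): needs_llm is True only when the
    combined text exceeds the character limit, so short summaries
    skip the LLM summarization call entirely.
    """
    parts = [summary] + [f for f in edge_facts if f]  # drop None/empty facts
    combined = "\n".join(p for p in parts if p).strip()
    return combined, len(combined) > MAX_SUMMARY_CHARS
```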

* Remove unused reflexion prompts and invalidate_edges v1

- Remove reflexion prompts from extract_nodes.py and extract_edges.py
- Remove extract_nodes_reflexion function from node_operations.py
- Remove unused v1 function from invalidate_edges.py

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Filter out None/empty edge facts when building summary

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Remove unused MissedEntities import

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Optimize edge filtering with pre-built lookup dictionary

Replace O(N * E) per-node edge filtering with O(E + N) pre-built
dictionary lookup. Edges are now indexed by node UUID once before
the gather operation.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
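The lookup optimization in this commit can be sketched like so (illustrative only; the edge key names "source_uuid" and "target_uuid" are assumptions about the edge shape):

```python
from collections import defaultdict


def build_edge_index(edges):
    """Index edges by the node UUIDs they touch: a single O(E) pass."""
    index = defaultdict(list)
    for edge in edges:
        index[edge["source_uuid"]].append(edge)
        index[edge["target_uuid"]].append(edge)
    return index


def edges_for_nodes(nodes, edges):
    """O(E + N) dictionary lookup, replacing an O(N * E) scan of all
    edges for every node."""
    index = build_edge_index(edges)
    return {node: index.get(node, []) for node in nodes}
```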

* Handle empty summary edge case

Return early if summary_with_edges is empty after stripping,
avoiding storing empty summaries when node.summary and all
edge facts are empty.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Update tests to reflect summary optimization behavior

Tests now expect that short summaries are kept as-is without LLM calls.
Added new test to verify LLM is called when summary exceeds character
limit due to edge facts.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* format

* Bump version to 0.27.0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* lock

* change version

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>

* Fix dependabot security vulnerabilities (#1184)

Fix dependabot security vulnerabilities in dependencies

Update lock files to address multiple security alerts:
- pyasn1: 0.6.1 → 0.6.2 (CVE-2026-23490)
- langchain-core: 0.3.74 → 0.3.83 (CVE-2025-68664)
- mcp: 1.9.4 → 1.26.0 (DNS rebinding, DoS)
- azure-core: 1.34.0 → 1.38.0 (deserialization)
- starlette: 0.46.2/0.47.1 → 0.50.0/0.52.1 (DoS vulnerabilities)
- python-multipart: 0.0.20 → 0.0.22 (arbitrary file write)
- fastapi: 0.115.14 → 0.128.0 (for starlette compatibility)
- nbconvert: 7.16.6 → 7.17.0
- orjson: 3.11.5 → 3.11.6
- protobuf: 6.33.4 → 6.33.5

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>

* Revert "Fix dependabot security vulnerabilities" (#1185)

Revert "Fix dependabot security vulnerabilities (#1184)"

This reverts commit 30cd907.

* Pin mcp_server to graphiti-core 0.26.3 (#1186)

* Fix dependabot security vulnerabilities in dependencies

Update lock files to address multiple security alerts:
- pyasn1: 0.6.1 → 0.6.2 (CVE-2026-23490)
- langchain-core: 0.3.74 → 0.3.83 (CVE-2025-68664)
- mcp: 1.9.4 → 1.26.0 (DNS rebinding, DoS)
- azure-core: 1.34.0 → 1.38.0 (deserialization)
- starlette: 0.46.2/0.47.1 → 0.50.0/0.52.1 (DoS vulnerabilities)
- python-multipart: 0.0.20 → 0.0.22 (arbitrary file write)
- fastapi: 0.115.14 → 0.128.0 (for starlette compatibility)
- nbconvert: 7.16.6 → 7.17.0
- orjson: 3.11.5 → 3.11.6
- protobuf: 6.33.4 → 6.33.5

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Pin mcp_server to graphiti-core 0.26.3 from PyPI

- Change dependency from >=0.23.1 to ==0.26.3
- Remove editable source override to use published package
- Addresses code review feedback about RC version usage

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Fix remaining security vulnerabilities in mcp_server

Update vulnerable transitive dependencies:
- aiohttp: 3.12.15 → 3.13.3 (High: zip bomb, DoS)
- urllib3: 2.5.0 → 2.6.3 (High: decompression bomb bypass)
- filelock: 3.19.1 → 3.20.3 (Medium: TOCTOU symlink)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>

* Update manual code review workflow to use claude-opus-4-5-20251101 (#1189)

* Update manual code review workflow to use claude-opus-4-5-20251101

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>

* Update auto code review workflow to use claude-opus-4-5-20251101

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Haiku 4.5 <noreply@anthropic.com>

* Fix Azure OpenAI integration for v1 API compatibility

This commit addresses several issues with Azure OpenAI integration:

1. Azure OpenAI Client (graphiti_core/llm_client/azure_openai_client.py):
   - Use AsyncOpenAI with v1 endpoint instead of AsyncAzureOpenAI
   - Implement separate handling for reasoning models (responses.parse) vs
     non-reasoning models (beta.chat.completions.parse)
   - Add custom response handler to parse both response formats correctly
   - Fix RefusalError import path from llm_client.errors

2. MCP Server Factories (mcp_server/src/services/factories.py):
   - Update Azure OpenAI factory to use v1 compatibility endpoint
   - Use same deployment for both main and small models in Azure
   - Add support for custom embedder endpoints (Ollama compatibility)
   - Add support for custom embedding dimensions
   - Remove unused Azure AD authentication code (TODO for future)
   - Add reasoning model detection for OpenAI provider

3. MCP Server Configuration (mcp_server/pyproject.toml):
   - Add local graphiti-core source dependency for development

4. Tests (tests/llm_client/test_azure_openai_client.py):
   - Update test mocks to support beta.chat.completions.parse
   - Update test expectations for non-reasoning model path

These changes enable Azure OpenAI to work correctly with both reasoning
and non-reasoning models, support custom embedder endpoints like Ollama,
and maintain compatibility with the OpenAI v1 API specification.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: address code review feedback

- Remove local development dependency from mcp_server/pyproject.toml
  that would break PyPI installations
- Move json import to top of azure_openai_client.py
- Add comments explaining why non-reasoning models use
  beta.chat.completions.parse instead of responses.parse
  (Azure v1 compatibility limitation)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: address minor code review issues

- Add noqa comment for unused response_model parameter (inherited from
  abstract method interface)
- Fix misleading comment in factories.py that referenced Azure OpenAI
  in the regular OpenAI case

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude (us.anthropic.claude-sonnet-4-5-20250929-v1:0) <noreply@anthropic.com>
Co-authored-by: Daniel Chalef <131175+danielchalef@users.noreply.github.com>
Co-authored-by: Preston Rasmussen <109292228+prasmussen15@users.noreply.github.com>
Co-authored-by: prestonrasmussen <prasmuss15@gmail.com>
lehcode pushed a commit to lehcode/graphiti-litellm that referenced this pull request Feb 8, 2026
maskshell pushed a commit to maskshell/graphiti that referenced this pull request Feb 9, 2026
maskshell pushed a commit to maskshell/graphiti that referenced this pull request Feb 10, 2026