
Conversation

@koxudaxi
Owner

@koxudaxi koxudaxi commented Dec 24, 2025

Summary by CodeRabbit

Release Notes

  • Chores

    • Updated Python setup action in CI workflows to latest version.
    • Optimized test execution in CI to focus on benchmark tests.
  • Tests

    • Added pytest marker configuration for performance tests to improve test organization.


@coderabbitai

coderabbitai bot commented Dec 24, 2025

📝 Walkthrough

This pull request upgrades the GitHub Actions Python setup action from v4 to v6, introduces a pytest marker filter to exclude performance tests from CI benchmarks, and adds a corresponding perf marker configuration in pyproject.toml for test categorization.

Changes

  • CI Workflow Configuration — .github/workflows/codspeed.yaml: upgrades actions/setup-python from v4 to v6; adds the pytest marker filter -m "benchmark and not perf" to exclude performance tests from benchmark runs.
  • Pytest Configuration — pyproject.toml: adds the new pytest marker perf: marks tests as performance tests (excluded from CI benchmarks) to [tool.pytest.ini_options] for test categorization.
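
The workflow-side change can be sketched roughly as follows. The step names and the python-version value are illustrative assumptions; only the setup-python version bump and the -m expression come from this PR:

```yaml
# Sketch of the benchmark job steps after this PR (step names and
# python-version are assumed; the action version and -m filter are real).
- uses: actions/setup-python@v6
  with:
    python-version: "3.12"
- name: Run benchmarks
  run: pytest -m "benchmark and not perf" tests/
```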

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Poem

🐰 A fluffy changelog hops with glee,
Python six now runs wild and free,
Markers sort tests with care divine,
Benchmarks shine bright without perf's line,
Workflow perfected, hop-hop-hooray!

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Title check — ⚠️ Warning: The title mentions adding performance e2e tests with large schema fixtures, but the changes only upgrade a Python setup action, add a pytest marker filter, and configure a marker; the title does not reflect the actual CI/tooling-focused changes. Resolution: update the title to reflect the actual changes, such as 'Upgrade Python setup action and add performance test markers' or 'Configure CI for performance benchmarking tests'.
✅ Passed checks (2 passed)
  • Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Docstring Coverage — ✅ Passed: No functions found in the changed files to evaluate docstring coverage; check skipped.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between dbdd376 and ea56bd1.

⛔ Files ignored due to path filters (59)
  • tests/data/performance/aws_style.yaml is excluded by !tests/data/**/*.yaml and included by none
  • tests/data/performance/complex_refs.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/deep_nested.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/duplicate_names.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/graphql_style.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/kubernetes_style.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/large_models.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_00.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_01.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_02.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_03.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_04.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_05.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_06.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_07.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_08.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_09.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_10.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_11.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_12.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_13.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_14.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_15.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_16.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_17.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_18.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_19.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_20.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_21.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_22.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_23.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_24.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_25.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_26.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_27.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_28.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_29.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_30.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_31.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_32.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_33.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_34.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_35.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_36.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_37.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_38.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_39.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_40.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_41.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_42.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_43.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_44.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_45.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_46.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_47.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_48.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/multiple_files/module_49.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/performance/openapi_large.yaml is excluded by !tests/data/**/*.yaml and included by none
  • tests/data/performance/stripe_style.json is excluded by !tests/data/**/*.json and included by none
📒 Files selected for processing (3)
  • .github/workflows/perf.yaml
  • tests/main/test_performance.py
  • tox.ini
🧰 Additional context used
🪛 actionlint (1.7.9)
.github/workflows/perf.yaml

23-23: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue

(action)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (20)
  • GitHub Check: Analyze (python)
  • GitHub Check: py312-isort6 on Ubuntu
  • GitHub Check: 3.12 on Ubuntu
  • GitHub Check: py312-black22 on Ubuntu
  • GitHub Check: py312-pydantic1 on Ubuntu
  • GitHub Check: 3.13 on Ubuntu
  • GitHub Check: 3.12 on macOS
  • GitHub Check: 3.11 on Ubuntu
  • GitHub Check: 3.10 on Ubuntu
  • GitHub Check: 3.13 on Windows
  • GitHub Check: py312-isort5 on Ubuntu
  • GitHub Check: py312-isort7 on Ubuntu
  • GitHub Check: py312-black24 on Ubuntu
  • GitHub Check: 3.14 on Ubuntu
  • GitHub Check: 3.10 on macOS
  • GitHub Check: 3.14 on macOS
  • GitHub Check: py312-black23 on Ubuntu
  • GitHub Check: 3.11 on macOS
  • GitHub Check: benchmarks
  • GitHub Check: benchmarks
🔇 Additional comments (6)
.github/workflows/perf.yaml (6)

1-10: LGTM: Workflow triggers are well-configured.

The workflow name and trigger configuration are appropriate for performance benchmarking. Including workflow_dispatch is particularly useful for CodSpeed's backtest feature.


12-14: LGTM: Good concurrency control.

The concurrency configuration prevents redundant benchmark runs on the same branch with automatic cancellation of in-progress runs.
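
A concurrency block of the kind described above typically looks like this; the group key is an assumption about the common pattern, not a quote from the actual workflow:

```yaml
# Illustrative concurrency block: one run per workflow+branch,
# with in-progress runs on the same branch cancelled automatically.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```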


21-22: LGTM: Helpful documentation.

The comment explaining the uv standalone build incompatibility with CodSpeedHQ and linking to the relevant issue is valuable documentation for future maintainers.


26-29: LGTM: Dependency installation is correct.

The use of astral-sh/setup-uv@v5 and uv sync --all-extras properly installs all dependencies needed for the performance tests.
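
A minimal sketch of the installation steps the comment describes; the step name is an assumption, while the action version and sync command are from the review:

```yaml
# Illustrative dependency-installation steps (step name assumed).
- uses: astral-sh/setup-uv@v5
- name: Install dependencies
  run: uv sync --all-extras
```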


30-35: pytest-xdist is already included in the project dependencies (pyproject.toml specifies pytest-xdist>=3.3.1). No action required.

Likely an incorrect or invalid review comment.


18-18: Likely an incorrect or invalid review comment.

@codecov
Copy link

codecov bot commented Dec 24, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 99.49%. Comparing base (dbdd376) to head (6eb7178).
⚠️ Report is 1 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff            @@
##             main    #2782    +/-   ##
========================================
  Coverage   99.48%   99.49%            
========================================
  Files          87       88     +1     
  Lines       13016    13212   +196     
  Branches     1555     1556     +1     
========================================
+ Hits        12949    13145   +196     
  Misses         35       35            
  Partials       32       32            
Flag Coverage Δ
unittests 99.49% <ø> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

@codspeed-hq

codspeed-hq bot commented Dec 24, 2025

CodSpeed Performance Report

Merging #2782 will not alter performance

Comparing add-performance-e2e-tests (6eb7178) with main (dbdd376)

Summary

✅ 73 untouched
⏩ 10 skipped1

Footnotes

  1. 10 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, click here and archive them to remove them from the performance reports.

@koxudaxi koxudaxi force-pushed the add-performance-e2e-tests branch from a7f1d89 to 6eb7178 Compare December 24, 2025 11:57

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ea56bd1 and 6eb7178.

📒 Files selected for processing (3)
  • .github/workflows/codspeed.yaml
  • pyproject.toml
  • tests/main/test_performance.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
  • GitHub Check: 3.10 on Windows
  • GitHub Check: 3.14 on macOS
  • GitHub Check: py312-isort7 on Ubuntu
  • GitHub Check: 3.10 on macOS
  • GitHub Check: py312-isort6 on Ubuntu
  • GitHub Check: py312-black23 on Ubuntu
  • GitHub Check: py312-black22 on Ubuntu
  • GitHub Check: 3.10 on Ubuntu
  • GitHub Check: 3.12 on macOS
  • GitHub Check: 3.13 on Ubuntu
  • GitHub Check: 3.13 on macOS
  • GitHub Check: 3.11 on macOS
  • GitHub Check: 3.11 on Windows
  • GitHub Check: 3.13 on Windows
  • GitHub Check: 3.12 on Windows
  • GitHub Check: 3.11 on Ubuntu
  • GitHub Check: 3.14 on Windows
  • GitHub Check: Analyze (python)
  • GitHub Check: benchmarks
🔇 Additional comments (2)
.github/workflows/codspeed.yaml (1)

34-34: Marker filter correctly excludes performance tests from CI benchmarks.

The marker expression "benchmark and not perf" properly filters tests to run benchmark tests while excluding those marked with the perf marker. This aligns well with the marker definition added in pyproject.toml.
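
To make the filtering concrete, here is a small self-contained toy model of how a pytest-style -m expression selects tests by their marker set. This is a simplified sketch for illustration only, not pytest's actual expression parser:

```python
import re

def selected(markers: set, expr: str) -> bool:
    """Evaluate a simplified pytest-style -m expression against a marker set.

    Each bare word is treated as a marker name; and/or/not keep their
    Python meaning. Toy model only, not pytest's real implementation.
    """
    # Rewrite each marker name into a membership test, leaving operators alone.
    py_expr = re.sub(r"\b(?!and\b|or\b|not\b)(\w+)\b",
                     r"('\1' in markers)", expr)
    return bool(eval(py_expr, {"markers": markers}))

# A test marked only 'benchmark' runs under -m "benchmark and not perf";
# one marked with both 'benchmark' and 'perf' is excluded.
print(selected({"benchmark"}, "benchmark and not perf"))          # → True
print(selected({"benchmark", "perf"}, "benchmark and not perf"))  # → False
```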

pyproject.toml (1)

227-227: Well-defined pytest marker for performance test categorization.

The perf marker is correctly configured with a clear description. This enables proper test categorization and allows the CI workflow to selectively exclude performance tests from benchmark runs.
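
The marker registration described above would look roughly like this in pyproject.toml. The surrounding markers entry shown here is an assumption for context; only the perf line is described by this PR:

```toml
[tool.pytest.ini_options]
markers = [
    # assumed pre-existing entry, shown for context only
    "benchmark: marks tests measured by the CI benchmark job",
    "perf: marks tests as performance tests (excluded from CI benchmarks)",
]
```

Registering markers this way also lets `pytest --strict-markers` catch typos in marker names.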

@koxudaxi koxudaxi merged commit 3a227c4 into main Dec 24, 2025
37 checks passed
@koxudaxi koxudaxi deleted the add-performance-e2e-tests branch December 24, 2025 12:11
@github-actions
Contributor

Breaking Change Analysis

Result: No breaking changes detected

Reasoning: This PR only adds performance tests, test fixtures, and CI/tox configuration. It does not modify any source code that affects code generation, templates, CLI/API, default behavior, Python version support, or error handling. All changes are internal to the testing infrastructure and do not impact users of the library.


This analysis was performed by Claude Code Action
