
Conversation

koxudaxi (Owner) commented Dec 26, 2025

Summary by CodeRabbit

  • Tests
    • Refined performance test configuration to target specific test suites more effectively
    • Updated standard test runs to exclude performance-related tests, improving execution efficiency


coderabbitai bot commented Dec 26, 2025

📝 Walkthrough

The pull request refactors test organization by narrowing CodSpeed instrumentation to run a single explicit performance test file and adding a pytest marker filter to exclude performance-marked tests from the standard test suite.

Changes

  • CodSpeed Workflow (.github/workflows/codspeed.yaml): Replaces broad test selection with explicit targeting of tests/main/test_performance.py; removes the all-tests path and the "benchmark and not perf" exclusion filter.
  • Test Configuration (tox.ini): Adds a -m "not perf" pytest marker filter to exclude perf-marked tests from standard test runs in the main testenv (a rough sketch of the resulting configuration follows below).
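The diff itself is not quoted in this thread; as a rough sketch, the resulting tox.ini likely looks something like the following. Only the -m "not perf" filter, the existence of [testenv:perf], and the test path are confirmed above; every other detail is an assumption.

```ini
# tox.ini -- hypothetical sketch; exact commands and options are assumptions

[testenv]
# standard suite: skip tests marked with @pytest.mark.perf
commands =
    pytest -m "not perf" {posargs}

[testenv:perf]
# performance suite: run only the dedicated benchmark file
commands =
    pytest tests/main/test_performance.py {posargs}
```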

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

🐰 Hop, hop! Tests now neatly sorted,
Performance ones to CodSpeed reported,
While standard runs skip the perf brigade,
A cleaner CI pipeline's made!

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately summarizes the main change: excluding perf tests from regular test runs, which is reflected in both the tox.ini and codspeed.yaml modifications.
  • Docstring Coverage: ✅ Passed. No functions were found in the changed files, so the docstring coverage check was skipped.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 269b6b4 and 2a107b3.

📒 Files selected for processing (2)
  • .github/workflows/codspeed.yaml
  • tox.ini
🔇 Additional comments (2)
.github/workflows/codspeed.yaml (1)

34-34: LGTM! Simplified and explicit test targeting.

The change to explicitly target tests/main/test_performance.py aligns well with the new [testenv:perf] configuration in tox.ini. This approach is cleaner than using marker filters in the CI workflow.
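For reference, a CodSpeed workflow step with this explicit targeting would look roughly like the sketch below. The step name, action version, and surrounding job configuration are assumptions; only the test path comes from this PR.

```yaml
# .github/workflows/codspeed.yaml -- hypothetical sketch
- name: Run performance benchmarks
  uses: CodSpeedHQ/action@v3  # action version assumed
  with:
    # instrument only the dedicated performance test file
    run: pytest tests/main/test_performance.py --codspeed
```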

tox.ini (1)

40-40: The perf marker is already registered in pyproject.toml.

No action needed. The marker is defined at line 228 of pyproject.toml as: markers = [ "perf: marks tests as performance tests (excluded from CI benchmarks)" ]. The code change is correct.
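Spelled out, the referenced registration looks like this. The marker string is quoted from the comment above; the [tool.pytest.ini_options] table name is the standard pytest location and is assumed here.

```toml
# pyproject.toml -- marker string quoted above; table name assumed
[tool.pytest.ini_options]
markers = [
    "perf: marks tests as performance tests (excluded from CI benchmarks)",
]
```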



github-actions bot (Contributor) commented Dec 26, 2025

📚 Docs Preview: https://pr-2817.datamodel-code-generator.pages.dev

coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tox.ini (1)

147-157: Performance tests are scattered across multiple files; perf environment may miss benchmarks outside test_performance.py

The [testenv:perf] configuration targets only tests/main/test_performance.py, but the codebase has @pytest.mark.benchmark tests in:

  • tests/main/test_main_yaml.py
  • tests/main/openapi/test_main_openapi.py
  • tests/main/jsonschema/test_main_jsonschema.py

To properly isolate performance testing, either:

  1. Move all @pytest.mark.benchmark tests to tests/main/test_performance.py, or
  2. Update the perf environment to use -m "perf or benchmark" to capture all performance tests (a rough sketch follows below)
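A minimal sketch of option 2, assuming the perf environment otherwise keeps its current settings; everything except the marker expression is an assumption:

```ini
# tox.ini -- hypothetical sketch of option 2; other [testenv:perf]
# settings are assumptions

[testenv:perf]
# capture perf-marked tests plus the stray @pytest.mark.benchmark tests
commands =
    pytest -m "perf or benchmark" {posargs}
```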

koxudaxi enabled auto-merge (squash) December 26, 2025 18:24
koxudaxi merged commit 1e4db90 into main Dec 26, 2025
33 checks passed
koxudaxi deleted the ci/exclude-perf-tests branch December 26, 2025 18:24

codecov bot commented Dec 26, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 98.72%. Comparing base (269b6b4) to head (2a107b3).
⚠️ Report is 3 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2817      +/-   ##
==========================================
- Coverage   99.51%   98.72%   -0.80%     
==========================================
  Files          89       90       +1     
  Lines       13856    14090     +234     
  Branches     1634     1658      +24     
==========================================
+ Hits        13789    13910     +121     
- Misses         36      149     +113     
  Partials       31       31              
Flag Coverage Δ
unittests 98.72% <ø> (-0.80%) ⬇️

Flags with carried forward coverage won't be shown.




codspeed-hq bot commented Dec 26, 2025

CodSpeed Performance Report

Merging #2817 will create unknown performance changes

Comparing ci/exclude-perf-tests (2a107b3) with main (3cafaf2)¹

Summary

🆕 26 new
⏩ 109 skipped²

Benchmarks breakdown

Mode Benchmark BASE HEAD Efficiency
🆕 Simulation test_perf_openapi_large_strict_types N/A 10.5 s N/A
🆕 Simulation test_perf_large_models_dataclass N/A 12.5 s N/A
🆕 Simulation test_perf_aws_style_openapi N/A 5.9 s N/A
🆕 Simulation test_perf_large_models N/A 12.7 s N/A
🆕 Simulation test_perf_kubernetes_style_pydantic_v2 N/A 9.6 s N/A
🆕 Simulation test_perf_openapi_large_field_constraints N/A 11 s N/A
🆕 Simulation test_perf_all_options_enabled N/A 25.7 s N/A
🆕 Simulation test_perf_combined_large_models_with_formatting N/A 13.4 s N/A
🆕 Simulation test_perf_aws_style_openapi_pydantic_v2 N/A 6.7 s N/A
🆕 Simulation test_perf_deep_nested N/A 23.6 s N/A
🆕 Simulation test_perf_complex_refs N/A 7.3 s N/A
🆕 Simulation test_perf_multiple_files_input N/A 11.3 s N/A
🆕 Simulation test_perf_large_models_pydantic_v2 N/A 13.1 s N/A
🆕 Simulation test_perf_deep_nested_use_annotated N/A 25.1 s N/A
🆕 Simulation test_perf_complex_refs_collapse_root N/A 7.3 s N/A
🆕 Simulation test_perf_multiple_files_to_multiple_outputs N/A 11.3 s N/A
🆕 Simulation test_perf_openapi_large N/A 10.5 s N/A
🆕 Simulation test_perf_duplicate_names_multiple_files N/A 3.4 s N/A
🆕 Simulation test_perf_stripe_style N/A 7 s N/A
🆕 Simulation test_perf_large_models_typed_dict N/A 11 s N/A
... ... ... ... ... ...

ℹ️ Only the first 20 benchmarks are displayed. Go to the app to view all benchmarks.

Footnotes

  1. No successful run was found on main (12fdab5) during the generation of this report, so 3cafaf2 was used instead as the comparison base. There might be some changes unrelated to this pull request in this report.

  2. 109 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, they can be archived in the CodSpeed app to remove them from the performance reports.
