
[TRTLLM-10021][docs] Skip Softmax Attention blog and docs. #10592

Open

bobboli wants to merge 18 commits into NVIDIA:main from bobboli:user/lbo/skip_softmax_blog

Conversation

bobboli commented Jan 12, 2026

Summary by CodeRabbit

  • Documentation
    • Added comprehensive guide on Skip Softmax Attention, a sparse attention method for accelerating long-context inference. Includes configuration options, usage examples across Python/YAML/CLI, calibration guidelines, and performance benchmarks demonstrating accuracy and throughput improvements across various GPU backends.


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
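For illustration, here are a few plausible `run` invocations composed from the flags documented above (whether particular flags may be combined is an assumption; the usage string above is the authority):

```bash
# Default pre-merge run, reusing artifacts from the last pipeline:
/bot run

# Build, package, and sanity-check only, skipping all test stages:
/bot run --skip-test

# Restrict testing to specific stages and GPU types, with fail-fast disabled:
/bot run --stage-list "A10-PyTorch-1" --gpu-type "A30, H100_PCIe" --disable-fail-fast
```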

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of care and validation can break the top of tree.
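For example (the comment text here is hypothetical, but the `--comment` flag is required as documented above):

```bash
/bot skip --comment "Docs-only change; no code paths affected."
```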

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of care and validation can break the top of tree.

bobboli changed the title from "[TRTLLM-10021][chore] Draft Skip Softmax Attention blog." to "[TRTLLM-10021][docs] Draft Skip Softmax Attention blog." Jan 20, 2026
bobboli marked this pull request as ready for review January 20, 2026 09:06
bobboli requested a review from a team as a code owner January 20, 2026 09:06
coderabbitai bot commented Jan 20, 2026

📝 Walkthrough

A new documentation article was added describing Skip Softmax Attention, a sparse attention optimization technique for long-context inference. The article covers the mechanism, configuration options, usage examples, calibration guidance, and performance benchmarks across different GPU backends.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Documentation**<br>`docs/source/blogs/tech_blog/blog16_Accelerating_Long_Context_Inference_with_Skip_Softmax_Attention.md` | New blog article explaining Skip Softmax Attention as a drop-in sparse attention method. Includes dynamic thresholding mechanism, configuration examples (Python, YAML, CLI), calibration notes, accuracy and performance benchmarks (LongBench V1/V2), and reproduction steps across GPU backends (Hopper, Blackwell). |

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Description check | ⚠️ Warning | The PR description consists entirely of the repository template with all sections left blank (Description, Test Coverage, and PR Checklist items unchecked), providing no actual content about the changes. | Fill in the Description section explaining what Skip Softmax Attention is and why this documentation is being added, and complete the Test Coverage and PR Checklist sections as appropriate. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |
| Title check | ✅ Passed | The title correctly follows the repository template with valid JIRA ticket format and appropriate [docs] type label, clearly summarizing the addition of Skip Softmax Attention documentation. |



coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `docs/source/blogs/tech_blog/blog16_Accelerating_Long_Context_Inference_with_Skip_Softmax_Attention.md`:
- Line 124: The sentence "For prefilling, the maximum speedup is ~1.8x. Another
advantage of Skip Softmax Attention is that it can further boost performance on
top of FP8 attention,." has a punctuation error—remove the extraneous comma
before the period so it ends "...FP8 attention."; locate the clause referencing
"Skip Softmax Attention" and update the trailing punctuation accordingly.
- Line 5: The sentence about Skip Softmax Attention contains grammatical errors
and a leftover artifact; change "Skip Softmax Attention based on top of the
Flash Attention algorithm" to "Skip Softmax Attention is based on the Flash
Attention algorithm" (or "builds on top of the Flash Attention algorithm") and
remove the trailing "image.png" artifact, ensuring the phrases "Skip Softmax
Attention", "Flash Attention", and "attention kernels" remain intact and
grammatically integrated.
- Around line 43-56: The YAML/bash examples are malformed because the first
here-doc is missing its closing EOF and the two examples run together; update
the snippet around extra_llm_api_options.yaml so the first cat <<EOF block is
terminated with EOF, add a brief comment clarifying it is an alternative to the
second block, and ensure both examples show complete here-docs: one with a
single threshold_scale_factor numeric value and a second where
threshold_scale_factor is an object with prefill and decode keys; reference the
sparse_attention_config and threshold_scale_factor keys and ensure both cat
>extra_llm_api_options.yaml <<EOF ... EOF blocks are properly closed.
- Line 58: Replace the deprecated flag usage in the CLI examples: find
occurrences of the command snippets using --extra_llm_api_options (e.g., the
line with "trtllm-serve Qwen/Qwen3-30B-A3B-Instruct-2507 --extra_llm_api_options
extra_llm_api_options.yaml") and update them to use --config instead (so the
flag becomes --config extra_llm_api_options.yaml); apply the same replacement
for all similar examples referencing trtllm-serve, trtllm-bench, or trtllm-eval
in this document (lines flagged in the review).
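To make the last two fixes above concrete, here is a rough sketch of the corrected snippets. The YAML schema (the `sparse_attention_config` and `threshold_scale_factor` keys with `prefill`/`decode` sub-keys) is inferred from the review comments, and the numeric values are placeholders, so treat this as illustrative rather than authoritative; the actual config may require additional keys (e.g., an algorithm selector).

```bash
# Variant 1: a single scalar threshold_scale_factor.
cat > extra_llm_api_options.yaml <<EOF
sparse_attention_config:
  threshold_scale_factor: 0.8  # placeholder; tune via calibration
EOF

# Variant 2 (alternative to variant 1): separate prefill and decode factors.
cat > extra_llm_api_options.yaml <<EOF
sparse_attention_config:
  threshold_scale_factor:
    prefill: 0.8  # placeholder
    decode: 0.8   # placeholder
EOF

# CLI example updated to the non-deprecated flag:
trtllm-serve Qwen/Qwen3-30B-A3B-Instruct-2507 --config extra_llm_api_options.yaml
```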
🧹 Nitpick comments (2)
docs/source/blogs/tech_blog/blog16_Accelerating_Long_Context_Inference_with_Skip_Softmax_Attention.md (2)

3-3: Address the TODO before publishing.

The link to the previous tech blog points to a user branch and needs to be updated to the published version before this blog goes live.

Would you like me to help verify the correct link once the previous blog is published?


210-210: Track TODO for MInference comparison.

This TODO indicates missing content comparing Skip Softmax Attention with MInference. Consider whether this comparison is essential for the blog publication or can be deferred to a future update.

Would you like me to open an issue to track this task, or should it be completed before publishing this blog?

bobboli changed the title from "[TRTLLM-10021][docs] Draft Skip Softmax Attention blog." to "[TRTLLM-10021][docs] Skip Softmax Attention blog and docs." Jan 20, 2026
bobboli force-pushed the user/lbo/skip_softmax_blog branch 2 times, most recently from db31214 to 44eaef3, January 23, 2026 02:51
bobboli force-pushed the user/lbo/skip_softmax_blog branch from 44eaef3 to 2a40dc6, January 28, 2026 06:12
bobboli force-pushed the user/lbo/skip_softmax_blog branch from 2a40dc6 to a1e8a6d, January 30, 2026 11:24
bobboli requested review from heyuhhh and lfr-0531 January 30, 2026 11:30
lfr-0531 left a comment


The tech blog looks good to me. Added some minor comments to the doc changes.

| 0.6 | 15020.24 | 8.57 | 6431.65 | 6.25 |
| 0.7 | 14921.12 | 8.42 | 6355.43 | 6.24 |
| 0.8 | 14465.74 | 8.41 | 6192.77 | 6.26 |
| 0.9 | 13791.37 | 8.40 | 6043.06 | 6.27 |
Could you add two more columns to the two sheets? The added columns would show the speedup ratio compared to the baseline; I think that would be more intuitive for readers.
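As a sketch of the suggestion (assuming the second column of each sheet is a throughput measurement; the baseline value below is a placeholder, not a measured number), the extra column would simply divide each row's throughput by the baseline's:

```bash
# Placeholder baseline throughput; the real value would come from the
# benchmark's baseline (dense attention) row.
baseline=12000.0

# threshold_scale_factor / throughput pairs, taken from the first two
# columns of the quoted table above.
for row in "0.6 15020.24" "0.7 14921.12" "0.8 14465.74" "0.9 13791.37"; do
  set -- $row
  printf "%s | %s | %.2fx\n" "$1" "$2" "$(echo "$2 / $baseline" | bc -l)"
done
```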

