[TRTLLM-10021][docs] Skip Softmax Attention blog and docs. #10592
bobboli wants to merge 18 commits into NVIDIA:main
Conversation
📝 Walkthrough

A new documentation article was added describing Skip Softmax Attention, a sparse attention optimization technique for long-context inference. The article covers the mechanism, configuration options, usage examples, calibration guidance, and performance benchmarks across different GPU backends.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
Actionable comments posted: 4
In `docs/source/blogs/tech_blog/blog16_Accelerating_Long_Context_Inference_with_Skip_Softmax_Attention.md`:
- Line 124: The sentence "For prefilling, the maximum speedup is ~1.8x. Another
advantage of Skip Softmax Attention is that it can further boost performance on
top of FP8 attention,." has a punctuation error—remove the extraneous comma
before the period so it ends "...FP8 attention."; locate the clause referencing
"Skip Softmax Attention" and update the trailing punctuation accordingly.
- Line 5: The sentence about Skip Softmax Attention contains grammatical errors
and a leftover artifact; change "Skip Softmax Attention based on top of the
Flash Attention algorithm" to "Skip Softmax Attention is based on the Flash
Attention algorithm" (or "builds on top of the Flash Attention algorithm") and
remove the trailing "image.png" artifact, ensuring the phrases "Skip Softmax
Attention", "Flash Attention", and "attention kernels" remain intact and
grammatically integrated.
- Around line 43-56: The YAML/bash examples are malformed because the first
here-doc is missing its closing EOF and the two examples run together; update
the snippet around extra_llm_api_options.yaml so the first cat <<EOF block is
terminated with EOF, add a brief comment clarifying it is an alternative to the
second block, and ensure both examples show complete here-docs: one with a
single threshold_scale_factor numeric value and a second where
threshold_scale_factor is an object with prefill and decode keys; reference the
sparse_attention_config and threshold_scale_factor keys and ensure both cat
>extra_llm_api_options.yaml <<EOF ... EOF blocks are properly closed (a corrected sketch follows this list).
- Line 58: Replace the deprecated flag usage in the CLI examples: find
occurrences of the command snippets using --extra_llm_api_options (e.g., the
line with "trtllm-serve Qwen/Qwen3-30B-A3B-Instruct-2507 --extra_llm_api_options
extra_llm_api_options.yaml") and update them to use --config instead (so the
flag becomes --config extra_llm_api_options.yaml); apply the same replacement
for all similar examples referencing trtllm-serve, trtllm-bench, or trtllm-eval
in this document (lines flagged in the review); a before/after sketch follows this list.
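For reference, a corrected version of the two YAML snippets might look like the following sketch. The `sparse_attention_config` and `threshold_scale_factor` keys (including the `prefill`/`decode` sub-keys) come from the review text above; the `algorithm` key and the numeric values are illustrative assumptions, not the blog's actual schema.

```bash
# Option 1: a single scalar threshold_scale_factor
cat > extra_llm_api_options.yaml <<EOF
sparse_attention_config:
  algorithm: skip_softmax      # hypothetical key/value; see the blog for the exact schema
  threshold_scale_factor: 0.6  # illustrative value
EOF

# Option 2 (an alternative to Option 1): separate factors for prefill and decode
cat > extra_llm_api_options.yaml <<EOF
sparse_attention_config:
  algorithm: skip_softmax      # hypothetical
  threshold_scale_factor:
    prefill: 0.6               # illustrative values
    decode: 0.8
EOF
```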
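And the flag replacement applied to the serve example quoted above (both command lines are taken from the review comment, with only the flag changed):

```bash
# Before: deprecated flag
trtllm-serve Qwen/Qwen3-30B-A3B-Instruct-2507 --extra_llm_api_options extra_llm_api_options.yaml

# After: current flag
trtllm-serve Qwen/Qwen3-30B-A3B-Instruct-2507 --config extra_llm_api_options.yaml
```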
🧹 Nitpick comments (2)
docs/source/blogs/tech_blog/blog16_Accelerating_Long_Context_Inference_with_Skip_Softmax_Attention.md (2)
3-3: Address the TODO before publishing. The link to the previous tech blog points to a user branch and needs to be updated to the published version before this blog goes live.
Would you like me to help verify the correct link once the previous blog is published?
210-210: Track TODO for MInference comparison. This TODO indicates missing content comparing Skip Softmax Attention with MInference. Consider whether this comparison is essential for the blog publication or can be deferred to a future update.
Would you like me to open an issue to track this task, or should it be completed before publishing this blog?
lfr-0531 left a comment:
The tech blog looks good to me. Added some minor comments to the doc changes.
| | 0.6 | 15020.24 | 8.57 | 6431.65 | 6.25 |
| | 0.7 | 14921.12 | 8.42 | 6355.43 | 6.24 |
| | 0.8 | 14465.74 | 8.41 | 6192.77 | 6.26 |
| | 0.9 | 13791.37 | 8.40 | 6043.06 | 6.27 |
Could you add two more columns to the two tables? The added columns would show the speedup ratio compared to the baseline. It may be more intuitive for readers, I think.
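For reference, the requested column would be a simple ratio. A minimal sketch, assuming the baseline is the same benchmark run with Skip Softmax Attention disabled (the baseline number below is hypothetical, since it is not visible in this excerpt):

```bash
# speedup = throughput_at_threshold / baseline_throughput
# Hypothetical baseline throughput in tokens/s (the real baseline is not shown here):
baseline=12000
# Speedup for the 0.6-threshold row:
awk -v t=15020.24 -v b="$baseline" 'BEGIN { printf "%.2fx\n", t / b }'   # prints 1.25x
```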
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
Details
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

- --reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- --disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
- --disable-fail-fast (OPTIONAL): Disable fail fast on build/tests/infra failures.
- --skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages, and sanity check stages. Note: Does NOT update GitHub check status.
- --stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- --test-backend "pytorch, cpp" (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- --only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- --post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- --detailed-log (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- --debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.

kill

kill: Kill all running builds associated with the pull request.

skip

skip --comment COMMENT: Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline: Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
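For example, a typical invocation combining a few of the flags above (a hypothetical combination, using only flags documented in this help text):

```bash
/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe"
```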