[https://nvbugs/6104831][test] Add reproducer for gen-side blocking hang in checkGenTransferStatus(at_least=1)#13674
Draft
chienchunhung wants to merge 1 commit into NVIDIA:main from
Conversation
…ang in checkGenTransferStatus(at_least=1)

Reproduce signature NVIDIA#4 of NVBug 6104831: the disagg generation event loop self-blocks when CacheTransceiver::checkGenTransferStatus() is called with at_least_request_num=1 and the first selected requester future is not yet ready.

The PyExecutor disagg loop polls check_gen_transfer_status(1) every iteration to harvest completed receiver-side futures. On stock rc11 the inner for-loop unconditionally calls future.get() on the first entry that lands in the to-complete set, even when its wait_for(0) status is timeout. A single in-flight generation request whose context-side ready signal has not yet arrived therefore blocks the entire decoder event loop, indistinguishably from a wedge.

The new test test_check_gen_transfer_status_at_least_one_does_not_block_on_unready_future drives one full ctx/gen handshake to completion to capture a real opaque comm/cache state, then enqueues a second generation request whose context counterpart has not yet been respond_and_send_async-ed. It calls check_gen_transfer_status(1) on a worker thread with a 1s probe timeout and asserts that the call returns promptly. Pre-fix, the call hangs past the probe timeout (asserted via thread is_alive()); the fix in the matching chained PR makes the call return immediately by probing wait_for(0) and skipping unready futures when blockAll is False.

Signed-off-by: Chien-Chun Hung <2679986+chienchunhung@users.noreply.github.com>
Made-with: Cursor
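The blocking pattern and the probe-based test described above can be sketched in plain Python. This is a hypothetical analogue, not the TRT-LLM C++ implementation: `harvest` and `block_all` are illustrative stand-ins for the checkGenTransferStatus inner loop and its blockAll flag, and `fut.done()` plays the role of the `wait_for(0)` readiness probe.

```python
import concurrent.futures
import threading

def harvest(futures, block_all=False):
    """Collect results from completed futures; with block_all=False,
    skip futures that are not yet ready instead of blocking on them."""
    done = []
    for fut in futures:
        # Analogue of probing wait_for(0): skip entries whose status is
        # still "timeout" rather than calling a blocking get() on them.
        if not block_all and not fut.done():
            continue
        done.append(fut.result())
    return done

# A future that is never completed stands in for the gen request whose
# context-side ready signal has not yet arrived.
unready = concurrent.futures.Future()

# Probe the call on a worker thread, as the new test does: if harvest()
# blocked on the unready future (the pre-fix behavior, i.e. calling
# fut.result() unconditionally), the thread would still be alive after
# the probe timeout.
results = []
t = threading.Thread(target=lambda: results.append(harvest([unready])))
t.start()
t.join(timeout=1.0)
assert not t.is_alive()   # the call returned promptly instead of hanging
assert results == [[]]    # the unready future was skipped, not harvested
```

Replacing the `fut.done()` probe with an unconditional `fut.result()` reproduces the hang: the worker thread never finishes and `t.is_alive()` stays True past the probe timeout, which is exactly the pre-fix signature the test asserts on.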
chienchunhung added a commit to chienchunhung/TensorRT-LLM that referenced this pull request on Apr 30, 2026
…13673/NVIDIA#13674 into the investigation report

The four sig NVIDIA#4 / NVIDIA#5 / NVIDIA#6 PRs are now open against NVIDIA/TensorRT-LLM:

- Sig NVIDIA#4: chained pair NVIDIA#13674 (test) -> NVIDIA#13671 (fix; carries 2 commits including the test; both PRs target main so they can be merged in order)
- Sig NVIDIA#5: combined test + fix in NVIDIA#13672 (independent of the #1 chain)
- Sig NVIDIA#6: combined test + fix in NVIDIA#13673, chained on top of NVIDIA#13640 (the #1 fix is a prerequisite for the !isReady early-return path)

Update the Signature <-> PR Map, the per-signature Status / PRs blocks, and Next Steps items 2 and 3 to reference these PR numbers and mark the corresponding follow-up items as done.

Signed-off-by: Chien-Chun Hung <2679986+chienchunhung@users.noreply.github.com>
Made-with: Cursor
chienchunhung added a commit to chienchunhung/TensorRT-LLM that referenced this pull request on Apr 30, 2026
… PRs landed

Apply nine consistency fixes against the post-NVIDIA#13671/NVIDIA#13672/NVIDIA#13673/NVIDIA#13674 state of the investigation:

1. Front-matter Status block: replace the "sig NVIDIA#6 root-caused, validation in flight" wording with the post-run8 picture (all 6 TRT-LLM PRs in review; NVIDIA#7 is an out-of-scope NIXL bug; deadline work is the TRT-LLM-side fallback for NVIDIA#7).
2. Front-matter Branches in this worktree: add the four new sig NVIDIA#4 / NVIDIA#5 / NVIDIA#6 branches.
3. Front-matter Related PRs: add NVIDIA#13674 / NVIDIA#13671 / NVIDIA#13672 / NVIDIA#13673 with the chained-on-NVIDIA#13640 callout for NVIDIA#13673.
4. "Configurations that did not reproduce": NVIDIA#5 and NVIDIA#6 now do reproduce in single-process unit tests via the new tests added by NVIDIA#13672 and NVIDIA#13673; only NVIDIA#3 and NVIDIA#7 remain field-only.
5. Phase 6 close: the sig NVIDIA#4 regression test is no longer isolated in local/rc11-disagg-repro - it is now in the chained NVIDIA#13674 / NVIDIA#13671 pair.
6. Signature NVIDIA#6 section: drop the "(suspected)" qualifier and the "(most likely)" hedging on Where-it-lives - both are confirmed by run7 and run8. Rename the section header to describe the actual failure shape (recv buffer index leak via !isReady early return wedging assignBufferIndexForRecv) rather than the early control-path-stall hypothesis. Mirror the rename in the Signature - PR Map row.
7. File / Branch Index "New unit tests": add the new sig NVIDIA#5 (test_cancel_queued_gen_request_fulfills_receiver_future) and sig NVIDIA#6 (test_cancelled_after_ready_does_not_leak_recv_buffer_index, NIXL backend) tests.
8. Signature NVIDIA#3 status hypothesis: add a one-paragraph note that NIXL (signature NVIDIA#7) is now also a candidate cause of the half-initialized state, so a future field hit is not misattributed to a fresh TRT-LLM bug.
9. Phase 5 narrative: add a forward link explaining that the underlying terminal driver of the Phase-5 wedge was already NVIDIA#7 (NIXL), but NVIDIA#4 was the visible TRT-LLM-side symptom because the gen event loop was self-blocking before any of the later layers could surface.

Signed-off-by: Chien-Chun Hung <2679986+chienchunhung@users.noreply.github.com>
Made-with: Cursor
@coderabbitai summary
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update the tava architecture diagram if there is a significant design change in the PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment
/bot help.