
[None][test] add Nemotron Ultra V3 AutoDeploy accuracy test#13658

Open
tcherckez-nvidia wants to merge 2 commits into NVIDIA:main from nv-auto-deploy:ultra-v3-tests

Conversation

Collaborator

@tcherckez-nvidia tcherckez-nvidia commented Apr 30, 2026

Summary by CodeRabbit

  • New Features

    • Introduced Nemotron-Ultra-V3 model with optimized performance configurations for enhanced inference capabilities.
    • Established accuracy benchmarks across standard evaluation datasets.
  • Tests

    • Added comprehensive integration accuracy tests spanning multiple hardware configurations and inference backends.
    • Implemented test coverage for both single and distributed multi-GPU deployment scenarios.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

@tcherckez-nvidia
Collaborator Author

/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_H100-4_GPUs-AutoDeploy-1"

@coderabbitai
Contributor

coderabbitai Bot commented Apr 30, 2026

📝 Walkthrough

Walkthrough

This pull request adds support for the Nemotron-Ultra-V3 model by introducing model registry configuration, accuracy reference benchmarks for GSM8K and MMLU, an integration test class with parametrized testing across multiple backends and configurations, test list entries, and HuggingFace Hub ID mapping support.

Changes

  • Model Configuration — examples/auto_deploy/model_registry/configs/ultra_v3.yaml: new model registry config for ultra_v3 with the TensorRT-LLM runtime, CUDA graph compilation, chunked prefill, the mamba attention backend, and compilation optimizations including MoE fusion, SSM attention caching, and sharding detection via the SYMM_MEM allreduce strategy.
  • Accuracy References — tests/integration/defs/accuracy/references/gsm8k.yaml, tests/integration/defs/accuracy/references/mmlu.yaml: new accuracy reference entries for nvidia/Nemotron-Ultra-V3 with NVFP4 quantization and FP8 KV-cache quantization, specifying GSM8K accuracy of 91.797 and MMLU accuracy of 85.70.
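The reference values recorded in these YAML files act as pass thresholds for the accuracy harness. As a rough sketch of how such a gate could compare a measured score against a stored reference (the real comparison logic lives in the accuracy harness; the function name, tolerance, and table layout here are purely illustrative):

```python
# Hypothetical mirror of the reference entries above; the real values live
# in tests/integration/defs/accuracy/references/{gsm8k,mmlu}.yaml.
REFERENCES = {
    ("nvidia/Nemotron-Ultra-V3", "GSM8K"): 91.797,
    ("nvidia/Nemotron-Ultra-V3", "MMLU"): 85.70,
}

def passes(model: str, task: str, measured: float, rel_tol: float = 0.02) -> bool:
    """Pass when the measured score is within rel_tol below the reference."""
    ref = REFERENCES[(model, task)]
    return measured >= ref * (1.0 - rel_tol)

print(passes("nvidia/Nemotron-Ultra-V3", "GSM8K", 91.0))  # -> True
```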
  • Integration Tests — tests/integration/defs/accuracy/test_llm_api_autodeploy.py, tests/integration/test_lists/test-db/l0_dgx_b200.yml: new test class TestNemotronUltraV3 with parametrized accuracy tests across flashinfer and trtllm backends, attention-DP sharding modes, and multi-GPU world sizes (4, 8); corresponding test list entries added for l0_dgx_b200 configurations.
  • Data Mapping — tests/test_common/llm_data.py: added a HuggingFace Hub ID mapping for nvidia/Nemotron-Ultra-V3-NVFP4 to local model directory resolution in the mock_snapshot_download function.
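The mapping change can be pictured as a small lookup that redirects a known Hub ID to a local checkout before falling back to normal download behavior. The table contents and helper names below are illustrative, not the exact code in llm_data.py:

```python
from pathlib import Path

# Hypothetical mirror of the hub-ID -> local-directory table that
# mock_snapshot_download consults; the real table lives in
# tests/test_common/llm_data.py.
LOCAL_MODEL_DIRS = {
    "nvidia/Nemotron-Ultra-V3-NVFP4": "Nemotron-Ultra-V3-NVFP4",
}

def resolve_model_path(hub_id: str, llm_models_root: str) -> str:
    """Return a local model directory for known hub IDs; otherwise return
    the hub ID unchanged so the caller can fall through to a real download."""
    local_name = LOCAL_MODEL_DIRS.get(hub_id)
    if local_name is None:
        return hub_id
    return str(Path(llm_models_root) / local_name)

print(resolve_model_path("nvidia/Nemotron-Ultra-V3-NVFP4", "/models"))
# -> /models/Nemotron-Ultra-V3-NVFP4
```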

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks | ✅ 3 passed | ❌ 2 failed

❌ Failed checks (2 warnings)

  • Description check ⚠️ Warning — The PR description is incomplete; it contains only the template structure with empty Description and Test Coverage sections and lacks actual implementation details. Resolution: fill in the Description section explaining what test is being added and why, and the Test Coverage section listing the relevant test cases added (e.g., TestNemotronUltraV3 with MMLU/GSM8K accuracy tests).
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (3 passed)

  • Title check ✅ Passed — The title clearly summarizes the main change: adding an AutoDeploy accuracy test for the Nemotron Ultra V3 model.
  • Linked Issues check ✅ Passed — Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check ✅ Passed — Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Warning

Review ran into problems

🔥 Problems

Timed out fetching pipeline failures after 30000ms


Review rate limit: 8/10 reviews remaining, refill in 10 minutes and 39 seconds.

Comment @coderabbitai help to get the list of available commands and usage tips.

@tensorrt-cicd
Collaborator

PR_Github #46356 [ run ] triggered by Bot. Commit: 5f905e2 Link to invocation


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/integration/test_lists/test-db/l0_dgx_b200.yml (1)

326-335: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add a pre-merge trtllm Ultra-V3 selection.

The new model-registry path is trtllm-oriented, but this pre-merge block only schedules the flashinfer variant. That leaves the primary runtime unvalidated until post-merge.

Suggested fix

```diff
   - accuracy/test_llm_api_autodeploy.py::TestNemotronUltraV3::test_accuracy[nvfp4-4-attn_dp_off-flashinfer]
+  - accuracy/test_llm_api_autodeploy.py::TestNemotronUltraV3::test_accuracy[nvfp4-4-attn_dp_off-trtllm]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/integration/test_lists/test-db/l0_dgx_b200.yml` around lines 326 - 335,
The pre-merge test list schedules only the flashinfer variant for Ultra-V3,
leaving the new trtllm-oriented model-registry path untested; update the
pre-merge block in this YAML to include a trtllm Ultra-V3 entry (mirror the
existing
accuracy/test_llm_api_autodeploy.py::TestNemotronUltraV3::test_accuracy[nvfp4-4-attn_dp_off-flashinfer]
line) so the corresponding trtllm runtime is validated pre-merge, and ensure any
grouping keys or tags used for pre-merge selection reference the trtllm variant
as well.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 46ca0eed-98ad-45c6-95c9-766a12295a41

📥 Commits

Reviewing files that changed from the base of the PR and between 24228e9 and 5f905e2.

📒 Files selected for processing (6)
  • examples/auto_deploy/model_registry/configs/ultra_v3.yaml
  • tests/integration/defs/accuracy/references/gsm8k.yaml
  • tests/integration/defs/accuracy/references/mmlu.yaml
  • tests/integration/defs/accuracy/test_llm_api_autodeploy.py
  • tests/integration/test_lists/test-db/l0_dgx_b200.yml
  • tests/test_common/llm_data.py

Comment on lines +722 to +785
```python
class TestNemotronUltraV3(LlmapiAccuracyTestHarness):
    MODEL_NAME = "nvidia/Nemotron-Ultra-V3"
    CONFIG_YAML = str(
        Path(get_llm_root()) / "examples" / "auto_deploy" / "model_registry" /
        "configs" / "ultra_v3.yaml")
    MODEL_PATHS = {
        "nvfp4": hf_id_to_local_model_dir("nvidia/Nemotron-Ultra-V3-NVFP4"),
    }

    def get_default_sampling_params(self):
        # Use end_id=None to allow framework to read tokenizer's EOS tokens [2, 11]
        # and enable task-specific stop sequences (critical for GSM8K)
        return SamplingParams(end_id=None,
                              pad_id=None,
                              n=1,
                              use_beam_search=False)

    @pytest.mark.parametrize("attn_backend", ["flashinfer", "trtllm"])
    @pytest.mark.parametrize("enable_attention_dp", [False, True],
                             ids=["attn_dp_off", "attn_dp_on"])
    @pytest.mark.parametrize("world_size", [4, 8])
    @pytest.mark.parametrize("model_id", ["nvfp4"])
    def test_accuracy(self, model_id, world_size, enable_attention_dp,
                      attn_backend):
        if get_device_count() < world_size:
            pytest.skip(f"Not enough devices for world_size={world_size}")

        model_path = self.MODEL_PATHS[model_id]
        kwargs = {}
        kwargs["attn_backend"] = attn_backend
        kwargs.setdefault("transforms", {}).setdefault(
            "detect_sharding", {})["enable_attention_dp"] = enable_attention_dp

        print_memory_usage("test start")
        with AutoDeployLLM(model=model_path,
                           tokenizer=model_path,
                           world_size=world_size,
                           yaml_extra=[self.CONFIG_YAML],
                           trust_remote_code=True,
                           **kwargs) as llm:
            _set_quant_config(llm, model_id)
            print_memory_usage("after engine build")

            sampling_params = self.get_default_sampling_params()
            task = MMLU(self.MODEL_NAME)
            task.evaluate(llm, sampling_params=sampling_params)

            # Ultra V3 uses extended thinking: enable_thinking=True so the model
            # can use <think>...</think> CoT before the #### answer.
            # Increase max_tokens to 1024 to allow the full thinking chain to
            # complete before the "#### N" answer token -- 256 is too short.
            sampling_params.max_tokens = 1024
            task = GSM8K(self.MODEL_NAME)
            task.NUM_SAMPLES = 128
            task.evaluate(llm,
                          sampling_params=sampling_params,
                          extra_evaluator_kwargs={
                              "apply_chat_template": True,
                              "chat_template_kwargs": {
                                  "enable_thinking": True
                              },
                          })

        print_memory_usage("after evaluation")
```
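The four stacked parametrize decorators in this class multiply into test IDs of the form model_id-world_size-attn_dp-backend, which is where test-list entries such as nvfp4-4-attn_dp_off-flashinfer come from (the decorator closest to the function contributes the leftmost ID segment). A minimal sketch of that expansion, with the axes transcribed from the decorators above:

```python
from itertools import product

# Parameter axes as declared by the stacked @pytest.mark.parametrize
# decorators on TestNemotronUltraV3.test_accuracy.
model_ids = ["nvfp4"]
world_sizes = [4, 8]
attn_dp_modes = ["attn_dp_off", "attn_dp_on"]
attn_backends = ["flashinfer", "trtllm"]

def expand_test_ids():
    """Enumerate the parametrized test IDs pytest generates for this class."""
    return [
        f"{m}-{w}-{dp}-{b}"
        for m, w, dp, b in product(model_ids, world_sizes, attn_dp_modes,
                                   attn_backends)
    ]

ids = expand_test_ids()
print(len(ids))  # -> 8 variants total
print(ids[0])    # -> nvfp4-4-attn_dp_off-flashinfer
```

This is also why the reviewers below push back on over-parameterization: a single extra axis doubles the number of multi-GPU integration runs.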
Contributor

@coderabbitai coderabbitai Bot Apr 30, 2026


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Gate Ultra V3 NVFP4 on Blackwell.

This class only exercises the NVFP4 path, but it never skips pre-Blackwell GPUs. If it runs outside the B200-only list, it can fail before reaching the accuracy assertions.

Suggested fix

```diff
 class TestNemotronUltraV3(LlmapiAccuracyTestHarness):
@@
-    @pytest.mark.parametrize("attn_backend", ["flashinfer", "trtllm"])
+    @skip_pre_blackwell
+    @pytest.mark.parametrize("attn_backend", ["flashinfer", "trtllm"])
```
🧰 Tools
🪛 Ruff (0.15.12)

[warning] 727-729: Mutable default value for class attribute

(RUF012)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/integration/defs/accuracy/test_llm_api_autodeploy.py` around lines 722
- 785, The test exercises only the NVFP4 path (MODEL_PATHS["nvfp4"]) so add an
early guard in TestNemotronUltraV3.test_accuracy to skip when the host GPUs are
not Blackwell/B200-class; specifically, after the existing get_device_count()
check, call the project helper that detects Blackwell/B200 GPUs (e.g.,
is_blackwell() or the equivalent GPU-family detection helper) and pytest.skip
with a descriptive message if it returns False, ensuring the test only runs on
Blackwell systems.

Collaborator


+1 @tcherckez-nvidia

```python
    @skip_pre_blackwell
    @pytest.mark.skip_less_device_memory(180000)
```

Contributor


Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!

@tensorrt-cicd
Collaborator

PR_Github #46356 [ run ] completed with state SUCCESS. Commit: 5f905e2
/LLM/main/L0_MergeRequest_PR pipeline #36443 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

CI Agent Failure Analysis

Link to invocation

Comment on lines +739 to +741
```python
    @pytest.mark.parametrize("attn_backend", ["flashinfer", "trtllm"])
    @pytest.mark.parametrize("enable_attention_dp", [False, True],
                             ids=["attn_dp_off", "attn_dp_on"])
```
Collaborator


We need to be careful with over-parameterizing. I realize this is copied from SuperV3, but I don't think it's maintainable to keep testing all these variants in integration tests.
I suggest focusing on testing the config from ultra_v3.yaml without any overrides (except, maybe, for world size).

Comment on lines +722 to +785
(same code as quoted above)
Collaborator


+1 @tcherckez-nvidia

```python
    @skip_pre_blackwell
    @pytest.mark.skip_less_device_memory(180000)
```

- accuracy/test_llm_api_autodeploy.py::TestNemotronSuperV3::test_accuracy[fp8-4-attn_dp_off-trtllm]
- accuracy/test_llm_api_autodeploy.py::TestNemotronSuperV3::test_accuracy[fp8-4-attn_dp_on-trtllm]
- accuracy/test_llm_api_autodeploy.py::TestNemotronSuperV3::test_accuracy[nvfp4-4-attn_dp_on-trtllm]
- accuracy/test_llm_api_autodeploy.py::TestNemotronUltraV3::test_accuracy[nvfp4-4-attn_dp_off-flashinfer]
Collaborator


Config defines trtllm attn backend, so the most important config we want to target in pre-merge is

  - accuracy/test_llm_api_autodeploy.py::TestNemotronUltraV3::test_accuracy[nvfp4-4-attn_dp_off-trtllm]

anything else can go to post merge

Comment on lines +353 to +355
- accuracy/test_llm_api_autodeploy.py::TestNemotronUltraV3::test_accuracy[nvfp4-4-attn_dp_on-flashinfer]
- accuracy/test_llm_api_autodeploy.py::TestNemotronUltraV3::test_accuracy[nvfp4-4-attn_dp_off-trtllm]
- accuracy/test_llm_api_autodeploy.py::TestNemotronUltraV3::test_accuracy[nvfp4-4-attn_dp_on-trtllm]
Collaborator


I see you put this in post-merge to limit pre-merge runtime; that's good. I'd still stick with just the YAML config. This way we make sure it's being tested even if someone modifies only the config.

Signed-off-by: Tal Cherckez <127761168+tcherckez-nvidia@users.noreply.github.com>
Signed-off-by: Tal Cherckez <127761168+tcherckez-nvidia@users.noreply.github.com>
@tcherckez-nvidia
Collaborator Author

/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_H100-4_GPUs-AutoDeploy-1"

@tensorrt-cicd
Collaborator

PR_Github #46585 [ run ] triggered by Bot. Commit: eee2b19 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #46585 [ run ] completed with state SUCCESS. Commit: eee2b19
/LLM/main/L0_MergeRequest_PR pipeline #36634 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

CI Agent Failure Analysis

Link to invocation
