[None][test] add Nemotron Ultra V3 AutoDeploy accuracy test #13658
tcherckez-nvidia wants to merge 2 commits into NVIDIA:main from
Conversation
Force-pushed from bf4fba1 to 5f905e2
/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_H100-4_GPUs-AutoDeploy-1"
📝 Walkthrough
This pull request adds support for the Nemotron-Ultra-V3 model by introducing model registry configuration, accuracy reference benchmarks for GSM8K and MMLU, an integration test class with parametrized testing across multiple backends and configurations, test list entries, and HuggingFace Hub ID mapping support.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ 3 passed, ❌ 2 failed (2 warnings)
⚠️ Warning: the review timed out fetching pipeline failures after 30000ms.
PR_Github #46356 [ run ] triggered by Bot. Commit:
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/integration/test_lists/test-db/l0_dgx_b200.yml (1)
326-335: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Add a pre-merge trtllm Ultra-V3 selection.
The new model-registry path is trtllm-oriented, but this pre-merge block only schedules the flashinfer variant. That leaves the primary runtime unvalidated until post-merge.
Suggested fix
```diff
- - accuracy/test_llm_api_autodeploy.py::TestNemotronUltraV3::test_accuracy[nvfp4-4-attn_dp_off-flashinfer]
+ - accuracy/test_llm_api_autodeploy.py::TestNemotronUltraV3::test_accuracy[nvfp4-4-attn_dp_off-trtllm]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/test_lists/test-db/l0_dgx_b200.yml` around lines 326 - 335, The pre-merge test list schedules only the flashinfer variant for Ultra-V3, leaving the new trtllm-oriented model-registry path untested; update the pre-merge block in this YAML to include a trtllm Ultra-V3 entry (mirror the existing accuracy/test_llm_api_autodeploy.py::TestNemotronUltraV3::test_accuracy[nvfp4-4-attn_dp_off-flashinfer] line) so the corresponding trtllm runtime is validated pre-merge, and ensure any grouping keys or tags used for pre-merge selection reference the trtllm variant as well.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration/defs/accuracy/test_llm_api_autodeploy.py`:
- Around line 722-785: The test exercises only the NVFP4 path
(MODEL_PATHS["nvfp4"]) so add an early guard in
TestNemotronUltraV3.test_accuracy to skip when the host GPUs are not
Blackwell/B200-class; specifically, after the existing get_device_count() check,
call the project helper that detects Blackwell/B200 GPUs (e.g., is_blackwell()
or the equivalent GPU-family detection helper) and pytest.skip with a
descriptive message if it returns False, ensuring the test only runs on
Blackwell systems.
---
Outside diff comments:
In `@tests/integration/test_lists/test-db/l0_dgx_b200.yml`:
- Around line 326-335: The pre-merge test list schedules only the flashinfer
variant for Ultra-V3, leaving the new trtllm-oriented model-registry path
untested; update the pre-merge block in this YAML to include a trtllm Ultra-V3
entry (mirror the existing
accuracy/test_llm_api_autodeploy.py::TestNemotronUltraV3::test_accuracy[nvfp4-4-attn_dp_off-flashinfer]
line) so the corresponding trtllm runtime is validated pre-merge, and ensure any
grouping keys or tags used for pre-merge selection reference the trtllm variant
as well.
📒 Files selected for processing (6)
- examples/auto_deploy/model_registry/configs/ultra_v3.yaml
- tests/integration/defs/accuracy/references/gsm8k.yaml
- tests/integration/defs/accuracy/references/mmlu.yaml
- tests/integration/defs/accuracy/test_llm_api_autodeploy.py
- tests/integration/test_lists/test-db/l0_dgx_b200.yml
- tests/test_common/llm_data.py
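The last file in that list carries the new HuggingFace Hub ID mapping the test relies on. For orientation only, here is a minimal sketch of what a helper like `hf_id_to_local_model_dir` could look like; the `LLM_MODELS_ROOT` convention and the fallback-to-Hub-ID behavior are assumptions, not the actual implementation in llm_data.py:

```python
import os
from pathlib import Path


def hf_id_to_local_model_dir(hf_id: str) -> str:
    """Map a HuggingFace Hub ID to a local model directory if one exists.

    Assumption: local checkouts live under LLM_MODELS_ROOT, keyed by the
    repo name. If no local copy is found, fall back to the Hub ID so the
    loader can download the weights instead.
    """
    models_root = os.environ.get("LLM_MODELS_ROOT", "")
    local_dir = Path(models_root) / hf_id.split("/")[-1]
    if models_root and local_dir.is_dir():
        return str(local_dir)
    return hf_id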
```python
class TestNemotronUltraV3(LlmapiAccuracyTestHarness):
    MODEL_NAME = "nvidia/Nemotron-Ultra-V3"
    CONFIG_YAML = str(
        Path(get_llm_root()) / "examples" / "auto_deploy" / "model_registry" /
        "configs" / "ultra_v3.yaml")
    MODEL_PATHS = {
        "nvfp4": hf_id_to_local_model_dir("nvidia/Nemotron-Ultra-V3-NVFP4"),
    }

    def get_default_sampling_params(self):
        # Use end_id=None to allow framework to read tokenizer's EOS tokens [2, 11]
        # and enable task-specific stop sequences (critical for GSM8K)
        return SamplingParams(end_id=None,
                              pad_id=None,
                              n=1,
                              use_beam_search=False)

    @pytest.mark.parametrize("attn_backend", ["flashinfer", "trtllm"])
    @pytest.mark.parametrize("enable_attention_dp", [False, True],
                             ids=["attn_dp_off", "attn_dp_on"])
    @pytest.mark.parametrize("world_size", [4, 8])
    @pytest.mark.parametrize("model_id", ["nvfp4"])
    def test_accuracy(self, model_id, world_size, enable_attention_dp,
                      attn_backend):
        if get_device_count() < world_size:
            pytest.skip(f"Not enough devices for world_size={world_size}")

        model_path = self.MODEL_PATHS[model_id]
        kwargs = {}
        kwargs["attn_backend"] = attn_backend
        kwargs.setdefault("transforms", {}).setdefault(
            "detect_sharding", {})["enable_attention_dp"] = enable_attention_dp

        print_memory_usage("test start")
        with AutoDeployLLM(model=model_path,
                           tokenizer=model_path,
                           world_size=world_size,
                           yaml_extra=[self.CONFIG_YAML],
                           trust_remote_code=True,
                           **kwargs) as llm:
            _set_quant_config(llm, model_id)
            print_memory_usage("after engine build")

            sampling_params = self.get_default_sampling_params()
            task = MMLU(self.MODEL_NAME)
            task.evaluate(llm, sampling_params=sampling_params)

            # Ultra V3 uses extended thinking: enable_thinking=True so the model
            # can use <think>...</think> CoT before the #### answer.
            # Increase max_tokens to 1024 to allow the full thinking chain to
            # complete before the "#### N" answer token -- 256 is too short.
            sampling_params.max_tokens = 1024
            task = GSM8K(self.MODEL_NAME)
            task.NUM_SAMPLES = 128
            task.evaluate(llm,
                          sampling_params=sampling_params,
                          extra_evaluator_kwargs={
                              "apply_chat_template": True,
                              "chat_template_kwargs": {
                                  "enable_thinking": True
                              },
                          })

            print_memory_usage("after evaluation")
```
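The `print_memory_usage` helper called at the checkpoints above is not shown in this hunk. As a rough sketch only, assuming it reports CUDA allocator stats via `torch.cuda` (the real helper in the test module may differ):

```python
import torch


def print_memory_usage(tag: str) -> None:
    # Report current CUDA allocator stats with a stage label,
    # e.g. "test start" or "after engine build".
    if not torch.cuda.is_available():
        print(f"[{tag}] CUDA not available")
        return
    allocated = torch.cuda.memory_allocated() / 1024**3
    reserved = torch.cuda.memory_reserved() / 1024**3
    print(f"[{tag}] allocated={allocated:.2f} GiB, reserved={reserved:.2f} GiB")
```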
Gate Ultra V3 NVFP4 on Blackwell.
This class only exercises the NVFP4 path, but it never skips pre-Blackwell GPUs. If it runs outside the B200-only list, it can fail before reaching the accuracy assertions.
Suggested fix
```diff
 class TestNemotronUltraV3(LlmapiAccuracyTestHarness):
@@
-    @pytest.mark.parametrize("attn_backend", ["flashinfer", "trtllm"])
+    @skip_pre_blackwell
+    @pytest.mark.parametrize("attn_backend", ["flashinfer", "trtllm"])
```
🧰 Tools
🪛 Ruff (0.15.12)
[warning] 727-729: Mutable default value for class attribute (RUF012)
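For reference, the conventional way to address RUF012 when the mapping is a read-only class constant is a `ClassVar` annotation; a sketch (whether the project prefers this or a `# noqa` is up to the authors):

```python
from typing import ClassVar


class TestNemotronUltraV3(LlmapiAccuracyTestHarness):
    # Annotating the dict as a ClassVar tells Ruff the mutable default is an
    # intentional class-level constant, not a per-instance default.
    MODEL_PATHS: ClassVar[dict[str, str]] = {
        "nvfp4": hf_id_to_local_model_dir("nvidia/Nemotron-Ultra-V3-NVFP4"),
    }
```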
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/integration/defs/accuracy/test_llm_api_autodeploy.py` around lines 722
- 785, The test exercises only the NVFP4 path (MODEL_PATHS["nvfp4"]) so add an
early guard in TestNemotronUltraV3.test_accuracy to skip when the host GPUs are
not Blackwell/B200-class; specifically, after the existing get_device_count()
check, call the project helper that detects Blackwell/B200 GPUs (e.g.,
is_blackwell() or the equivalent GPU-family detection helper) and pytest.skip
with a descriptive message if it returns False, ensuring the test only runs on
Blackwell systems.
```python
@skip_pre_blackwell
@pytest.mark.skip_less_device_memory(180000)
```
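Applied to the test, these would sit above the parametrize stack, roughly as in this sketch (the import location of `skip_pre_blackwell` is an assumption; both decorator names come from the review):

```python
@skip_pre_blackwell                           # NVFP4 path needs Blackwell-class GPUs
@pytest.mark.skip_less_device_memory(180000)  # skip hosts with too little device memory
@pytest.mark.parametrize("attn_backend", ["flashinfer", "trtllm"])
@pytest.mark.parametrize("enable_attention_dp", [False, True],
                         ids=["attn_dp_off", "attn_dp_on"])
@pytest.mark.parametrize("world_size", [4, 8])
@pytest.mark.parametrize("model_id", ["nvfp4"])
def test_accuracy(self, model_id, world_size, enable_attention_dp,
                  attn_backend):
    ...
```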
Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!
PR_Github #46356 [ run ] completed with state
```python
    @pytest.mark.parametrize("attn_backend", ["flashinfer", "trtllm"])
    @pytest.mark.parametrize("enable_attention_dp", [False, True],
                             ids=["attn_dp_off", "attn_dp_on"])
```
Need to be careful with over-parameterizing. I realize this is copied from SuperV3, but I don't think it's maintainable to keep testing all these variants in integration tests.
I suggest focusing on testing the config from the ultra_v3.yaml without any overrides (except for world size, maybe).
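A sketch of what that slimmed-down variant could look like, keeping only the world-size parametrization and letting ultra_v3.yaml drive the backend and sharding choices (names are reused from the hunk above; the reduced signature is hypothetical):

```python
@pytest.mark.parametrize("world_size", [4, 8])
def test_accuracy(self, world_size):
    if get_device_count() < world_size:
        pytest.skip(f"Not enough devices for world_size={world_size}")

    model_path = self.MODEL_PATHS["nvfp4"]
    # No attn_backend / enable_attention_dp overrides: the registry config
    # in ultra_v3.yaml is exercised exactly as shipped.
    with AutoDeployLLM(model=model_path,
                       tokenizer=model_path,
                       world_size=world_size,
                       yaml_extra=[self.CONFIG_YAML],
                       trust_remote_code=True) as llm:
        _set_quant_config(llm, "nvfp4")
        sampling_params = self.get_default_sampling_params()
        MMLU(self.MODEL_NAME).evaluate(llm, sampling_params=sampling_params)
```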
```yaml
  - accuracy/test_llm_api_autodeploy.py::TestNemotronSuperV3::test_accuracy[fp8-4-attn_dp_off-trtllm]
  - accuracy/test_llm_api_autodeploy.py::TestNemotronSuperV3::test_accuracy[fp8-4-attn_dp_on-trtllm]
  - accuracy/test_llm_api_autodeploy.py::TestNemotronSuperV3::test_accuracy[nvfp4-4-attn_dp_on-trtllm]
  - accuracy/test_llm_api_autodeploy.py::TestNemotronUltraV3::test_accuracy[nvfp4-4-attn_dp_off-flashinfer]
```
The config defines the trtllm attn backend, so the most important config to target in pre-merge is
- accuracy/test_llm_api_autodeploy.py::TestNemotronUltraV3::test_accuracy[nvfp4-4-attn_dp_off-trtllm]
Anything else can go to post-merge.
```yaml
  - accuracy/test_llm_api_autodeploy.py::TestNemotronUltraV3::test_accuracy[nvfp4-4-attn_dp_on-flashinfer]
  - accuracy/test_llm_api_autodeploy.py::TestNemotronUltraV3::test_accuracy[nvfp4-4-attn_dp_off-trtllm]
  - accuracy/test_llm_api_autodeploy.py::TestNemotronUltraV3::test_accuracy[nvfp4-4-attn_dp_on-trtllm]
```
I see you put these in post-merge to limit pre-merge runtime; that's good. I'd still stick with just the YAML config, so we make sure it's being tested even if someone modifies just the config.
Signed-off-by: Tal Cherckez <127761168+tcherckez-nvidia@users.noreply.github.com>
Signed-off-by: Tal Cherckez <127761168+tcherckez-nvidia@users.noreply.github.com>
Force-pushed from 5f905e2 to eee2b19
/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_H100-4_GPUs-AutoDeploy-1"
PR_Github #46585 [ run ] triggered by Bot. Commit:
PR_Github #46585 [ run ] completed with state
Summary by CodeRabbit
New Features
Tests
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment /bot help.