
[Models] support GLM4.7 Flash #7139

Merged
chang-wenbin merged 3 commits into PaddlePaddle:develop from chang-wenbin:Ernie_MLA
Apr 3, 2026

Conversation

@chang-wenbin (Collaborator) commented Apr 1, 2026

Motivation

support GLM4.7 Flash

💡 If this PR is a cherry-pick, the PR title must start with the [Cherry-Pick] label and end with the original PR ID, e.g. [Cherry-Pick][CI] Add check trigger and logic(#5191).


Modifications

support GLM4.7 Flash

Usage or Command

Accuracy Tests

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code and run pre-commit before committing.
  • Add unit tests, or explain in this PR why none are included.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@paddle-bot bot commented Apr 1, 2026

Thanks for your contribution!

@codecov-commenter commented Apr 1, 2026

Codecov Report

❌ Patch coverage is 44.44444% with 20 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@3651113). Learn more about missing BASE report.

Files with missing lines Patch % Lines
...executor/layers/attention/mla_attention_backend.py 11.76% 11 Missing and 4 partials ⚠️
fastdeploy/model_executor/models/deepseek_v3.py 72.22% 4 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #7139   +/-   ##
==========================================
  Coverage           ?   73.70%           
==========================================
  Files              ?      376           
  Lines              ?    52876           
  Branches           ?     8244           
==========================================
  Hits               ?    38973           
  Misses             ?    11193           
  Partials           ?     2710           
Flag Coverage Δ
GPU 73.70% <44.44%> (?)

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

Copilot AI (Contributor) left a comment

Pull request overview

This PR wires up the DeepSeekV3/MLA path for the newly supported model variant (GLM4.7 Flash): position_ids and the encoder mask are consolidated into ForwardMeta, the MLA backend gains padding for head counts below 64, and a new architecture registration class is added.
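The consolidation amounts to carrying the two values as optional fields on the forward metadata object instead of threading them through every call. A minimal sketch (an illustrative subset only, not FastDeploy's full ForwardMeta definition):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class ForwardMeta:
    # Illustrative subset: the real FastDeploy ForwardMeta carries many
    # more fields. These two optional fields replace explicit arguments
    # that previously had to be passed down the whole attention call chain.
    position_ids: Optional[Any] = None
    mask_encoder_batch: Optional[Any] = None
```

Callers that do not need the MLA/DSA path can simply leave both fields at their `None` defaults.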

Changes:

  • Pass DeepSeekV3's position_ids / mask_encoder_batch through ForwardMeta instead of as explicit parameters, reducing the number of arguments threaded through the call chain.
  • In the MLAAttentionBackend decode path, pad and slice Q/out when num_heads < 64 (TP > 1) to satisfy the kernel's constraints.
  • Register the new Glm4MoeLiteForCausalLM architecture, reusing the DeepseekV3 implementation.
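The decode-path padding described above can be sketched as follows. This is a minimal NumPy illustration, not the actual Paddle kernel code; the 64-head minimum and the [tokens, heads, head_dim] layout are assumptions based on the PR description:

```python
import numpy as np

# Assumed minimum head count required by the MLA decode kernel.
KERNEL_MIN_HEADS = 64

def pad_heads_for_decode(q: np.ndarray):
    """Pad a [tokens, num_heads, head_dim] query tensor with zero heads
    up to the kernel minimum, returning the padded tensor and the
    original head count (needed to trim the output later)."""
    tokens, num_heads, head_dim = q.shape
    if num_heads >= KERNEL_MIN_HEADS:
        return q, num_heads
    padded = np.zeros((tokens, KERNEL_MIN_HEADS, head_dim), dtype=q.dtype)
    padded[:, :num_heads, :] = q
    return padded, num_heads

def trim_heads_after_decode(out: np.ndarray, num_heads: int) -> np.ndarray:
    """Slice the kernel output back down to the real head count."""
    return out[:, :num_heads, :]
```

With TP > 1 a model can end up with fewer than 64 heads per rank (e.g. 20), so padding before the kernel call and trimming afterward keeps the kernel's shape contract intact without changing results for the real heads.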

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 5 comments.

File | Description
fastdeploy/model_executor/models/deepseek_v3.py | Pass position_ids / mask via ForwardMeta and register the Glm4MoeLite architecture
fastdeploy/model_executor/layers/attention/mla_attention_backend.py | Add head-padding support to MLA decode and adjust the rope_scaling check
fastdeploy/model_executor/forward_meta.py | Add position_ids and mask_encoder_batch fields for MLA/DSA
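The new architecture registration reuses the existing DeepseekV3 implementation under a second name. A hypothetical sketch of that pattern (the registry names and decorator are illustrative, not FastDeploy's actual registration API):

```python
# Hypothetical model registry: one implementation class can serve
# multiple architecture names, so Glm4MoeLiteForCausalLM resolves to
# the same class as DeepseekV3ForCausalLM.
MODEL_REGISTRY = {}

def register_model(*arch_names):
    def wrap(cls):
        for name in arch_names:
            MODEL_REGISTRY[name] = cls
        return cls
    return wrap

@register_model("DeepseekV3ForCausalLM", "Glm4MoeLiteForCausalLM")
class DeepseekV3ForCausalLM:
    """Shared implementation serving both architecture names."""
```

Aliasing avoids duplicating the model code when a new checkpoint is architecturally compatible with an existing implementation.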
Comments suppressed due to low confidence (1)

fastdeploy/model_executor/layers/attention/mla_attention_backend.py:295

  • This check also uses getattr(self.rope_scaling, "factor", None) to decide whether rope scaling applies, but rope_scaling is a dict in the config, so getattr always returns None and the scaling logic never fires; the branch body also keeps reading fd_config.model_config.rope_scaling, which is inconsistent with the value cached in self.rope_scaling above. Suggestion: switch to a dict-key check (e.g. self.rope_scaling.get("factor")) and read all values from self.rope_scaling inside the branch.
        self.rope_scaling = getattr(fd_config.model_config, "rope_scaling", None)
        if self.rope_scaling and getattr(self.rope_scaling, "factor", None):
            # if fd_config.model_config.rope_scaling:
            mscale_all_dim = fd_config.model_config.rope_scaling.get("mscale_all_dim", False)  # 1.0
            scaling_factor = fd_config.model_config.rope_scaling["factor"]  # 40
            mscale = yarn_get_mscale(scaling_factor, float(mscale_all_dim))
            self.attn_softmax_scale = self.attn_softmax_scale * mscale * mscale
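A corrected version along the lines the reviewer suggests might look like the sketch below. The function boundaries are assumptions made to keep the example self-contained; yarn_get_mscale is reproduced here with the standard YaRN mscale formula used by DeepSeek-style MLA attention:

```python
import math

def yarn_get_mscale(scale: float = 1.0, mscale: float = 1.0) -> float:
    # Standard YaRN mscale formula: no adjustment when scale <= 1.
    if scale <= 1:
        return 1.0
    return 0.1 * mscale * math.log(scale) + 1.0

def apply_rope_scaling(attn_softmax_scale: float, rope_scaling) -> float:
    # rope_scaling arrives from the model config as a plain dict, so use
    # dict access (.get) rather than getattr, which always returns None
    # for dict keys and would silently disable this branch.
    if rope_scaling and rope_scaling.get("factor"):
        mscale_all_dim = rope_scaling.get("mscale_all_dim", False)
        scaling_factor = rope_scaling["factor"]
        mscale = yarn_get_mscale(scaling_factor, float(mscale_all_dim))
        attn_softmax_scale = attn_softmax_scale * mscale * mscale
    return attn_softmax_scale
```

Reading every value from the one cached dict also removes the inconsistency the reviewer flagged between self.rope_scaling and fd_config.model_config.rope_scaling.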

@EmmonsCurse (Collaborator) left a comment

LGTM~ Skip coverage check as it mainly relies on tests with flashmla.

@chang-wenbin chang-wenbin merged commit 1090f8b into PaddlePaddle:develop Apr 3, 2026
52 of 56 checks passed
@chang-wenbin chang-wenbin changed the title [Models]support GLM4.7 Flash && Ernie_MLA [Models]support GLM4.7 Flash Apr 4, 2026
@chang-wenbin chang-wenbin deleted the Ernie_MLA branch April 4, 2026 00:27
