Support for KV cache quantization for MLA Attention vLLM fakequant #714
Open
kinjalpatel27 wants to merge 4 commits into main from
Commits
Commits on Dec 18, 2025
- committed