
Add riscv half-precision floating point detection #375

Open
ken-unger wants to merge 4 commits into pytorch:main from ken-unger:riscv-zvfh

Conversation

@ken-unger

Add cpuinfo_has_riscv_zfh() and cpuinfo_has_riscv_zvfh() for fp16 detection.

The motivation is to enable runtime detection support in XNNPACK for its RVV fp16 kernels (XNNPACK uses this library).
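As a rough sketch of what such detection involves: cpuinfo's actual implementation is platform-specific, but on Linux one common source for RISC-V extension flags is the `isa` line in `/proc/cpuinfo` (e.g. `rv64imafdcv_zfh_zvfh`), where multi-letter extensions like `zfh` and `zvfh` are underscore-separated. The `isa_has_ext()` helper below is hypothetical and not part of cpuinfo's API; it only handles the underscore-separated multi-letter extensions relevant here:

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical sketch: report whether a multi-letter extension such as
 * "zfh" or "zvfh" appears in a RISC-V ISA string. Multi-letter
 * extensions are underscore-separated, so a match must be preceded by
 * '_' (or start the string) and followed by '_' (or end the string);
 * this prevents "zfh" from matching inside "zvfh". */
static bool isa_has_ext(const char* isa, const char* ext) {
    size_t ext_len = strlen(ext);
    const char* p = isa;
    while ((p = strstr(p, ext)) != NULL) {
        bool starts_ok = (p == isa) || (p[-1] == '_');
        char after = p[ext_len];
        bool ends_ok = (after == '\0') || (after == '_');
        if (starts_ok && ends_ok) {
            return true;
        }
        p += 1; /* substring match only; keep scanning */
    }
    return false;
}
```

For example, `isa_has_ext("rv64imafdcv_zfh_zvfh", "zvfh")` is true, while `isa_has_ext("rv64imafdcv_zvfh", "zfh")` is false because `zfh` only occurs inside `zvfh`.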

Collaborator

@fbarchard left a comment


LG. Can you confirm this is vector arithmetic, and is there a different detection for conversion?
Another ISA extension that would be great to detect is the 8-bit dot product.

@ken-unger
Author

ken-unger commented Mar 4, 2026 via email

@fbarchard
Collaborator

Perfect. For XNNPACK running on the SiFive X280, it's supposed to have Zfh/Zvfh, but the QEMU I use doesn't support fp16, and cpuinfo didn't detect them.
The X280 also has an int8 dot product, e.g.
vqdot.[vv,vx]
Both would be useful for writing good microkernels for XNNPACK. It's not clear it's worth supporting Zvfhmin, which is aimed at lower-end CPUs like the SiFive P650.
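Detection functions like these are typically used to gate kernel selection at runtime: query the CPU once, then dispatch to the widest-capability microkernel available. A self-contained sketch of that pattern, with a hypothetical `has_zvfh()` stub standing in for `cpuinfo_has_riscv_zvfh()` (hardcoded to false so the sketch builds and runs on any host):

```c
#include <stdbool.h>

/* Stub standing in for cpuinfo_has_riscv_zvfh(); hardcoded so this
 * sketch is self-contained and runs anywhere. */
static bool has_zvfh(void) { return false; }

typedef float (*dot_fn)(const float* a, const float* b, int n);

/* Placeholder for a Zvfh fp16 vector kernel (plain C stand-in here;
 * a real kernel would use RVV fp16 instructions). */
static float dot_f16_zvfh(const float* a, const float* b, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; i++) s += a[i] * b[i];
    return s;
}

/* Portable scalar fallback, always available. */
static float dot_scalar(const float* a, const float* b, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; i++) s += a[i] * b[i];
    return s;
}

/* Pick the best available kernel once, at startup. */
static dot_fn pick_dot(void) {
    return has_zvfh() ? dot_f16_zvfh : dot_scalar;
}
```

With the stub returning false, `pick_dot()` selects the scalar fallback; on hardware where the real detection reports Zvfh, the same call site would transparently get the fp16 vector kernel.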

@ken-unger
Author

Is there a maintainer who could review and merge (if accepted) this PR? Thank you.
