Conversation

@ryan-steed-usa
Contributor

This change might restore support for Maxwell and Pascal architectures.

- Updated GPU dependency from torch==2.8.0+cu129 to torch==2.8.0+cu126 in pyproject.toml
- Changed PyTorch CUDA index URL from https://download.pytorch.org/whl/cu129 to https://download.pytorch.org/whl/cu126
- This change ensures compatibility with the CUDA 12.6 runtime while keeping the same PyTorch version (2.8.0)
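The change above can be sketched as a pyproject.toml fragment (the section layout and extra name are illustrative assumptions; the project's actual file may organize extras and index sources differently):

```toml
# Hypothetical fragment -- only the cu129 -> cu126 substitution is
# taken from the PR; everything else is a sketch of a typical layout.
[project.optional-dependencies]
gpu = [
    "torch==2.8.0+cu126",  # was: torch==2.8.0+cu129
]

# uv-style index pinning (an assumption; pip users would instead pass
# --extra-index-url https://download.pytorch.org/whl/cu126)
[[tool.uv.index]]
name = "pytorch-cu126"
url = "https://download.pytorch.org/whl/cu126"  # was: .../whl/cu129
```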
@ryan-steed-usa
Contributor Author

Closes #406

@ryan-steed-usa ryan-steed-usa changed the title fix: update PyTorch CUDA version from cu129 to cu126 fix: downgrade PyTorch CUDA version from cu129 to cu126 Nov 1, 2025
@remsky
Owner

remsky commented Nov 5, 2025

Hey @ryan-steed-usa, is this still a draft?

@ryan-steed-usa
Contributor Author

Hi @remsky, I was hoping for confirmation from a Maxwell or Pascal CUDA user but everything seems to work containerized with my Ada Lovelace GPUs. Otherwise I think it's ready to go.
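For anyone wanting to check whether their card is in the affected range: Maxwell GPUs report CUDA compute capability 5.x and Pascal GPUs 6.x. A minimal sketch of a classifier (the family table is standard NVIDIA naming; the torch call is shown only as a comment and assumes a working CUDA install):

```python
def cuda_arch_name(major: int, minor: int) -> str:
    """Map a CUDA compute capability to its architecture family name."""
    families = {
        5: "Maxwell",        # sm_50, sm_52, sm_53
        6: "Pascal",         # sm_60, sm_61, sm_62
        7: "Volta/Turing",   # sm_70/72 Volta, sm_75 Turing
        8: "Ampere/Ada",     # sm_80/86/87 Ampere, sm_89 Ada Lovelace
        9: "Hopper",         # sm_90
    }
    return families.get(major, f"unknown (sm_{major}{minor})")

# With torch installed, GPU 0's capability can be queried like:
#   import torch
#   major, minor = torch.cuda.get_device_capability(0)
#   print(cuda_arch_name(major, minor))
print(cuda_arch_name(6, 1))  # a GTX 1080 reports (6, 1) -> Pascal
```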

@ryan-steed-usa ryan-steed-usa marked this pull request as ready for review November 5, 2025 03:40
@jtabet

jtabet commented Dec 14, 2025

Pascal user here. I can confirm my container crashes on remsky/kokoro-fastapi-gpu:latest-amd64 and works fine on ryan-steed-usa/kokoro-fastapi-gpu:latest. It could be useful to have a dedicated build tag for those legacy GPU architectures, to keep the latest CUDA version by default.

@ryan-steed-usa
Contributor Author

Thanks for the feedback.

> It could be useful to have a dedicated build tag for those legacy GPU architectures to keep the latest CUDA version by default

I agree, unless @remsky prefers to maintain a unified image, in which case this workaround should accommodate everyone (for a while, anyway). If we do maintain a separate tag, we might also consider downgrading the entire base image.
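One way the dedicated-tag idea could look, as a hedged Dockerfile sketch (the build-arg names, base-image tag, and install command are all hypothetical; the repo's real multi-stage Dockerfile will differ):

```dockerfile
# Hypothetical multi-variant build: one Dockerfile, two tags, e.g.
#   :latest        (cu129, current GPU architectures)
#   :latest-cu126  (cu126, Maxwell/Pascal)
ARG TORCH_CUDA=cu126
ARG CUDA_BASE=12.6.3-runtime-ubuntu22.04
FROM nvidia/cuda:${CUDA_BASE}
ARG TORCH_CUDA
# (assumes Python/pip are already available in the image)
RUN pip install torch==2.8.0 \
        --extra-index-url https://download.pytorch.org/whl/${TORCH_CUDA}
# e.g.  docker build --build-arg TORCH_CUDA=cu126 \
#           -t kokoro-fastapi-gpu:latest-cu126 .
```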

@remsky
Owner

remsky commented Dec 15, 2025

That's a great idea. I have an optimization to the build stages of the NVIDIA image that I was planning to push; I can take a look at tagging by torch version and roll this in.
