
ROCm-AI-Installer

A script that automatically installs the dependencies required to run selected AI applications on AMD Radeon GPUs (default: RX 7900 XTX). For other cards and architectures, modify the HSA_OVERRIDE_GFX_VERSION and GFX variables in install.sh accordingly (not tested).
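
For example (untested, and only an assumption about the expected values), an RDNA2 card such as the RX 6800 XT would use the gfx1030 target:

```bash
# Hypothetical values for an RDNA2 card (RX 6800/6900 XT class); the exact
# format expected by install.sh may differ. Not tested.
HSA_OVERRIDE_GFX_VERSION=10.3.0
GFX=gfx1030
```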

Info

Note

Debian 13.2 is recommended. Version 9.x has not been tested on older systems.
On other distributions, most of the Python-based applications should work, but ROCm will have to be installed manually.

Important

All models and applications are tested on a GPU with 24 GB of VRAM.
Some applications may not work on GPUs with less VRAM.

Test platform:

| Name | Info |
|---|---|
| CPU | AMD Ryzen 5 7500F |
| GPU | AMD Radeon RX 7900 XTX |
| RAM | 64 GB DDR5 6600 MHz |
| Motherboard | Gigabyte X870 AORUS ELITE WIFI7 (BIOS F8e) |
| OS | Debian 13.2 |
| Kernel | 6.12.57+deb13-amd64 |
| ROCm | 7.1 |

Text generation:

| Name | Links | Additional information |
|---|---|---|
| KoboldCPP | https://github.com/YellowRoseCx/koboldcpp-rocm | Supports GGML and GGUF models. |
| Text generation web UI | https://github.com/oobabooga/text-generation-webui<br>https://github.com/ROCm/bitsandbytes.git<br>https://github.com/turboderp/exllamav2 | 1. Supports ExLlamaV2 and Transformers using ROCm, and llama.cpp using Vulkan.<br>2. If you are using Transformers, it is recommended to use the sdpa option instead of flash_attention_2. |
| SillyTavern | https://github.com/SillyTavern/SillyTavern | |
| llama.cpp | https://github.com/ggerganov/llama.cpp | 1. Put model.gguf into the llama.cpp folder.<br>2. In the run.sh file, change the GPU offload layers and context size values to match your model (see the example after this table). |
| Ollama | https://github.com/ollama/ollama | You can use standard Ollama commands in the terminal or run a GGUF model.<br>1. Put model.gguf into the Ollama folder.<br>2. In the run.sh file, change the GPU offload layers and context size values to match your model.<br>3. In the run.sh file, customize the model parameters. |
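
As a rough illustration of the llama.cpp note above, the GPU offload and context size settings in a run script boil down to flags like these (a minimal sketch using upstream llama.cpp option names; the run.sh generated by this installer may look different):

```bash
# -ngl  number of model layers offloaded to the GPU
# -c    context size; match it to your model
./llama-server -m model.gguf -ngl 99 -c 8192
```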

SillyTavern Extensions:

| Name | Link | Additional information |
|---|---|---|
| WhisperSpeech web UI | https://github.com/Mateusz-Dera/whisperspeech-webui | Install and run WhisperSpeech web UI first. |

Image & video generation:

| Name | Links | Additional information |
|---|---|---|
| ComfyUI | https://github.com/comfyanonymous/ComfyUI | Workflow templates are in the workflows folder. |

ComfyUI Addons:

Important

For GGUF Flux and Flux-based models:
1. Accept the conditions to access the files and content on the Hugging Face website:
https://huggingface.co/black-forest-labs/FLUX.1-schnell
2. A Hugging Face token is required during installation.

| Name | Link | Additional information |
|---|---|---|
| ComfyUI-Manager | https://github.com/ltdrdata/ComfyUI-Manager | Manages ComfyUI nodes.<br>After the first run, change security_level to weak in custom_nodes/ComfyUI-Manager/config.ini (see the example after this table). |
| GGUF | https://github.com/calcuis/gguf | GGUF model loader. |
| ComfyUI-AuraSR | https://github.com/alexisrolland/ComfyUI-AuraSR<br>https://huggingface.co/fal/AuraSR<br>https://huggingface.co/fal/AuraSR-v2 | ComfyUI node to upscale images. |
| AuraFlow-v0.3 | https://huggingface.co/fal/AuraFlow-v0.3 | Text-to-image model. |
| FLUX.1-schnell GGUF | https://huggingface.co/black-forest-labs/FLUX.1-schnell<br>https://huggingface.co/city96/FLUX.1-schnell-gguf | Text-to-image model.<br>Model quant: Q8_0 |
| AnimePro FLUX GGUF | https://huggingface.co/advokat/AnimePro-FLUX | Text-to-image model.<br>Flux-based.<br>Model quant: Q5_K_M |
| Flex.1-alpha GGUF | https://huggingface.co/ostris/Flex.1-alpha<br>https://huggingface.co/hum-ma/Flex.1-alpha-GGUF | Text-to-image model.<br>Flux-based.<br>Model quant: Q8_0 |
| Qwen-Image GGUF | https://huggingface.co/Qwen/Qwen-Image<br>https://huggingface.co/QuantStack/Qwen-Image-Edit-2509-GGUF<br>https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI<br>https://huggingface.co/lightx2v/Qwen-Image-Lightning | Text-to-image model.<br>Qwen-Image quant: Q6_K |
| Qwen-Image-Edit GGUF | https://huggingface.co/Qwen/Qwen-Image-Edit<br>https://huggingface.co/calcuis/qwen-image-edit-gguf<br>https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI<br>https://huggingface.co/city96/Qwen-Image-gguf<br>https://huggingface.co/lightx2v/Qwen-Image-Lightning | Text-to-image model.<br>Qwen-Image-Edit quant: Q4_K_M |
| Qwen-Image-Edit-2509 GGUF | https://huggingface.co/Qwen/Qwen-Image-Edit-2509<br>https://huggingface.co/calcuis/qwen-image-edit-gguf<br>https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI<br>https://huggingface.co/city96/Qwen-Image-gguf<br>https://huggingface.co/lightx2v/Qwen-Image-Lightning | Text-to-image model.<br>Qwen-Image-Edit-2509 quant: Q4_0 |
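
For the ComfyUI-Manager note above, the security level can be switched after the first run with something like this (a hypothetical one-liner; it assumes the generated config.ini contains a line of the form security_level = <value>):

```bash
# Run from the ComfyUI directory after ComfyUI-Manager has been started once.
sed -i 's/^security_level.*/security_level = weak/' custom_nodes/ComfyUI-Manager/config.ini
```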

Music generation:

| Name | Links | Additional information |
|---|---|---|
| ACE-Step | https://github.com/ace-step/ACE-Step | |

Voice generation:

| Name | Links | Additional information |
|---|---|---|
| WhisperSpeech web UI | https://github.com/Mateusz-Dera/whisperspeech-webui<br>https://github.com/collabora/WhisperSpeech | |
| F5-TTS | https://github.com/SWivid/F5-TTS | Remember to select a voice. |
| Matcha-TTS | https://github.com/shivammehta25/Matcha-TTS | |
| Dia | https://github.com/nari-labs/dia<br>https://github.com/tralamazza/dia/tree/optional-rocm-cuda | The script uses the optional-rocm-cuda fork by tralamazza. |
| Chatterbox Multilingual | https://github.com/resemble-ai/chatterbox | Only Polish and English have been tested.<br>May not read non-English characters.<br>Polish is fixed: resemble-ai/chatterbox#256<br>For other languages, you will need to add the changes manually in the multilingual_app.py file.<br>For better results in Polish, I recommend writing the entire text in lowercase. |
| KaniTTS | https://github.com/nineninesix-ai/kani-tts | If you want to change the default model, edit the kanitts/config.py file. |
| KaniTTS-vLLM | https://github.com/nineninesix-ai/kanitts-vllm | If you want to change the default model, edit the config.py file. |

3D generation:

| Name | Links | Additional information |
|---|---|---|
| PartCrafter | https://github.com/wgsxm/PartCrafter | A custom simple UI has been added.<br>Uses a modified version of PyTorch Cluster for ROCm: https://github.com/Mateusz-Dera/pytorch_cluster_rocm |

Tools:

| Name | Links | Additional information |
|---|---|---|
| Fastfetch | https://github.com/fastfetch-cli/fastfetch | Custom Fastfetch configuration with GPU memory info.<br>Also supports NVIDIA graphics cards (nvidia-smi required).<br>If you want your own logo, place an asci.txt file in the ~/.config/fastfetch directory (see the example after this table). |
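
For the Fastfetch note above, a custom logo can be dropped in like this (my_logo.txt is a placeholder for your own ASCII-art file):

```bash
mkdir -p ~/.config/fastfetch
cp my_logo.txt ~/.config/fastfetch/asci.txt   # the bundled config looks for asci.txt
```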

Installation

Note

The first startup of a selected app after installation may take longer.

Important

If an app does not download any default models, download your own.

Caution

If you are updating, back up your settings and models first; reinstallation deletes the previous directories. If you have /etc/apt/preferences.d/fallback configured, make sure it does not override the ROCm packages pinned by /etc/apt/preferences.d/rocm-pin-600.
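
For reference, a rocm-pin-600 file following AMD's packaging instructions typically looks like the block below; verify it against your system and make sure any fallback pin gives repo.radeon.com packages a lower priority:

```bash
cat /etc/apt/preferences.d/rocm-pin-600
# Expected content (per AMD's ROCm installation docs; yours may differ):
# Package: *
# Pin: release o=repo.radeon.com
# Pin-Priority: 600
```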

1. If uv was installed by any method other than pipx, uninstall it first.
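
A quick way to check how uv was installed (a hypothetical check, not part of the installer):

```bash
command -v uv        # no output means uv is not on PATH at all
pipx list | grep uv  # if uv is listed here, it came from pipx and can stay
```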

2. Clone the repository:

git clone https://github.com/Mateusz-Dera/ROCm-AI-Installer.git
cd ROCm-AI-Installer

3. Run the installer:

bash ./install.sh

4. Select the installation path.

5. Select Install required packages if you are upgrading or running the script for the first time.

6. If you are installing the script for the first time, restart the system after this step.

7. Install the selected app.

8. Go to the selected app's folder in the installation path and run:

./run.sh

Docker

Check DOCKER.md