# Run ollama on GFX803

* host: ubuntu 22.04
* R5 2600, RX570 4GB (GFX803)

## Test the docker container

```
sudo docker run -it --device=/dev/kfd --device=/dev/dri --group-add=video --ipc=host --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -p 8089:8080 rocm61_pt24:latest /bin/bash
```

## Mozilla llamafile

https://github.com/Mozilla-Ocho/llamafile

Download the model and make it executable (`-O` strips the `?download=true` suffix from the saved filename):

```
wget -O TinyLlama-1.1B-Chat-v1.0.F16.llamafile "https://huggingface.co/Mozilla/TinyLlama-1.1B-Chat-v1.0-llamafile/resolve/main/TinyLlama-1.1B-Chat-v1.0.F16.llamafile?download=true"
chmod +x TinyLlama-1.1B-Chat-v1.0.F16.llamafile
```

```
./llava-v1.5-7b-q4.llamafile -ngl 999
```

## Extract the missing HIP SDK libraries

GFX803 is no longer shipped with official ROCm builds, so copy prebuilt rocBLAS kernel libraries into place:

```
sudo apt-get update
sudo apt-get install p7zip-full
wget https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU/releases/download/v0.6.1.2/rocm.gfx803.optic.vega10.logic.hip.sdk.6.1.2.7z
7z x rocm.gfx803.optic.vega10.logic.hip.sdk.6.1.2.7z
cp -r library/* /opt/rocm/lib/rocblas/library/
```

## Run the llamafile server

```
./llama.llamafile -ngl 999 --host 0.0.0.0
```

GPU acceleration is now active.

# Additional notes

Ollama for AMD: https://github.com/likelovewant/ollama-for-amd/wiki
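After copying the rocBLAS libraries, it can be useful to confirm that gfx803 Tensile kernel files actually landed in the target directory before launching the server. A minimal sketch (the helper name and the filename pattern `*gfx803*` are assumptions; the path matches the `cp` destination above):

```python
import glob
import os

def has_gfx803_kernels(libdir="/opt/rocm/lib/rocblas/library"):
    """Return True if any rocBLAS kernel file for gfx803 exists in libdir."""
    # Tensile kernel files embed the GPU arch in their filename,
    # so a simple glob is enough for a sanity check.
    return bool(glob.glob(os.path.join(libdir, "*gfx803*")))

if __name__ == "__main__":
    print("gfx803 kernels found:", has_gfx803_kernels())
```

If this prints `False` inside the container, recheck the extraction and copy step.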
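Once the server is up with `--host 0.0.0.0`, llamafile exposes an OpenAI-compatible API on port 8080. A sketch of querying it with only the standard library (the model name is a placeholder; adjust the host/port for your docker port mapping, e.g. 8089 on the host side):

```python
import json
import urllib.request

# OpenAI-style chat completion request; model name is a placeholder.
payload = {
    "model": "TinyLlama-1.1B-Chat-v1.0",
    "messages": [{"role": "user", "content": "Hello, are you GPU-accelerated?"}],
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",  # llamafile's default port
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment with the server running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```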