# Environment Setup for Fine-tuning LLMs

## A Faster Conda Solver: Libmamba

As of the conda 22.11 update, the experimental flag for the libmamba solver has been removed. To use the new solver, first update conda in your base environment:

```bash
conda update -n base conda
```

Then install and configure the new solver:

```bash
conda install -n base conda-libmamba-solver
conda config --set solver libmamba
```

## Installing cuda-toolkit

:bulb: cuda-toolkit is required for installing flash-attn later.

First check which CUDA build your PyTorch was compiled against, then install the matching [cuda-toolkit version](https://anaconda.org/nvidia/cuda-toolkit).

```bash
pip install torch==2.2.0
```

```python
import torch
print(torch.__version__)
# 2.2.0+cu121 => CUDA 12.1
```

```bash
conda install nvidia/label/cuda-12.1.0::cuda-toolkit
```

## Installing unsloth & flash-attn

:star: Finetune Mistral, Llama 2-5x faster with 70% less memory!

```bash
pip install xformers
pip install bitsandbytes
pip install "unsloth[conda] @ git+https://github.com/unslothai/unsloth.git"
pip install -U flash-attn
```
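The step of matching the cuda-toolkit label to the torch version can be sketched as a small helper (a hypothetical function for illustration, not part of torch or conda; it assumes a CUDA build tag in the `+cuNNN` form):

```python
def cuda_label(torch_version: str) -> str:
    """Derive the conda cuda-toolkit label from a torch version string.

    e.g. "2.2.0+cu121" -> "cuda-12.1.0"
    """
    # The CUDA build tag follows the "+cu" suffix, e.g. "cu121" for CUDA 12.1
    tag = torch_version.split("+cu")[1]
    major, minor = tag[:-1], tag[-1]
    return f"cuda-{major}.{minor}.0"

print(cuda_label("2.2.0+cu121"))  # cuda-12.1.0
```

In practice you would read the string from `torch.__version__` and use the result in `conda install nvidia/label/<label>::cuda-toolkit`.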