How to Set Up a Local Large Language Model (AnythingLLM + Ollama + Taide)
===

## Install an Ubuntu VM

Omitted.

## Run Ollama

* https://ywctech.net/ml-ai/ollama-import-custom-gguf/
* https://github.com/ollama/ollama

1. Install Ollama (see https://github.com/ollama/ollama/blob/main/docs/linux.md)

    ```
    curl -fsSL https://ollama.com/install.sh | sh
    ```

2. Download the [taide/Llama3-TAIDE-LX-8B-Chat-Alpha1-4bit](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1-4bit/tree/main) model (a GGUF file)

    ```
    sudo apt-get install python3-pip
    sudo apt-get install python3-venv
    python3 -m venv abby-venv
    cd abby-venv
    bin/pip install huggingface_hub
    mkdir my-hf-model
    ```

    ```
    vi download_taide.py
    ```

    ```
    from huggingface_hub import hf_hub_download

    # Download the 4-bit quantized GGUF file into my-hf-model/
    hf_hub_download(
        repo_id="taide/Llama3-TAIDE-LX-8B-Chat-Alpha1-4bit",
        token="<your Hugging Face access token>",
        local_dir="my-hf-model",
        filename="taide-8b-a.3-q4_k_m.gguf"
    )
    ```

    ```
    bin/python download_taide.py
    ```

3. Import the model

    ```
    cd my-hf-model
    vi Modelfile
    ```

    ```
    FROM ./taide-8b-a.3-q4_k_m.gguf
    TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

    {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

    {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

    {{ .Response }}<|eot_id|>"""
    PARAMETER num_keep 24
    PARAMETER stop "<|start_header_id|>"
    PARAMETER stop "<|end_header_id|>"
    PARAMETER stop "<|eot_id|>"
    PARAMETER num_ctx 1024
    ```

    ```
    ollama create my-little-taide -f Modelfile
    ```

4. [Lift the Ollama host IP restriction](https://github.com/Mintplex-Labs/anything-llm/tree/master/server/utils/AiProviders/ollama#setting-environment-variables-on-linux)

    By default Ollama only listens on localhost, so the AnythingLLM container cannot reach it. Add `Environment="OLLAMA_HOST=0.0.0.0"` under the `[Service]` section:

    ```
    sudo vi /etc/systemd/system/ollama.service

    [Service]
    Environment="OLLAMA_HOST=0.0.0.0"
    ```

    ```
    sudo systemctl daemon-reload
    sudo systemctl restart ollama
    ```

## Run AnythingLLM

* https://docs.anythingllm.com/installation-docker/local-docker

1. Spin up AnythingLLM

    ```
    export STORAGE_LOCATION=$HOME/anythingllm && \
    mkdir -p $STORAGE_LOCATION && \
    touch "$STORAGE_LOCATION/.env" && \
    sudo docker run -d -p 3001:3001 \
    --cap-add SYS_ADMIN \
    --restart always \
    -v ${STORAGE_LOCATION}:/app/server/storage \
    -v ${STORAGE_LOCATION}/.env:/app/server/.env \
    -e STORAGE_DIR="/app/server/storage" \
    mintplexlabs/anythingllm
    ```

2. Configure Ollama as the LLM provider

    * https://docs.useanything.com/setup/llm-configuration/local/ollama
    * https://docs.anythingllm.com/installation-docker/local-docker#1-cannot-connect-to-service-running-on-localhost

    ![image](https://hackmd.io/_uploads/rySCMllVJx.png)

3. Create a new workspace and select the Ollama LLM provider

    ![image](https://hackmd.io/_uploads/ByQGokxVJx.png)

4. Start chatting

    ![image](https://hackmd.io/_uploads/Hkz4sJxEJx.png)
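
## Verify the setup (optional)

Before spending time in the AnythingLLM UI, a quick sanity check can confirm that the imported model works and that the `OLLAMA_HOST` change took effect. The commands below are a minimal sketch, assuming the model was created as `my-little-taide` (from the `ollama create` step above); `<VM-IP>` is a placeholder for the Ubuntu VM's address.

```
# The imported model should appear in the local model list
ollama list

# Talk to the model directly on the VM
ollama run my-little-taide "Hello, please introduce yourself."

# From another machine (e.g. the Docker host), confirm that
# OLLAMA_HOST=0.0.0.0 took effect by calling Ollama's REST API
curl http://<VM-IP>:11434/api/generate -d '{
  "model": "my-little-taide",
  "prompt": "Hello",
  "stream": false
}'

# Confirm the AnythingLLM container is up and serving on port 3001
sudo docker ps --filter ancestor=mintplexlabs/anythingllm
curl -I http://localhost:3001
```

If the `curl` from another machine times out, re-check the `OLLAMA_HOST` setting and any firewall rules on the VM before debugging AnythingLLM itself.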