# Ollama
Basic commands:
- `ollama pull llama2:70b` pulls a model
- `ollama run llama2:70b` starts a model (interactive interface)
- `ollama rm llama2:70b` deletes a model
- `ollama cp llama2:70b my-llama2` copies a model
- `ollama list` lists the models already on this host
- `ollama create {model_name} -f ./Modelfile` builds a model from your own Modelfile, which holds the prompt and related parameter tweaks (see the sketch after this list)
- `ollama run llama2 "Summarize this file: $(cat README.md)"` passes a prompt directly
- `ollama serve` starts a server
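The Modelfile mentioned above is just a text file of directives. Below is a minimal sketch that writes one and registers it through the CLI, assuming the llama2 base model is already pulled; the model name `my-assistant` is made up for illustration.

```python
# Sketch: build a custom model from a Modelfile via the Ollama CLI.
# Assumes `ollama` is on PATH and the llama2 base model is already pulled;
# the model name "my-assistant" is only an example.
import subprocess
from pathlib import Path

modelfile = """\
FROM llama2
PARAMETER temperature 0.7
SYSTEM You are a concise assistant that answers in Traditional Chinese.
"""

Path("Modelfile").write_text(modelfile, encoding="utf-8")

# Equivalent to: ollama create my-assistant -f ./Modelfile
subprocess.run(["ollama", "create", "my-assistant", "-f", "./Modelfile"], check=True)
```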
Install on Linux: `curl https://ollama.ai/install.sh | sh`
- `ollama pull llama2-chinese`
- `ollama run llama2-chinese "天空为什么是蓝色的?"`
## Ollama WebUI
If Ollama is on your computer, use this command:
`docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main`
If Ollama is on a different server, use this command:
`docker run -d -p 3000:8080 -e OLLAMA_API_BASE_URL=https://example.com/api -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main`
# Chatting with a model
`curl http://localhost:11434/api/chat -d '{"model": "mistral","messages": [{ "role": "user", "content": "why is the sky blue?" }]}'`
`curl http://10.221.253.56:11434/api/chat -d '{"model": "mistral","messages": [{ "role": "user", "content": "why is the sky blue?" }]}'`
`curl http://10.221.253.65:11434/api/chat -d '{"model": "mistral","messages": [{ "role": "user", "content": "why is the sky blue?" }]}'`
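The same endpoint can be called from Python. A minimal sketch, assuming the `requests` package is installed and a local Ollama has the mistral model pulled; the endpoint streams one JSON object per line by default.

```python
# Sketch: call the Ollama chat endpoint from Python.
# Assumes `requests` is installed, Ollama is listening on localhost:11434,
# and the mistral model has been pulled.
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "mistral",
        "messages": [{"role": "user", "content": "why is the sky blue?"}],
    },
    stream=True,  # the endpoint streams one JSON object per line by default
)

for line in resp.iter_lines():
    if not line:
        continue
    chunk = json.loads(line)
    # each chunk carries a partial assistant message until "done" is true
    print(chunk.get("message", {}).get("content", ""), end="", flush=True)
print()
```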
## Installing Docker
First update Ubuntu with `apt-get update`, then run `apt-get upgrade`.
In an Ubuntu terminal, run:
`curl -sSL https://get.docker.com/ubuntu/ | sudo sh`
To check that Docker installed successfully, run `docker info`.
# Using_llama2_faiss_and_langchain_for_question_answering_on_your_own_data.ipynb
- https://github.com/langchain-ai/langchain
- https://github.com/chatchat-space/Langchain-Chatchat/blob/master/img/langchain+chatglm.png
- https://github.com/huggingface/transformers
- https://github.com/facebookresearch/faiss
- https://github.com/murtuza753/llama2-faiss-langchain-qa-rag/blob/main/Using_llama2_faiss_and_langchain_for_question_answering_on_your_own_data.ipynb
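The notebook's core idea is retrieval: embed your own text, index it with FAISS, and pull back the closest chunks for a question before handing them to the LLM. A minimal sketch of that retrieval step, assuming `langchain-community`, `faiss-cpu`, and `sentence-transformers` are installed; module paths shift between LangChain versions.

```python
# Sketch of the retrieval step behind the notebook above: embed your own
# documents, index them with FAISS, and fetch the closest chunks for a question.
# Assumes langchain-community, faiss-cpu and sentence-transformers are installed.
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

docs = [
    "Ollama serves local LLMs over an HTTP API on port 11434.",
    "FAISS is a library for efficient similarity search over dense vectors.",
]

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = FAISS.from_texts(docs, embeddings)

# Retrieve the chunk most similar to the question; in the full RAG notebook
# this context is then passed to the LLM (e.g. llama2) to generate the answer.
for hit in db.similarity_search("What does Ollama do?", k=1):
    print(hit.page_content)
```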
`OLLAMA_HOST=127.0.0.1:11435 ollama serve`
This lets you change the port Ollama listens on.
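Clients then need to be pointed at the new port. A minimal sketch using the `ollama` Python package, assuming it is installed and a model such as llama2 has been pulled on that server.

```python
# Sketch: talk to an Ollama server on a non-default port.
# Assumes the `ollama` Python package is installed and the llama2
# model has been pulled on that server.
from ollama import Client

client = Client(host="http://127.0.0.1:11435")
response = client.chat(
    model="llama2",
    messages=[{"role": "user", "content": "why is the sky blue?"}],
)
print(response["message"]["content"])
```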
# GPT Crawler
`git clone https://github.com/BuilderIO/gpt-crawler.git`
`cd gpt-crawler/containerapp` (switch into this path)
## Adjust the config as needed (all-pages version)
```ts
import { Config } from "./src/config";

export const defaultConfig: Config = {
  url: "https://www.vghtpe.gov.tw/Index.action", // home page
  match: "https://www.vghtpe.gov.tw/**", // URL pattern the crawler is allowed to follow
  maxPagesToCrawl: 50,
  outputFileName: "../data/output.json",
};
```
Once the config is set, run the script:
`. ./run.sh`
### Possible issue
The build looks for a specific Ubuntu image and will not fall back to a local one, so pull the image it expects beforehand:
`docker pull ubuntu:jammy`
## News-only config version
```ts
import { Config } from "./src/config";

export const defaultConfig: Config = {
  url: "https://www.vghtpe.gov.tw/Index.action",
  match: "https://www.vghtpe.gov.tw/News!one.action?nid=*",
  maxPagesToCrawl: 50,
  outputFileName: "../data/output.json",
};
```
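Either config writes the crawl result to `../data/output.json`. A small sketch for checking what came back, assuming the usual gpt-crawler output of one entry per page with title/url/html fields; adjust the field names if your output differs.

```python
# Sketch: inspect what the crawler produced.
# Assumes ../data/output.json is an array of crawled pages; the
# title/url field names are what gpt-crawler typically emits.
import json
from pathlib import Path

pages = json.loads(Path("../data/output.json").read_text(encoding="utf-8"))
print(f"crawled {len(pages)} pages")
for page in pages[:5]:
    print(page.get("title", "<no title>"), "->", page.get("url", ""))
```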
# Hugging Face Model
[ICD10 model](https://huggingface.co/AkshatSurolia/ICD-10-Code-Prediction?text=Pneumonia)
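For a quick local test of the model, something like the sketch below should work, assuming the repo exposes a standard text-classification head; otherwise load it with AutoModelForSequenceClassification and inspect id2label.

```python
# Sketch: query the ICD-10 prediction model locally with transformers.
# Assumes the model exposes a standard text-classification head.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AkshatSurolia/ICD-10-Code-Prediction",
)

# A short clinical description; the closer it matches the training
# phrasing, the better the predicted ICD code (see the drawback below).
print(classifier("Pneumonia", top_k=3))
```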
## Drawback
If the physician's wording does not match closely enough, no code is found;
when the description is close, the Hugging Face model can surface a nearby ICD code.
# ChatOllama 0606 update
- /Users/lin/chat-ollama0606/composables/useMenus.ts
  Adjust the menu ordering here, or override entries entirely (see the bdc version for reference).
- /Users/lin/chat-ollama0606/pages/index.vue
  Replace the contents of index.vue with the contents of /pages/chat/index.vue so the home page opens straight into the chat screen.
- /Users/lin/chat-ollama0606/locales/zh-TW.json
  Add zh-TW.json here for the Traditional Chinese localization; once the JSON file is ready, update the config:
- /Users/lin/chat-ollama0606/config/i18n.ts
- /Users/lin/chat-ollama0606/config/index.ts (update the name here)
---
# Quantizing a model
Reference: https://medium.com/@NeroHin/將-huggingface-格式模式轉換為-gguf-以inx-text-bailong-instruct-7b-為例-a2cfdd892cbc
```python
from huggingface_hub import snapshot_download

model_id = "INX-TEXT/Bailong-instruct-7B"  # Hugging Face model name
snapshot_download(
    repo_id=model_id,
    local_dir="INX-TEXT_Bailong-instruct-7B",
    local_dir_use_symlinks=False,
    revision="main",
    token="<YOUR_HF_ACCESS_TOKEN>",
)
```
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
pip install -r requirements.txt
```
Inspect the converter options with `python convert.py -h`, then convert the HF model downloaded above (pass the path you just downloaded to):
`python convert.py INX-TEXT_Bailong-instruct-7B --outfile bailong-instruct-7b-f16.gguf --outtype f16`
Quantize the result, using Q5_K_M as an example (run inside llama.cpp):
`./quantize ./models/Bailong-instruct-7B-f16.gguf ./models/Bailong-instruct-7B-v0.1-Q5_K_M.gguf q5_k_m`
```python
from huggingface_hub import HfApi
import os

api = HfApi()
HF_ACCESS_TOKEN = "<YOUR_HF_WRITE_ACCESS_TOKEN>"
model_id = "NeroUCH/Bailong-instruct-7B-GGUF"

api.create_repo(
    model_id,
    exist_ok=True,
    repo_type="model",  # upload as a model repo
    token=HF_ACCESS_TOKEN,
)

# upload every .gguf file in the current folder (the Bailong-instruct-7B outputs) to the hub
for file in os.listdir():
    if file.endswith(".gguf"):
        model_name = file.lower()
        api.upload_file(
            repo_id=model_id,
            path_in_repo=model_name,
            path_or_fileobj=f"{os.getcwd()}/{file}",
            repo_type="model",  # upload as a model repo
            token=HF_ACCESS_TOKEN,
        )
```
### conda
[Installing/uninstalling Anaconda and setting up virtual environments on Ubuntu 20.04](https://medium.com/@scofield44165/ubuntu-20-04中安裝-解安裝anaconda及虛擬環境詳細過程-install-uninstall-environment-setup-for-anaconda-in-ubuntu-cd64e68335c5)