## Selecting a Different Language Model
You can select a different language model with the ***model*** parameter; the default is ***openai:gpt-3.5-turbo***.
## Examples
### 1. openai
```python
import akasha
ak = akasha.Doc_QA()
ak.get_response(dir_path,
                prompt,
                embeddings="openai:text-embedding-ada-002",
                model="openai:gpt-3.5-turbo")
```
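As noted in the model list below, the openai models require the "OPENAI_API_KEY" environment variable. A minimal sketch of setting it in code (the key value is a placeholder):
```python
import os

# placeholder: replace with your own OpenAI API key
os.environ["OPENAI_API_KEY"] = "sk-..."
```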
### 2. huggingface
```python
import akasha
ak = akasha.Doc_QA()
ak.get_response(dir_path,
                prompt,
                embeddings="huggingface:all-MiniLM-L6-v2",
                model="hf:meta-llama/Llama-2-13b-chat-hf")
```
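Downloading gated models such as meta-llama requires the "HUGGINGFACEHUB_API_TOKEN" environment variable, as noted in the model list below. A minimal sketch (the token value is a placeholder):
```python
import os

# placeholder: replace with your own Hugging Face access token
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_..."
```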
### 3. llama-cpp
llama-cpp lets you run quantized models on the CPU. You can download .gguf llama-cpp models from huggingface; for example, if you download a model into the "model/" directory, you can load it as follows:
```python
import akasha
ak = akasha.Doc_QA()
ak.get_response(dir_path,
                prompt,
                embeddings="huggingface:all-MiniLM-L6-v2",
                model="llama-cpu:model/llama-2-13b-chat.Q5_K_S.gguf")
```
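One way to fetch a .gguf file into the "model/" directory is the huggingface_hub package; a sketch, assuming huggingface_hub is installed (the repository is one of the TheBloke sources listed in the model list below):
```python
from huggingface_hub import hf_hub_download

# download a quantized llama-2 chat model from https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF
hf_hub_download(repo_id="TheBloke/Llama-2-7b-Chat-GGUF",
                filename="llama-2-7b-chat.Q5_K_S.gguf",
                local_dir="model")
```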
llama-cpp can also run the model on a GPU; use the ***llama-gpu*** prefix:
```python
import akasha
ak = akasha.Doc_QA()
ak.get_response(dir_path,
                prompt,
                embeddings="huggingface:all-MiniLM-L6-v2",
                model="llama-gpu:model/llama-2-13b-chat.Q5_K_S.gguf")
```
### 4. remote api
If you are using someone else's API, or serving your own model with TGI (Text Generation Inference), you can load the model with ***remote:{your LLM api url}***.
```python
import akasha
ak = akasha.Doc_QA()
ak.get_response(dir_path,
                prompt,
                model="remote:http://140.92.60.189:8081")
```
## Available Models
```python
openai_model = "openai:gpt-3.5-turbo"  # requires environment variable "OPENAI_API_KEY"
huggingface_model = "hf:meta-llama/Llama-2-7b-chat-hf"  # requires environment variable "HUGGINGFACEHUB_API_TOKEN" to download meta-llama models
quantized_ch_llama_model = "hf:FlagAlpha/Llama2-Chinese-13b-Chat-4bit"
taiwan_llama_gptq = "hf:weiren119/Taiwan-LLaMa-v1.0-4bits-GPTQ"
mistral = "hf:mistralai/Mistral-7B-Instruct-v0.2"
mediatek_Breeze = "hf:MediaTek-Research/Breeze-7B-Instruct-64k-v0.1"

### If you want to use llama-cpp to run models on cpu, you can download gguf versions of models,
### e.g. from https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF or
### https://huggingface.co/TheBloke/CodeUp-Llama-2-13B-Chat-HF-GGUF;
### the name behind "llama-gpu:" or "llama-cpu:" is the path of the downloaded .gguf file.
llama_cpp_model = "llama-gpu:model/llama-2-13b-chat-hf.Q5_K_S.gguf"
llama_cpp_model = "llama-cpu:model/llama-2-7b-chat.Q5_K_S.gguf"
llama_cpp_chinese_alpaca = "llama-gpu:model/chinese-alpaca-2-7b.Q5_K_S.gguf"
llama_cpp_chinese_alpaca = "llama-cpu:model/chinese-alpaca-2-13b.Q5_K_M.gguf"
chatglm_model = "chatglm:THUDM/chatglm2-6b"
```
## Custom Language Models
If you want to use another model, you can define a function that takes a prompt (str) as input and returns the language model's response, then pass that function as the ***model*** parameter.
### Example
We define a test_model function below and pass it into Doc_QA, so that get_response uses it to answer the question.
```python
import akasha

def test_model(prompt: str):
    import openai
    from langchain.chat_models import ChatOpenAI

    openai.api_type = "open_ai"
    model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
    ret = model.predict(prompt)
    return ret

doc_path = "./mic/"
prompt = "五軸是什麼?"

qa = akasha.Doc_QA(verbose=True, search_type="svm", model=test_model)
qa.get_response(doc_path=doc_path, prompt=prompt)
```
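Any callable with this prompt-in, answer-out signature works, so the function can also wrap a model that akasha does not support directly. As another illustration, a sketch wrapping a local transformers pipeline (local_model is a hypothetical helper; assumes the transformers package and a small local model such as gpt2):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # any local causal LM works

def local_model(prompt: str) -> str:
    # generate a continuation and strip the echoed prompt from the output
    out = generator(prompt, max_new_tokens=128)[0]["generated_text"]
    return out[len(prompt):]
```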
## Creating the LLM Object
After a model is selected with the model parameter as above, the corresponding model object model_obj (an LLM) is created inside the Doc_QA object.
```python
import akasha
AK = akasha.Doc_QA(model="openai:gpt-3.5-turbo")
print(type(AK.model_obj))
```
You can also create the LLM object with the helper function:
```python
import akasha
model_obj = akasha.handle_model("openai:gpt-3.5-turbo", verbose=False, temperature=0.0)
print(type(model_obj))
```
This LLM object can also be passed directly into Doc_QA, avoiding repeated instantiation:
```python
import akasha
model_obj = akasha.handle_model("openai:gpt-3.5-turbo", verbose=False, temperature=0.0)
AK = akasha.Doc_QA(model=model_obj)
```