# W&B: Building LLM-Powered Apps
## LLM Fundamentals
### Predict the next token
The process breaks down into the following steps (a sketch of the loop follows the list):
1. Provide the input text:
```Weights & Biases is```
2. Tokenize the input:
```[1135,2338,3134,223,3432,2123]```
3. Connect to the LLM and feed it the tokens
4. The LLM predicts, one step at a time, the most likely next token
5. Sample from that distribution and output the chosen token
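
A minimal Python sketch of this loop, assuming the `tiktoken` tokenizer; the real model is replaced by a toy placeholder, and the token IDs shown above are illustrative (actual IDs depend on the tokenizer):
```
import numpy as np
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

# Steps 1-2: tokenize the input text (real IDs depend on the tokenizer).
tokens = enc.encode("Weights & Biases is")

# Steps 3-5: a toy autoregressive loop. `next_token_probs` is a stand-in
# for the real LLM, which would return P(next token | tokens so far).
rng = np.random.default_rng(0)

def next_token_probs(tokens):
    logits = rng.normal(size=enc.n_vocab)  # placeholder for a forward pass
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

for _ in range(5):
    probs = next_token_probs(tokens)
    tokens.append(int(rng.choice(len(probs), p=probs)))  # sample next token

print(enc.decode(tokens))  # the prompt plus 5 toy continuation tokens
```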
## Controlling the LLM's Output
### 1. Temperature in LLM
#### Temperature: a hyperparameter that regulates the randomness, or creativity, of the AI's responses.
The higher the temperature, the more varied and harder to control the responses become.
The lower the temperature, the fewer the possibilities and the more precise the responses.
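
Mechanically, the logits are divided by the temperature before the softmax, so low temperatures sharpen the distribution and high temperatures flatten it. A minimal numpy sketch (toy logits assumed for illustration):
```
import numpy as np

def softmax_with_temperature(logits, temperature):
    # T < 1 sharpens the distribution (more deterministic);
    # T > 1 flattens it (more random/creative).
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())
    return exp / exp.sum()

logits = np.array([2.0, 1.0, 0.5, 0.1])       # toy next-token logits
print(softmax_with_temperature(logits, 0.5))  # peaked: top token dominates
print(softmax_with_temperature(logits, 2.0))  # flat: more varied sampling
```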


### 2. Top P sampling
#### Explain: Top-p sampling (or nucleus sampling) chooses from the smallest possible set of words whose cumulative probability exceeds the probability p.
#### Sampling is thus restricted to the most likely tokens whose cumulative probability reaches p.
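
A minimal numpy sketch of nucleus sampling under that definition (toy probabilities assumed for illustration):
```
import numpy as np

def top_p_sample(probs, p=0.9, rng=None):
    rng = rng or np.random.default_rng()
    # Sort tokens from most to least likely.
    order = np.argsort(probs)[::-1]
    sorted_probs = probs[order]
    # Smallest prefix whose cumulative probability reaches p.
    cutoff = int(np.searchsorted(np.cumsum(sorted_probs), p)) + 1
    kept, kept_probs = order[:cutoff], sorted_probs[:cutoff]
    # Renormalize over the nucleus, then sample from it.
    return int(rng.choice(kept, p=kept_probs / kept_probs.sum()))

probs = np.array([0.5, 0.3, 0.15, 0.05])  # toy next-token distribution
print(top_p_sample(probs, p=0.8))         # samples only from tokens 0 and 1
```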

## Prompt Engineering
### 1. Level 5 prompt
A complex directive that includes the following (a sketch follows the list):
- Description of high-level goal
- A detailed bulleted list of sub-tasks
- An explicit statement asking LLM to explain its own output
- A guideline on how LLM output will be evaluated
- Few-shot examples
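
A hypothetical Level 5 prompt exercising all five components; the task and wording here are assumptions for illustration:
```
prompt = (
    # High-level goal
    "You are a support assistant for W&B users. Answer the question below.\n\n"
    # Detailed bulleted list of sub-tasks
    "Steps:\n"
    "- Identify which W&B feature the question is about.\n"
    "- Answer using only the documentation fragment provided.\n"
    "- If the fragment is insufficient, say so explicitly.\n\n"
    # Explicit statement asking the LLM to explain its own output
    "After the answer, briefly explain how you derived it.\n\n"
    # Guideline on how the output will be evaluated
    "Answers are judged on accuracy and faithfulness to the fragment.\n\n"
    # Few-shot examples
    "Example:\n"
    "Q: How do I log a metric?\n"
    "A: Call wandb.log({'loss': value}) inside your training loop.\n"
)
```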

### 2. Zero-shot
The prompt contains only the instruction, with no worked examples: the model must answer from the task description alone.
### 3. Few-shot
Start with a real corpus (here, W&B documentation) -> split it into chunks
Insert each chunk into the prompt:
```
# `chunk` holds one fragment of W&B documentation (see the chunking step above).
prompt = (
    "Generate a support question from a W&B user.\n"
    "The question should be answerable by the provided fragment of W&B documentation.\n"
    "Below you will find a fragment of W&B documentation:\n"
    f"{chunk}\n"
    "Let's start!"
)
```
This approach produces the example question set we need, which can then be used to train the model; along the way, verify that the generated questions actually meet our requirements.
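
A sketch of putting the template to work, assuming the official `openai` Python client; the model choice and parameters are illustrative:
```
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# `prompt` is the template above with one documentation chunk inserted.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,        # allow some variety in the generated questions
)
generated_question = response.choices[0].message.content
print(generated_question)   # review against our requirements before keeping it
```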