# [Deep Learning Term Project] Usage
## How to use demo.sh to generate results and display them on the frontend
`project_root = RobertaABSA`
- Downloads: download the following files first and place them at the corresponding locations (a quick sanity-check script follows this list).
[Drive Link](https://drive.google.com/drive/folders/1Bt44o_vkOMf_erK7QV2II3zzq2jIn7Pk?usp=share_link)
1. For tree induction:
`RobertaABSA/Train/save_models/roberta-en-Laptop-FT/roberta-en`
`RobertaABSA/Train/save_models/bert-en-base-uncased-Laptop-FT/bert-en-base-uncased`
2. For aspect-level sentiment prediction:
`RobertaABSA/ASGCN/state_dict`
`RobertaABSA/ASGCN/state_dict_finetuned`
3. Tokenizer: `RobertaABSA/Laptop_word2idx.pkl`
4. Embedding matrix: `RobertaABSA/300_Laptop_embedding_matrix.pkl`
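Before running the demo, it can help to confirm that everything listed above is actually in place. A minimal sketch, assuming exactly the layout above (this script is not part of the repo):
```
# A minimal sketch (not part of the project) that checks whether all
# downloaded assets sit at the locations listed above before demo.sh is run.
from pathlib import Path

REQUIRED = [
    "RobertaABSA/Train/save_models/roberta-en-Laptop-FT/roberta-en",
    "RobertaABSA/Train/save_models/bert-en-base-uncased-Laptop-FT/bert-en-base-uncased",
    "RobertaABSA/ASGCN/state_dict",
    "RobertaABSA/ASGCN/state_dict_finetuned",
    "RobertaABSA/Laptop_word2idx.pkl",
    "RobertaABSA/300_Laptop_embedding_matrix.pkl",
]

missing = [p for p in REQUIRED if not Path(p).exists()]
if missing:
    print("Missing downloads:")
    print("\n".join(missing))
else:
    print("All required files are in place.")
```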
- Input:
- Text file: place it at `RobertaABSA/Dataset/UserInput/input.txt`.
Example:
```
I think Google Pixel has great camera.
camera
```
The first line is the sentence entered by the user.
The second line is the aspect term, taken from that sentence, specified by the user.
<font color="green"> #TODO: In `home.py`, add a sanity check: if the user enters the sentence "I love cameras." but specifies "screen" (or any other aspect term that does not appear in the sentence), show an alert message on the frontend: "Please make sure that the specified aspect term is in the input sentence." </font>
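A minimal sketch of that check, assuming the two-line input format described above; `validate_input` is a hypothetical helper, not existing `home.py` code, and how the alert surfaces depends on the frontend framework:
```
# A minimal sketch of the check described in the TODO, assuming the two-line
# input format shown above. validate_input is a hypothetical helper, not an
# existing function in home.py; raising here and catching the message in the
# frontend is one option -- the alert mechanism depends on the framework used.
def validate_input(path="RobertaABSA/Dataset/UserInput/input.txt"):
    with open(path, encoding="utf-8") as f:
        sentence = f.readline().strip()
        aspect = f.readline().strip()
    # Case-insensitive substring check; a token-level match would be stricter.
    if aspect.lower() not in sentence.lower():
        raise ValueError(
            "Please make sure that the specified aspect term is in the input sentence."
        )
    return sentence, aspect
```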
- User-adjustable parameters:
`-p, ptm`: which pretrained model (PTM) to use for tree induction. Options: `bert, roberta`
`-f, fine-tuned`: whether to use the PTM fine-tuned on the SemEval 2014 Laptop dataset or the off-the-shelf one. Options: `ft, no-ft`
`-l, layer`: which PTM layer's induced trees to use.
- Command to run:
```
bash demo.sh -p roberta -f ft -l 7
```
- Output
- The induced tree is saved at `RobertaABSA/DepTrees/UserInput-Test-{layer}.npy`.
This file can be fed to a tree drawer to visualize the induced tree (see the sketch at the end of this section).
- The result of sentiment classification is saved at `RobertaABSA/UserOutput/output.txt`.
Example:
```
positive
Probability: 0.9999058246612549
```
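For drawing the tree mentioned above, a minimal sketch follows. It assumes the `.npy` file stores a 1-based head-index array for the single input sentence, which should be verified against the pipeline's actual output format; `networkx` and `matplotlib` are illustrative choices, not project dependencies:
```
# A minimal sketch for drawing the induced tree. It ASSUMES the .npy file
# holds a 1-based head-index array (one head per token, 0 = root); verify
# this against what the pipeline actually writes before relying on it.
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt

LAYER = 7  # must match the -l value passed to demo.sh
heads = np.asarray(
    np.load(f"RobertaABSA/DepTrees/UserInput-Test-{LAYER}.npy", allow_pickle=True)
).reshape(-1)
tokens = "I think Google Pixel has great camera .".split()  # the demo sentence

# Add an edge from each head token to its dependent; prefix node labels with
# their positions so repeated words stay distinct in the graph.
G = nx.DiGraph()
for i, h in enumerate(heads[: len(tokens)]):
    if int(h) > 0:
        G.add_edge(f"{int(h) - 1}:{tokens[int(h) - 1]}", f"{i}:{tokens[i]}")

nx.draw(G, with_labels=True, node_color="lightblue")
plt.savefig("induced_tree.png")
```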