# 1. Clone the code
1. Clone https://github.com/darongliu/GAN_Harmonized_with_HMMs
2. Check out the `my-pytorch-libri` branch, then create a new branch from it (see the sketch in the appendix below).

# 2. Modify ROOT/config.sh and ROOT/config_battleship.sh (ROOT: where the code lives)
### Steps:
1. `ROOT_DIR=$ROOT` (where the code lives)
2. `TIMIT_DIR`: not important, no need to change it
3. `DATA_PATH=/groups/public/DARONG/test_GAN_Harmonized_with_HMMs_librispeech/data` (if the data gets moved, remember to update this path; the code below writes into it, so there may be permission problems)

A sketch of these settings is in the appendix below.

# 3. (TODO) Create the new WFST graph
### Steps:
1. Comment out line 8 in ROOT/preprocess.sh
2. Run ROOT/preprocess.sh
* It will need small changes to run on the battleship cluster, e.g. wrapping the call with `hrun` (see the appendix below).
* Internally it just calls $ROOT/src/WFST-decoder/scripts/preprocess.sh
### a. What to modify: $ROOT/src/WFST-decoder/scripts/preprocess.sh (it reads /groups/public/DARONG/test_GAN_Harmonized_with_HMMs_librispeech/data/timit_for_GAN/text/{match,nonmatch}_lm.48 and builds the WFST graph)
### b. The generated WFST graph should end up in $DATA_PATH/wfst_data = /groups/public/DARONG/test_GAN_Harmonized_with_HMMs_librispeech/data/wfst_data
### c. Paths that will probably (very probably) be needed when adding the text LM:
1. lexicon: $DATA_PATH/lexicon.txt (but `sil` and `spn` need to be changed to lowercase; see the appendix below)
2. text: /groups/public/DARONG/test_GAN_Harmonized_with_HMMs_librispeech/data/timit_for_GAN/text/{match,nonmatch}_text_lm.48

# 4. (TODO) Run WFST decoding
### a. In ROOT/run.sh or ROOT/run_battleship.sh, comment out the training-process for loop until only the WFST line is left
### Run command: bash ./run_battleship.sh src/GAN-based-model/librispeech_config/config.yaml test_wfst_ (the trailing underscore is intentional, not a typo)
### The output results should be under /groups/public/DARONG/test_GAN_Harmonized_with_HMMs_librispeech/data/save/
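# Appendix: command sketches

A minimal sketch of step 1 (clone and branch); the new branch name `libri-wfst-exp` is only a placeholder, pick your own:

```bash
# Clone the repo and start from the my-pytorch-libri branch
git clone https://github.com/darongliu/GAN_Harmonized_with_HMMs
cd GAN_Harmonized_with_HMMs
git checkout my-pytorch-libri       # existing branch mentioned in step 1
git checkout -b libri-wfst-exp      # placeholder name for the new working branch
```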
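A sketch of the step 2 edits to ROOT/config.sh and ROOT/config_battleship.sh; only the three variables mentioned in the notes are shown, the rest of those files is left as-is:

```bash
# Variables to set in config.sh / config_battleship.sh
ROOT_DIR=/path/to/GAN_Harmonized_with_HMMs   # $ROOT, i.e. where the code lives
TIMIT_DIR=/not/important                     # unused for this run, leave as-is
DATA_PATH=/groups/public/DARONG/test_GAN_Harmonized_with_HMMs_librispeech/data  # update if the data is moved
```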
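A sketch of step 3 on the battleship cluster; the exact `hrun` invocation depends on the local setup and is an assumption here:

```bash
# Comment out line 8 of ROOT/preprocess.sh, then run it through hrun
cd "$ROOT_DIR"
sed -i '8s/^/#/' preprocess.sh   # prefix line 8 with '#'
hrun bash ./preprocess.sh        # hrun wrapper; add resource flags as the cluster requires
# preprocess.sh in turn calls src/WFST-decoder/scripts/preprocess.sh,
# which reads {match,nonmatch}_lm.48 and writes the graph to $DATA_PATH/wfst_data
```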
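A sketch of the lexicon fix in step 3c, assuming `SIL` and `SPN` currently appear in uppercase; writing the result to a copy (`lexicon_lower.txt`, a placeholder name) in a directory you own avoids the permission problems noted for $DATA_PATH:

```bash
# Lowercase the SIL and SPN entries of the lexicon, keeping everything else unchanged
sed -e 's/\bSIL\b/sil/g' -e 's/\bSPN\b/spn/g' \
    "$DATA_PATH/lexicon.txt" > lexicon_lower.txt   # write the copy somewhere writable
```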
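Step 4 as a runnable block; the command and paths are exactly the ones in the notes, and it assumes the training for loop in run_battleship.sh has already been commented down to the WFST line:

```bash
# WFST decoding only (trailing underscore in the tag is intentional)
bash ./run_battleship.sh src/GAN-based-model/librispeech_config/config.yaml test_wfst_
# results should appear under:
#   /groups/public/DARONG/test_GAN_Harmonized_with_HMMs_librispeech/data/save/
```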