# 2022/10/26 Progress

###### tags: `實驗`

[TOC]

### Attempts

1. To test the effect of also feeding the previous frame into object detection, studied the FCOS architecture and worked out a model implementation whose train and test paths produce the same loss/detections.
2. While setting up comparison experiments, discovered that earlier experiments could not be reproduced: even the same code run twice gives different results (the trend is the same, but mAP50 can differ by as much as 0.05), so investigated the random-seed problem.

   ```python
   import torch

   torch.cuda.manual_seed(SEED)
   torch.cuda.manual_seed_all(SEED)
   ```

   ```python
   import random

   import numpy
   import torch

   def seed_worker(worker_id):
       # derive each DataLoader worker's seed from the base torch seed
       worker_seed = torch.initial_seed() % 2**32
       numpy.random.seed(worker_seed)
       random.seed(worker_seed)

   g = torch.Generator()
   g.manual_seed(SEED)
   ```

   ```python
   import os

   import torch

   os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
   torch.backends.cudnn.benchmark = False
   torch.backends.cudnn.deterministic = True
   ```

   In the end the biggest source of randomness turned out to be augmentation: with augmentation turned off the runs reproduce exactly, but performance then peaks as early as epoch 2. Does the model still count as general in that case? More experiments are needed.

   No augmentation:

   | - | mAP50 Train | mAP50 Valid |
   | -------- | -------- | -------- |
   | SGD at(16) | 0.586 | 0.5087 |
   | AdamW at(1) | 0.400 | 0.5556 |
   | AdamW at(2) | 0.536 | 0.528 |

3. Found that reading an image with cv2 as grayscale and replicating it into 3 channels gives a different loss than reading it as RGB directly; the likely cause is PIL.save (although the performance gap seems small). Saving as JPG changes the pixel values while PNG does not, so all images were re-exported as PNG.
4. Moving image 17 (which is excluded from valid) into train made results worse; removing image 33 from train also made them worse.

#### Next:

4. Try whether the SGL K-fold pseudo-labeling reported last time can help improve performance.
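The three seeding snippets in point 2 can be collected into one setup function. This is a minimal sketch, not the exact experiment code: the function name `make_reproducible` and the `SEED` value are assumptions, and the `worker_init_fn`/`generator` wiring follows the standard PyTorch DataLoader reproducibility pattern.

```python
import os
import random

import numpy
import torch

SEED = 42  # assumed value; the notes do not state the actual seed

def make_reproducible(seed: int = SEED) -> torch.Generator:
    """Apply all three seeding snippets above in one place."""
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    random.seed(seed)
    numpy.random.seed(seed)
    torch.manual_seed(seed)           # also seeds CUDA RNGs (lazily)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True
    # generator to hand to DataLoader(..., worker_init_fn=seed_worker, generator=g)
    g = torch.Generator()
    g.manual_seed(seed)
    return g

g = make_reproducible()
print(g.initial_seed())  # 42
```

The returned generator and the `seed_worker` function above are then passed to the `DataLoader` via its `generator` and `worker_init_fn` arguments, so that each worker process reseeds `numpy`/`random` consistently across runs.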
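The JPG-vs-PNG observation in point 3 can be verified directly: JPEG compression is lossy even at default quality, so a save/load round trip alters the pixel array, while PNG reproduces it bit-for-bit. A minimal check with PIL and NumPy (the filenames and test image are arbitrary):

```python
import numpy as np
from PIL import Image

# synthetic test image: random 32x32 RGB noise
rng = np.random.default_rng(0)
arr = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
img = Image.fromarray(arr)

img.save("roundtrip.png")  # PNG: lossless
img.save("roundtrip.jpg")  # JPEG: lossy at default quality

png_back = np.asarray(Image.open("roundtrip.png"))
jpg_back = np.asarray(Image.open("roundtrip.jpg"))

print(np.array_equal(arr, png_back))  # True  (PNG round-trips exactly)
print(np.array_equal(arr, jpg_back))  # False (JPEG changed pixel values)
```

This is why re-exporting the dataset as PNG makes the grayscale-replicated and RGB readings consistent: only the lossless format preserves the exact values the model sees.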