As the samples below show, the text-box detection model's annotation quality falls short of expectations and the results are not correct. In addition, English content is annotated as individual words, which makes later recognition slow and the post-processing complicated. The plan is therefore to design a method that uses a small number of manually annotated samples to train a model, which in turn produces a large number of annotated samples.

![](https://hackmd.io/_uploads/HyWb3xWY2.png)
![](https://hackmd.io/_uploads/BkAf2gZY2.png)
![](https://hackmd.io/_uploads/r1sN3lZKn.png)
![](https://hackmd.io/_uploads/rkFT2eZY3.png)
![](https://hackmd.io/_uploads/H19kagbtn.png)

### First hypothesis

Take documents of the same type (the same document set), manually annotate 1/10 of them, and fine-tune a model on that subset (tag it with the annotation name and keep the checkpoint). Use this model to predict (annotate) the rest of the set, multiplying the annotated samples by 10. These become (part of) the base samples for training the final model.

At the same time, another 1/10 of the samples is needed for validation during training, so in practice roughly 2/10 of the data must be annotated by hand, and the expansion is 5x rather than 10x.

Actual test: 10 manually annotated images for the training set and 10 for the test set (20 in total), as shown below.

![](https://hackmd.io/_uploads/rkThzPZK2.png)
![](https://hackmd.io/_uploads/H1EkXvZY3.png)

Resume training was used to train for 70 more epochs (from epoch 230 to 300):

```
2023-07-04 16:56:34,961 DBNet.pytorch INFO: [291/300], train_loss: 0.6467, time: 5.3245, lr: 4.6837194216121523e-05
2023-07-04 16:56:41,134 DBNet.pytorch INFO: FPS:18.66458644739518
2023-07-04 16:56:41,136 DBNet.pytorch INFO: test: recall: 0.868545, precision: 0.937500, f1: 0.901706
2023-07-04 16:56:41,136 DBNet.pytorch INFO: current best, recall: 0.868545, precision: 0.937500, hmean: 0.901706, train_loss: 0.646725, best_model_epoch: 291.000000,
2023-07-04 16:56:41,254 DBNet.pytorch INFO: Saving current best: c:\develop\DBNet_pytorch_Wenmu\output\DBNet_resnet18_FPN_DBHead\checkpoint/model_best.pth
2023-07-04 16:56:46,558 DBNet.pytorch INFO: [292/300], train_loss: 0.5880, time: 5.3024, lr: 4.259995391188707e-05
2023-07-04 16:56:51,758 DBNet.pytorch INFO: FPS:30.293319774311623
2023-07-04 16:56:51,759 DBNet.pytorch INFO: test: recall: 0.870110, precision: 0.937605, f1: 0.902597
2023-07-04 16:56:51,760 DBNet.pytorch INFO: current best, recall: 0.870110, precision: 0.937605, hmean: 0.902597, train_loss: 0.588031, best_model_epoch: 292.000000,
2023-07-04 16:56:51,878 DBNet.pytorch INFO: Saving current best: c:\develop\DBNet_pytorch_Wenmu\output\DBNet_resnet18_FPN_DBHead\checkpoint/model_best.pth
2023-07-04 16:56:57,349 DBNet.pytorch INFO: [293/300], train_loss: 0.6035, time: 5.4700, lr: 3.8315267243499006e-05
2023-07-04 16:57:02,740 DBNet.pytorch INFO: FPS:23.38649090340362
2023-07-04 16:57:02,741 DBNet.pytorch INFO: test: recall: 0.873239, precision: 0.939394, f1: 0.905109
2023-07-04 16:57:02,742 DBNet.pytorch INFO: current best, recall: 0.873239, precision: 0.939394, hmean: 0.905109, train_loss: 0.603535, best_model_epoch: 293.000000,
2023-07-04 16:57:02,861 DBNet.pytorch INFO: Saving current best: c:\develop\DBNet_pytorch_Wenmu\output\DBNet_resnet18_FPN_DBHead\checkpoint/model_best.pth
2023-07-04 16:57:08,039 DBNet.pytorch INFO: [294/300], train_loss: 0.6372, time: 5.1771, lr: 3.397653658483928e-05
2023-07-04 16:57:13,399 DBNet.pytorch INFO: FPS:19.027188671250887
2023-07-04 16:58:17,224 DBNet.pytorch INFO: train_loss:0.6035346388816833
2023-07-04 16:58:17,224 DBNet.pytorch INFO: best_model_epoch:293
2023-07-04 16:58:17,224 DBNet.pytorch INFO: finish train
```

Results:

![](https://hackmd.io/_uploads/H12_mvbKh.png)
![](https://hackmd.io/_uploads/rk-2XDbKh.png)

Comparison with the model before fine-tuning (below: left = before training, right = after training):

![](https://hackmd.io/_uploads/HJviNPbt2.jpg)
![](https://hackmd.io/_uploads/HyM9BwWth.png)

Conclusion: fine-tuning clearly improves the results, but for the goal of generating additional training samples (annotations), some manual post-processing probably still needs to be factored in, as in the cases below:

![](https://hackmd.io/_uploads/rJArIwZF3.jpg)
![](https://hackmd.io/_uploads/Hk5Y8wbKh.jpg)
![](https://hackmd.io/_uploads/H1KaIPbYh.png)

Without manual clean-up (at minimum, deleting the incorrect boxes), the subsequent training may produce results that cannot be predicted or controlled. In this case, 20 annotated images were used to fine-tune a model that then produced 100 training samples, which is a very convenient way to work; a sketch of the prediction-to-label-file step is given further below.

##### 2023/07/06

Following the workflow above, the horizontal-layout detection model was fine-tuned. With 139 training images and 92 test images, training was continued to epoch 400 (resumed from 230 ... 350 ... 400); the results are shown in the screenshots below. One additional note: because boxed regions inside figures mostly yield fragmented text during recognition, figure content was no longer annotated in later labeling (some images still carry partial figure annotations).

![](https://hackmd.io/_uploads/SyStzhXtn.png)
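The core of the workflow above is converting the fine-tuned detector's predictions into label files that the next training round (and the manual clean-up pass) can consume. The following is a minimal sketch of that step under stated assumptions: it assumes a `predict_fn` callable that returns quadrilateral boxes with confidence scores and writes ICDAR-style `x1,y1,...,x4,y4,text` lines; the function, threshold, and file names are placeholders, not the actual DBNet.pytorch API.

```python
from pathlib import Path
from typing import Callable, Sequence, Tuple

# Assumed prediction interface: given an image path, return (boxes, scores),
# where each box is four (x, y) corner points. This is a placeholder for
# whatever inference wrapper the fine-tuned checkpoint is loaded with.
PredictFn = Callable[[str], Tuple[Sequence[Sequence[Tuple[float, float]]], Sequence[float]]]

SCORE_THRESHOLD = 0.5  # drop low-confidence boxes before manual clean-up (assumed value)


def export_pseudo_labels(image_dir: str, label_dir: str, predict_fn: PredictFn) -> None:
    """Run the fine-tuned detector over unlabeled pages and write one
    ICDAR-style label file (x1,y1,...,x4,y4,text) per image for later review."""
    out_dir = Path(label_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    # Adjust the glob pattern to the actual image extensions in the dataset.
    for image_path in sorted(Path(image_dir).glob("*.jpg")):
        boxes, scores = predict_fn(str(image_path))
        lines = []
        for box, score in zip(boxes, scores):
            if score < SCORE_THRESHOLD:
                continue  # leave uncertain regions for the human annotator
            coords = ",".join(str(int(round(v))) for point in box for v in point)
            # Use a dummy transcription; "###" conventionally marks ignored
            # boxes in ICDAR-style labels, which is not what pseudo-labels need.
            lines.append(f"{coords},pseudo")
        (out_dir / f"{image_path.stem}.txt").write_text("\n".join(lines), encoding="utf-8")
```

Filtering by a score threshold before writing the files keeps the manual clean-up pass focused on deleting genuinely wrong boxes rather than sifting through low-confidence noise.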
Additional results from the same run:

![](https://hackmd.io/_uploads/SynofnmFh.png)
![](https://hackmd.io/_uploads/H1zRf37K3.png)

It seems that, because horizontal samples have been used throughout, detection of vertical text has degraded somewhat, as shown below, although most results are still acceptable. This is not a problem, since this model's main purpose is to generate more training samples; in the end, the previously trained model together with the additional samples produced this way will be used to train a better final model.

![](https://hackmd.io/_uploads/rJ74B3XF3.png)

### Next: add 20 manually annotated vertical-layout samples and, with the same approach, produce 100 training samples to evaluate the method further

Only 32 samples were annotated, and after roughly 200 epochs of training the results were already very good, saving a great deal of labeling work. See below:

![](https://hackmd.io/_uploads/H1eKgWNY2.png)
![](https://hackmd.io/_uploads/SJAngbNt3.png)

The next step is to correct the predicted annotations. For charts and figures, only the incorrect boxes are deleted and correct ones are not added back, because that would take too much time.

![](https://hackmd.io/_uploads/rkqL2lHKh.png)
![](https://hackmd.io/_uploads/HJ6PngrY3.png)

### Finally, merge the vertical and horizontal samples and fine-tune the previously trained model for 200 epochs

![](https://hackmd.io/_uploads/SJiHFESFh.png)
![](https://hackmd.io/_uploads/ByUuFVrKh.png)
![](https://hackmd.io/_uploads/Byi5KESt3.png)
![](https://hackmd.io/_uploads/HJrAFErY3.png)
![](https://hackmd.io/_uploads/SyGZqVrF2.jpg)
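For the final merge-and-fine-tune step, the vertical and horizontal annotation lists have to be combined into one training list, with part of it held out for validation as discussed earlier. The sketch below shows one way to do this; it assumes each dataset is described by a list file with one `image_path\tlabel_path` line per sample (a common convention in DBNet.pytorch-style setups), and all file names are hypothetical.

```python
import random
from pathlib import Path

VAL_RATIO = 0.1  # roughly matches the 1/10 validation split discussed above


def merge_sample_lists(list_files, train_out: str, val_out: str, seed: int = 42) -> None:
    """Merge several annotation list files, shuffle them, and split off a
    validation subset for the final fine-tuning run."""
    samples = []
    for list_file in list_files:
        lines = Path(list_file).read_text(encoding="utf-8").splitlines()
        samples.extend(line for line in lines if line.strip())
    random.Random(seed).shuffle(samples)
    n_val = max(1, int(len(samples) * VAL_RATIO))
    Path(val_out).write_text("\n".join(samples[:n_val]), encoding="utf-8")
    Path(train_out).write_text("\n".join(samples[n_val:]), encoding="utf-8")


if __name__ == "__main__":
    # Hypothetical file names for the vertical and horizontal sample sets.
    merge_sample_lists(
        ["data/vertical_train.txt", "data/horizontal_train.txt"],
        train_out="data/merged_train.txt",
        val_out="data/merged_val.txt",
    )
```

Seeding the shuffle keeps the train/validation split reproducible if the 200-epoch fine-tune has to be restarted or resumed later.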