# Research

## facenet

### dataset path

| Name | Path |
| -------- | -------- |
| lfw-aligned-insight | /home/yysung/lfw-aligned-insight |
| lfw-aligned-tf | /home/yysung/lfw-aligned-tf |

### validation dataset preprocessing

#### lfw

##### lfw-aligned-insight on tbmoon/facenet

```
# https://github.com/deepinsight/insightface
# In /home/yysung/repos/research/insightface/src/align
python3 align_lfw.py --input-dir ~/lfw --output-dir ~/lfw-aligned-insight
```

model_920 validated on lfw-aligned-insight: accuracy = 0.90950000

![](https://i.imgur.com/fBMbdhA.png)

##### lfw-aligned-tf on tbmoon/facenet

```
# https://github.com/davidsandberg/facenet
# In /home/yysung/repos/research/facenet-tf/src
for N in {1..4}; do python3 -m align.align_dataset_mtcnn ~/lfw ~/lfw-aligned-tf --image_size 182 --margin 44 --random_order --gpu_memory_fraction 0.2 & done
```

model_920 validated on lfw-aligned-tf: accuracy = 0.91480000

![](https://i.imgur.com/QFy8sut.png)

##### lfw-aligned on tbmoon/facenet

```
# https://github.com/tbmoon/facenet
# In /home/yysung/repos/research/facenet
python3 mtcnn.py --root-dir ~/lfw --final-dir ~/lfw-aligned
```

model_920 validated on lfw-aligned: accuracy = 0.81420000

![](https://i.imgur.com/mINW8Y0.png)

### train

#### vggface2-test

##### vggface2-test-aligned-insight on tbmoon/facenet

preprocess

```
# https://github.com/deepinsight/insightface
# In /home/yysung/repos/research/insightface/src/align
python3 align_lfw.py --input-dir ~/VGG-Face2/data/test --output-dir /mnt/disk1/VGG-Face2/data/test-aligned-insight
```

train

```
# In /home/yysung/repos/research/facenet
python3 train.py --learning-rate 0.001 --train-root-dir ~/VGG-Face2/data/test-aligned-insight/ --valid-root-dir ~/lfw/ --train-csv-name datasets/vggface2-test-insight.csv --valid-csv-name datasets/lfw.csv --train-all
```

validated on lfw-aligned-insight: accuracy = 0.82915000

![](https://i.imgur.com/GyR2dT5.png)

#### casia

##### casia-aligned-tf on tbmoon/facenet

preprocess

```
# https://github.com/davidsandberg/facenet
# In /home/yysung/repos/research/facenet-tf/src
for N in {1..4}; do python3 -m align.align_dataset_mtcnn ~/CASIA-WebFace ~/casia-aligned-tf --image_size 182 --margin 44 --random_order --gpu_memory_fraction 0.2 & done
```

train

```
# In /home/yysung/repos/research/facenet
python3 train.py --learning-rate 0.0001 --train-root-dir ~/casia-aligned-tf/ --valid-root-dir ~/lfw-aligned-tf/ --train-csv-name datasets/casia.csv --valid-csv-name datasets/lfw.csv --train-all
```

logs

```
# In /home/yysung/repos/research/facenet/logs/facenet-casia-tf-20210427-1
```

## tamerthamoqa/facenet-pytorch-vggface2

### train

#### 2021/4/30

arguments

```
python3 train_triplet_loss.py --dataroot ~/vggface2_224/ --lfw /mnt/disk1/lfw_224/ --epochs 19 --iterations_per_epoch 10000 --model_architecture resnet18 --embedding_dimension 256 --batch_size 256 --lfw_batch_size 256 --num_workers 32 --optimizer adagrad --learning_rate 0.0002 --margin 0.2 --image_size 224 --use_semihard_negatives True
```

Result

```
Accuracy on LFW: 0.8252+-0.0170
Precision 0.8181+-0.0198
Recall 0.8367+-0.0190
ROC Area Under Curve: 0.8980
Best distance threshold: 0.95+-0.00
TAR: 0.0437+-0.0123 @ FAR: 0.0013
```
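Every training run below passes `--margin 0.2 --use_semihard_negatives True`. As a reminder of what semihard mining means, here is a minimal PyTorch sketch of triplet loss with semihard negative selection; the function name, batch layout, and selection details are illustrative assumptions, not the repository's actual implementation.

```
# Minimal sketch of triplet loss with semihard negative selection.
# Assumed layout: anchor/positive are (B, D) embeddings, negatives is (B, N, D)
# candidate negatives per anchor. Not the repository's actual implementation.
import torch
import torch.nn.functional as F

def semihard_triplet_loss(anchor, positive, negatives, margin=0.2):
    d_ap = F.pairwise_distance(anchor, positive)                   # (B,)
    d_an = torch.cdist(anchor.unsqueeze(1), negatives).squeeze(1)  # (B, N)

    # Semihard condition: farther than the positive but still within the margin,
    # i.e. d_ap < d_an < d_ap + margin.
    semihard = (d_an > d_ap.unsqueeze(1)) & (d_an < d_ap.unsqueeze(1) + margin)
    d_an = torch.where(semihard, d_an, torch.full_like(d_an, float("inf")))
    d_an_min, _ = d_an.min(dim=1)       # hardest semihard negative per anchor

    valid = torch.isfinite(d_an_min)    # anchors with at least one semihard negative
    if not valid.any():
        return anchor.new_zeros(())
    return F.relu(d_ap[valid] - d_an_min[valid] + margin).mean()
```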
#### 2021/5/6

full-precision, FL 10 clients

arguments

```
python3 train_triplet_loss_fed.py --dataroot ~/vggface2_224/ --lfw /mnt/disk1/lfw_224/ --epochs 19 --iterations_per_epoch 1000 --model_architecture resnet18 --embedding_dimension 256 --batch_size 256 --lfw_batch_size 256 --num_workers 32 --optimizer adagrad --learning_rate 0.05 --margin 0.2 --image_size 224 --use_semihard_negatives True
```

Result

```
Accuracy on LFW: 0.9260+-0.0115
Precision 0.9233+-0.0124
Recall 0.9293+-0.0150
ROC Area Under Curve: 0.9770
Best distance threshold: 0.79+-0.00
TAR: 0.2350+-0.0888 @ FAR: 0.0013
```

#### 2021/5/9

full-precision, original

arguments

```
python3 train_triplet_loss.py --dataroot ~/vggface2_224/ --lfw /mnt/disk1/lfw_224/ --epochs 19 --iterations_per_epoch 10000 --model_architecture resnet18 --embedding_dimension 256 --batch_size 256 --lfw_batch_size 256 --num_workers 32 --optimizer adagrad --learning_rate 0.05 --margin 0.2 --image_size 224 --use_semihard_negatives True
```

Result

```
Epoch 17:
Accuracy on LFW: 0.9648+-0.0081
Precision 0.9638+-0.0085
Recall 0.9660+-0.0125
ROC Area Under Curve: 0.9946
Best distance threshold: 0.88+-0.01
TAR: 0.5557+-0.0321 @ FAR: 0.0013

Epoch 18:
Accuracy on LFW: 0.9585+-0.0095
Precision 0.9451+-0.0103
Recall 0.9737+-0.0120
ROC Area Under Curve: 0.9937
Best distance threshold: 0.93+-0.01
TAR: 0.6053+-0.0450 @ FAR: 0.0010
```

#### 2021/5/31

full-precision, FSL 10 clients, bar_mask_2_ebp

arguments

```
python3 train_triplet_loss_fedebpdali.py --dataroot /dev/shm/vggface2_224 --ebp_dataroot /dev/shm/vggface2_224_bar_mask_2_ebp --lfw /dev/shm/lfw_224 --lfw_ebp /dev/shm/lfw_224_bar_mask_2_ebp --dataset_csv datasets/vggface2_full_ebp.csv --epochs 19 --iterations_per_epoch 1000 --model_architecture resnet18ebp --embedding_dimension 256 --batch_size 256 --lfw_batch_size 256 --num_workers 8 --optimizer adagrad --learning_rate 0.05 --margin 0.2 --image_size 224 --use_semihard_negatives True --clients 10
```

Result

```
Epoch 18:
Accuracy on LFW: 0.9220+-0.0126
Precision 0.9113+-0.0152
Recall 0.9353+-0.0206
ROC Area Under Curve: 0.9792
Best distance threshold: 0.86+-0.02
TAR: 0.3647+-0.0258 @ FAR: 0.0010
```

#### 2021/6/10

mixed-precision, FSL 10 clients, bar_mask_2_ebp

arguments

```
python3 train_triplet_loss_fedebpdali.py --dataroot /dev/shm/vggface2_224 --ebp_dataroot /dev/shm/vggface2_224_bar_mask_2_ebp --lfw /dev/shm/lfw_224 --lfw_ebp /dev/shm/lfw_224_bar_mask_2_ebp --dataset_csv datasets/vggface2_full_ebp.csv --epochs 19 --iterations_per_epoch 1000 --model_architecture resnet18ebp --embedding_dimension 256 --batch_size 256 --lfw_batch_size 256 --num_workers 8 --optimizer adagrad --learning_rate 0.05 --margin 0.2 --image_size 224 --use_semihard_negatives True --clients 10
```

Result

```
Epoch 18:
Accuracy on LFW: 0.9153+-0.0114
Precision 0.9015+-0.0168
Recall 0.9330+-0.0185
ROC Area Under Curve: 0.9757
Best distance threshold: 0.83+-0.01
TAR: 0.2597+-0.0249 @ FAR: 0.0010
```

```
acc = [0.9137, 0.9103, 0.9045, 0.9020, 0.8990, 0.9070, 0.9110, 0.9050, 0.9013, 0.8988]
acc_std = [0.0114, 0.0152, 0.0158, 0.0126, 0.0147, 0.0099, 0.0119, 0.0127, 0.0142, 0.0105]
```

#### 2021/6/10

mixed-precision, FL 10 clients

arguments

```
python3 train_triplet_loss_fed.py --dataroot ~/vggface2_224/ --lfw /mnt/disk1/lfw_224/ --epochs 19 --iterations_per_epoch 1000 --model_architecture resnet18 --embedding_dimension 256 --batch_size 256 --lfw_batch_size 256 --num_workers 32 --optimizer adagrad --learning_rate 0.05 --margin 0.2 --image_size 224 --use_semihard_negatives True
```
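The `train_triplet_loss_fed.py` runs above train 10 clients and aggregate them once per round; the aggregation code itself is not reproduced in these notes. The sketch below shows a generic FedAvg-style equal-weight average of client state dicts, included only as an assumed reference; the helper name and the equal weighting are assumptions, not the script's actual logic.

```
# Generic FedAvg-style aggregation sketch (assumed pattern, not the script's code).
# client_models would be the 10 locally trained copies of the same architecture.
import copy
import torch

def fedavg_round(global_model, client_models):
    avg_state = copy.deepcopy(global_model.state_dict())
    for key in avg_state:
        stacked = torch.stack([cm.state_dict()[key].float() for cm in client_models])
        avg_state[key] = stacked.mean(dim=0).to(avg_state[key].dtype)
    global_model.load_state_dict(avg_state)
    # Clients start the next round from the aggregated weights.
    for cm in client_models:
        cm.load_state_dict(global_model.state_dict())
    return global_model
```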
#### 2021/6/17

https://github.com/yysung1123/facenet-pytorch-vggface2/commit/c74a5fc60bd4571247e820f29a16997e9faa8696

mixed-precision, FSL 10 clients, bar_mask_2_ebp

arguments

```
python3 train_triplet_loss_fedebpdali.py --dataroot /dev/shm/vggface2_224 --ebp_dataroot /dev/shm/vggface2_224_bar_mask_2_ebp --lfw /dev/shm/lfw_224 --lfw_ebp /dev/shm/lfw_224_bar_mask_2_ebp --dataset_csv datasets/vggface2_full_ebp.csv --epochs 19 --iterations_per_epoch 1000 --model_architecture resnet18ebp --embedding_dimension 256 --batch_size 256 --lfw_batch_size 256 --num_workers 8 --optimizer adagrad --learning_rate 0.05 --margin 0.2 --image_size 224 --use_semihard_negatives True --clients 10
```

Result

```
Epoch 17: 0.9165
Epoch 18: 0.9126

Epoch 18 Before Aggregated
acc = [0.9082, 0.9035, 0.9018, 0.9070, 0.9063, 0.9100, 0.9052, 0.9065, 0.9120, 0.9095]
acc_std = [0.0163, 0.0111, 0.0128, 0.0179, 0.0110, 0.0124, 0.0140, 0.0123, 0.0120, 0.0127]
```

#### 2021/6/19

https://github.com/yysung1123/facenet-pytorch-vggface2/commit/c74a5fc60bd4571247e820f29a16997e9faa8696

mixed-precision, FSL 10 clients, bar_mask_2_ebp_fill

arguments

```
python3 train_triplet_loss_fedebpdali.py --dataroot /dev/shm/vggface2_224 --ebp_dataroot /dev/shm/vggface2_224_bar_mask_2_ebp_fill --lfw /dev/shm/lfw_224 --lfw_ebp /dev/shm/lfw_224_bar_mask_2_ebp --dataset_csv datasets/vggface2_full_ebp.csv --epochs 19 --iterations_per_epoch 1000 --model_architecture resnet18ebp --embedding_dimension 256 --batch_size 256 --lfw_batch_size 256 --num_workers 8 --optimizer adagrad --learning_rate 0.05 --margin 0.2 --image_size 224 --use_semihard_negatives True --clients 10
```

Result

```
Epoch 17: 0.9035
Epoch 18: 0.9048

Epoch 18 Before Aggregated
acc = [0.9052, 0.9015, 0.8835, 0.8965, 0.8818, 0.8968, 0.8950, 0.8915, 0.8897, 0.8767]
acc_std = [0.0130, 0.0165, 0.0160, 0.0114, 0.0177, 0.0134, 0.0132, 0.0149, 0.0150, 0.0156]
```

#### 2021/6/21

https://github.com/yysung1123/facenet-pytorch-vggface2/commit/c74a5fc60bd4571247e820f29a16997e9faa8696

mixed-precision(?), FL 10 clients

arguments

```
python3 train_triplet_loss_fedebpdali.py --dataroot /dev/shm/vggface2_224 --ebp_dataroot /dev/shm/vggface2_224_bar_mask_2_ebp --lfw /dev/shm/lfw_224 --lfw_ebp /dev/shm/lfw_224_bar_mask_2_ebp --dataset_csv datasets/vggface2_full_ebp.csv --epochs 19 --iterations_per_epoch 1000 --model_architecture resnet18 --embedding_dimension 256 --batch_size 256 --lfw_batch_size 256 --num_workers 8 --optimizer adagrad --learning_rate 0.05 --margin 0.2 --image_size 224 --use_semihard_negatives True --clients 10
```

Result

```
Epoch 18:
Accuracy on LFW: 0.9250+-0.0100
Precision 0.9205+-0.0140
Recall 0.9307+-0.0138
ROC Area Under Curve: 0.9803
Best distance threshold: 0.79+-0.01
TAR: 0.3187+-0.0698 @ FAR: 0.0010
```

#### 2021/6/22

https://github.com/yysung1123/facenet-pytorch-vggface2/commit/c74a5fc60bd4571247e820f29a16997e9faa8696

mixed-precision, FSL 10 clients, vggface2_224_ebp

arguments

```
python3 train_triplet_loss_fedebpdali.py --dataroot /dev/shm/vggface2_224 --ebp_dataroot /dev/shm/vggface2_224_ebp --lfw /dev/shm/lfw_224 --lfw_ebp /dev/shm/lfw_224_bar_mask_2_ebp --dataset_csv datasets/vggface2_full_ebp.csv --epochs 19 --iterations_per_epoch 1000 --model_architecture resnet18ebp --embedding_dimension 256 --batch_size 256 --lfw_batch_size 256 --num_workers 8 --optimizer adagrad --learning_rate 0.05 --margin 0.2 --image_size 224 --use_semihard_negatives True --clients 10
```
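The runs labelled mixed-precision presumably use PyTorch automatic mixed precision; the exact integration is not shown in these notes. Below is a minimal `torch.cuda.amp` training-step sketch with a placeholder model and loss, recording only the assumed pattern (autocast for the forward pass, gradient scaling for the backward pass).

```
# Minimal torch.cuda.amp training-step sketch (assumed pattern, not the repo's code).
import torch

model = torch.nn.Linear(256, 256).cuda()     # placeholder for the embedding network
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.05)
scaler = torch.cuda.amp.GradScaler()

def train_step(batch, target):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():          # forward pass in mixed precision
        out = model(batch)
        loss = torch.nn.functional.mse_loss(out, target)  # stand-in for the triplet loss
    scaler.scale(loss).backward()            # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)                   # unscale gradients, then optimizer step
    scaler.update()
    return loss.item()
```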
#### 2021/6/29

https://github.com/yysung1123/facenet-pytorch-vggface2/commit/af676b5bee9111205244fbf29077ceeb63f24860

float32, No FL, CBAM, vggface2_224_ebp

arguments

```
python3 train_triplet_loss_fedebpdali_float32.py --dataroot /dev/shm/vggface2_224 --ebp_dataroot /dev/shm/vggface2_224_ebp --lfw /dev/shm/lfw_224 --lfw_ebp /dev/shm/lfw_224_ebp --dataset_csv datasets/vggface2_full_ebp.csv --epochs 40 --iterations_per_epoch 10000 --model_architecture resnet18cbamebp --embedding_dimension 256 --batch_size 256 --lfw_batch_size 256 --num_workers 8 --optimizer adagrad --learning_rate 0.05 --margin 0.2 --image_size 224 --use_semihard_negatives True --clients 1
```

#### 2021/6/29

https://github.com/yysung1123/facenet-pytorch-vggface2/commit/af676b5bee9111205244fbf29077ceeb63f24860

float32, No FL, CBAM, vggface2_224_bar_mask_2_ebp_fill

arguments

```
python3 train_triplet_loss_fedebpdali_float32.py --dataroot /dev/shm/vggface2_224 --ebp_dataroot /dev/shm/vggface2_224_bar_mask_2_ebp_fill --lfw /dev/shm/lfw_224 --lfw_ebp /dev/shm/lfw_224_bar_mask_2_ebp_fill --dataset_csv datasets/vggface2_full_ebp.csv --epochs 40 --iterations_per_epoch 10000 --model_architecture resnet18cbamebp --embedding_dimension 256 --batch_size 256 --lfw_batch_size 256 --num_workers 8 --optimizer adagrad --learning_rate 0.05 --margin 0.2 --image_size 224 --use_semihard_negatives True --clients 1
```

#### 2021/7/7

https://github.com/yysung1123/facenet-pytorch-vggface2/commit/bb398a6b0736a9c39f745b14ccf9e5c2d0b2dc11

float32, No FL, psresnet18_22111_cbam, vggface2_224_bar_mask_2_ebp_fill

arguments

```
python3 train_triplet_loss_fedebpdali_float32.py --dataroot /dev/shm/vggface2_224 --ebp_dataroot /dev/shm/vggface2_224_bar_mask_2_ebp_fill --lfw /dev/shm/lfw_224 --lfw_ebp /dev/shm/lfw_224_bar_mask_2_ebp_fill --dataset_csv datasets/vggface2_full_ebp.csv --epochs 40 --iterations_per_epoch 10000 --model_architecture psresnet18_22111_cbam --embedding_dimension 256 --batch_size 256 --lfw_batch_size 256 --num_workers 8 --optimizer adagrad --learning_rate 0.05 --margin 0.2 --image_size 224 --use_semihard_negatives True --clients 1
```

#### 2021/7/12

https://github.com/yysung1123/facenet-pytorch-vggface2/commit/5fafbdee1ee2a77428d55a8722057f43ff73b7d5

float32, No FL, CBAM, vggface2_224_bar_mask_2_fill

arguments

```
python3 train_triplet_loss_fedebpdali_float32.py --dataroot /dev/shm/vggface2_224 --ebp_dataroot /dev/shm/vggface2_224_bar_mask_2_fill --lfw /dev/shm/lfw_224 --lfw_ebp /dev/shm/lfw_224_bar_mask_2_fill --dataset_csv datasets/vggface2_full_fill.csv --epochs 40 --iterations_per_epoch 10000 --model_architecture resnet18cbamebp --embedding_dimension 256 --batch_size 256 --lfw_batch_size 256 --num_workers 8 --optimizer adagrad --learning_rate 0.05 --margin 0.2 --image_size 224 --use_semihard_negatives True --clients 1
```

Result

```
Epoch 39:
Accuracy on LFW: 0.9697+-0.0067
Precision 0.9611+-0.0073
Recall 0.9790+-0.0100
ROC Area Under Curve: 0.9957
Best distance threshold: 0.93+-0.00
TAR: 0.6290+-0.0271 @
```

#### 2021/7/13

https://github.com/yysung1123/facenet-pytorch-vggface2/commit/5fafbdee1ee2a77428d55a8722057f43ff73b7d5

float32, No FL, Original, vggface2_224

arguments

```
python3 train_triplet_loss_fedebpdali_float32.py --dataroot /dev/shm/vggface2_224 --ebp_dataroot /dev/shm/vggface2_224_bar_mask_2_fill --lfw /dev/shm/lfw_224 --lfw_ebp /dev/shm/lfw_224_bar_mask_2_fill --dataset_csv datasets/vggface2_full_fill.csv --epochs 40 --iterations_per_epoch 10000 --model_architecture resnet18 --embedding_dimension 256 --batch_size 256 --lfw_batch_size 256 --num_workers 8 --optimizer adagrad --learning_rate 0.05 --margin 0.2 --image_size 224 --use_semihard_negatives True --clients 1
```

Result

```
Epoch 39:
Accuracy on LFW: 0.9718+-0.0081
Precision 0.9711+-0.0105
Recall 0.9727+-0.0103
ROC Area Under Curve: 0.9963
Best distance threshold: 0.90+-0.01
TAR: 0.5750+-0.0491 @ FAR: 0.0010
```
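The resnet18cbamebp and psresnet18_22111_cbam runs above add CBAM attention to the backbone. For reference, a minimal CBAM block (channel attention followed by spatial attention, in the spirit of Woo et al., 2018) looks roughly like the sketch below; how the repository actually wires it into ResNet-18 is not shown in these notes.

```
# Minimal CBAM block sketch: channel attention then spatial attention.
# Illustrative only; not the repository's resnet18cbamebp implementation.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: shared MLP over average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: 7x7 conv over channel-wise average and max maps.
        maps = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(maps))
```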
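The Result blocks report LFW pair-verification metrics: accuracy at the best distance threshold and TAR at a fixed FAR. As a reference for how such numbers are typically derived from embedding distances, here is a simplified single-split sketch; the repository's evaluation follows the standard 10-fold LFW protocol (hence the +- values), which this sketch omits, and all names are illustrative.

```
# Simplified sketch of LFW-style verification metrics from pair distances.
# Single split only (no 10-fold cross-validation); names are illustrative.
import numpy as np

def verification_metrics(distances, issame, far_target=1e-3):
    """distances: (N,) embedding distances; issame: (N,) bool, True for genuine pairs."""
    thresholds = np.arange(0.0, 4.0, 0.01)

    # Accuracy at the best distance threshold: predict "same" when distance < t.
    accs = np.array([np.mean((distances < t) == issame) for t in thresholds])
    best_t = thresholds[accs.argmax()]

    # TAR @ FAR: choose the threshold whose false accept rate on impostor pairs
    # is closest to far_target, then report the true accept rate on genuine pairs.
    fars = np.array([np.mean(distances[~issame] < t) for t in thresholds])
    idx = np.abs(fars - far_target).argmin()
    tar = np.mean(distances[issame] < thresholds[idx])

    return {"accuracy": float(accs.max()), "best_threshold": float(best_t),
            "tar": float(tar), "far": float(fars[idx])}
```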
## Reference

[Why does triplet loss work? (为什么triplet loss有效?)](https://bindog.github.io/blog/2019/10/23/why-triplet-loss-works/)