# Padding Experiment
## Experiment
To check the accuracy of face encodings when faces at different resolutions are encoded and used as registered images.
## Running Environments
Machines
- Azure india5 (IP: 157.55.181.233)
- Azure india1-2 (IP: 51.143.18.187)
## Library requirements
- dlib
- face_recognition
- Link to [requirements.txt](https://exawizards.sharepoint.com/:t:/t/securitytech/EVqWhLKz8YxGoioWa4pdZOEBECOkzXhqJK2xY_dp9QHnZQ?e=vzztSf)
## How to run
Use [encode_faces.py](https://exawizards.sharepoint.com/:u:/t/securitytech/EeXz26GPZ8xEkYBh_gRI7RYBPwWTRG68n1rgGCTmj-R5bw?e=dR5coQ) for encoding.
Copy this [file](https://exawizards.sharepoint.com/:u:/t/securitytech/EYWvgOB0fChFqaGpUxVSi24Bqki-kQVq1zAh4SZj9i0uJg?e=jySYfp) into the same folder as well.
- Run encode_faces.py with the following inputs:
  - `--dataset`, `-d`
    - Path to the dataset. It can contain full images or just faces; the directory should contain a subfolder per person, named after that person and containing all of their images.
  - `--crop`, `-c`
    - Set to False.
  - `--encode`, `-e`
    - Whether or not to create the encodings (True/False).
  - `--resolution`, `-r`
    - Set to 'none'.
  - `--detection-method`, `-m`
    - Face detection model to use: either `hog` or `cnn`.
- The padding value must be changed manually inside the code (line 128 in encode_faces.py and line 95 in abdul_recognition). Make sure both paddings are the same.
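For intuition on what the padding value edited in the scripts controls: it expands the detected face box before cropping, pulling in background around the face. A minimal sketch of that expansion, using a hypothetical helper (not the actual encode_faces.py code) and assuming padding is a fraction of the box size added on each side, similar to dlib's face-chip padding:

```python
def padded_box(top, right, bottom, left, padding, img_w, img_h):
    """Expand a face bounding box by `padding` x its size on each side,
    clamped to the image bounds (illustrative helper, not the real code)."""
    w, h = right - left, bottom - top
    pad_w, pad_h = int(w * padding), int(h * padding)
    return (max(0, top - pad_h),
            min(img_w, right + pad_w),
            min(img_h, bottom + pad_h),
            max(0, left - pad_w))

# padding=0.25 on a 100x100 face box adds 25 px of background per side
print(padded_box(100, 200, 200, 100, 0.25, 400, 400))  # → (75, 225, 225, 75)
```

A larger padding therefore means the encoder sees proportionally more background around each registered face.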
Run [abdul_recognition.py](https://exawizards.sharepoint.com/:u:/t/securitytech/EQ8lR9ASl4FKnul5WJMTELwBCfdCVGGZe6EBTsEjAMBnHg?e=sHb1tS) with the encodings obtained above.
[Dataset](https://exawizards.sharepoint.com/:u:/t/securitytech/ESLdyWce521Mq0WRkv727pgBjr1NpS7KB2Wv1Ij4DdRofA?e=AUT9Hm) used for encode_faces.py
[Encodings](https://exawizards.sharepoint.com/:f:/t/securitytech/EqxGan14EfZJoV8ly4kn-QgB4P5Uy0Eg6qGJrH1cEXeilg?e=ERyIpG) obtained after running the above code
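The recognition step presumably compares each probe encoding against the registered encodings by Euclidean distance, as the face_recognition library does with its default tolerance of 0.6. A sketch of that comparison (the function and names here are illustrative, not abdul_recognition.py's actual code):

```python
import numpy as np

def match_encoding(probe, registered, names, tolerance=0.6):
    """Return the name of the closest registered 128-d encoding if it is
    within `tolerance` Euclidean distance (face_recognition's default),
    else "unknown". Illustrative sketch, not the actual script."""
    dists = np.linalg.norm(np.asarray(registered) - probe, axis=1)
    best = int(np.argmin(dists))
    return names[best] if dists[best] <= tolerance else "unknown"

# toy 128-d encodings standing in for the real ones
rng = np.random.default_rng(0)
reg = rng.normal(size=(2, 128))
probe = reg[1] + 0.01  # a probe very close to the second registered face
print(match_encoding(probe, reg, ["alice", "bob"]))  # prints "bob"
```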
## Result
- [CSV files](https://exawizards.sharepoint.com/:f:/t/securitytech/EtsoDrcZXvNNts2NyrIWIf4BFiMX2sQP62s9IL4WwQIH9g?e=3rtN3k)
- [Raw txt files](https://exawizards.sharepoint.com/:f:/t/securitytech/EgMUoXbnirlBnUFwi9RZL2EBdwF8Trgn3lxrELX-a76pdQ?e=2Z0mcw)
## Inference
- The best accuracy is obtained with 0.25 padding. This is assumed to be because dlib uses 0.25 padding by default, so its model was likely trained on faces that include a similar amount of background.
- Refer to this [link](https://docs.google.com/spreadsheets/d/1fXtzu5Dr95nFvhWOhyzm8lcFR3a8fUncxQ47hiWZ2gY/edit?usp=sharing) for the inference details.
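To give a rough sense of how much background each padding value admits: if the face box is expanded by the padding fraction on each side (illustrative arithmetic, not dlib's exact chip geometry), the background share of the crop grows quickly:

```python
def background_fraction(padding):
    """Fraction of the padded crop that is background, assuming the face
    box is expanded by `padding` x its size on each side (illustrative)."""
    scale = 1 + 2 * padding
    return 1 - 1 / scale**2

for p in (0.0, 0.25, 0.5):
    print(f"padding={p}: {background_fraction(p):.0%} background")
# padding=0.25 already makes over half the crop background
```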