# Processing with AI

## Exploration: AI & Ethics

Name:

> ## Lucas Rea

Subject:

> Detect student presence in class using Face Recognition

[TOC]

## Design brief

### Bias

If we don't source our dataset with enough rigor, the following biases might appear:

> 1. The model may fail to recognise all students, in particular students from minority groups.
> 2. The model could be tricked by a photograph of a student.
> 3. The model could confuse twins or people who look similar.

We will ensure that our model is not biased by:

> 1. Training the model on one promotion (cohort) of students, then testing it on another promotion for validation.
> 2. Making sure our data accounts for the fact that people change during the year: for example, a student may cut their hair or start wearing glasses.
> 3. Training the model not only on frontal face pictures, but also on pictures taken from different angles.

### Overfitting

> We will make sure our model does not overfit by checking its accuracy on new data that the model has never seen before.

### Misuse

> We have to keep in mind that our application could be misused by an attacker to identify people from a university and then practice social engineering against them (for example, sending fake university letters carrying the right university's name and logo).

### Data leakage

> In a catastrophic scenario where our entire training dataset was stolen, or recovered from the model itself, the risk is that we lose students' personal information; under the GDPR this would expose us to liability if we had not put everything in place to secure that data. Furthermore, the leaked faces could be used by malicious actors to create deepfakes, for revenge porn or other abuse.

### Hacking

> If someone found a way to "cheat" our model and make it output whatever prediction they want instead of the real one, the risk is that students could trick the model into marking them present while they are still lying in bed.
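The overfitting check described above, comparing accuracy on the training photos against accuracy on photos the model has never seen, can be sketched with a toy nearest-neighbour recogniser. This is only an illustration, not the real system: the 2-D "face embeddings", the student names, and the noise levels are all invented for the sketch.

```python
import random

random.seed(42)


def squared_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))


def predict(gallery, query):
    """Return the student label whose gallery embedding is closest to the query."""
    return min(gallery, key=lambda item: squared_dist(item[0], query))[1]


def accuracy(gallery, dataset):
    hits = sum(predict(gallery, emb) == label for emb, label in dataset)
    return hits / len(dataset)


# Hypothetical 2-D "face embeddings": one centre per student.
centres = {name: (random.uniform(-5, 5), random.uniform(-5, 5))
           for name in ["alice", "bob", "carol", "dan"]}


def sample(name, noise):
    cx, cy = centres[name]
    return (cx + random.gauss(0, noise), cy + random.gauss(0, noise))


# Enrollment photos (training) vs. photos taken later in the year (held out):
# more noise stands in for haircuts, glasses, different angles, etc.
gallery = [(sample(n, 0.3), n) for n in centres for _ in range(5)]
held_out = [(sample(n, 0.8), n) for n in centres for _ in range(5)]

train_acc = accuracy(gallery, gallery)
test_acc = accuracy(gallery, held_out)

print(f"training accuracy: {train_acc:.2f}")
print(f"held-out accuracy: {test_acc:.2f}")
# A large gap between these two numbers is the overfitting signal.
```

Training accuracy is trivially perfect here (each photo finds itself in the gallery); what matters is how far the held-out accuracy drops below it.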