# Processing with AI

## Exploration: AI & Ethics

Name:

> Claire Nguyen-Quang

Subject:

> Detect student presence in class using Face Recognition

[TOC]

## Design brief

### Bias

If we don't source our dataset with enough rigor, the following biases might appear:

> 1. Not detecting people from minorities well.
> 2. Not detecting students well if they change something about their faces (makeup, accessories...).
> 3. Claiming to detect a student who shares facial traits with a student present in our dataset but is actually a different person.

We will ensure that our model is not biased by:

> 1. Sourcing our data from several datasets, such as Microsoft's, IBM's "Diversity in Faces", Face++, ...
> 2. Making sure our data takes into account *recognising people from minorities* (see the first sketch at the end of this brief).
> 3. Making sure our data takes into account face changes such as wearing makeup, accessories, ...

### Overfitting

We will make sure our model does not overfit by:

> Checking the accuracy of our model on a small held-out group of people with very different traits (see the second sketch at the end of this brief).

### Misuse

> We have to remind ourselves that our application could be misused by **the government** to carry out **excessive surveillance and violate the privacy of the population**.

### Data leakage

> In a catastrophic scenario where our entire training dataset were stolen or recovered from the model itself, the risk would be a threat to people's personal information and data. Whoever stole our training dataset could use the faces and information it contains to scam people, create fake news, make fake videos, commit crimes, rig elections...

### Hacking

> If someone found a way to "cheat" our model and make it output any prediction they want instead of the real one, the model would no longer be reliable: an attacker could pretend to be a student, add their information to our dataset, and come to class without being a real student, which could be dangerous if this person has bad intentions. A student could also use this technique to impersonate another student and skip class.
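As a minimal sketch of the bias check mentioned in the Bias section, the snippet below computes accuracy broken down by demographic group, so that a model that recognises one group less reliably than another becomes visible. The `recognise()` function and the `group` labels are hypothetical stand-ins for our trained model and an annotated evaluation set; they are not part of any real library.

```python
from collections import defaultdict

def recognise(face):
    """Hypothetical stand-in for our trained face-recognition model:
    returns the predicted student id for a face sample. Here it simply
    echoes the true id so the sketch runs end to end."""
    return face["true_id"]

def per_group_accuracy(eval_set):
    """Recognition accuracy per demographic group: a gap between
    groups is a sign of the bias described above."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for face in eval_set:
        total[face["group"]] += 1
        if recognise(face) == face["true_id"]:
            correct[face["group"]] += 1
    return {group: correct[group] / total[group] for group in total}

# Tiny illustrative evaluation set; real samples would come from the
# diverse sources listed above (e.g. IBM's "Diversity in Faces").
eval_set = [
    {"true_id": "s01", "group": "A"},
    {"true_id": "s02", "group": "B"},
    {"true_id": "s03", "group": "B"},
]
print(per_group_accuracy(eval_set))
```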
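And as a minimal sketch of the overfitting check from the Overfitting section: compare accuracy on the training faces with accuracy on the small held-out group of very different people, and flag a large gap. The hard-coded result lists and the 10-point threshold are assumptions for illustration; real values would come from running the trained model on each set.

```python
# Hypothetical recognition results: 1 if the model identified the
# face correctly, 0 otherwise.
train_results = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]    # perfect on training faces
holdout_results = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]  # weaker on unseen, diverse faces

def accuracy(results):
    """Fraction of faces recognised correctly."""
    return sum(results) / len(results)

gap = accuracy(train_results) - accuracy(holdout_results)

# A large gap means the model memorised its training faces instead of
# learning features that generalise: a classic sign of overfitting.
if gap > 0.10:  # threshold chosen only for illustration
    print(f"Possible overfitting: {gap:.0%} gap between train and held-out accuracy")
```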