# Processing with AI
## Exploration: AI & Ethics
Name:
> Zixin Zhang
>
Subject:
> Detect student presence in class using Face Recognition
[TOC]
## Design brief
### Bias
If we do not source our dataset with enough rigor, the following biases might appear:
>1. We cannot accurately judge the actions of students
>2. We cannot accurately evaluate the performance of students in class
>3. We may misinterpret the interactions among students
We will ensure that our model is not biased by:
>1. Sourcing our data from large, diverse datasets, providing more data, and taking a multi-disciplinary approach to bias research while respecting privacy, including students of similar age groups.
>2. Making sure our data take into account the facial characteristics of students of different nationalities (a balance check is sketched after this list).
>3. Considering how teachers and machines can work together to mitigate bias, and allowing enough time to train teachers to use this application.
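
A minimal sketch of the balance check mentioned in point 2, assuming each sample's metadata carries a `nationality` label; the key name and the 5% threshold are illustrative assumptions, not part of our actual pipeline:

```python
from collections import Counter

def check_group_balance(samples, group_key="nationality", min_share=0.05):
    """Flag groups whose share of the dataset falls below min_share.

    `samples` is a list of metadata dicts, e.g. {"image": "...", "nationality": "..."}.
    The key name and the 5% threshold are illustrative assumptions.
    """
    counts = Counter(s[group_key] for s in samples)
    total = sum(counts.values())
    return {
        group: count / total
        for group, count in counts.items()
        if count / total < min_share
    }

# Example: group "B" makes up only 3% of the samples and gets flagged.
samples = [{"nationality": "A"}] * 97 + [{"nationality": "B"}] * 3
print(check_group_balance(samples))  # {'B': 0.03}
```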
### Overfitting
We will make sure our model does not overfit by:
> Separating the dataset into two parts, a training dataset and a validation dataset, and checking the accuracy of our model on the validation set (as sketched below).
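
A minimal sketch of that split, assuming scikit-learn is available and using synthetic data and a stand-in classifier in place of the actual face-recognition model (the variable names and the 20% split are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder data standing in for face embeddings and presence labels.
X, y = make_classification(n_samples=1000, n_features=128, random_state=0)

# Hold out 20% of the data as a validation set the model never trains on.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)

# A large gap between the two scores is a sign of overfitting.
print("train accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
```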
### Misuse
>We have to remind ourselves that our application could be misused by off-campus staff to monitor students and violate their privacy.
>We have to remind ourselves that our application could be misused by teachers to monitor the after-class activities and interactions of students.
>We have to remind ourselves that our application could be misused by hackers to make illegal use of student portraits.
### Data leakage
*Choose the most relevant proposition:*
>In a catastrophic scenario, where all of our training dataset was stolen or recovered from our model, the risk would be that the private information of our students is used illegally, or that student information is used for fraud.
**OR**
>We have decided that our training dataset will be fully open-sourced, but only after we make sure that all teachers and staff fully understand the usage principles.
### Hacking
> If someone found a way to "cheat" our model and make it produce any prediction they want instead of the real one, the risk would be that the evaluation of students is wrong, including their attendance and performance in class, or that student information is collected illegally.