# Processing with AI

## Exploration: AI & Ethics

Name:
> Philip Lüdecke

Problematic:
> Can we tackle racism and violence in football by understanding the behaviour of hooligans?

Small description of the project:
> Training surveillance cameras inside stadiums so that they can segment images, analyse their content, and transmit warning messages that allow human corrective action when violent or racist behaviour is displayed.

[TOC]

## Design brief

### Bias

If we do not source our dataset with enough rigour, the following biases might appear:

>1. Every person wearing a scarf may be flagged as a hooligan
>2. Every person wearing specific branded fashion may be flagged as a hooligan
>3. Every person forming a fist may be flagged as a hooligan

We will ensure that our model is not biased by:

>1. Training our model preferably with data received from the police, in order to spot known hooligans
>2. Creating interdependency between all or most of the classes our model can detect
>3. Training the model to analyse only larger groups of people, instead of individuals who may not be participating in hooliganism

### Overfitting

We will make sure our model does not overfit by:

> Training and validating the model only with images that display typical hooligan behaviour, and only with convicted hooligans. An additional resource we will make use of is human correction: an employee of the respective security team who monitors the surveillance cameras will personally review the content before taking action.

### Misuse

>The consequences of misuse can be catastrophic. If exercised by the wrong person, innocent people can be blamed for actions they have not committed. In the worst case, the model could actually reinforce racism if used by individuals who hold antipathy towards a certain demographic group.
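The third bias mitigation above — analysing larger groups rather than individuals — could be gated before any behaviour analysis runs. The sketch below is only illustrative: the function names, the grid-based proximity heuristic, and the group-size threshold are our own assumptions, not part of the actual system.

```python
# Illustrative sketch: only run behaviour analysis on sufficiently large
# groups of detected people, never on isolated individuals.
# All names and the grid heuristic are hypothetical assumptions.

def cluster_sizes(detections, cell=5.0):
    """Count person detections per coarse spatial grid cell.

    detections: list of (x, y) centre coordinates of detected people.
    cell: side length of a grid cell; people in the same cell count as
    one group (a deliberately crude proximity heuristic).
    """
    counts = {}
    for x, y in detections:
        key = (int(x // cell), int(y // cell))
        counts[key] = counts.get(key, 0) + 1
    return counts


def groups_to_analyse(detections, min_group=5, cell=5.0):
    """Return only the cells crowded enough for behaviour analysis.

    Individuals and small clusters are skipped, so a lone person can
    never be flagged by the downstream model.
    """
    return {k: n for k, n in cluster_sizes(detections, cell).items()
            if n >= min_group}
```

For example, five people standing close together would pass the gate, while a single person elsewhere in the stand would be ignored entirely.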
### Data leakage

>In a catastrophic scenario in which our entire training dataset were stolen or recovered from our model, the risk would be that the model would no longer detect symptoms of violence and racism, and could therefore cause an increase in these atrocities inside stadiums.

### Hacking

> If someone found a way to "cheat" our model and make it produce any prediction they want instead of the real one, the risk would be that individuals who engage in hooliganism would not be caught while innocent individuals would be prosecuted wrongfully. Of course, there is still a human in the loop when analysing the model's output and taking corrective action; however, hacking would cost us valuable time when investigating hooliganism.
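The human-in-the-loop safeguard mentioned throughout this brief — a security employee confirming every alert before action is taken — can be sketched as a simple triage flow. Everything here is a hypothetical illustration: the `Alert` fields, labels, thresholds, and return strings are assumptions, not a real API.

```python
# Illustrative sketch of the human-review step: the model only raises
# alerts, and corrective action requires explicit human confirmation.
# All names, labels, and thresholds are hypothetical assumptions.
from dataclasses import dataclass


@dataclass
class Alert:
    camera_id: str
    label: str         # e.g. "violence" or "racist_behaviour"
    confidence: float  # model confidence in [0, 1]


def triage(alert, threshold=0.8):
    """Route an alert: confident detections go to a human reviewer;
    low-confidence ones are only logged. The model never acts alone."""
    if alert.confidence >= threshold:
        return "send_to_human_review"
    return "log_only"


def corrective_action(alert, human_confirmed):
    """Take action only when the security employee confirms the alert;
    otherwise the alert is dismissed with no consequence."""
    if human_confirmed:
        return f"dispatch_security:{alert.camera_id}"
    return "dismissed"
```

The design choice this illustrates is that a hacked or mistaken model can, at worst, waste a reviewer's time: it cannot trigger a corrective action against an innocent person on its own.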