# Processing with AI

## Exploration: AI & Ethics

Name:

> Detect dermatological problems using Computer Vision

Subject:

> Artificial intelligence has changed our lives in many ways, from production to consumption, but the ethical issues it raises cannot be ignored. Let us take the example of detecting dermatological problems using Computer Vision.

[TOC]

## Design brief

### Bias

If we don't source our dataset with enough rigor, the following biases might appear:

> 1. Machine learning systems that treat gender classification as a binary decision (male or female) make their performance on that task easy to assess statistically, but they exclude everyone who does not fit either category.
> 2. If a new medical imaging technology advertised with 99% accuracy was mainly tested on white people, it could output false negatives for patients who would have benefited from medical treatment.

We will ensure that our model is not biased by:

> 1. Sourcing a dataset that covers a genuinely diverse population, for example a wide range of skin tones.
> 2. Setting quantitative standards: a conclusion may only be drawn once the model has been tested on enough cases (a per-group evaluation sketch is given at the end of this brief).

### Overfitting

We will make sure our model does not overfit by:

1. Reducing the network's capacity by removing layers or reducing the number of units in the hidden layers.
2. Training with more data; it won't work every time, but it can help the algorithm detect the signal better.
3. Applying regularization, which comes down to adding a cost to the loss function for large weights (see the second sketch at the end of this brief).

### Misuse

> We have to remind ourselves that our application could be misused for commercial purposes or in racially discriminatory speech. For example, a cosmetic company may take advantage of results published on dermatological problems to argue that its particular product or service is efficient at dealing with some kind of skin problem when this is not the reality. Likewise, racist people may deliberately misuse certain results to strengthen their ethnocentrism against a particular group.

### Data leakage

*Choose the most relevant proposition:*

> In a catastrophic scenario where our whole training dataset was stolen or recovered from our model, the risk would be that personal information is released; the people concerned could then suffer discrimination from others and be mentally tormented.

**OR**

> We have decided that our training dataset will be fully open-sourced, but beforehand we made sure that the names of the participants would be anonymized. In the event of a data leak, however, they would become indignant and might even seek legal recourse.

### Hacking

> If someone found a way to "cheat" our model and make it output any prediction they want instead of the real one, the risk would be that the public forms a prejudice that is very difficult to change; the truth would only come out after a great deal of effort.
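Returning to the quantitative standard mentioned in the Bias section, here is a minimal sketch of a per-group evaluation. The group labels and the minimum of 100 cases per group are illustrative assumptions, not requirements taken from the brief itself.

```python
# Sketch of a per-group evaluation: accuracy and false-negative rate are
# reported separately for each demographic group, and groups with too few
# test cases are flagged so no conclusion is drawn from them.
from collections import defaultdict

MIN_CASES = 100  # hypothetical quantitative standard, purely illustrative


def per_group_report(y_true, y_pred, groups):
    """Print accuracy and false-negative rate for each group tag."""
    buckets = defaultdict(list)
    for t, p, g in zip(y_true, y_pred, groups):
        buckets[g].append((t, p))

    for group, pairs in buckets.items():
        n = len(pairs)
        acc = sum(t == p for t, p in pairs) / n
        positives = [(t, p) for t, p in pairs if t == 1]
        fnr = (sum(p == 0 for _, p in positives) / len(positives)
               if positives else float("nan"))
        verdict = "ok" if n >= MIN_CASES else "too few cases, no conclusion"
        print(f"{group}: n={n}, accuracy={acc:.2f}, FNR={fnr:.2f} ({verdict})")
```

Called with the test labels, the model's predictions, and a parallel list of group tags, it prints one line per group, so under-tested groups are visible before any accuracy claim is made.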
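The second sketch illustrates the anti-overfitting measures from the Overfitting section, assuming a small Keras image classifier; the layer sizes, dropout rate, and L2 penalty of 1e-4 are placeholders rather than project choices. The `kernel_regularizer` arguments add the cost on large weights to the loss, the small dense layer keeps the capacity low, and dropout adds a further layer of regularization.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    # L2 regularization adds a penalty on large weights to the loss function
    layers.Conv2D(16, 3, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4),
                  input_shape=(128, 128, 3)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    # deliberately small hidden layer: reduced network capacity
    layers.Dense(32, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.5),  # additional regularization during training
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```

Training with more (or augmented) data, the remaining measure on the list, happens at `model.fit` time and is independent of this architecture sketch.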