# Processing with AI
## Exploration: AI & Ethics
Name:
> Ginet
> Aloïs
Subject:
> Detect dermatological problems using Computer Vision
[TOC]
## Design brief
### Bias
If we don't source our dataset with enough rigor, the following biases might appear:
>1. Detecting dermatological problems on certain types of skin but not on others.
>2. Detecting dermatological problems too late, rather than at their early stages of development.
>3. Not detecting rare or infrequent dermatological problems.
We will ensure that our model is not biased by:
>1. Ensuring that every type of dermatological problem listed in the tool is trained on a variety of skin types (hairy or not, young or old) and colors.
>2. For every dermatological problem listed in the tool, training the model on the whole evolution of the problem (since the skin changes at each development stage of the problem).
>3. Taking care to train the tool on rare or infrequent dermatological problems by enriching its training data with a broad range of dermatological issues.
>
To that end, we could use generative models like those in Runway to synthesize images of a specific dermatological issue on different skin colors and types, and at different stages of its development, so as to get a comprehensive dataset.
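As a minimal sketch of how we could audit this balance before training (the metadata file and column names `condition` and `skin_tone` are assumptions, not an existing asset of the project), we could cross-tabulate conditions against skin tones and flag the pairs that need enrichment, for instance with generated images:

```python
import pandas as pd

# Hypothetical metadata file: one row per training image.
# Column names ("condition", "skin_tone") are illustrative assumptions.
meta = pd.read_csv("training_metadata.csv")

# Cross-tabulate condition vs. skin tone (e.g. Fitzpatrick types I-VI)
# to spot under-represented combinations before training.
coverage = pd.crosstab(meta["condition"], meta["skin_tone"])
print(coverage)

# Flag every condition/skin-tone pair with too few examples, so it
# can be enriched (for instance with generated images).
MIN_EXAMPLES = 50
gaps = coverage[coverage < MIN_EXAMPLES].stack()
print("Under-represented pairs that need enrichment:")
print(gaps)
```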
### Overfitting
We will make sure our model does not overfit by:
> Checking that the elderly are not systematically flagged as suffering from a dermatological problem. Since the elderly are much more affected by dermatological problems, the tool could overfit by associating elderly skin itself with these problems. To avoid that, we could use Runway models when building our training set to lower the share of elderly examples with dermatological problems, bringing it closer to the share among non-elderly examples.
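One way to run this check, sketched below under the assumption that we hold out a labelled test set annotated with an age group (all variable names and values are illustrative), is to compute the false positive rate per age group: if healthy elderly skin is flagged far more often than healthy young skin, the model has likely learned age cues instead of actual lesions.

```python
import numpy as np

def false_positive_rate_by_group(y_true, y_pred, group):
    """False positive rate per group: healthy skins wrongly flagged."""
    rates = {}
    for g in np.unique(group):
        healthy = (group == g) & (y_true == 0)  # healthy members of group g
        rates[g] = y_pred[healthy].mean() if healthy.any() else float("nan")
    return rates

# Illustrative data: 1 = dermatological problem, 0 = healthy skin.
y_true = np.array([0, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array(["elderly", "young", "elderly", "elderly",
                   "young", "elderly", "young", "young"])

print(false_positive_rate_by_group(y_true, y_pred, group))
# A much higher rate for "elderly" than for "young" signals that the
# model is reacting to age rather than to actual skin lesions.
```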
### Misuse
> The tool could, for example, deter some patients from seeing a dermatologist if they don't get any alert from it. If they have a rare dermatological problem that the model misses, this could keep them from consulting a doctor and therefore endanger their health.
### Data leakage
>In a catastrophic scenario where our entire training dataset was stolen, or recovered from our model, the risk would be that banks use this data to decide whether or not to grant a loan to a customer, since dermatological problems are linked to higher health risks. It could also be used by insurers to set different rates depending on whether you have a dermatological problem or not. This would be fundamentally unethical.
### Hacking
> If someone found a way to "cheat" our model and force it to output any prediction they want instead of the real one, the risk would be that it computes your expected lifespan from your dermatological health analysis, which would only spread anxiety and would not be relevant at all.
> It could also be used to degrade your ranking in matching algorithms, for instance lowering your score in the Tinder algorithm if you have dermatological problems.
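As an illustration of how such "cheating" can work in practice, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), a standard adversarial attack on image classifiers; the model, epsilon value, and image tensor are assumptions for the sketch, not part of our tool:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Return an image that looks unchanged to a human but can flip
    the model's prediction (Fast Gradient Sign Method)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Defences such as adversarial training or input sanitization would be needed to reduce this risk.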