# Federated ILC Meeting - 2021/05/01
### MNIST Code Links:
PyTorch: https://colab.research.google.com/drive/1RQMTdxwcKaRncLL6qWhCtA_1jrnTxrQT?usp=sharing
TensorFlow: https://colab.research.google.com/drive/17n-0uUZNcSFJOEDgfokXhYSKGMSn3w3n?usp=sharing
"A perfect categorization model's ROC will reach the top left corner of the graph, which in turn means that the model achieved a sensitivity and specificity of 1"
(See: https://medium.com/swlh/roc-and-auc-for-categorical-model-evaluation-486dc1c267e4)
"ROC AUC represents the probability that the prediction value of a random positive example is higher than the prediction value of a random negative example. A model whose predictions are 100% wrong has an AUC of 0.0; one whose predictions are 100% correct has an AUC of 1.0."
(See: https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc)
Therefore, it may be expected that our model's ROC curve reaches the top-left corner of the graph with an AUC of 1.0, since the model only performs a simple binary classification task between MNIST digit 0 and MNIST digit 1, which are easy to separate perfectly.
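As a minimal sketch of the quoted definition: whenever every positive example scores strictly higher than every negative example, the ROC AUC is exactly 1.0, regardless of the score magnitudes. The scores below are hypothetical, not from our model.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical prediction scores for a 0-vs-1 binary classifier.
# All positives (label 1) score above all negatives (label 0),
# so the ranking is perfect and the AUC is 1.0.
y_true = np.array([0, 0, 0, 1, 1, 1])
y_score = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])
print(roc_auc_score(y_true, y_score))  # 1.0
```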

### Proposed ILC Algorithm:
Suppose we have gradients across 3 environments: `-0.01`, `0.01`, `10`. The extreme value `10` is called an **outlier** (See: https://en.wikipedia.org/wiki/Outlier), and we shall exclude it from the dataset using an algorithm such as the `Z-Score` or `IQR Score` rule.
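A minimal sketch of the IQR rule. Note that with only the three values above, neither the standard Z-score threshold of 3 nor the 1.5×IQR fences would actually flag `10`, so the sketch uses a slightly larger hypothetical set of per-environment gradients:

```python
import numpy as np

# Hypothetical per-environment gradients; 10.0 is the suspected outlier.
grads = np.array([-0.01, 0.01, 0.02, -0.02, 0.005, 10.0])

# IQR rule: flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
q1, q3 = np.percentile(grads, [25, 75])
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
kept = grads[(grads >= lo) & (grads <= hi)]
print(kept)  # 10.0 is excluded, the small gradients survive
```

The IQR rule is preferable to the Z-score here because the mean and standard deviation used by the Z-score are themselves inflated by the outlier, which caps how extreme any single point can appear in a small sample.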

Question:
What if the gradients instead form 2 clusters?
For example:
```
-0.01,-0.02,-0.03,-0.04,
10, 10.01, 10.02, 10.03, 10.04,10.05
```