# Fairness Meeting
###### tags:`Pathology`

### 2023.02.23
### "Fair Attribute Classification through Latent Space De-biasing" (CVPR 2021)
![](https://i.imgur.com/xyxCv4p.jpg)
1. Use **GANs** to generate realistic-looking images, and perturb these images in the underlying latent space to generate training data that is balanced for each protected attribute.
2. The latent vector perturbation method:
![](https://i.imgur.com/WkIi2Yn.jpg)
3. Experiments:
![](https://i.imgur.com/L9TY6gY.jpg)
    * DEO: the absolute difference between the false negative rates for both gender expressions.
    * BA: bias amplification metric; how much more often a target attribute is predicted together with the protected attribute than in the ground truth.
    * Incons.: inconsistently labeled
    * G-dep: gender-dependent
4. Critique:
    * If we do not have a pair (target and protected attributes), can we use the vector perturbation formula to find out whether a specific protected attribute is inherently related to the target?

----

### 2023.03.02
### Implement a simple fairness analysis
1. Dataset: Adult Income Dataset (download [link](https://www.kaggle.com/datasets/wenruliu/adult-income-dataset/download?datasetVersionNumber=2))
    * **age**
    * workclass
    * fnlwgt
    * education
    * **educational-num**
    * **marital-status**
    * **occupation**
    * relationship
    * <font color="#f00">**race**</font>
    * <font color="#f00">**gender**</font>
    * capital-gain
    * capital-loss
    * **hours-per-week**
    * **native-country**
    * <font color="blue">**income**</font> (<=50K, >50K)
![](https://i.imgur.com/ghFf6Mn.png)
2. Finding proxy variables
![](https://i.imgur.com/k2fFnMp.png)
3. Modelling
    * 1: income >50K
    * 0: income <=50K
![](https://i.imgur.com/ck3c3EI.png)
4. Metrics in fairness
![](https://i.imgur.com/iNIjDLl.png)
<!-- * How can we determine the **cutoff**? ![](https://i.imgur.com/CzEBhfz.png) -->
5. Discussion
    * When we address fairness issues, do we usually look for a better loss function or metric to ensure that the model truly learns in the right direction?
    * Similar concept (i.e., using mutual information to extract features) to the paper we presented last week. We can apply this process to digital pathology datasets too.
    * When the dimensionality is large, the extracted features do not necessarily reveal the embedding distribution inside.

----

### 2023.03.09
### 1. Learn how to cut patches
### 2. Implement "Fair Contrastive Learning for Facial Attribute Classification" (CVPR 2022)
![](https://i.imgur.com/k0zn3Z9.png)
![](https://i.imgur.com/G4gM8r4.png)
Dataset:
![](https://i.imgur.com/npZrJ7P.jpg)
* 200K images
* sensitive attributes: gender, age
* target attributes (40): big-nose, wavy hair, ...

![](https://i.imgur.com/ZaEygUG.jpg)
* 20K images
* sensitive attributes: age, ethnicity
* target attributes: gender

----

### 2023.03.16
### 1. The results of this training model (FSCL) on the CelebA dataset
![](https://i.imgur.com/7Qm0CpG.png)
* Sensitive attribute: gender (0: Female ; 1: Male)
* Target attribute: 0: boring ; 1: attractive

| Accuracy | Equalized Odds |
|----------|----------------|
| 47.893   | 19.0           |

| Group-wise accuracy | **Sensitive Class 0** | **Sensitive Class 1** |
|---------------------|-----------------------|-----------------------|
| **Target class 0**  | 98.364                | 98.485                |
| **Target class 1**  | 1.315                 | 1.056                 |
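As a sanity check, the Equalized Odds and group-wise accuracy numbers in the table above can be recomputed directly from the model's predictions. Below is a minimal sketch, assuming `y_true`, `y_pred`, and the sensitive attribute `s` are binary NumPy arrays; the function names are illustrative, and Equalized Odds is taken here as the maximum TPR/FPR gap between the two sensitive groups (in %), which is one common formulation and may differ from the definition used by the FSCL code.

```python
import numpy as np


def group_rates(y_true, y_pred, s, group):
    """True/false positive rate of the classifier restricted to one sensitive group."""
    m = s == group
    tpr = np.mean(y_pred[m & (y_true == 1)] == 1)
    fpr = np.mean(y_pred[m & (y_true == 0)] == 1)
    return tpr, fpr


def equalized_odds_gap(y_true, y_pred, s):
    """Largest absolute TPR/FPR gap between the two sensitive groups, in %."""
    tpr0, fpr0 = group_rates(y_true, y_pred, s, 0)
    tpr1, fpr1 = group_rates(y_true, y_pred, s, 1)
    return 100 * max(abs(tpr0 - tpr1), abs(fpr0 - fpr1))


def group_wise_accuracy(y_true, y_pred, s):
    """Accuracy in each (target class, sensitive class) cell, as in the table above."""
    return {
        (t, g): 100 * np.mean(y_pred[(y_true == t) & (s == g)] == t)
        for t in (0, 1)
        for g in (0, 1)
    }
```

For example, `group_wise_accuracy(y_true, y_pred, s)[(1, 0)]` would correspond to the Target class 1 / Sensitive Class 0 cell of the table.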
### 2. Read Paper: Deep Fair Clustering for Visual Learning (ongoing)

----

### 2023.03.23
### 1. Improve the accuracy of the model
Using a normal classifier; loss function: cross entropy.

### Experiment A
* Sensitive attribute: gender (0: Female ; 1: Male)
* Target attribute: 0: boring ; 1: attractive
![](https://i.imgur.com/pcUCn9v.png)
* Data Analysis:
![](https://i.imgur.com/gQSPbFC.png)

----

### Experiment B
* Sensitive attribute: gender (0: Female ; 1: Male)
* Target attribute: 0: no glasses ; 1: with glasses
![](https://i.imgur.com/H4h7tHM.png)
* Data Analysis:
![](https://i.imgur.com/69UGN5g.png)

### 2. Read Paper: [Deep Fair Clustering for Visual Learning](https://openaccess.thecvf.com/content_CVPR_2020/papers/Li_Deep_Fair_Clustering_for_Visual_Learning_CVPR_2020_paper.pdf) (CVPR 2020)
![](https://i.imgur.com/Qc70NLT.png)

### Objective Functions
1. Fairness-Adversarial Loss
2. Structural Preservation Loss
3. Clustering Regularization Loss
![](https://i.imgur.com/MUp1JXO.png)

### Next Week:
1. Improve the model
<!-- 2. Read: Fairness-Aware Unsupervised Feature Selection -->

----

### 2023.03.30
### 1. Build a better Classifier
* Dataset: CelebA
* Sensitive Attribute: male (1) / female (0)
* Target Attribute: with glasses (1) / without glasses (0)
![](https://i.imgur.com/oXWt46M.png)
* Accuracy: 0.9429
* Equalized Odds: 0.1034
![](https://i.imgur.com/aYlLl88.png)

### 2. Approach - Add a Discriminator
![](https://i.imgur.com/ib6umPp.png)
* Train & Test Dataset
![](https://i.imgur.com/O8JbShc.png)
* Case 1: Fake Label: **female (0)**
    * Accuracy: 0.931
    * Equalized Odds: 0.08
![](https://i.imgur.com/fiBnbV9.png)
* Case 2: Fake Label: **male (1)**
    * Accuracy: 1.0
    * Equalized Odds: 0.0
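The approach above adds a discriminator and feeds it a fixed "fake" sensitive label. One common way to wire such a discriminator is adversarial debiasing with a gradient-reversal layer, sketched below in PyTorch. This is a generic sketch rather than the exact training scheme used in these experiments; the class names, the feature dimension `feat_dim`, and the weighting factor `lamb` are assumptions for illustration.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None


class DebiasedClassifier(nn.Module):
    def __init__(self, encoder, feat_dim=512, lamb=1.0):
        super().__init__()
        self.encoder = encoder                        # e.g. a CNN backbone returning (B, feat_dim) features
        self.target_head = nn.Linear(feat_dim, 2)     # with glasses / without glasses
        self.sensitive_head = nn.Linear(feat_dim, 2)  # discriminator: male / female
        self.lamb = lamb

    def forward(self, x):
        z = self.encoder(x)
        y_logits = self.target_head(z)
        # The discriminator sees the features through gradient reversal, so minimizing
        # its loss also pushes the encoder to hide the sensitive attribute.
        s_logits = self.sensitive_head(GradReverse.apply(z, self.lamb))
        return y_logits, s_logits


# One training step (sketch):
# y_logits, s_logits = model(images)
# loss = nn.functional.cross_entropy(y_logits, target) \
#      + nn.functional.cross_entropy(s_logits, gender)
# loss.backward(); optimizer.step()
```

With gradient reversal a single cross-entropy loss on the sensitive head both trains the discriminator and pushes the encoder to remove gender information, which serves a similar purpose to feeding the discriminator a fixed fake label as in Case 1 and Case 2 above.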