# Comprehensive Fair Meta-learned Recommender System
https://github.com/weitianxin/CLOVER
### preface
### sensitive information (age, gender, race)
### Towards Personalized Fairness based on Causal Notion

#### For example, as shown in Figure 1, some users may be sensitive to gender and do not want their recommendations to be influenced by this feature, while others may care more about the age feature and are less concerned about gender.
<hr/>
### fairness issue
#### A fair recommender system should reflect the personalized preferences of users while being fair with respect to sensitive information.
### CLOVER

1. In this paper, we formulate comprehensive fairness as a multi-task adversarial learning problem, where different fairness requirements are associated with different adversarial learning objectives.
2. Our basic idea is to train the recommender and the adversarial attacker jointly. The attacker seeks to optimize its model to predict the sensitive information. The recommender aims to extract users' actual preferences and fool the adversary in the meantime.
3. We model this as a min-max game between the recommender and the attack discriminator.
4. The recommender is asked to minimize the recommendation error while fooling the discriminator to generate fair results. Meanwhile, the discriminator is optimized to predict the sensitive information from the user as accurately as possible.
```
min_{θ_r} max_{θ_d} L(f_{θ_r, θ_d}) = l_R − λ · l_D
l_D = − Σ_u Σ_{c=1}^{C} 1[a_u = c] · log â_u^c

where θ_r is the parameter of the recommender model,
θ_d represents the sensitive-information discriminator parameter,
L consists of the recommender loss l_R and the sensitive-attribute discriminator loss l_D,
C is the number of classes of the sensitive attribute,
and â_u^c is the probability that the sensitive attribute of user u is predicted by the discriminator to be class c.
```
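To make the alternating optimization concrete, here is a minimal PyTorch sketch of the min-max game; the modules, toy data, and the trade-off weight `lam` are illustrative placeholders, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Toy recommender (user embedding + rating head) and sensitive-attribute
# discriminator. All sizes and data below are placeholders.
dim, n_classes = 16, 2
user_emb = nn.Embedding(100, dim)
rating_head = nn.Linear(dim, 1)
disc = nn.Linear(dim, n_classes)

opt_r = torch.optim.Adam(
    list(user_emb.parameters()) + list(rating_head.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
lam = 0.5  # assumed accuracy/fairness trade-off weight

for step in range(100):
    u = torch.randint(0, 100, (32,))        # batch of users
    y = torch.rand(32) * 5                  # toy ratings
    a = torch.randint(0, n_classes, (32,))  # sensitive-attribute labels

    # Recommender step: minimize l_R while fooling the discriminator.
    e_u = user_emb(u)
    l_R = nn.functional.mse_loss(rating_head(e_u).squeeze(-1), y)
    l_D = nn.functional.cross_entropy(disc(e_u), a)
    opt_r.zero_grad()
    (l_R - lam * l_D).backward()
    opt_r.step()

    # Discriminator step: maximize accuracy on frozen embeddings.
    opt_d.zero_grad()
    nn.functional.cross_entropy(disc(user_emb(u).detach()), a).backward()
    opt_d.step()
```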

### how to relate the adversarial learning problem to comprehensive fairness mitigation
### Individual Fairness
Here, the fairness requirement refers to not exposing sensitive features in the user modeling process to attackers.

```
IF = max_g M( { (g(e_u), a_u) : u ∈ U_f } )

where U_f represents the set of fresh users that will arrive in the system,
e_u is the representation of user u,
a_u is the sensitive information of u,
g is the user-representation attacker, aiming to predict the sensitive information from the user representation,
and M is the evaluation metric of the prediction performance.
```
This definition requires the user modeling network to defend against any possible attacker that tries to hack the sensitive information.
A lower IF score indicates a fairer recommender.
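One way to estimate the IF score in practice is to freeze the learned user embeddings and train a proxy attacker on them; below is a minimal sketch using a logistic-regression attacker and AUC as the metric M (both are assumptions for illustration, since the definition allows any attacker and any metric).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def if_score(user_embeddings, sensitive_labels):
    """Train an attacker g on frozen user representations and report AUC.
    Lower (closer to 0.5) means less sensitive-information leakage."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        user_embeddings, sensitive_labels, test_size=0.3, random_state=0)
    attacker = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, attacker.predict_proba(X_te)[:, 1])

# Toy check: random embeddings carry no signal, so AUC should be ~0.5.
emb = np.random.randn(500, 16)
a = np.random.randint(0, 2, 500)
print(f"IF (attacker AUC): {if_score(emb, a):.3f}")
```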
#### Individual Fairness adversarial learning
It requires the user modeling process of the recommender system to prevent an attacker from inferring the sensitive information. To this end, we conduct representation-level adversarial learning, which aims to generate user embeddings that are irrelevant to the sensitive information.
```
l_D_IF = − Σ_u Σ_{c=1}^{C} 1[a_u = c] · log g_c(e_u ⊕ y_ui)

where g is the representation discriminator,
e_u is the user embedding of u,
and ⊕ denotes concatenation: we concatenate the corresponding historical rating y_ui with the user embedding as the input information.
```
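The input construction for the representation discriminator can be sketched as follows; the layer sizes and the architecture of `g` are assumptions, but the concatenation of the historical rating with the user embedding follows the description above.

```python
import torch
import torch.nn as nn

# Representation-level discriminator g: predicts the sensitive attribute
# from [user embedding ; historical rating]. Sizes are illustrative.
dim, n_classes = 16, 2
g = nn.Sequential(nn.Linear(dim + 1, 32), nn.ReLU(), nn.Linear(32, n_classes))

e_u = torch.randn(8, dim)        # batch of user embeddings
y_ui = torch.rand(8, 1) * 5      # corresponding historical ratings
logits = g(torch.cat([e_u, y_ui], dim=-1))  # concatenated input
a = torch.randint(0, n_classes, (8,))       # sensitive-attribute labels
l_D_IF = nn.functional.cross_entropy(logits, a)
```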

<hr/>
### counterfactual fairness

#### For a given user u, the distribution of the generated recommendation results L for u should be the same if we only change A from a to a′, while holding the remaining features X unchanged.
#### Proposition 3.1. If the adversarial game for individual fairness converges to the optimal solution, then the rating-prediction recommender which leverages these representations will also satisfy counterfactual fairness.
#### proof
The prediction forms a Markov chain: a_u → e_u → ŷ_ui.
#### If the adversarial learning converges to the optimum, the generated recommendation results will be independent of the sensitive attribute, i.e., the mutual information between the sensitive attribute a_u and the representation e_u for any given user u is zero.

Therefore, by the data processing inequality over this Markov chain, the prediction ŷ_ui for any given user u is independent of the sensitive attribute a_u.
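The step from zero mutual information at the representation to independence of the prediction is exactly the data processing inequality; a one-line formalization:

```latex
% Markov chain  a_u -> e_u -> \hat{y}_{ui}
0 \le I(a_u; \hat{y}_{ui}) \le I(a_u; e_u) = 0
\;\Longrightarrow\; I(a_u; \hat{y}_{ui}) = 0
\;\Longrightarrow\; \hat{y}_{ui} \perp a_u .
```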
### group fairness
#### group fairness adversarial learning
It requires the recommendation performance to be identical across different groups of users.
```
l_D_GF = − Σ_u Σ_{c=1}^{C} 1[a_u = c] · log h_c(ŷ_ui ⊕ E_h)

where h is the prediction discriminator,
and E_h is the external information for h.
Here we regard the item embedding e_i as the additional information, i.e., E_h = e_i.
```
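Symmetrically to the representation-level case, the prediction-level discriminator takes the predicted rating together with the external information; a minimal sketch, with the architecture of `h` assumed:

```python
import torch
import torch.nn as nn

# Prediction-level discriminator h: predicts the sensitive attribute from
# [predicted rating ; item embedding e_i as external information E_h].
dim, n_classes = 16, 2
h = nn.Sequential(nn.Linear(1 + dim, 32), nn.ReLU(), nn.Linear(32, n_classes))

y_hat = torch.rand(8, 1) * 5    # predicted ratings from the recommender
e_i = torch.randn(8, dim)       # item embeddings (E_h)
logits = h(torch.cat([y_hat, e_i], dim=-1))
a = torch.randint(0, n_classes, (8,))
l_D_GF = nn.functional.cross_entropy(logits, a)
```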

To comprehensively mitigate the fairness issues, we perform multi-task learning with both representation- and prediction-level adversarial learning, combining the two discriminator losses into a single min-max objective (sketched below).
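A plausible form of the combined objective, assuming two trade-off weights λ_IF and λ_GF for the two adversarial tasks (the weights are my notation, not necessarily the paper's):

```latex
\min_{\theta_r}\;\max_{\theta_d}\; L(f_{\theta_r,\theta_d})
  \;=\; l_R \;-\; \lambda_{IF}\, l_{D_{IF}} \;-\; \lambda_{GF}\, l_{D_{GF}}
```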

### CLOVER

<hr/>

### meta learning






### Fair Meta-Learned Recommender System
```
Specifically, the model parameters consist of two parts: θ = {θ_r, θ_d},

where θ_r is the parameter of the recommender model
and θ_d represents the sensitive-information discriminator parameter.
L consists of the recommender loss l_R and the sensitive-attribute discriminator loss l_D;
L(f_{θ_r, θ_d}) implies the loss is parameterized by θ_r and θ_d.
Note that the recommender loss l_R is unrelated to the discriminator parameter θ_d.
```
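To show how the adversarial game fits inside the meta-learning loop, here is a minimal MAML-style sketch (as in MeLU): adapt the recommender on a cold-start user's support set, then apply the min-max objective on the query set. Module names, sizes, and the inner learning rate are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn
from torch.func import functional_call

class Rec(nn.Module):
    """Toy recommender: returns a rating prediction and the user embedding."""
    def __init__(self, n_users=100, n_items=100, dim=16):
        super().__init__()
        self.ue = nn.Embedding(n_users, dim)
        self.ie = nn.Embedding(n_items, dim)
        self.head = nn.Linear(2 * dim, 1)
    def forward(self, u, i):
        e_u = self.ue(u)
        pred = self.head(torch.cat([e_u, self.ie(i)], dim=-1)).squeeze(-1)
        return pred, e_u

rec, disc, lam = Rec(), nn.Linear(16, 2), 0.5
opt_r = torch.optim.Adam(rec.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

# One toy task: a cold-start user's support (first 4) and query (last 4) data.
u = torch.zeros(8, dtype=torch.long)
i = torch.randint(0, 100, (8,))
y = torch.rand(8) * 5
a = torch.randint(0, 2, (8,))

# Inner loop: one functional gradient step on the support set. Only l_R is
# used here, matching the note that l_R does not involve theta_d.
params = dict(rec.named_parameters())
pred_s, _ = functional_call(rec, params, (u[:4], i[:4]))
grads = torch.autograd.grad(nn.functional.mse_loss(pred_s, y[:4]),
                            list(params.values()), create_graph=True)
fast = {k: p - 0.1 * g for (k, p), g in zip(params.items(), grads)}

# Outer loop: min-max objective on the query set with the adapted parameters.
pred_q, e_u = functional_call(rec, fast, (u[4:], i[4:]))
l_R = nn.functional.mse_loss(pred_q, y[4:])
l_D = nn.functional.cross_entropy(disc(e_u), a[4:])
opt_r.zero_grad(); (l_R - lam * l_D).backward(); opt_r.step()

# Discriminator step on frozen embeddings.
opt_d.zero_grad()
nn.functional.cross_entropy(disc(e_u.detach()), a[4:]).backward()
opt_d.step()
```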


<hr/>
### experiment
For metrics, we report MAE and NDCG to evaluate the rating prediction and ranking performance.

We adopt AUC (individual fairness), CF (counterfactual fairness), and GF (group fairness) to show the performance of the three kinds of fairness.

A lower IF score indicates a fairer recommender.




The first two baselines are content-based filtering methods; the next two are traditional cold-start recommender systems.

For fair recommendation, we choose the **gender** attribute as the sensitive attribute for the ML-100K and ML-1M datasets, and the **age** attribute as the sensitive attribute for the BookCrossing dataset.




### As far as we know, we are the first to explore the fairness issue in meta-learned recommender systems
In the future, we would like to design fairness metrics for multi-class sensitive attributes and explore the fairness issues within the combination of multiple sensitive attributes. We are also interested in considering fairness from a user-item interaction graph perspective.
### MeLU: Meta-Learned User Preference Estimator for Cold-Start Recommendation







