# SIGIR 2020: Controlling Fairness and Bias in Dynamic Learning-to-Rank

## The Solution for the Bias Problem

- To address the bias problem, minimize the following loss function:

$$L(w)= \sum_t \sum_d R^2(d|x_t) + \frac{c_t(d)}{p_t(d)}[c_t(d)-2R(d|x_t)]$$

- where
  + $t$ is a timestamp
  + $d$ is a document
  + $R$ is the relevance function to fit
  + $c_t(d)$ is the click on document $d$ at timestamp $t$
  + $p_t(d)$ is the probability of examining document $d$ at timestamp $t$
  + $x_t$ is the set of features available at timestamp $t$
- it can be proved that minimizing this loss gives an unbiased estimate of the true relevance: the loss is quadratic in $R$, so the per-document minimizer is the inverse-propensity-scored click average $\frac{1}{\tau}\sum_t \frac{c_t(d)}{p_t(d)}$, whose expectation equals the relevance (a simulation sketch is given at the end of this note)

![](https://i.imgur.com/eufUQPU.png)

## The Solution for the Unfairness Problem

$$\mathrm{argsort}_d\,[R(d|x) + \lambda\, err_\tau (d)]$$

- where
  + $err_\tau (d)$ is defined as

$$err_\tau(d)=(\tau-1) \max_{G_i}[D_{\tau-1}(G_i, G)], \quad d \in G$$

  + the larger $err_\tau (d)$ is, the more under-exposed document $d$'s group has been, and the higher $d$ is boosted in the ranking
  + $G$ is a group of documents (e.g., from the same source)
  + $D(G_i, G_j)$ is defined as

$$D(G_i, G_j)=\frac{\frac{1}{\tau}\sum_{t=1}^{\tau} E_t(G_i)}{M(G_i)} - \frac{\frac{1}{\tau}\sum_{t=1}^{\tau} E_t(G_j)}{M(G_j)}$$

  + if $D(G_i, G_j)$ is large, the ranking is unfair to $G_j$ ($G_j$ gets less exposure than its merit warrants)
  + exposure of group $G_i$: $E_t(G_i) = \frac{1}{|G_i|} \sum_{d \in G_i} p_t(d)$
  + merit of group $G_i$: $M(G_i)=\frac{1}{|G_i|} \sum_{d \in G_i} R(d)$
- sketches of the exposure/merit bookkeeping and of the corrected ranking are given at the end of this note

## How Are We Going to Use It?

- in our ranking model: 1) it only considers the clicks $c(d)$, and 2) it considers $c(d)$ at the aggregate level; we should incorporate the examination probability $p_t(d)$ to debias the click signal
- for diversification, it is hard to apply this directly because $R(d)$ is unknown in most cases;
  + we can approximate $R(d)$ with the ranking score;
  + we can approximate $E(G)$ and $M(G)$ from the sample buckets;
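## Code Sketches

A minimal sketch of the debiased loss from the bias section, in Python with NumPy. The setup (a single document, uniform examination probabilities, the names `ips_loss` and `true_rel`) is an illustrative assumption, not from the paper; it only demonstrates that the loss's minimizer, the IPS click average, recovers the true relevance while the naive click rate does not.

```python
import numpy as np

rng = np.random.default_rng(0)

def ips_loss(r_hat, clicks, prop):
    # Debiased loss from the section, for a single document:
    # sum_t [ R^2 + (c_t/p_t) * (c_t - 2R) ]
    return np.sum(r_hat**2 + (clicks / prop) * (clicks - 2 * r_hat))

# Hypothetical setup: one document with true relevance 0.3, examined with a
# position-dependent probability p_t(d); a click requires examination AND relevance.
true_rel = 0.3
T = 100_000
prop = rng.uniform(0.2, 0.9, size=T)                              # p_t(d)
examined = rng.random(T) < prop
clicks = (examined & (rng.random(T) < true_rel)).astype(float)    # c_t(d)

# The loss is quadratic in r_hat, so its minimizer is mean(c/p) -- the IPS
# estimate. The naive click rate is biased down by the examination probability.
naive = clicks.mean()
ips = (clicks / prop).mean()
print(f"naive CTR: {naive:.3f}  IPS estimate: {ips:.3f}  true: {true_rel}")
print(f"loss at IPS estimate: {ips_loss(ips, clicks, prop):.1f} "
      f"<= loss at naive: {ips_loss(naive, clicks, prop):.1f}")
```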
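The exposure and merit quantities, and the disparity $D(G_i, G_j)$, can be tracked incrementally from the examination probabilities $p_t(d)$ and the relevance estimates $R(d)$. A sketch under assumed data layouts (one group label per document, one $p_t$ value per document per ranking); the function names are illustrative.

```python
import numpy as np

def group_exposure(p_t, groups, n_groups):
    # E_t(G_i) = (1/|G_i|) * sum_{d in G_i} p_t(d)
    return np.array([p_t[groups == g].mean() for g in range(n_groups)])

def group_merit(r, groups, n_groups):
    # M(G_i) = (1/|G_i|) * sum_{d in G_i} R(d)
    return np.array([r[groups == g].mean() for g in range(n_groups)])

def disparity(exposure_sum, merit, tau):
    # D(G_i, G_j): average exposure per unit of merit, group i minus group j.
    # exposure_sum[g] accumulates sum_{t=1..tau} E_t(G_g) across rankings.
    avg = exposure_sum / (tau * merit)
    return avg[:, None] - avg[None, :]   # entry [i, j] = D(G_i, G_j)
```

A large positive entry `[i, j]` means $G_j$ has received less exposure per unit of merit than $G_i$, matching the fairness criterion above.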
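Finally, a FairCo-style sketch of the corrected ranking $\mathrm{argsort}[R(d|x) + \lambda\, err_\tau(d)]$. When exposure is kept as a running sum over $t$, the $(\tau-1)$ factors in $err_\tau(d)$ cancel, so it reduces to the gap between the best exposure-per-merit ratio and that of $d$'s group; `lam` and the toy numbers below are made-up assumptions.

```python
import numpy as np

def fairness_error(exposure_sum, merit, groups):
    # err_tau(d) = (tau-1) * max_{G_i} D_{tau-1}(G_i, G_d).  With exposure_sum
    # holding the running sum of E_t(G) over t, the (tau-1) factors cancel and
    # err is the gap between the best exposure/merit ratio and d's group's ratio.
    norm_exp = exposure_sum / merit
    gap = norm_exp.max() - norm_exp      # >= 0; large => group under-exposed
    return gap[groups]                   # broadcast each group's gap to its docs

def fair_rank(r_hat, groups, exposure_sum, merit, lam=0.01):
    # argsort_d [ R(d|x) + lambda * err_tau(d) ], best first
    scores = r_hat + lam * fairness_error(exposure_sum, merit, groups)
    return np.argsort(-scores)

# Toy usage (made-up numbers): group 1 is under-exposed, so its documents get
# boosted above comparably relevant documents from group 0.
r_hat = np.array([0.90, 0.85, 0.80, 0.75])
groups = np.array([0, 0, 1, 1])
exposure_sum = np.array([4.0, 1.0])      # sum_t E_t(G)
merit = np.array([1.0, 1.0])             # M(G)
print(fair_rank(r_hat, groups, exposure_sum, merit, lam=0.1))  # -> [2 3 0 1]
```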