Comparison with other approaches

Question: what would a PI-explanation / sufficient reason look like in our context? Reminder: a sufficient reason justifies the classification irrespective of the values of the other criteria.

Let us take for instance the first example of the paper, $s_1 = cdefg \succ ad = s_2$, giving the vector $(-1, 0, +1, 0, +1, +1, +1)$. We get two explanations by means of sufficient reasons (easy to compute, cf. Marques-Silva et al.):

- "$cef$ are satisfied by $s_1$ but not by $s_2$, given that $b$ and $d$ are equal" is a sufficient reason (i.e., given that $b$ and $d$ are equal, $cef$ is in fact preferred to $ag$). Note that, for instance, $ceg$ would not be a sufficient reason.
- "$cefg$ are satisfied by $s_1$ but not by $s_2$, given that $b$ is equal" is a sufficient reason (i.e., given that $b$ is equal, $cefg$ is preferred to $ad$).

How much information is revealed?
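To make the check concrete, here is a minimal sketch for an additive model on the example above. The weights are entirely hypothetical (the text gives none), and a "sufficient reason" is encoded as a set of frozen criteria such that every completion of the remaining criteria still leaves $s_1$ preferred; for a linear model it is enough to test the worst-case completion.

```python
from itertools import combinations

# Criteria a..g; diff[i] = +1 if s1 satisfies criterion i and s2 does not,
# -1 if the reverse, 0 if they agree.
# Example from the text: s1 = cdefg, s2 = ad  ->  (-1, 0, +1, 0, +1, +1, +1)
CRITERIA = "abcdefg"
diff = {"a": -1, "b": 0, "c": +1, "d": 0, "e": +1, "f": +1, "g": +1}

# Hypothetical additive weights (assumption: not given in the text).
weights = {"a": 0.25, "b": 0.10, "c": 0.10, "d": 0.10,
           "e": 0.10, "f": 0.25, "g": 0.10}

def is_sufficient(fixed, diff, weights):
    """A set of frozen criteria is a sufficient reason for s1 > s2 if the
    weighted difference stays positive for *every* completion of the free
    criteria; in a linear model the worst case sets each free diff to -1."""
    score = sum(weights[c] * diff[c] for c in fixed)
    worst = sum(weights[c] for c in CRITERIA if c not in fixed)
    return score - worst > 0

def minimal_sufficient_reasons(diff, weights):
    """Enumerate subset-minimal sufficient reasons by increasing size."""
    found = []
    for k in range(1, len(CRITERIA) + 1):
        for subset in combinations(CRITERIA, k):
            s = set(subset)
            if is_sufficient(s, diff, weights) and \
               not any(f < s for f in found):
                found.append(s)
    return found
```

With these (made-up) weights, freezing $\{b, c, d, e, f\}$ passes the check while $\{b, c, d, e, g\}$ does not, mirroring the "$ceg$ would not be a sufficient reason" remark in the text.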
1/20/2022 Key concept

Many XAI approaches are local (i.e., they seek to shed light on one particular recommendation) and counterfactual (i.e., they consider alternative worlds where the question asked would be "neither quite the same, nor quite another"). As such, these approaches are abductive in nature (i.e., the decision process and its outcome are frozen, and one seeks to specify the inputs, namely the candidate). By construction, an abductive approach freezes the decision process and does not allow it to be called into question.

A typical example: explanations based on prime implicants (Marquis 2020, Darwiche ??, Marques-Silva 2020), where one identifies a minimal subset of the candidate's attributes such that any completion leads to the same recommendation.

Yet if a recommendation has to be explained, it is because one assumes it can be contested. This contestation may bear on the root causes put forward by the explanatory device, but also, perhaps, on the derivation rules that link those causes to their effects.
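The prime-implicant scheme described above can be sketched as follows. `classify` is a hypothetical stand-in for the frozen decision process; the greedy deletion keeps only attributes whose value is needed to force the recommendation, yielding one subset-minimal abductive explanation.

```python
from itertools import product

def classify(x):
    # Toy black-box classifier over 3 boolean features (a hypothetical
    # stand-in for the frozen decision process discussed in the text).
    return x[0] and (x[1] or x[2])

def is_implicant(instance, kept, clf, n):
    """True iff fixing the features in `kept` to their value in `instance`
    forces clf's output, whatever the remaining features are."""
    target = clf(instance)
    for completion in product([False, True], repeat=n):
        x = tuple(instance[i] if i in kept else completion[i]
                  for i in range(n))
        if clf(x) != target:
            return False
    return True

def pi_explanation(instance, clf):
    """Greedily drop features while the remainder stays an implicant:
    the result is a subset-minimal abductive explanation (a prime
    implicant of the classifier covering this instance)."""
    n = len(instance)
    kept = set(range(n))
    for i in range(n):
        if is_implicant(instance, kept - {i}, clf, n):
            kept.discard(i)
    return kept
```

For the instance `(True, True, False)` the explanation is `{0, 1}`: once features 0 and 1 are fixed to `True`, feature 2 can take any value without changing the recommendation.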
1/20/2022 Description of the preference model

A simple yet effective procedure for fitting a value model to the preferences expressed by a decision maker has been proposed by Jacquet-Lagrèze & Siskos (1982).

- MAVT: the analyst assumes a linear model of the preferences of the decision maker, $$V(x) = \sum_i v_i(x_i)\quad \text{with}\ v_i : \mathcal{X}_i \to [0, w_i],\ \sum_i w_i = 1$$
- Parametrized value functions: the marginal value functions are assumed to be piecewise linear, with predefined cutting points $x_i^1 < \dots < x_i^{k_i} \in \mathcal{X}_i$
- Preference information: the DM submits pairwise comparison statements of the form $a^j \succeq a^{j'}$ for some alternatives $a^j$ and $a^{j'}$
- Computation: the representation that corresponds "best" to this stance is computed via linear programming
- Extensions: sorting (UTADIS), robust decision making (GRIP), using another parametric family of value functions...
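A minimal sketch of the computation step, in the spirit of UTA but not the authors' exact formulation. It assumes two criteria already scaled to $[0, 1]$, breakpoints at 0, 0.5 and 1, and made-up alternatives and statements; monotonicity, normalization and the pairwise statements are encoded as a linear program (via `scipy.optimize.linprog`), minimizing the total violation slack.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 2 criteria in [0, 1], 3 breakpoints each, and two
# pairwise statements from the decision maker (a1 >= a2, a2 >= a3).
breakpoints = [np.array([0.0, 0.5, 1.0]), np.array([0.0, 0.5, 1.0])]
alternatives = {"a1": [0.8, 0.2], "a2": [0.4, 0.6], "a3": [0.1, 0.9]}
statements = [("a1", "a2"), ("a2", "a3")]
EPS = 1e-3  # margin enforcing strictness of each statement

def coeffs(x):
    """Interpolation row expressing V(x) as a linear combination of the
    marginal values at the breakpoints (piecewise-linear marginals)."""
    row = []
    for xi, bp in zip(x, breakpoints):
        c = np.zeros(len(bp))
        k = np.searchsorted(bp, xi, side="right") - 1
        if k == len(bp) - 1:
            c[k] = 1.0
        else:
            t = (xi - bp[k]) / (bp[k + 1] - bp[k])
            c[k], c[k + 1] = 1 - t, t
        row.append(c)
    return np.concatenate(row)

nv = sum(len(bp) for bp in breakpoints)  # marginal-value variables
ns = len(statements)                     # one slack variable per statement
A_ub, b_ub = [], []
for j, (a, b) in enumerate(statements):  # V(a) - V(b) + slack_j >= EPS
    row = np.zeros(nv + ns)
    row[:nv] = coeffs(alternatives[b]) - coeffs(alternatives[a])
    row[nv + j] = -1.0
    A_ub.append(row); b_ub.append(-EPS)
offset = 0
for bp in breakpoints:                   # monotonicity: v^k <= v^{k+1}
    for k in range(len(bp) - 1):
        row = np.zeros(nv + ns)
        row[offset + k], row[offset + k + 1] = 1.0, -1.0
        A_ub.append(row); b_ub.append(0.0)
    offset += len(bp)
A_eq, b_eq = [], []
offset = 0
for bp in breakpoints:                   # v_i(x_i^1) = 0
    row = np.zeros(nv + ns); row[offset] = 1.0
    A_eq.append(row); b_eq.append(0.0)
    offset += len(bp)
row = np.zeros(nv + ns)                  # sum of top marginal values = 1
off = 0
for bp in breakpoints:
    row[off + len(bp) - 1] = 1.0
    off += len(bp)
A_eq.append(row); b_eq.append(1.0)
c = np.concatenate([np.zeros(nv), np.ones(ns)])  # minimize total slack
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (nv + ns))
```

Here the two statements are jointly representable, so the optimal total slack is zero; with contradictory statements the residual slack would measure how far the stance is from being representable by this parametric family.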
12/14/2021 Performance table

| Alternative | Dimension 1 | Dimension 2 | Dimension 3 |
| --- | --- | --- | --- |
| $x_1$ | 10 | 50 | |