# AOS4 - UTA: Learning a piecewise linear additive value model with LP
## Description of the preference model
A simple, yet effective, procedure for fitting a value model to the preferences expressed by a decision maker was proposed by Jacquet-Lagrèze & Siskos (1982).
* **MAVT:** the analyst assumes an additive value model of the preferences of the decision maker (DM)
$$V(x) = \sum_i v_i(x_i)\quad\text{with}\quad v_i : \mathcal{X}_i \to [0, w_i],\ \sum_i w_i = 1$$
* **Parametrized value functions:** the marginal value functions are assumed to be *piecewise linear*, with *predefined cutting points* $x_i^1 < \dots < x_i^{k_i} \in \mathcal{X}_i$ (see the evaluation sketch after this list)
* **Preference Information:** the DM submits *pairwise comparison statements* of the form $a^j \succeq a^{j'}$ for some alternatives $a^j$ and $a^{j'}$
* **Computation:** the value model that matches these statements "best" is computed via *linear programming*
* **Extensions:** *sorting* (UTADIS), *robust* decision making (GRIP), using another *parametric family of value functions*...
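To make the model concrete, here is a minimal evaluation sketch in Python; the criteria, cutting points and marginal values below are hypothetical, and `np.interp` performs the piecewise linear interpolation between cutting points.

```python
import numpy as np

# Hypothetical model on two criteria: for each criterion, the predefined
# cutting points x_i^1 < ... < x_i^{k_i} and (already fitted) marginal values.
cutting_points = {
    "price":   np.array([10.0, 20.0, 30.0]),   # preference decreases with price
    "quality": np.array([0.0, 5.0, 10.0]),     # preference increases with quality
}
marginal_values = {
    "price":   np.array([0.6, 0.2, 0.0]),      # non-increasing marginal value
    "quality": np.array([0.0, 0.1, 0.4]),      # non-decreasing marginal value
}
# Normalization: best levels are worth 0.6 + 0.4 = 1, worst levels are worth 0.

def value(alternative):
    """Additive value V(x) = sum_i v_i(x_i), each v_i piecewise linear."""
    total = 0.0
    for crit, x in alternative.items():
        xs, ys = cutting_points[crit], marginal_values[crit]
        total += np.interp(x, xs, ys)  # xs must be increasing, which holds here
    return total

print(value({"price": 15.0, "quality": 7.5}))  # 0.4 + 0.25 = 0.65
```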
---
### UTA: Learning a piecewise linear additive value model with LP
*Variables:*
* values at the cutting points $y_{i,k} \equiv v_i(x_i^k)$
* slack variables $\sigma_j^+ \ge 0$ and $\sigma_j^- \ge 0$ for each alternative $a^j$ appearing in the preference information (PI)
> The marginal value at any point $x_i \in [x_i^k, x_i^{k+1}]$ is given by
> $\displaystyle v_i(x_i) = \lambda\, y_{i,k} + (1-\lambda)\, y_{i,k+1}\quad$ with $\displaystyle \lambda = \frac{x_i^{k+1} - x_i}{x_i^{k+1} - x_i^k}$
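This is what makes the problem linear: once the alternatives and the cutting points are fixed, each $v_i(a^j_i)$ is a *linear expression* in the unknowns $y_{i,k}$, with coefficients $\lambda$ and $1-\lambda$ on two consecutive points. A minimal sketch of this coefficient computation (Python; the function name is ours):

```python
import numpy as np

def interpolation_coefficients(x, cutting_points):
    """Return c such that v_i(x) = sum_k c[k] * y_{i,k},
    for x inside [cutting_points[0], cutting_points[-1]] (increasing)."""
    c = np.zeros(len(cutting_points))
    # locate the segment [x_i^k, x_i^{k+1}] that contains x
    k = np.searchsorted(cutting_points, x, side="right") - 1
    k = min(max(k, 0), len(cutting_points) - 2)
    lam = (cutting_points[k + 1] - x) / (cutting_points[k + 1] - cutting_points[k])
    c[k], c[k + 1] = lam, 1.0 - lam
    return c

# e.g. with cutting points 10 < 20 < 30 and x = 15: coefficients [0.5, 0.5, 0]
print(interpolation_coefficients(15.0, np.array([10.0, 20.0, 30.0])))
```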
*Constraints:*
* if preference increases (resp. decreases) with the values of attribute $i$, this is implemented with constraints $y_{i,k} \le y_{i,k+1}$ (resp. $y_{i,k} \ge y_{i,k+1}$)
* values of the nadir and zenith points are normalized, e.g. the marginal value of the worst level of each attribute is set to $0$ and the marginal values of the best levels sum to $1$
* each PI statement $a^j \succeq a^{j'}$ is converted into a single linear constraint:
$$ \sum_i v_i(a^j_i) +\sigma_j^+ - \sigma_j^- \ge \sum_i v_i(a_i^{j'}) +\sigma_{j'}^+ - \sigma_{j'}^-$$
*Objective:* the sum of slack variables $\sum_j \left(\sigma_j^+ + \sigma_j^-\right)$ is minimized
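Putting variables, constraints and objective together, here is a minimal end-to-end sketch of the LP. It uses the PuLP modelling library, and the criteria, alternatives and preference statements are hypothetical toy data made up for illustration.

```python
import numpy as np
import pulp

# --- Hypothetical toy data (made up for illustration) ------------------------
cutting_points = {                                   # x_i^1 < ... < x_i^{k_i}
    "price":   np.array([10.0, 20.0, 30.0]),
    "quality": np.array([0.0, 5.0, 10.0]),
}
increasing = {"price": False, "quality": True}       # direction of preference
alternatives = {                                     # performance table
    "a1": {"price": 15.0, "quality": 7.5},
    "a2": {"price": 25.0, "quality": 9.0},
    "a3": {"price": 12.0, "quality": 3.0},
}
preference_statements = [("a1", "a2"), ("a2", "a3")]  # a^j >= a^j'

def interpolation_coefficients(x, pts):
    """Coefficients c with v_i(x) = sum_k c[k] * y_{i,k} (pts increasing)."""
    c = np.zeros(len(pts))
    k = min(max(np.searchsorted(pts, x, side="right") - 1, 0), len(pts) - 2)
    lam = (pts[k + 1] - x) / (pts[k + 1] - pts[k])
    c[k], c[k + 1] = lam, 1.0 - lam
    return c

prob = pulp.LpProblem("UTA", pulp.LpMinimize)

# Variables: values at the cutting points and slacks for each PI alternative.
y = {i: [pulp.LpVariable(f"y_{i}_{k}", lowBound=0) for k in range(len(pts))]
     for i, pts in cutting_points.items()}
sp = {a: pulp.LpVariable(f"sp_{a}", lowBound=0) for a in alternatives}
sm = {a: pulp.LpVariable(f"sm_{a}", lowBound=0) for a in alternatives}

def V(a):
    """Global value of alternative a, linear in the y variables."""
    return pulp.lpSum(
        float(c) * y[i][k]
        for i, x in alternatives[a].items()
        for k, c in enumerate(interpolation_coefficients(x, cutting_points[i])))

# Monotonicity of each marginal value function.
for i, pts in cutting_points.items():
    for k in range(len(pts) - 1):
        prob += (y[i][k] <= y[i][k + 1]) if increasing[i] else (y[i][k] >= y[i][k + 1])

# Normalization: worst level of each criterion is worth 0, best levels sum to 1.
for i, pts in cutting_points.items():
    prob += y[i][0 if increasing[i] else len(pts) - 1] == 0
prob += pulp.lpSum(y[i][(len(pts) - 1) if increasing[i] else 0]
                   for i, pts in cutting_points.items()) == 1

# One linear constraint per PI statement a >= b.
for a, b in preference_statements:
    prob += V(a) + sp[a] - sm[a] >= V(b) + sp[b] - sm[b]

# Objective: minimize the total slack.
prob += pulp.lpSum(sp.values()) + pulp.lpSum(sm.values())

prob.solve()
for i in cutting_points:
    print(i, [round(pulp.value(v), 3) for v in y[i]])
print("total slack:", pulp.value(prob.objective))
```

Note that classical UTA variants usually add a small positive threshold $\delta$ when a statement expresses a strict preference; the sketch sticks to the weak-preference constraints stated above.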
## Questions
1. Implement the whole pipeline.
2. Test it, maybe on the problem found [here](https://hackmd.io/@InconsistentKB/ByLFKKoFK).
3. Robustify the approach, e.g. by adopting a cautious approach if the problem is underconstrained (many value models are compatible with the PI), or by using a non-additive extension of the model (e.g. a 2-additive Choquet integral, or a GAI model) if it is overconstrained (no additive model is compatible); one possible reading of the cautious approach is sketched after this list.
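One possible formalization of the cautious approach mentioned in question 3, in the spirit of UTA-GMS/GRIP (a suggestion, not part of the original statement): instead of committing to a single optimal model, declare a *necessary* preference $a \succeq^N b$ only when $V(a) \ge V(b)$ holds for *every* value model compatible with the PI; this can again be checked with one LP per pair:
$$a \succeq^N b \iff \max\left\{\, V(b) - V(a)\ :\ \text{monotonicity, normalization and PI constraints with all } \sigma_j^{\pm} = 0 \,\right\} \le 0$$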
## Bibliography
* UTA: Jacquet-Lagrèze, E. and Siskos, J. (1982). Assessing a set of additive utility functions for multicriteria decision-making, the UTA method. *European Journal of Operational Research*, 10(2), 151-164.
* UTA-GMS: Greco, S., Mousseau, V. and Słowiński, R. (2008). Ordinal regression revisited: multiple criteria ranking using a set of additive value functions. *European Journal of Operational Research*, 191(2).
* GRIP: Figueira, J.R., Greco, S. and Słowiński, R. (2009). Building a set of additive value functions representing a reference preorder and intensities of preference: GRIP method. *European Journal of Operational Research*, 195(2).
* Choquet