# Feature importance survey
> Nothing in this document is final; it is an initial brainstorm of questions to ask! Especially for the hypotheticals, I think we can add a lot more.
## Demographics
*(Demographic information is used solely to understand to what extent our study population is representative.)*
* What is your age?
* What is your gender identity?
* Woman
* Man
* Prefer not to disclose
* Prefer to self-describe
## Experience
* What is your current job title?
* How many years of working experience do you have in the field of data science (data analysis, machine learning, applied statistics, etc.)?
## On importance
- **Q:** Explain in your own words what feature importance is, in the context of machine learning. [open question, meta-global]
- **Q:** How would you describe what it means when a feature is important in a (trained) machine learning model? [open question, global]
- **Q:** Next, explain in your own words what it means when a feature is important for an individual prediction. [open question, local]
- **Q:** In your opinion, is feature importance valuable? [yes / no]
- Please motivate your answer. [open question]
- **Q:** Can you describe a specific use case in which feature importance supports a process or workflow? [open question]
## Hypotheticals
#### Notes for Introduction to 'Tasks'
* We consider a classification scenario
* For all of these, consider a model that has already been trained and is now being explained, etc.
* Some important terminology:
* Feature
* Confidence score or Predicted probability
* ?
* There are no "right" answers!!!
##### Running example
Medical diagnosis of humans/aliens/animals, with features like "vitamin 1 amount", "vitamin 2 amount", and potentially also "length" and "weight"... although these may lead to assumptions?
### Global
*DC: as my idea was to ask questions about situations where existing techniques disagree, I am not sure what to ask here.*
### Local
**Q:** Suppose a feature is considered important and the value of that feature changes slightly. How would you expect the prediction to be affected?
- The prediction does not change
- The prediction changes a little
- The prediction changes a lot
- Impossible to tell
Can you elaborate on why this is the case? [open question]
**Q:** If the model outputs a high-confidence prediction for instance A and a low-confidence prediction for instance B, which prediction do you expect to have the higher combined (summed, $\sum$) feature importance?
- The high confidence prediction (instance A)
- The low confidence prediction (instance B)
- They are the same
- Impossible to tell
Can you elaborate on why this is the case? [open question]
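For reference when designing this question: additive attribution methods (e.g., SHAP) satisfy a completeness property, where the per-instance attributions sum to the prediction minus a baseline. A minimal sketch with a hypothetical logistic-regression setup, where this sum can be computed in closed form:

```python
# Minimal sketch (hypothetical data and model): for additive attribution methods,
# the attributions of an instance sum to its prediction minus a baseline, so a
# high-confidence and a low-confidence prediction generally have different sums.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def linear_attributions(x):
    # Per-feature attribution on the logit scale: coefficient * (value - mean value).
    return model.coef_[0] * (x - X.mean(axis=0))

logits = model.decision_function(X)
idx_a = int(np.argmax(np.abs(logits)))   # instance A: far from the boundary (high confidence)
idx_b = int(np.argmin(np.abs(logits)))   # instance B: close to the boundary (low confidence)
for name, idx in [("A", idx_a), ("B", idx_b)]:
    summed = linear_attributions(X[idx]).sum()
    # Completeness: the sum equals this instance's logit minus the average logit over X.
    print(name, round(summed, 3), round(logits[idx] - logits.mean(), 3))
```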
## Hilde's Questions
Thoughts:
* Likert-scale answer options: `[Strongly disagree, disagree, neither agree nor disagree, agree, strongly agree]`
* Additional free-form option: *Please explain why*. [open question]
* **The questions are likely much easier to interpret when we use concrete examples, rather than "feature A". However, we should probably make sure the examples are as non-realistic as possible, e.g. some random alien classification scenario, to avoid anchoring due to existing "domain expertise".**
### Causal Interpretations
Imagine we have trained a machine learning model that predicts whether a user will buy a particular product on our website or not.
**Q:** Suppose that, for a particular user, feature A has a **high** feature importance. Imagine we **change** feature A **a little bit**, but leave all else unchanged. How do you expect the **model's prediction** to be affected?
Please indicate to what extent you agree with the following statements:
- I expect that the model's predicted score changes **a little**.
- I expect that the model's predicted score changes **a lot**.
**Q:** Suppose that, for a particular user, feature A has a **high** feature importance. Imagine we **increase** the value of feature A, but leave all else unchanged. How do you expect the **model's prediction** to be affected?
Please indicate to what extent you agree with the following statements:
- I expect that the model's predicted score **increases**.
- I expect that the model's predicted score **decreases**.
**Q:** Suppose that, for a particular user, feature A has a **high** feature importance. If we **increase** feature A, how does this affect the **user's buying behavior**?
Please indicate to what extent you agree with the following statements:
- I expect that the user is **more** likely to buy our product.
- I expect that the user is **less** likely to buy our product.
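A note on the distinction these questions probe: perturbing a feature only tells us how the *model's* score reacts, not how the *user's* behavior would change, which is a causal question about the real world. A minimal sketch of such a perturbation check, using a hypothetical dataset and model:

```python
# Minimal sketch (hypothetical data and model): a small increase in "feature A" changes
# the *model's* predicted score; whether the *user's* buying behavior changes is a
# causal question about the real world that this check cannot answer.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

user = X[0].copy()
perturbed = user.copy()
perturbed[0] += 0.1 * X[:, 0].std()   # increase "feature A" a little bit

before = model.predict_proba(user.reshape(1, -1))[0, 1]
after = model.predict_proba(perturbed.reshape(1, -1))[0, 1]
print(f"predicted score: {before:.3f} -> {after:.3f}")
```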
### Contrastiveness and Sampling
**Q:** Suppose that for user 1, feature A is important, and the model has **high confidence**.
Please indicate to what extent you agree with the following statements:
DC: Choose 5 at random?
- I expect that instances with **a low predicted probability** have a **different value** for feature A.
- I expect that instances with **a low predicted probability** have a **similar value** for feature A.
- I expect that, on average, instances with **a high predicted probability** have a **different value** for feature A.
- I expect that, on average, instances with **a high predicted probability** have a **similar value** for feature A.
- I expect that the model's prediction is, on average, **different** for other **similar users** with **different feature A**.
- I expect that the model's prediction is, on average, **similar** for other **similar users** with **different feature A**.
- I expect that the model's prediction is, on average, **different** for **all other** users with **different feature A**.
- I expect that the model's prediction is, on average, **similar** for **all other** users with **different feature A**.
(I think these are too niche:)
* Out-of-distribution: should we only consider *existing* instances?
* Training: should we only consider instances the model has seen during training?
### Dependence
Suppose the following causal relationships between features A, B, and C.
**Q:** *A -> B -> C.* We train a model with A and C, but cannot observe B. *A and C are causally related.*
**Q:** *A <- B -> C.* We train a model with A and C, but cannot observe B. *A and C are statistically related, but not causally.*
* Feature importance of A should be, partially, attributed to feature C.
* Feature importance of C should be, partially, attributed to feature A.
* Feature importance of C should not depend on feature importance of A.
* Feature importance of A and C should be attributed equally across A and C.
Example: A is stomach protection, B is a medicine, and C is a side effect of B (reduced appetite).
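If we want to pilot these statements against simulated data, a rough sketch (the prediction target is not specified in the notes above, so it is assumed here to be an outcome driven by the unobserved B):

```python
# Rough sketch (simulated data): train on A and C only (B unobserved) under the two
# causal graphs and compare permutation importances. The prediction target is an
# assumption -- an outcome driven by the unobserved B.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000

def importances(A, C, y):
    X = np.column_stack([A, C])
    model = RandomForestRegressor(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    return dict(zip(["A", "C"], result.importances_mean.round(3)))

# Chain: A -> B -> C
A = rng.normal(size=n)
B = A + rng.normal(scale=0.1, size=n)
C = B + rng.normal(scale=0.1, size=n)
y = B + rng.normal(scale=0.1, size=n)
print("A -> B -> C:", importances(A, C, y))

# Fork: A <- B -> C (A and C are correlated only through the unobserved B)
B = rng.normal(size=n)
A = B + rng.normal(scale=0.1, size=n)
C = B + rng.normal(scale=0.1, size=n)
y = B + rng.normal(scale=0.1, size=n)
print("A <- B -> C:", importances(A, C, y))
```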
### Selectivity
* I expect all features that were used to train the model to be in the explanation.
(* I expect only the most important features to be in the explanation.)
### Stability / Robustness
Short narrative for the intro: Harry, Theo, and Igor are similar to each other (in feature space) and the model gives the same prediction for each of them.
* I expect that Harry, Theo, and Igor have similar feature importances.
* I expect that similar instances with **different** predictions (to the current instance) have similar feature importance.
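A rough sketch of how we could check this ourselves before asking it: compute a simple occlusion-style local importance (hypothetical data and model; not one of the named FI techniques) for three nearby instances and see whether the importances agree.

```python
# Rough sketch (hypothetical data and model): an occlusion-style local importance
# (drop in predicted score when a feature is replaced by its mean), computed for
# three nearby instances ("Harry", "Theo", "Igor") to see whether they agree.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def local_importance(x):
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    imps = []
    for j in range(len(x)):
        masked = x.copy()
        masked[j] = X[:, j].mean()   # "remove" feature j by averaging it out
        imps.append(base - model.predict_proba(masked.reshape(1, -1))[0, 1])
    return np.round(imps, 3)

harry = X[0]
theo = harry + 0.05 * X.std(axis=0)   # slightly perturbed copies of Harry
igor = harry - 0.05 * X.std(axis=0)
for name, x in [("Harry", harry), ("Theo", theo), ("Igor", igor)]:
    print(name, local_importance(x))
```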
## Visual
These questions can be preceded with a nice visual introduction:
> This is the data (2D scatter). Now we train a model (background gradient). The background color corresponds to the prediction the model would make for each pixel.
>
> If the model makes a decision based on feature A, it looks like this (example), if it makes a decision based on feature B it looks like this (example).
>
> Next there is an instance (blue dot) that we would like to explain.
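A minimal sketch of how these visuals could be generated (assumed setup: a synthetic 2D dataset, a scikit-learn classifier, and a background colored by the predicted probability on a grid):

```python
# Minimal sketch (assumed setup): synthetic 2D data, a trained classifier, and a
# background colored by the model's predicted probability at every grid point,
# with one highlighted instance to explain (the "blue dot").
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Background gradient: predicted probability on a grid over feature A and feature B.
xx, yy = np.meshgrid(np.linspace(X[:, 0].min(), X[:, 0].max(), 200),
                     np.linspace(X[:, 1].min(), X[:, 1].max(), 200))
proba = model.predict_proba(np.column_stack([xx.ravel(), yy.ravel()]))[:, 1]
plt.contourf(xx, yy, proba.reshape(xx.shape), levels=20, cmap="RdBu", alpha=0.6)

plt.scatter(X[:, 0], X[:, 1], c=y, cmap="RdBu", edgecolor="k")  # the data
plt.scatter(*X[0], color="blue", s=120, zorder=3)               # the instance to explain
plt.xlabel("Feature A")
plt.ylabel("Feature B")
plt.show()
```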
### Global
<img src="https://user-images.githubusercontent.com/1223300/136392605-d5f91248-0f1f-4582-b390-3b5cecbc84af.png">
Which feature do you think is most important for the model to make predictions?
- Feature A
- Feature B
- Equally important
- Don’t know
### Local
<img src="https://lh5.googleusercontent.com/iP6cQLU3sgSRhHs2pU8Dk6j_2QOOXmDBQB3zIYbC3KV5fXd2IxqzzLkjwR583YSrKF86CEr5CKfF458WSQy25xVEoazAvPZgbdcxhm5CPKJq1GBMOTj2wBqLW7AA5pFZkw=w2500">
Which feature is most important to the prediction for the blue dot?
- Feature A
- Feature B
- They are the same
- Don’t know
<img src="https://lh6.googleusercontent.com/v8youF-bSCSp8Vbj6_PRyySIaOhNu6kd1VnvmmI-aTk2aB-2TLQNuTlvLZPDpmFKdX6ToHrXTbzP98MrYV7hnW4GDDXa34ImQEVoG1svy8LJ5KKT5uxdY3J0E0HlP-Oh1Q=w2500">
Which point has the highest overall feature importance?
- Blue point
- Purple point
- They are the same
- Don’t know
↑ Possibly we can repeat these visual experiments without the blue dot, to ask about "global" feature importance, as opposed to the importance for one point specifically. This may also be easier to start with...
## Familiarity
- **Q:** Which Feature Importance techniques have you used? Please separate each answer with a newline. [open question]
- **Q:** Which Feature Importance techniques are you familiar with? [multiple choice, disable back button]
- LIME, SHAP, saliency maps, etc.; take an extensive list from a recent survey paper.
↑ With this we can get an objective overview of the XAI techniques respondents know. We can use these later to compare the experts' assumptions about the meaning of FI with the underlying assumptions of these techniques. Undoubtedly LIME/SHAP, but perhaps there are other techniques we have not considered?