# Updated version
Thank you for going through our reply and for reading the updated paper. Your recommendations guide us towards improvement :)
*Q2:*
- Fig 2a: Good point! The bump in $\gamma^{obs}$ for CR was odd. Having completed further runs, we are pleased to see that the results have converged towards the curve we expect (somewhere close to $0.5\overline{\eta}$).
- Thanks for pointing that out. The ind. CR line for 7var-covid was simply hidden under the lines for 5var-skill (see Tables 4 and 5 for the numbers); the values did not change significantly. We have updated Figure 2 so that it is visible when multiple lines are stacked: the earlier a line is added to the plot, the thicker it is drawn (see the sketch below).
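To illustrate the plotting trick (with placeholder data and labels, not our actual results): lines that are plotted earlier get a larger linewidth, so a later line lying exactly on top of an earlier one remains visible as a thinner line inside a thicker one.

```python
# Hypothetical sketch of the Figure 2 plotting trick (placeholder curves, not our results):
# lines plotted earlier get a larger linewidth, so stacked identical curves stay visible.
import matplotlib.pyplot as plt

curves = {
    "ind. CR, 5var-skill":  [0.2, 0.6, 0.9, 1.0],
    "ind. CR, 7var-covid":  [0.2, 0.6, 0.9, 1.0],   # identical curve: would otherwise be hidden
    "subp. CR, 7var-covid": [0.1, 0.4, 0.7, 0.95],
}

fig, ax = plt.subplots()
for i, (label, ys) in enumerate(curves.items()):
    ax.plot(range(len(ys)), ys, linewidth=3.0 * 0.6**i, label=label)  # earlier = thicker
ax.legend()
plt.show()
```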
*Q3:* To address your concern, we have added a small table with the average cost of each method. Since the higher cost may be an objection against ICR, we find it important to include Q4 in the main text.
*Q4:* Good point. Causal knowledge enables better predictions of the effects of interventions. However, in observational environments (rung 1), even the SCM (rung 3) does not allow for better predictions than the optimal observational predictor $h^*$.
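To spell the reasoning out (in generic notation, which may differ from the paper's):

$$h^*(x) \;=\; \arg\max_{y} P(Y = y \mid X = x) \;=\; \arg\max_{y} P_{\mathcal{M}}(Y = y \mid X = x),$$

since any SCM $\mathcal{M}$ that is consistent with the data entails the same observational distribution, $P_{\mathcal{M}}(X, Y) = P(X, Y)$; the additional causal structure only pays off once interventional queries are involved.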
For your reference, we added a short subsection (Appendix B4) explaining how to derive $h^*$ from the SCM. Depending on the type of structural equations, this estimation may or may not be tractable.
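If a closed-form derivation is intractable, one simple route is to draw i.i.d. samples of $(X, Y)$ from the SCM and fit a standard supervised model to approximate $P(Y \mid X)$. Below is a rough toy sketch of that route; the SCM, variable names, and model choice are made up purely for illustration.

```python
# Toy sketch (made-up SCM, for illustration only): sample (X, Y) i.i.d. from the SCM,
# then approximate h*(x) = P(Y = 1 | X = x) with a standard supervised learner.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

def sample_scm(n):
    """Hypothetical two-covariate SCM with structure X1 -> X2 -> Y."""
    x1 = rng.normal(size=n)
    x2 = 0.8 * x1 + rng.normal(scale=0.5, size=n)
    p_y = 1.0 / (1.0 + np.exp(-(x1 + x2)))      # structural equation for Y (Bernoulli)
    y = rng.binomial(n=1, p=p_y)
    return np.column_stack([x1, x2]), y

X, y = sample_scm(50_000)
h_star_hat = LogisticRegression().fit(X, y)      # sample-based estimate of P(Y | X)
print(h_star_hat.predict_proba(X[:3]))           # columns: P(Y=0 | x), P(Y=1 | x)
```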
*Q6:* We state that "we consider improvement to be an important normative requirement for recourse" (line 119). We have now updated line 133: "[...] a restriction on acceptance is either redundant or, from our moral standpoint, questionable."
*Q8:* Absolutely, because (1) performing the interventions requires effort (time, attention, money, ...) and (2) in uncertain environments, interventions may have negative effects. Consequently, we face many ethical challenges: Is it better not to offer recourse or to offer recourse with much uncertainty (depending on the setting)? Is it ok to deploy models for which no reliable recourse recommendations can be made? How must the required causal knowledge be validated? Do authorities have responsibility for the consequences of actions that they recommend? How should recourse-seeking individuals be informed about their options?
We do not provide answers to these questions or claim to solve the recourse problem. From a pragmatic viewpoint, (1) we cannot escape the fundamental problem of causal inference, and (2) if we make recourse recommendations, we think that it is (a) better to target improvement than to target acceptance, and (b) beneficial to openly communicate the uncertainty that we can quantify. As such, we acknowledge that ICR is only a small step but are convinced it is a step in the right direction.
# Old
Thank you for going through our reply and for reading the updated paper. Your recommendations guide us towards improvement :)
*Q2:*
- Fig 2a: Good point! The bump in $\gamma^{obs}$ for CR was odd. We attributed the bump to the fact that we only had enough computational resources for three runs with 200 individuals each. Having completed further runs, we are pleased to see that the results have converged towards the curve that we would expect (somewhere close to $0.5\overline{\eta}$).
- Thank you for pointing that out. Actually, the acceptance rates $\eta^{obs}$ for CR on 7var-covid did not change significantly (1.0 flat for ind., rising for subp.). The line for ind. CR on 7var-covid was simply hidden under the lines for 5var-skill (see Tables 4 and 5 for the numbers). We have updated Figure 2 such that lines that are plotted earlier are thicker, which allows us to see when multiple lines are stacked.
*Q3:* We are not entirely happy with that ourselves. To address your concern, we have added a small table to the Figure with the average cost for the four methods. (We would like to report more detailed results in the main text, but, as you said, the page limit does not permit it.) Since the higher cost may be an important objection against ICR, we find it important to include Q4 in the main text.
*Q4:* Good point. Causal knowledge enables better predictions of the effects of interventions. However, in observational environments (rung 1), even rung-3 causal knowledge (an SCM) does not allow for better predictions than the optimal observational predictor $h^*$. As such, $h^*$ and the optimal SCM-based predictor are equivalent pre-recourse.
For the sake of completeness, we added a short subsection (Appendix B4) explaining how $h^*$ can be derived from the SCM. Depending on the type of structural equations, this estimation may or may not be tractable. Alternatively, we can draw i.i.d. samples from $P(Y,X)$ using the SCM and then leverage standard supervised learning to model $P(Y|X)$.
*Q6:* In line 119 we state that "we consider improvement to be an important normative requirement for recourse". We have additionally updated line 133: "[...] a restriction on acceptance is either redundant or, from our moral standpoint, questionable."
*Q8, ethical challenges:* Absolutely, because (1) performing the interventions requires effort (time, attention, money, ...) and (2) in uncertain environments, interventions may have negative effects. As a consequence, we face many difficult conflicts: Is it better not to offer recourse, or to offer recourse with a lot of uncertainty (depending on the setting)? Is it ok to deploy models for which no reliable recourse recommendations can be made? How must the causal knowledge be validated? Do authorities have responsibilities for the consequences of actions that they recommend? How should recourse-seeking individuals be informed about their options?
We do not provide answers to these questions, nor do we claim to solve the recourse problem. From a pragmatic viewpoint, (1) we cannot escape the fundamental problem of causal inference, and (2) if we make recourse recommendations, we think that it is (a) better to target improvement than to target acceptance, and (b) beneficial to openly communicate the uncertainty that we can quantify. As such, we acknowledge that ICR is only a small step, but are convinced that it is a step in the right direction.