patrickrchao
<!-- TODOS:
1. Explanation of difference between our work and Diff SCM
2. Comparison of our theoretical results and Lu and Nasr-Esfahany
3. Entropy counterargument
4. Compare our experimental results with Moraffah et al. 2020, Pawlowski et al. 2020, Kocaoglu et al. 2018
5. Explanation of second assumption (DONE)
6. Quick baseline of generation time (In progress)
-->

## Second comment to Area Chairs

In our follow-up conversation, Reviewer TX8v clarified their mistake and acknowledged the non-triviality of our theoretical contributions. We also explained to the reviewer our reasons for not including the suggested baselines in the interventional experiments. We find it unfortunate that the reviewer chose not to change their score, given that one of the two main limitations raised in the initial review is now resolved. We unfortunately did not hear back from Reviewer ju8f, whose review we flagged in our earlier comment.

## Response to Reviewer TX8v

We thank the reviewer for acknowledging the non-triviality of our theoretical contributions. We acknowledge the reviewer's request for additional interventional baselines, but, as detailed in our first response, we believe the selected baselines are the current in-scope SOTA models. We focus on graphs with multiple nodes and continuous variables, rather than the specific label-and-image setting. A key difference in the image setting is the lack of ground-truth interventional distributions, so comparisons must rely on indirect evaluation metrics; in our settings, we have access to the exact data-generating mechanism and can compare methods precisely. Additionally, CAREFL [15] and VACA [33] reference these baselines but do not evaluate against them in their interventional experiments. Overall, our experiments supplement both our modeling and theoretical contributions.

## Comments to Area Chair

We thank the reviewers for their comments and suggestions.
We have directly addressed all their concerns. Unfortunately, a few reviews contained factually incorrect claims and some basic misunderstandings.

1. Reviewer TX8v incorrectly claims that the theoretical contribution is just a restatement of our underlying assumptions. We present a simple counter-example to the reviewer's proof sketch of this claim. As we point out in the paper and rebuttal, the assumptions underlying our theorems are intuitive, have been used in counterfactual identification results in other settings, and are largely necessary to overcome impossibility results. Even with these assumptions, novel proof ideas are needed to derive the claimed result.
2. We found Reviewer ju8f's comments to be subjective and without technical backing. First, as we discuss in the rebuttal, aside from a few minor intersections in the use of diffusion models, there is no significant similarity between this paper and the work of Sanchez and Tsaftaris [32] in terms of setting, theoretical contribution, or experimental analyses. Second, the reviewer incorrectly claims that Theorem 1 is the same as results appearing in previous work, which, as we point out, are based on different settings, use different proof techniques, and lead to different consequences.

We will be happy to engage in further discussions with the reviewers.

## Reply to Reviewer ju8f

We thank the reviewer for the feedback. Please find below our responses to the specific weaknesses and questions that you mention.

> Sanchez and Tsaftaris used DDIMs for learning generation mechanism, inverting it, and answering counterfactual questions for a single variable. This work does this for all variables in a causal graph, and uses classifier free guidance instead of classifier guidance (that is outperformed by conditioning in the results). These are not significant additions in my opinion.
While Sanchez and Tsaftaris [32] use DDIMs in a causal setting, we believe the similarities end there. Our work distinguishes itself from theirs in the following aspects:

1. Going from a simpler two-node setting to a general graph setting is non-trivial, as it involves new design choices (e.g., a single model for all nodes vs. multiple models, and, if multiple, how they interact) that we navigate. These changes also mean that our algorithms are quite different, taking the DAG into account. Furthermore, it was previously unknown whether diffusion models are even the right approach for answering causal queries.
2. Our solution is more general in that we operate on continuous variables, whereas they only handle the simpler case of a discrete label and an image. In particular, our paper provides evidence that the proposed DCM approach may be applied to any setting where diffusion models are applicable: continuous variables, high-dimensional settings, categorical data, images, etc. Their approach, by contrast, applies only to the bivariate node setting with a discrete label; for example, classifier guidance does not obviously generalize to the continuous setting.
3. Our experiments are more general: they cover observational and interventional queries not addressed by Sanchez and Tsaftaris, and validate our DCM approach on both larger graphs and domains beyond images. We do acknowledge that for the specific setting considered by Sanchez and Tsaftaris, their approach might be better suited.
4. In terms of theoretical contribution, we provide rigorous conditions on the latent model and structural equations under which counterfactuals can be estimated. Even for the two-variable setting considered by Sanchez and Tsaftaris, such a theoretical understanding was previously missing.

> Theorem 1 is the same as theorem 1 in Lu et al., and Theorem 5 in Nasr-Esfahany et al.
> In line 266, the paper mentions that its theory is not comparable with prior work, but I don't agree with their justification.

We disagree with the reviewer's assertion here. Our theoretical contributions, Theorem 1 for the single-dimensional case and Theorem 2 (Appendix B.1) for the multidimensional case, distinguish themselves from previous results in several aspects:

1. Lu et al. [18], Nasr-Esfahany et al. [21, 22], and other prior works in this area all establish counterfactual identifiability under their own specific models. For example, Nasr-Esfahany et al. [21] consider the case where the learned model is bijective in the exogenous variable, while Lu et al. [18] consider a reinforcement learning framework with different assumptions. We consider the setting where a causal mechanism is learned by a latent-space model consisting of an encoder-decoder network. This abstraction has not been considered in the above-mentioned literature, yet it is the right one to capture modern generative techniques, such as diffusion models and autoencoders, that are becoming increasingly popular for learning causal mechanisms. Within this abstraction, our analysis is the first to present sufficient conditions for counterfactual identifiability, which also yields the first known result on counterfactual estimation error under a relaxed assumption on encoder-decoder guarantees (Corollary 2).
2. Our proof techniques are in fact quite different from those of previous works; in particular, they do not use the conditional quantile-based analysis (e.g., [18]) common in this literature.
3. An advantage of our proof technique is that it generalizes naturally to the multidimensional setting (not considered by [18, 21, 22]), providing insight into assumptions that can be used to overcome the impossibility results of [21] (see Appendix B.1).

In short, there are merits in all these counterfactual identifiability results, and none subsumes the others.
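To make the encoder-decoder abstraction above concrete, here is a toy abduction-action-prediction example (a hypothetical mechanism chosen to satisfy the assumptions of Theorem 1, not code from the paper): the encoding is independent of the parent, and the decoder is strictly increasing in the encoding, so the counterfactual is recovered exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
x_pa = rng.standard_normal(1000)
u = rng.standard_normal(1000)
x = x_pa + np.exp(u)                  # toy SEM, strictly increasing in U

# Encoder whose output is independent of the parent (Assumption 1), paired
# with a decoder strictly increasing in the encoding.
z = np.log(x - x_pa)                  # abduction: here z recovers u exactly
x_pa_cf = x_pa + 0.5                  # counterfactual parent value
x_cf = x_pa_cf + np.exp(z)            # action + prediction
assert np.allclose(x_cf, x_pa_cf + np.exp(u))   # matches the ground truth
```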
We will be happy to add the above discussion to the revised paper.

> Line 367 states that there are no assumptions on the structural causal models. Why? How about the assumptions in the theorems? For example, don't you need the generation mechanisms to be invertible?

That is a fair point. What we meant is that we make no additional assumptions about the functional form of the causal mechanisms beyond the stated assumptions. In particular, we do not assume a restrictive class of functions, such as additive noise models or post-nonlinear noise models, commonly used for modeling causal mechanisms. We will rephrase this sentence to make it clear.

> Footnote 3: Why is HSIC needed? When you sample from DDIM, you sample the noise (first step of the reverse process) independently from the conditions (causal parents), and train the model to generate likely samples. Doesn't training enforce independence by itself, without any further constraints?

It is correct that during training, the added noise is independent of the causal parents. Our motivation for adding the HSIC term was slightly different: to encourage independence between the encoding and the causal parents, in line with Assumption 1 of Theorem 1. To clarify, our suggestion was to add a regularization term where we encode a sample $x^F$ to obtain $z^F$ and penalize the HSIC between $z^F$ and $x_{\mathrm{pa}}^F$.

## Reply to Reviewer TX8v

We thank the reviewer for the feedback. Please find below our responses to the specific weaknesses that you mention.

> The theoretical contribution of this paper, particularly Theorem 1, could benefit from further refinement. ... A simple information-theoretic formulation can prove their results ($H$ is the entropy): $$ H(g(X,X_{\mathrm{pa}})\mid U) = H(g(X,X_{\mathrm{pa}})\mid U, X_{\mathrm{pa}} = x') \;\; \mbox{Assumption 1}$$ .....

We thank the reviewer for the interesting approach using the conditional entropy.
However, we believe the first line of the reviewer's proof idea does not hold under Assumption 1. Consider the following example. Let $X_{\mathrm{pa}} \in \mathbb{R}$ and $X := X_{\mathrm{pa}} + \mathrm{sgn}(X_{\mathrm{pa}})U$ with $X_{\mathrm{pa}},\ U \stackrel{ind}{\sim} \mathcal{N}(0,1)$, where $\mathrm{sgn}(a) = 1$ if $a \ge 0$ and $-1$ otherwise. Now consider the encoding $g(X, X_{\mathrm{pa}}) = X - X_{\mathrm{pa}} = \mathrm{sgn}(X_{\mathrm{pa}})U$. Since $U$ has a symmetric distribution, $g(X, X_{\mathrm{pa}})$ is indeed independent of $X_{\mathrm{pa}}$, establishing Assumption 1.

Next, we compare the two terms in the first line of the reviewer's proof idea. The left-hand side is
$$H(g(X,X_{\mathrm{pa}})\mid U) = H(\mathrm{sgn}(X_{\mathrm{pa}}) U\mid U) = H(\mathrm{sgn}(X_{\mathrm{pa}})) = 1,$$
i.e., one bit. The right-hand side in this example is
$$H(g(X,X_{\mathrm{pa}})\mid U, X_{\mathrm{pa}}=x') = H(\mathrm{sgn}(x') U\mid U, X_{\mathrm{pa}}=x') = 0.$$
Hence $H(g(X,X_{\mathrm{pa}})\mid U) \neq H(g(X,X_{\mathrm{pa}})\mid U, X_{\mathrm{pa}} = x')$, contradicting the first line of the reviewer's proof idea.

As a further demonstration that the proof cannot hold: it would immediately generalize to the multidimensional exogenous-variable setting if Assumption 2 were strengthened to invertibility, which contradicts the multivariate impossibility result of Nasr-Esfahany and Kiciman [21]. Therefore, a more careful and sophisticated analysis is required to prove the desired result.

In conclusion:

1. Our proof of Theorem 1 is novel. It is the first result providing conditions on the encoding model and structural equation under which counterfactual estimation is feasible.
2. As we point out in Lines 233-249, the assumptions underlying this result are intuitive and have underpinned previous related work on counterfactual identifiability in other models.
   Dropping some of these assumptions (say, that $f$ is monotonic in $U$) leads to simple non-identifiability examples.
3. In Theorem 2 (Appendix B.1), we also present an extension to the multidimensional setting, sidestepping the above-mentioned impossibility result with an interesting oracle assumption and opening a direction for further research.

> Empirical contribution: The second area of concern relates to the baselines chosen for evaluating the interventional estimation. I understand that counterfactual estimation requires the reverse function (from data to noise), which can be achieved using normalizing flows (CAREFL) or diffusion models. However, the authors have cited several deep causal models designed for interventional estimation (e.g., Moraffah et al. 2020, Pawlowski et al. 2020, Kocaoglu et al. 2018). The baselines currently selected do not provide a comprehensive comparison and should be expanded.

There are multiple reasons for these choices:

1. Moraffah et al. [19] and Kocaoglu et al. [16] primarily address interventions in image and discrete data and lack support for counterfactuals, as the reviewer highlighted. For a fair comparison, our evaluation centers on methodologies that also allow counterfactual queries, since models tailored solely to observational and interventional questions typically require fewer assumptions.
2. Although Pawlowski et al. [25] allow counterfactual queries, their experiments focus only on images without extending to other types of SCMs. While we acknowledge that their method might be adaptable to broader contexts in theory, we found no straightforward path to adapt their code base for a direct comparison beyond their image-centric experiments.
3. The baselines we compare against, CAREFL [15] and VACA [33], are indeed the most recent SOTA approaches that handle all three causal query types.

We hope that the reviewer appreciates our reasoning behind choosing the current set of baselines.
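Returning to the entropy counter-example above: it can also be checked numerically. The following NumPy sketch (illustrative, not from the paper) verifies both that Assumption 1 holds for the encoding $g(X, X_{\mathrm{pa}}) = \mathrm{sgn}(X_{\mathrm{pa}})U$ and that, conditioned on $U$, the encoding still carries one bit of uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x_pa = rng.standard_normal(n)
u = rng.standard_normal(n)
x = x_pa + np.sign(x_pa) * u           # SEM: X := X_pa + sgn(X_pa) U
g = x - x_pa                           # encoding g(X, X_pa) = sgn(X_pa) U

# Assumption 1: g is N(0,1) on both branches of sgn(X_pa), hence
# independent of X_pa (moment check on each branch).
assert abs(g[x_pa >= 0].mean() - g[x_pa < 0].mean()) < 0.05
assert abs(g[x_pa >= 0].std() - g[x_pa < 0].std()) < 0.05

# But conditioned on U, g equals +U or -U with probability 1/2 each,
# so H(g | U) = 1 bit rather than 0: g is not a function of U alone.
mask = np.abs(u) > 1e-3                # avoid sign ambiguity near u = 0
frac_flipped = np.mean(np.sign(g[mask]) != np.sign(u[mask]))
assert abs(frac_flipped - 0.5) < 0.01
```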
## Reply to Reviewer bCig

We thank the reviewer for the careful proofreading and suggestions. Please find below our responses to the specific weaknesses and questions that you mention.

> The included real data experiments do not show significant improvement over alternative methods;

We address the minor improvements in the real-data experiments in Lines 353-362 and Appendix D.5. In short, there is a large amount of irreducible error in this experiment because we compute the absolute error on a single interventional sample rather than on the entire distribution, as we do in the synthetic experiments. This is analogous to estimating $X_1$ from $\mathcal{N}(\theta,1)$: even with perfect knowledge of $\theta$, there is always an irreducible error of order $1$. Therefore, in the fMRI experiments, we believe the ranking and relative differences to be more important than the absolute error.

> The improvement over existing regressors for observational and interventional queries is somewhat modest in all synthetic experiments considered

Since observational and interventional queries are inherently easier than counterfactual queries, smaller improvements are to be expected. For interventional queries, we are still on average 2.8x better than the compared deep generative models.

> In light of section 4 and the experiments, do you believe your approach (and the discussion) should be especially targeted to counterfactual queries?

Answering counterfactual queries is typically the most challenging rung of Pearl's ladder of causation, and the existing literature lacks flexible model classes that can handle counterfactual queries effectively. Therefore, our main focus is directed towards counterfactuals.

> The assumptions of access to the underlying true DAG and absence of hidden confounders are naturally extremely strong. I am wondering if it would be possible to at least intuitively discuss what one may expect when these assumptions are 'weakly' violated?
We agree that this is a reasonable concern, and we thank the reviewer for raising it. We would like to emphasize two points:

1. This is the minimal set of assumptions necessary for causal reasoning from observational data alone. A long list of works, e.g., [14, 15, 25, 33, 34] cited in the Introduction, relies on the same assumptions, including the SOTA deep structural causal models that we compare against most closely, VACA [34] and CAREFL [15].
2. Even if only a partial graph is available, or some relations are known to be confounded, certain causal queries may still be identifiable; see, e.g., "Complete Identification Methods for the Causal Hierarchy", Shpitser and Pearl, JMLR 2008.

> The authors offer a nice discussion of their apparently strong assumption from Theorem 1 that the encoding produced for a node is independent of its parents. I wish the second assumption from Theorem 1 (the strictly increasing aspect) also had a longer discussion.

The strictly increasing (invertibility) assumption is standard in the literature [18, 21, 22], as it rules out trivial ambiguities, e.g., distinguishing between $X := X_{\mathrm{pa}} + U$ and $X := X_{\mathrm{pa}} - U$. We will add further discussion of this assumption to the paper.

## Reply to Reviewer MpiP

Thank you very much for your encouraging comments and valuable feedback. Please find below our responses to the specific weaknesses that you mention.

> In each node, how well can Z_T capture the distribution of U_i?

Theoretically speaking, it suffices for the encodings to capture the exogenous noise up to a nonlinear transformation (see the discussion in Section 4 and Theorem 1). Our encodings $Z$ are flexible enough to represent arbitrary distributions of $U$. To evaluate this empirically, we ran a simple experiment comparing the true noise $U$ with our estimated noise; we consistently find a correlation of $r = 0.995$ and a clear monotonic relationship.
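The point that recovering $U$ only up to a transformation suffices can be illustrated with a toy additive-noise example (a hypothetical sketch, not the paper's code): if the true mechanism is $Y := X + N$ but the learned model recovers the rescaled latent $Z = 2N$ with decoder $X + Z/2$, its counterfactuals still match the ground truth exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)
noise = rng.standard_normal(10_000)
y = x + noise                       # true mechanism Y := X + N

# Hypothetical learned model that recovers the noise only up to rescaling:
# latent Z = 2N, with decoder y_hat = x + z / 2.
z = 2.0 * (y - x)                   # "encoder" of the learned model
x_cf = x + 1.0                      # intervention do(X := x + 1)
y_cf_true = x_cf + noise            # ground-truth counterfactual
y_cf_model = x_cf + z / 2.0         # model's counterfactual
assert np.allclose(y_cf_true, y_cf_model)
```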
> In the Enc and Dec algorithms, the authors ignore the random noise term at diffusion steps when t < T. Does this affect the expressiveness of the diffusion model and the learning of exogenous U's distribution? Can we keep the noise term for t < T?

Could you clarify what you mean by the random noise term? To clarify our design: we use DDIM for a deterministic encoding and decoding, i.e., given the model $\varepsilon_\theta$, both the Enc (Alg. 4) and Dec (Alg. 5) algorithms are deterministic, and we use the model $\varepsilon_\theta$ for the mapping between timesteps.

> The endogenous node numbers in the SCMs used in the experiments are around 10. Can the method be applied to larger SCMs, e.g. number of endogenous nodes > 50? How does the scale of SCM affect the inference errors?

One major advantage of our proposed DCM approach is its ability to generalize to larger graphs. Since each diffusion model only takes the parents as input, modeling a node depends only on its in-degree (the number of causal parents). While the number of diffusion models scales with the number of non-root nodes, each model is a small fully connected network with three hidden layers, and the models can be trained in parallel. With a larger graph, inference errors may naturally accumulate through the graph, as downstream predictions depend on upstream values; this behavior is present in any model and is not exclusive to our approach. In practice, though, the complexity of the problem can often be reduced significantly by focusing on a smaller subgraph. This can be done if, for example, one is only interested in the effect on a specific target node; in such a case, not every node needs to be modeled. We discuss this in Lines 153-159.

> Apart from tabular datasets, how can the method be applied to other datasets, e.g. image datasets?
Due to the use of diffusion models, our DCM approach is well equipped to handle images, which is indeed the most popular regime for diffusion models. In fact, the proposed DCM approach may be applied to any setting where diffusion models are applicable: continuous variables, high-dimensional settings, categorical data, images, etc.

> What is z_i in line 8 of Algorithm 3? Is it x_i^F?

This is indeed a typo; it should be $z_i^F$. Thank you for the correction.

## Reply to Reviewer 7CrS

Thank you for appreciating our work. Please find below our responses to the specific weaknesses and questions that you mention.

> An obvious weakness of the method is that it needs to run multiple reverse diffusion processes in sequential order of the graph topology to generate all nodes in the graph (for answering the observational/interventional queries) which could be practically very expensive. However, this is not discussed in the limitation section of the paper, as well as there is no experiment that compares the running time of the proposed method with those of other baselines using variational autoencoders (e.g., VACA).

We will be happy to add a discussion of this to the limitations section. For runtime results, the table below reports training times in minutes for one seed on the ladder graph, using the default implementations and parameters. For a fair comparison, all methods were evaluated on a CPU.

| DCM | ANM | VACA | CAREFL |
| --- | --- | ---- | ------ |
| 15.3 | 4.1 | 142.8 | 110.5 |

Note that ANM is the fastest, as it uses standard regression models, while our proposed DCM approach is about 7-9x faster than CAREFL and VACA. Generation times are all on the order of 1 second. For VACA and CAREFL, we use the implementations provided by the respective authors.
We believe that with an optimized implementation and hardware support (e.g., a GPU) these run times could be reduced further.

> The model in the paper, after being trained, is mainly used for generating data rather than doing any "inference" as stated in the title of the paper. For example, Algorithm 3, called Counterfactual Inference, only generates the counterfactual sample corresponding to an observed factual sample and an intervention set ... or mathematically, $P(Y_{X=x'} = y' \mid Y=y, X=x)$.

Yes, by design, due to our deterministic encoding scheme (Alg. 4), the abduction step generates a single point estimate of the set of exogenous random variables $\mathbf{U} = \{U_1, \dots, U_K\}$ given factual data $x^{\mathrm{F}}$, rather than computing $p(\mathbf{U} \mid x^{\mathrm{F}})$. The reviewer's suggestion is interesting and could be achieved by changing our encoding scheme. We will clarify this and the terminology used in the paper.

> Since the data used in this paper is quite small (e.g., 10 nodes with 3 dimensions each for the two large graphs (line 306)), it is not sure how this method can be applied to large-scale data.

One major advantage of our proposed DCM approach is its ability to generalize to larger graphs. Since each diffusion model only takes the parents as input, each per-node model remains tractable and does not scale with the overall graph size. Similarly, due to the use of diffusion models, our DCM approach is well equipped to handle high-dimensional settings, as a popular regime for diffusion models is high-dimensional images. In other words, the proposed DCM approach may be applied to any setting where diffusion models are applicable: continuous variables, high-dimensional settings, categorical data, images, etc.

-----------------

<!-- [The counterfactuals are point estimates due to the invertibility property with respect to the noise, i.e., we do not sample from a distribution as in the interventional case.]
--> <!-- [We agree, but want to point out that our approach is still significantly faster than VACA and CAREFL (see simple runtime comparison). Even simple additive noise models would need to run sequentially, which could be slow if one uses a complex model.] Need to mention that DDIM also allows for speed ups if you allow for slight performance degradation, a tunable tradeoff parameter. This is not possible with other methods. --> <!-- [Assuming sufficient data and correct specification of the graph, the model should remain accurate. However, unrelated to our approach, causal queries (beyond simple effect estimation) with SCMs often encounter difficulties with large graphs, as they aim to model the generation process of each individual node. In practice, though, it's possible to significantly reduce the complexity of the problem by focusing on a smaller subgraph. This can be done if, for example, one is only interested in the effect on a specific target node. In such a case, we wouldn't need to model every node, simplifying the process.] --> <!-- Diffusion models are well equipped to the multidimensional setting, as the most popular regime is for images, e.g., 224x224x3=150,000. We may apply our DCM approach to any setting where diffusion models are applicable, continuous variables, high dimensional settings, categorical data, images, etc. In fact for causal graphs with images, diffusion models should as the most effective approach for modeling, as there are large literatures on improving image generation, inference speed, class guidance, etc. --> <!-- Capturing the exact data generating $U_i$ is not critical here; we only need to capture it up to a nonlinear transformation of $U_i$. For instance, consider the equation $Y = X + N$. We can generate equivalent, accurate counterfactuals if our model looks like Y = X + Z / 2, where Z = 2 * N. Here, Z is not the same as N, but the model transforms it accordingly.] --> <!-- $Y = X + N$. 
We can generate equivalent, accurate counterfactuals if our model looks like $Y = X + Z/2$, where $Z = 2N$. Here, $Z$ is not the same as $N$, but the model transforms it accordingly.] -->

<!-- VACA (Sanchez-Martin et al., AAAI 2022) and CAREFL (Khemakhem et al., AISTATS 2021). However, we would like to emphasize that this shortcoming is not unique to DCM; it also underlies VACA [1], CAREFL [2], the heteroscedastic noise model [3], etc. Note that the methodology of counterfactual estimation following Section 3.3 of [4], using functional causal models such as [5, 6, 7], likewise requires full observability of the variables for a counterfactual point estimate. Similarly, "Deep Structural Causal Models for Tractable Counterfactual Inference" [8] and "Sample-Efficient Reinforcement Learning via Counterfactual-Based Data Augmentation" [9] claim counterfactual inference but operate under the assumption that the evidence is the entire graph. These methods all make the same assumption about the evidence of the counterfactual queries, so we would like to point out that this is the norm, not the exception. Although we understand the reviewer's concern, we emphasize that we compare DCM to other SOTA models in this realm, and DCM should not be penalized for this limitation. (PC: This can be trimmed a lot) -->

<!-- Need something about:
[1] "VACA: Design of Variational Graph Autoencoders for Interventional and Counterfactual Queries", Sanchez-Martin et al. (2021)
[2] "Causal Autoregressive Flows", Khemakhem et al. (2020)
[3] "Identifying Patient-Specific Root Causes with the Heteroscedastic Noise Model", Strobl et al. (2022)
[4] "Elements of Causal Inference", Peters et al. (2017)
[5] "Nonlinear causal discovery with additive noise models", Hoyer et al. (2008)
[6] "Causal Inference on Discrete Data using Additive Noise Models", Peters et al. (2009)
[7] "On the Identifiability of the Post-Nonlinear Causal Model", Zhang et al. (2009)
[8] "Deep Structural Causal Models for Tractable Counterfactual Inference", Pawlowski et al. (2020)
[9] "Sample-Efficient Reinforcement Learning via Counterfactual-Based Data Augmentation", Lu et al. (2020) -->

<!-- but in comparing to the next most competitive model, we have the following gains in performance:

$$\begin{array}{ccc}
 & \text{Improvement over Regression} & \text{Improvement over Deep Models} \\
\hline
\text{Observational} & \text{1.50x} & \text{8.49x} \\
\text{Interventional} & \text{1.16x} & \text{2.87x} \\
\text{Counterfactual} & \text{2.58x} & \text{4.63x} \\
\end{array}$$

These values are averaged over the ladder and random SCMs. Since our most direct comparison is against the SOTA deep generative models, we see a moderate improvement across the board and a uniform improvement in the counterfactual setting. Lastly, we note that observational and interventional queries are inherently easier than counterfactual queries, so we should expect smaller improvements there. -->

<!-- As a further example, the proof presented would immediately generalize to the multivariate setting if Assumption 2 were strengthened to invertibility, which contradicts the multivariate negative result in [2]. On the strength of the assumptions: Assumptions 2 and 3 are very standard, see [1, 2, 3, 4, 5]. Assumption 1 is necessary for the proof to go through; see [2] for a similar result. The assumptions are not that strong, see:
[1] "Sample-Efficient Reinforcement Learning via Counterfactual-Based Data Augmentation", Lu et al. (2020)
[2] "Counterfactual (Non-)identifiability of Learned SCMs", Nasr-Esfahany and Kıcıman (2023)
[3] "Nonlinear causal discovery with additive noise models", Hoyer et al. (2008)
[4] "On the Identifiability of the Post-Nonlinear Causal Model", Zhang et al. (2009)
[5] "Identifying Patient-Specific Root Causes with the Heteroscedastic Noise Model", Strobl et al. (2022) -->

<!-- Differences between our work and Sanchez and Tsaftaris (Diff-SCM):
1. Our encoding/decoding in DDIM is slightly different (I think they have a mistake).
2. We perform classifier-free guidance. This point is important: naively doing classifier guidance would be far more expensive, since it would require training a separate classifier model for each node. Furthermore, classifier guidance does not generalize to the continuous-variable setting.
3. We apply the method to an entire graph and perform actual experiments.
4. In particular, we can model arbitrary graph structures, while they are restricted to the two-node X -> Y setting.
5. We consider observational and interventional queries as well.
6. It is not obvious that diffusion models perform well in this small, low-dimensional setting (unlike for images); we evaluate this and show that they in fact do. -->

<!-- this is a subjective point -->
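The equivalent-counterfactual point above ($Y = X + Z/2$ with $Z = 2N$ versus $Y = X + N$) can be checked with a minimal numeric sketch. This is our own illustration, not code from the paper: both parameterizations recover the same counterfactual via the standard abduction-action-prediction procedure, since the noise rescaling cancels in the decoder.

```python
# Two SCMs with differently parameterized noise give identical counterfactuals.

def counterfactual_model_a(x_obs, y_obs, x_new):
    # Model A: Y = X + N. Abduction recovers n = y - x.
    n = y_obs - x_obs
    return x_new + n  # action + prediction under do(X = x_new)

def counterfactual_model_b(x_obs, y_obs, x_new):
    # Model B: Y = X + Z/2 with Z = 2N. Abduction recovers z = 2 * (y - x).
    z = 2 * (y_obs - x_obs)
    return x_new + z / 2  # the factor of 2 cancels, matching model A

x, y = 1.0, 3.5   # observed evidence
x_cf = -2.0       # counterfactual intervention do(X = -2)
assert counterfactual_model_a(x, y, x_cf) == counterfactual_model_b(x, y, x_cf)
```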
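For the classifier-free guidance point, the sampling-time combination can be sketched as follows. This is a generic illustration of the standard classifier-free guidance formula, not the paper's implementation: one denoising network is trained with the conditioning randomly dropped, and at inference the conditional and unconditional noise predictions are blended with a guidance weight `w`, avoiding any per-node classifier.

```python
import numpy as np

def cfg_eps(eps_cond, eps_uncond, w):
    # Classifier-free guidance: extrapolate from the unconditional noise
    # prediction toward the conditional one. w = 0 gives the unconditional
    # prediction, w = 1 the conditional one, w > 1 stronger guidance.
    return eps_uncond + w * (eps_cond - eps_uncond)

# Toy noise predictions from a hypothetical denoiser evaluated twice.
eps_c = np.array([1.0, 2.0])
eps_u = np.array([0.0, 0.0])
print(cfg_eps(eps_c, eps_u, 2.0))  # -> [2. 4.]
```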
