# Interpretability

### DiBS: Differentiable Bayesian Structure Learning
May read again, but not very relevant.

### DAGs with No Curl: An Efficient DAG Structure Learning Approach

### GraN-DAG

### Review of Causal discovery based on graphical model

### MissDAG

### Neural Relational Inference for Interacting Systems
URL: https://proceedings.mlr.press/v80/kipf18a.html

Learns an explicit interaction structure in an unsupervised way. Proposes the *neural relational inference* (NRI) model, which learns the dynamics with a GNN over a discrete latent graph and performs inference over these latent variables.

#### Neural relational inference
An encoder predicts the interactions given the trajectories, and a decoder learns the dynamical model given the interaction graph.
* The encoder $q_\phi(z|x)$ returns a factorized distribution over the $z_{ij}$, where $z_{ij}$ is a discrete categorical variable representing the edge type between objects $v_i$ and $v_j$.
* Since the dynamics are Markovian, the decoder factorizes over time steps: ![](https://i.imgur.com/UC3vzHB.png)
* Follows the VAE structure, with an ELBO consisting of a reconstruction error and a KL term against a uniform prior.

### Amortized Causal Discovery: Learning to Infer Causal Graphs from Time-Series Data (Lowe 2022)
Extends the NRI framework to time-series data. Proposes *Amortized Causal Discovery*, a framework that leverages shared dynamics to learn to infer causal relations from time-series data, implemented as a VAE.
* Built on Granger causality.
* *Learns to infer causal relations across samples with different underlying causal graphs but shared dynamics.*
* Defines an encoding function that infers Granger-causal relations, and a decoding function $f$ that learns to predict the next time step under the inferred causal relations.
* Cycles are allowed because the data are time series.
* Proposes new inference methods using test-time adaptation and new algorithms to handle hidden confounders.

Notes:
1. Learn a causal graph for the relationship between causal structure and the CVs.
2. Interventional causal discovery.

###### tags: `recent`

Possible directions for interpretability of the learned CVs:

1. Learn the causal interactions among the CVs using a structural causal model, e.g. relationships like the following: ![](https://hackmd.io/_uploads/BJihWPHd2.png)

   **Questions:**
   * The goal in this case is not well motivated, and the benefit of doing so is not clear.
   * In the [original paper](https://openreview.net/forum?id=LtON28ko1bh), one possible use is to simulate new data. But Grant M. Rotskoff has already introduced a [method to learn the free energy in latent space](https://arxiv.org/abs/2210.07882), which would (I think) be much faster and more reliable.

2. Inspired by RAVE, learn the causal relationships between the learned CVs and potential CV candidates. Such a method differs from RAVE in that it can model combinations of nonlinear causal relationships between the known candidates and the learned CVs. Specifically, in RAVE we first need a set of potential order parameters $x_1, \dots, x_n$; the learned CVs are then represented in the form $c_1 x_1 + \dots + c_n x_n$ with $\sum_{i} c_i^2 = 1$, and a KL divergence is used to measure the resemblance. (I still need to check in the code how exactly they do this.) Here, with tools for learning causal relationships, we can instead learn which order parameters are the 'parents' of a particular CV learned by a black-box method: $CV_1 = f(PA(CV_1))$, where $PA(CV_1)$ is the parent set of $CV_1$ and a subset of the potential order parameters $\{x_1, \dots, x_n\}$. A minimal sketch of this idea follows below.
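To make direction 2 concrete, here is a minimal sketch of the two steps on toy data. The screening step uses mutual information purely as a stand-in for a proper causal-discovery or conditional-independence procedure, and all names, thresholds, and the synthetic data are my assumptions, not anything from the papers above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import mutual_info_regression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: N trajectory frames of n candidate order parameters
# x_1..x_n, plus the value of one black-box-learned CV on the same frames.
N, n = 5000, 8
X = rng.normal(size=(N, n))
cv = np.tanh(X[:, 0]) + 0.5 * X[:, 2] ** 2 + 0.1 * rng.normal(size=N)

# Step 1: screen candidates for (possibly nonlinear) dependence with the CV.
# Mutual information detects dependence, not causal direction, so this is
# only a crude proxy for the parent-discovery step described above.
mi = mutual_info_regression(X, cv, random_state=0)
parents = np.flatnonzero(mi > 0.05)  # hypothetical threshold
print("candidate parents PA(CV):", [f"x_{i + 1}" for i in parents])

# Step 2: fit a nonlinear regressor f so that CV ≈ f(PA(CV)).
X_tr, X_te, y_tr, y_te = train_test_split(X[:, parents], cv, random_state=0)
f = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("held-out R^2 of CV ≈ f(PA(CV)):", round(f.score(X_te, y_te), 3))
```

In a real pipeline the screening step would be replaced by an actual structure-learning method (e.g. one of the DAG learners listed at the top of this note), and $f$ could be any regressor.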
In essence, we can understand this as unsupervised learning of a nonlinear representation of the learned CVs in terms of the order parameters, using a causal model.

**Questions:**
* I'm not sure how significant this direction is, and I'm not sure whether something similar has already been done recently.
* I would say the second direction is more concrete and its goal is well motivated, but I would need to think about how to do it.

**Screenshot from the paper on RAVE:** ![](https://hackmd.io/_uploads/rkQurwH_2.png)
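For comparison, a minimal sketch of my reading of the RAVE-style representation quoted above: a trial CV as a unit-norm linear combination $\sum_i c_i x_i$ of the order parameters, scored against a reference CV with a histogram-based KL divergence. Since I haven't checked the RAVE code yet (see above), the KL estimator and the usage below are assumptions, not RAVE's actual implementation.

```python
import numpy as np

def linear_cv(X, c):
    """Trial CV as a unit-norm linear combination c·x of order parameters."""
    c = np.asarray(c, dtype=float)
    c = c / np.linalg.norm(c)  # enforce the constraint sum_i c_i^2 = 1
    return X @ c

def kl_histogram(p_samples, q_samples, bins=50, eps=1e-10):
    """Crude KL(p || q) between two 1-D sample sets via shared-bin histograms."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, edges = np.histogram(p_samples, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    p, q = p + eps, q + eps  # smooth empty bins to keep the log finite
    return float(np.sum(p * np.log(p / q)) * width)

# Hypothetical usage: score how closely a trial linear CV resembles a
# reference (e.g. black-box-learned) CV on the same trajectory frames.
rng = np.random.default_rng(1)
X = rng.normal(size=(10000, 4))  # frames × order parameters
reference_cv = np.tanh(X[:, 0] + 0.3 * X[:, 1])
trial = linear_cv(X, [1.0, 0.3, 0.0, 0.0])
print("KL(reference || trial) ≈", kl_histogram(reference_cv, trial))
```

One could then minimize this divergence over the unit sphere of coefficients $c$ to find the linear combination that best resembles the learned CV, which is how I currently understand the RAVE-style screening to work.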