# Causally Correct Partial Models
## What is a partial model?
In this paper, a *partial model* is a model that predicts a future observation $y_T$ by conditioning only on the initial state of the agent $s_0$ and an action sequence $a_{<T}$:
$$
q_\theta(y_T\vert a_{<T}, s_0)
$$
This is contrasted with models that also condition explicitly on past observations, as reviewed in the introduction of the paper.
## How to train your partial model?
A problem with these types of models is that it's unclear how to train them in a way that allows one to draw causally correct inferences. More on what they mean by causal correctness later.
The most basic idea is to fit the model directly to the observed data by maximum likelihood. What you get essentially approximates $p(y_T\vert a_{<T}, s_0)$.
The problem is that this $p(y_T\vert a_{<T}, s_0)$ is confounded by the policy, as the paper's first equation shows. As you change the policy from which you sample your data, your model changes with it, which may lead to incorrect inferences about the usefulness of action sequences, as illustrated in the Bear Hug example.
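To make the confounding concrete, here is a two-step version of the argument in my own notation (not the paper's exact equation). Conditioning on the observed action $a_1$ implicitly conditions on the policy that produced it, because $a_1$ was sampled by $\pi$ in response to the unobserved $y_1$:
$$
p(y_2\vert a_0, a_1, s_0) = \frac{\sum_{y_1} p(y_1\vert a_0, s_0)\, \pi(a_1\vert s_1)\, p(y_2\vert y_1, a_0, a_1, s_0)}{\sum_{y_1} p(y_1\vert a_0, s_0)\, \pi(a_1\vert s_1)}, \qquad s_1 = f(s_0, a_0, y_1).
$$
The policy terms don't cancel between the numerator and the denominator, so the conditional we fit inherits a dependence on $\pi$.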
### Interventions
If one wants to draw causally correct inferences, the goal has to be to approximate distributions which are invariant with respect to changing the policy. One such invariant quantity is the interventional conditional $p(y_T\vert do(a_{<T}), s_0)$, which is the distribution of $y_T$ when you force the agent to take the action sequence $a_{<T}$. This quantity is invariant because, by forcing the actions, you eliminate the influence of the policy on the distribution.
However, this isn't a very useful quantity as it cannot capture *conditional behaviour*, i.e. when the second action depends on observations in the first step. A more general class of interventions does not force the value of the action to be a predefined constant at each step, but instead replaces the policy $\pi(a\vert s)$ with another policy $\psi(a\vert s)$. What we therefore want to estimate is $p_{do(\psi)}(y_T\vert s_0)$, the marginal distribution of $y_T$ under the intervention where we replace the policy $\pi$ with the alternative action-sampling strategy $\psi$.
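Schematically, and in my own notation rather than the paper's, $do(\psi)$ replaces every policy factor in the joint distribution over trajectories while leaving the environment factors untouched:
$$
p_{do(\psi)}(y_{\leq T}, a_{<T}\vert s_0) = \prod_{t=0}^{T-1} \psi(a_t\vert s_t)\, p(y_{t+1}\vert y_{\leq t}, a_{\leq t}, s_0),
$$
where $s_t$ is the agent's state, a deterministic function of the history so far. $p_{do(\psi)}(y_T\vert s_0)$ is the corresponding marginal, and the constant-action intervention $do(a_{<T})$ is recovered by taking $\psi$ to be a point mass.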
### Causally correct partial models
What the authors call a causally correct model is a model $q_\theta$ of a generative process $p$ such that the model can emulate the effect of any intervention chosen from a set of interventions $\mathcal{I}$. That is, for all $do(\psi) \in \mathcal{I}$,
$$
q_{\theta, do(\psi)}(x) \approx p_{do(\psi)}(x).
$$
Crucially, a prerequisite for learning such a model from data is that $p_{do(\psi)}$ is identifiable from observational data: if we only have access to the joint distribution $p$, can we reason about $p_{do(\psi)}$ based on our causal assumptions? I wrote about identifiability [here](https://www.inference.vc/untitled/).
## Fixing identifiability first
In the partially observed Markov decision process depicted in Figure 3a, the causal quantities of interest are non-identifiable.
![](https://i.imgur.com/N69PI2t.png)
Remember that we're restricting ourselves to learning partial models. We assume that once the data has been sampled by the agent, only the actions $a_{<T}$ and the observation $y_T$ are retained; the rest of the data, the past observations $y_{<T}$ and the agent's hidden states $s_{<T}$, are discarded. The assumption is that we only have access to the joint distribution of $a_{<T}$ and $y_T$.
If we restrict ourselves in this way, the causal conditionals of interest (what would happen if another policy sampled the data) turn out to be non-identifiable: it is impossible to draw such causal inferences from the distribution we have access to.
To see why, consider sources of statistical association between the second action $a_1$ and the observation $y_2$:
* __causal association:__ $a_1$ influences the state of the environment $e_2$, resulting in an observation $y_2$. Therefore, $a_1$ has a direct causal effect on $y_2$, mediated by $e_2$.
* __spurious association due to confounding:__ The unobserved hidden state $e_1$ is a confounder between the action $a_1$ and the observation $y_2$. The state $e_1$ has an indirect causal effect on $a_1$ mediated by the observation $y_1$ and the agent's state $s_1$. Similarly $e_1$ has an indirect effect on $y_2$ mediated by $e_2$.
Disambiguating between these two sources of statistical association is necessary for learning causally correct models. However, if nothing else is observed, this won't be possible. The two main ways to overcome this limitation (both adjustment formulas are written out after the list) require either:
* observing one variable on the confounding path, either a mediating variable between $e_1$ and $a_1$ or a mediating variable between $e_1$ and $y_2$. If one has the option to do that, the _backdoor adjustment formula_ can be used.
* observing a variable that fully mediates the causal effect of the action $a_1$ on the outcome $y_2$. If this was possible, one could use the _frontdoor adjustment_ formula.
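For reference, these are the two adjustment formulas in their textbook form, with $a$ the action, $y$ the outcome, $w$ a valid backdoor adjustment set and $m$ a variable that fully mediates the effect of $a$ on $y$ (standard do-calculus notation, not the paper's):
$$
p(y\vert do(a)) = \sum_{w} p(w)\, p(y\vert a, w) \qquad \text{(backdoor)}
$$
$$
p(y\vert do(a)) = \sum_{m} p(m\vert a) \sum_{a'} p(a')\, p(y\vert m, a') \qquad \text{(frontdoor)}
$$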
So, can we use either? To use the frontdoor adjustment formula, we would need to observe $e_2$, which is the environment's hidden state and is assumed to be fundamentally unobservable. So the frontdoor formula is ruled out.
The backdoor adjustment formula is an option, though. We could technically observe $y_1$ and $s_1$: both of these were available at the time we generated the data, we'd just have to log them. However, the whole spiel with partial models is that both of these are assumed to be very high dimensional, so including them in our modelling is undesirable; we'd rather treat them as unobserved.
The solution the authors propose is to insert a stochastic bottleneck $z_t$ between the agent's state $s_t$ and the chosen action $a_t$. This $z_t$ can be lower dimensional, and is therefore more palatable to observe and model than the whole agent state $s_t$.
![](https://i.imgur.com/JrUSsb4.png)
Essentially, rather than the agent generating the action immediately from its state $s_1$, it first draws a random variable $z_1$, which is observed, and then draws the action $a_1$ from that. Doing this creates an observed mediating variable on the confounding path, which blocks the backdoor path between $a_1$ and $y_2$.
The policy deciding the agent's actions splits into two as a result: $m(z_t\vert s_t)$ and $\pi(a_t\vert z_t)$. We will be able to reason about interventions where the second part of the policy, $\pi$, is changed (shown by the red arrow), but we have to assume $m(z_t\vert s_t)$ is fixed.
## Causally correct models
Now that we have a joint distribution $p(a_{<T}, z_{<T}, y_T)$ which we know is amenable to causal inference, we have to choose how to build a model $q_\theta$ that will allow us to answer the causal queries we care about. Whether or not this is possible boils down to how we structure the model. The authors choose a model of the following form:
$$
q_\theta(y_T, z_{<T} \vert a_{<T}) = q_\theta(y_T\vert z_{<T}, a_{<T}) \prod_{t<T}q_\theta(z_t\vert z_{<t}, a_{<t})
$$
To make this more compact, the model is described as an RNN, in terms of probabilistic components $q_\theta(z_t\vert h_t)$, $q_\theta(y_t\vert h_t)$ and the recurrence function $h_{t+1} = f_\theta(h_t, a_t, z_t)$.
This model is illustrated by the schematic in Figure 3e:
![](https://i.imgur.com/pLvApv3.png)
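To make the structure concrete, here is a minimal sketch of what these components could look like. This is my own hypothetical PyTorch implementation with made-up layer sizes, a Gaussian $z_t$ and categorical observations, not the authors' code:
```python
import torch
import torch.nn as nn
import torch.distributions as D

class CausalPartialModel(nn.Module):
    """Sketch of the partial model: q(z_t | h_t), q(y_t | h_t) and the
    deterministic recurrence h_{t+1} = f(h_t, a_t, z_t)."""

    def __init__(self, n_actions, z_dim, n_obs, h_dim=64):
        super().__init__()
        self.n_actions = n_actions
        self.z_net = nn.Linear(h_dim, 2 * z_dim)        # mean and log-std of q(z_t | h_t)
        self.y_net = nn.Linear(h_dim, n_obs)            # logits of q(y_t | h_t)
        self.f = nn.GRUCell(n_actions + z_dim, h_dim)   # h_{t+1} = f(h_t, a_t, z_t)

    def z_dist(self, h):
        mean, log_std = self.z_net(h).chunk(2, dim=-1)
        return D.Normal(mean, log_std.exp())

    def y_dist(self, h):
        return D.Categorical(logits=self.y_net(h))

    def step(self, h, a, z):
        a_onehot = nn.functional.one_hot(a, self.n_actions).float()
        return self.f(torch.cat([a_onehot, z], dim=-1), h)
```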
With a model like this, it's possible to emulate interventions of the form where $\pi(a_t\vert z_t)$ is replaced by an alternative $\psi(a_t\vert z_t, h_t)$, using the backdoor formula:
$$
q_{\theta, do(\psi)}(y_{t+1}\vert h_t) = \mathbb{E}_{z_t \sim q_\theta(z_t\vert h_t)}\mathbb{E}_{a_t \sim \psi(a_t\vert z_t, h_t)} q_\theta(y_{t+1}\vert h_t, z_t, a_t).
$$
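Continuing the sketch above, emulating $do(\psi)$ amounts to the same two nested sampling steps as in the backdoor expression. Again a hypothetical illustration: `psi` is assumed to be any callable that returns a distribution over actions given $z_t$ and $h_t$:
```python
def intervened_y_logprobs(model, h, psi, n_samples=64):
    """Monte Carlo estimate of q_{theta, do(psi)}(y_{t+1} | h_t): sample z_t
    from the model, a_t from the new policy psi, roll the state forward, and
    average the resulting predictive distributions over y_{t+1}."""
    probs = 0.0
    for _ in range(n_samples):
        z = model.z_dist(h).sample()      # z_t ~ q(z_t | h_t)
        a = psi(z, h).sample()            # a_t ~ psi(a_t | z_t, h_t)
        h_next = model.step(h, a, z)      # h_{t+1} = f(h_t, a_t, z_t)
        probs = probs + model.y_dist(h_next).probs
    return (probs / n_samples).log()      # log of q_{theta, do(psi)}(y_{t+1} | h_t)
```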
### What's up with $s_t$ and $h_t$?
Something that's quite confusing about this paper is the use of $s_t$ and $h_t$ to describe the state of the agent and the state of the RNN making predictions, respectively. Here, we're making an intervention where the new policy $\psi$ is in fact conditioned on $h_t$, the state of the prediction network. But the agent, as it acts, doesn't have access to $h_t$.
I think the correct way of interpreting what's going on here is as follows: the model $q_\theta$ will allow us to make predictions under policies which differ from the policy $\pi$ deployed to sample the data only in ways that require additional knowledge of previous actions $a_{<T}$ and backdoor variables $z_{<T}$.
When sampling data, the policy is as follows:
$$
p(a_T\vert y_{<T}, a_{<T}) = \int \pi(a_T\vert z_T)\, m(z_T\vert s_T)\, dz_T
$$
where $s_T$ is a deterministic function of $y_{<T}$ and $a_{<T}$.
The causal partial model $q_\theta$ allows us to evaluate policies which sample the next action from the following distribution:
$$
\tilde{p}(a_T\vert y_{<T}, z_{<T}, a_{<T}) = \int \psi(a_T\vert z_T, h_T)\, m(z_T\vert s_T)\, dz_T
$$
where $h_T$ is a deterministic function of $z_{<T}$ and $a_{<T}$. While this set of policies can be quite flexible, it's still a restricted subset of all policies one could use. In particular, $m$ is always assumed to be fixed, and we can't reason about how an agent with a different $m$ would behave. Secondly, the action-sampling part $\psi$ cannot take past observations $y_{<T}$ into account directly, only via $z_{<T}$ and $a_{<T}$, which contain second-hand information about them. So imagine an agent that can only express an improvement to its policy by looking at its past action sequences and a partial view of its own past states. This is why, in the first experiment (Section 5.1), the causal model can't always find the optimal policy and its value.
Importantly though, we aren't able to emulate the full range of interventions, only this restricted class.
## stuff about importance sampling
For comparison, here is how off-policy evaluation via importance sampling works: the trajectory distributions under the behaviour policy $\pi$ and under an alternative policy $\tilde\pi$ differ only in their policy factors, so expectations under $\tilde{p}$ can be rewritten as reweighted expectations under $p$.
$$
p(s_0, a_1, r_1, s_1, a_2, \ldots) = p(s_0)\pi(a_1\vert s_0)p(r_1\vert a_1, s_0) p(s_1\vert a_1, s_0) \pi(a_2\vert s_0, a_1, s_1, r_1)
$$
$$
\tilde{p}(s_0, a_1, r_1, s_1, a_2, \ldots) = p(s_0)\tilde\pi(a_1\vert s_0)p(r_1\vert a_1, s_0) p(s_1\vert a_1, s_0) \tilde\pi(a_2\vert s_0, a_1, s_1, r_1)
$$
$$
\mathbb{E}_{\tau\sim \tilde{p}} r_5 = \mathbb{E}_{\tau\sim p} \frac{\tilde{p}(\tau)}{p(\tau)} r_5 = \mathbb{E}_{\tau\sim p} \prod_{i=1}^5 \frac{\tilde{\pi}(a_i\vert \tau_{0:i-1})}{\pi(a_i\vert \tau_{0:i-1})} r_5
$$
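A toy sketch of this estimator in code (hypothetical; `trajectories` are logged under the behaviour policy, and for simplicity the log-probability functions here only look at the current state rather than the full history $\tau_{0:i-1}$):
```python
import numpy as np

def importance_sampled_return(trajectories, logpi, logpi_tilde, t=5):
    """Estimate E_{tau ~ p_tilde}[r_t] from trajectories collected under pi.
    Each trajectory is a list of (state, action, reward) tuples; logpi and
    logpi_tilde map (state, action) to log-probabilities under the behaviour
    policy and the alternative policy respectively."""
    estimates = []
    for traj in trajectories:
        # product of per-step likelihood ratios for the first t actions
        log_w = sum(logpi_tilde(s, a) - logpi(s, a) for s, a, _ in traj[:t])
        estimates.append(np.exp(log_w) * traj[t - 1][2])  # reweighted reward r_t
    return float(np.mean(estimates))
```
Note that, unlike the backdoor approach above, this estimator needs the behaviour policy's probability for every logged action.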
## do
Finally, a minimal example of the difference between conditioning and intervening, in a model $p(x, y, u)$ where $u$ is an unobserved confounder of $x$ and $y$. Ordinary conditioning just applies Bayes' rule, so $u$ gets weighted by how likely it is given the observed $x$:
$$
p(x,y,u) = p(u)p(x\vert u)p(y\vert x, u)
$$
$$
p(y\vert x) = \frac{\int p(x,y,u) du}{\iint p(x,y,u) dy du} = \frac{p(x,y)}{p(x)}
$$
## do-calculus
Intervening instead clamps $x$ to $x_0$ and deletes the factor $p(x\vert u)$, so $u$ is averaged under its marginal rather than its posterior given $x$:
$$
p_{do(x_0)}(x,y,u) = p(u)\delta(x-x_0)p(y\vert x, u)
$$
$$
\begin{aligned}
p(y\vert do(x_0)) &= \iint p_{do(x_0)}(x,y,u)\, du\, dx \\
&= \iint p(u)\delta(x-x_0)p(y\vert x, u)\, du\, dx \\
&= \int p(u)\, p(y \vert x_0, u)\, du \\
&= \mathbb{E}_u\, p(y \vert x_0, u)
\end{aligned}
$$
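A tiny numerical check of the difference, with made-up binary variables (all probabilities below are invented for illustration):
```python
import numpy as np

# made-up confounded model: u is an unobserved confounder,
# x depends on u, and y depends on both x and u
p_u = np.array([0.7, 0.3])                # p(u)
p_x_given_u = np.array([[0.9, 0.1],       # p(x | u=0)
                        [0.2, 0.8]])      # p(x | u=1)
p_y1 = np.array([[0.1, 0.5],              # p(y=1 | x=0, u=0), p(y=1 | x=0, u=1)
                 [0.4, 0.9]])             # p(y=1 | x=1, u=0), p(y=1 | x=1, u=1)

# observational conditional: p(y=1 | x=1) weights u by its posterior given x=1
joint_x1 = p_u * p_x_given_u[:, 1]                           # p(u, x=1)
p_y1_given_x1 = (joint_x1 * p_y1[1]).sum() / joint_x1.sum()  # ~0.79

# interventional: p(y=1 | do(x=1)) averages u under its marginal p(u)
p_y1_do_x1 = (p_u * p_y1[1]).sum()                           # 0.55

print(p_y1_given_x1, p_y1_do_x1)  # the two differ because u confounds x and y
```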