# CATE project update (experiments)
The goal of this project update is to see how three factors impact the quality of conditional average treatment effect (CATE) estimation across two groups of inputs (i.e., the groups are defined by a binary variable).
The three factors are:
1. Applying group distributionally robust optimization (GDRO), which defines the loss as the worst case over the two groups, when estimating $P(Y|T, X)$ (a minimal sketch of this objective follows this list).
2. Using separate "heads" in a neural network for each group (i.e., only partial parameter sharing: a shared base with a group-specific head).
3. Cross-fitting.
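For concreteness, here is a minimal sketch of the worst-case-over-groups objective I have in mind for factor 1. This is illustrative PyTorch, not my exact implementation (in particular, the plain max could be swapped for the exponentiated-weights version of group DRO).
```python
import torch

def gdro_loss(pred, y, group, n_groups=2):
    """Worst-case (max over groups) mean squared error, in the spirit of group DRO."""
    group_losses = []
    for g in range(n_groups):
        mask = (group == g)
        if mask.any():  # skip groups absent from this batch
            group_losses.append(((pred[mask] - y[mask]) ** 2).mean())
    return torch.stack(group_losses).max()
```
During training, this replaces the usual pooled MSE when estimating $P(Y|T, X)$.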
I examine how these factors impact the mean-squared prediction error and CATE error on both train and test data. I use four different "metamodels", which use one or more arbitrary prediction models to estimate the CATE. I use a neural network as the underlying model(s).
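To fix notation, the target quantity throughout is the CATE,
$$\tau(x) = \mathbb{E}[Y \mid T=1, X=x] - \mathbb{E}[Y \mid T=0, X=x],$$
and the CATE error reported below is the mean-squared error between the estimated and true CATE (the true CATE is available because the outcomes/CATEs are simulated), which can be broken out by group.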
**Overall, there is some evidence that GDRO is helping, but I'm not sure if it's enough, and little or no evidence for the other methods (though see the cross-fitting discussion). I'm looking for advice on (a) are these empirical results strong enough to be interesting? and (b) is there any intuition that could help me understand what to do next?**
I will use three datasets:
1. *IHDP Group="First"* $(N_{\text{train}}=597, D=25)$: IHDP dataset grouped on the binary "First" variable, resulting in a fairly even allocation between the two groups.
2. *IHDP Group="Prenatal"* $(N_{\text{train}}=746, D=25)$: IHDP dataset grouped on the binary "Prenatal" variable, resulting in a more skewed allocation. All observations are used for training (so there is no test set).
3. *TOY1* $(N_{\text{train}}=159, D=2)$: This is a toy dataset I invented. The CATE is defined as the difference in two random draws from a GP.
All results are averaged over 5 random datasets and 5 random neural network initializations (so 25 total runs). The treatment is binary. *Important note*: the binary group indicator is always used as an input to the model (e.g., of the $D=2$ inputs to TOY1, one is the group indicator).
## Results
### Results summary
1. GDRO generally results in a small improvement in CATE estimation.
2. Separate heads for each group has little impact.
3. Although using more folds reduces the CATE estimation error, it reduces it for both groups. So it seems more folds (which is the recommended practice anyway) helps the model overall but is not targeted at helping the underrepresented group. Moreover, the cross-fit model was not the best-performing model overall. Therefore, based on the evidence here, there is no reason to recommend cross-fitting as a solution to unequal performance across groups.
Note: the cross-fit model is called an "R-Learner" or "RNet". It uses a particular estimation technique (basically Double ML for CATE estimation rather than ATE estimation). I did not apply cross-fitting to the other methods; rather, I used a particular method that employs cross-fitting. This method fit very poorly on the TOY1 dataset (I think because the dataset violates the modeling assumptions), so I removed the RNet results on this dataset.
To start, here are the prediction and CATE estimation errors across train/test splits and datasets (recall there is no test data for IHDP Group="Prenatal"). The third column is the best illustration of the problem: unequal quality of CATE estimation across the two groups.

I will now review how each of the three factors impacts the estimation error for predictions and CATEs. See Appendix below for more details on the experiments.
### 1. Impact of GDRO

GDRO generally reduces the disparity in the estimation quality of predictions and CATEs, although it has less impact on the TOY1 dataset. Is this a significant difference? I'm not sure.
### 2. Impact of separate head for each group

Using a separate head in the neural network for each group does not improve performance (and often makes it worse).
### 3. Impact of cross-fitting
To start, here are the results broken down by the four metamodels, with RNet being the only one that uses cross-fitting. Recall that I removed the RNet results on the TOY1 dataset. RNets tend to outperform only SNets, so cross-fitting does not appear to give a clear advantage over the other metamodels. Recall, however, that this is a particular method that differs from the rest in more than just cross-fitting, so perhaps something else is causing the difference in performance.

To investigate the impact of cross-fitting alone, here are results for RNets only, with different numbers of folds (the results above used 4 folds). As we can see, performance improves with more folds, but it improves across both groups. So the conclusion is that more folds is better (which is expected), but it does not seem to target group disparity in particular.

## Appendix
### Metamodels
- **SNet**: Uses a "single" model that takes $T$ as input.
- **TNet**: Uses "two" models, one for $T=0$ and one for $T=1$.
- **RNet**: This is an extension of Double ML for CATE estimation. It uses one model to estimate $E[Y\mid X]$, a second model to estimate $E[T\mid X]$, and a third model to regress the residuals on each other to estimate the CATE (see the sketch after this list). I use a neural network for the first model, logistic regression for the second model, and kernel ridge regression for the third model.
- **HNet**: This is a compromise between an SNet and a TNet. It uses a single base layer (shared across $T=0$ and $T=1$) but separate "heads" for $T=0$ and $T=1$. I use three HNets in my experiments, including Tarnet and Dragonnet.
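To make the RNet procedure concrete, here is a minimal sketch of the R-learner with cross-fitted nuisance estimates. The sklearn components below (e.g., `MLPRegressor`) are illustrative stand-ins rather than my actual networks, and the hyperparameters are placeholders.
```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LogisticRegression
from sklearn.kernel_ridge import KernelRidge
from sklearn.neural_network import MLPRegressor  # stand-in for the neural network

def r_learner_cate(X, T, Y, n_folds=4):
    """Sketch of the R-learner: cross-fitted nuisances, then residual-on-residual regression."""
    # Cross-fitting: out-of-fold predictions of E[Y|X] and E[T|X].
    m_hat = cross_val_predict(MLPRegressor(max_iter=1000), X, Y, cv=n_folds)
    e_hat = cross_val_predict(LogisticRegression(), X, T, cv=n_folds,
                              method="predict_proba")[:, 1]
    y_res, t_res = Y - m_hat, T - e_hat
    # The R-loss sum_i (y_res_i - tau(x_i) * t_res_i)^2 is equivalent to a weighted
    # regression of y_res / t_res on X with weights t_res^2.
    tau_model = KernelRidge(kernel="rbf")
    tau_model.fit(X, y_res / t_res, sample_weight=t_res ** 2)
    return tau_model  # tau_model.predict(X_new) gives the estimated CATE
```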
### Models
These are the inputs to the metamodels (e.g., TNet uses two models). I use fully connected neural networks of 2-4 layers, 100-200 hidden units, and ReLU activation functions. I use the NTK parameterization (i.e., scaling by $1/\sqrt{\text{width}}$) and an $N(0,1)$ initialization.
Specifically, the multi-headed networks use 2 layers of 200 units for the base and 2 layers of 100 units for each of the heads. The non-multi-headed networks use either 2 or 4 layers, with 200 units for the first half of the layers and 100 units for the second half.
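As a sketch of what I mean by the NTK parameterization and the multi-headed layout, here is illustrative PyTorch (treatment heads shown; the per-group-head variant from factor 2 is analogous). The exact layer counts and widths are my reading of the description above rather than the code I actually ran.
```python
import math
import torch
import torch.nn as nn

class NTKLinear(nn.Module):
    """Linear layer in the NTK parameterization: N(0, 1) weights, output scaled by 1/sqrt(fan_in)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.scale = 1.0 / math.sqrt(in_features)

    def forward(self, x):
        return self.scale * x @ self.weight.T + self.bias

def mlp(sizes):
    """Stack of NTK-parameterized layers with ReLU activations between them."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(NTKLinear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

class MultiHeadNet(nn.Module):
    """Shared base (2 layers of 200 units) with a separate head (2 layers of 100 units) per treatment arm."""
    def __init__(self, d_in):
        super().__init__()
        self.base = mlp([d_in, 200, 200])
        self.head0 = mlp([200, 100, 100, 1])  # T = 0 head
        self.head1 = mlp([200, 100, 100, 1])  # T = 1 head

    def forward(self, x, t):
        h = torch.relu(self.base(x))
        y0, y1 = self.head0(h), self.head1(h)
        return torch.where(t.bool().unsqueeze(-1), y1, y0)
```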
Hyperparameters (learning rate, L2 weight regularization, and any architecture-specific parameters; Tarnet and Dragonnet have a few) are tuned manually (and rather unrigorously) via a small grid search. I use Adam for optimization.
### Datasets
I will use three datasets:
1. *IHDP Group="First"* $(N_{\text{train}}=597, D=25)$: This is a popular infant health dataset. The inputs $X$ and treatment $T$ are real data, but the outcomes $Y$ are simulated. For this dataset, I used the binary "First" variable to define the group, resulting in a fairly even allocation between the two groups.
2. *IHDP Group="Prenatal"* $(N_{\text{train}}=746, D=25)$: This is the same dataset as above except grouped on "Prenatal", resulting in a more skewed allocation. Because there are so few observations in Group 0, I use the entire dataset for training (so there are no test results), but note that estimating the CATE on training data is already a difficult task (since each observation is only observed under one treatment).
3. *TOY1* $(N_{\text{train}}=159, D=2)$: This is a toy dataset I invented. The CATE is defined as the difference in two random draws from a GP, but for Group 1 I use the difference in RBF GPs while for Group 0 I use a weighted combination between the Group 1 CATE and the difference in Matern 3/2 GPs with a smaller lengthscale.
All results are averaged over 5 random datasets and 5 random neural network initializations (so 25 total runs). The treatment is binary. The table below summarizes the datasets ($N$ is the number of training observations, $D$ is the input dimension):
| Dataset | $N_{all}$ | $N_{group0}$ | $N_{group1}$ | $D$ |
| --------------------- | --------- | ------------ | ------------ | --- |
| IHDP Group="Prenatal" | 746 | 27 | 719 | 25 |
| IHDP Group="First" | 597 | 316 | 281 | 25 |
| TOY1 | 159 | 35 | 124 | 2 |
*Important note*: the binary group indicator is always used as an input to the model (e.g., of the $D=2$ inputs to TOY1, one is the group indicator).
#### Example of TOY1
*Note: for TOY1 I reversed the Group 0 and Group 1 labels for the plots above, so that Group 0 was always the more difficult group across datasets. In this section, I use the actual group labels (so Group 1 is the more difficult group).*
The CATE in TOY1 is the difference between two random draws from a GP. For Group 0 I use the difference between draws from RBF GPs, and for Group 1 I use a weighted combination of the Group 0 CATE and the difference between draws from Matern 3/2 GPs with a smaller lengthscale (a sketch of this construction appears at the end of this section). The number of observations is skewed 75/25 towards Group 0. Here's an example of the two CATEs:

Here are the four underlying GPs that went into constructing these CATEs.

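For reference, here is a minimal sketch of how a CATE pair like this can be generated (using the actual group labels, as in this subsection). The lengthscales and mixing weight are placeholders, not my actual values.
```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, Matern

def toy1_cates(x, w=0.5, smooth_ls=1.0, rough_ls=0.2, seed=0):
    """Sketch of the TOY1 CATEs: each is the difference of two independent GP draws."""
    rng = np.random.default_rng(seed)
    X = x.reshape(-1, 1)

    def gp_draw(kernel):
        K = kernel(X) + 1e-8 * np.eye(len(X))  # jitter for numerical stability
        return rng.multivariate_normal(np.zeros(len(X)), K)

    # Group 0: difference of two smooth RBF GP draws.
    cate0 = gp_draw(RBF(smooth_ls)) - gp_draw(RBF(smooth_ls))
    # Group 1: weighted combination of the Group 0 CATE and the difference of two
    # rougher Matern 3/2 draws with a smaller lengthscale.
    rough = gp_draw(Matern(rough_ls, nu=1.5)) - gp_draw(Matern(rough_ls, nu=1.5))
    cate1 = (1 - w) * cate0 + w * rough
    return cate0, cate1
```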