We thank the reviewers for their helpful comments. We have revised the paper according to the minor comments. Below we respond to the reviewers' main concerns.
## Review A
1. We will call it MPDP, but we are open to alternative names suggested by the reviewer.
2. $n$ is public information: the adversary can learn the total number of users simply by observing network traffic. Moreover, $n$ is needed to compute the final result (e.g., Equations 2 and 3). We will add more discussion of this point.
3. Regarding the final comment about having users choose the path randomly: it is true that users need to synchronize on the chosen path. We will clarify this. Thanks for pointing it out.
## Review B
* **Comment on Section 4 being immediate**: In Section 4, we analyze how different adversary models (collusion among different groups of entities) affect the privacy guarantees of shuffling/mixing and of homomorphic encryption. As the reviewer points out, the analysis for each individual combination is simple. However, previous work in this area failed to consider some of these adversarial models. For example, in Section 4.5 we analyze an adversary missed by existing work [7, 15, 23]: that work assumes colluding users share only their true values, whereas users who collude with the adversary can also share their perturbed values. Our main contribution in Section 4 is a systematization of adversarial models.
Also, we would argue that introducing a framework that makes the privacy properties of the different settings easy to analyze and understand is a contribution in itself. In fact, while writing this paper we considered several other factors that could impact privacy and would have made the analysis more mathematically sophisticated, but we chose to abstract them away. For example, in Section 4.4 we analyze the perturbation properties in this setting and show that they do not enable the parties to gain more information. For Section 5.3, we also analyzed the possibility that the shufflers drop some reports (instead of adding fake reports), but decided to leave this analysis out of the paper to keep the exposition simple. We can add these discussions.
* **Comment on partial compromise of shufflers**: In Section 5, we can add a discussion of the partial compromise of some shufflers; the analysis is standard. Under partial compromise, the privacy guarantees degrade gracefully, i.e., they remain proportional to the number of uncompromised shufflers rather than being fully broken (cf. Equations 5 and 6).
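To make the graceful-degradation intuition concrete, here is a minimal sketch. The parallel-shuffler arrangement, the round-robin assignment, and the function name `avg_anonymity` are all our own illustration, not the paper's model: each user's report passes through one shuffler, a compromised shuffler's permutation is assumed known to the adversary, and we measure the average anonymity-set size.

```python
def avg_anonymity(n_users, n_shufflers, compromised):
    """Average anonymity-set size when the shufflers in `compromised`
    leak their permutations (illustrative model, not the paper's)."""
    # Round-robin assignment of users to shufflers.
    assignment = [u % n_shufflers for u in range(n_users)]
    batch_size = {s: assignment.count(s) for s in range(n_shufflers)}
    sizes = []
    for u in range(n_users):
        s = assignment[u]
        # A compromised shuffler reveals its permutation, so its users
        # are fully traced (anonymity set of size 1); users of an honest
        # shuffler remain indistinguishable within that shuffler's batch.
        sizes.append(1 if s in compromised else batch_size[s])
    return sum(sizes) / n_users
```

In this toy model, with 100 users and 10 shufflers, compromising 5 of the 10 shufflers shrinks the average anonymity set from 10 to 5.5 rather than to 1, which is the "proportional, not fully broken" behavior described above.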
* **Comment on partial compromise of users**: We note that there are two kinds of _compromise_ for users: sharing their true values with the adversary, and sharing their reported (perturbed) values. Our definitions are based on differential privacy, which already considers the worst case in which all other users' true values are known to the adversary. To analyze the case where some fraction of users share their reported values, one can use Theorems 4 and 5. We can also add this discussion of the partial compromise of users, which is likewise standard.
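For intuition, a sketch of why this analysis is standard. The generic amplification-by-shuffling bound below is the typical form from the literature, not the paper's Theorems 4 and 5, and the symbols $\varepsilon_0$ and $t$ are our own notation: if each of $n$ users submits an $\varepsilon_0$-locally-private report and $t$ colluding users reveal their reports, the adversary can discard those reports before analyzing the shuffle, so the effective population shrinks from $n$ to $n - t$:

```latex
% Typical amplification-by-shuffling bound (up to constants),
% before and after t users reveal their perturbed reports:
\varepsilon \;=\; O\!\left(\varepsilon_0 \sqrt{\frac{\log(1/\delta)}{n}}\right)
\qquad\longrightarrow\qquad
\varepsilon \;=\; O\!\left(\varepsilon_0 \sqrt{\frac{\log(1/\delta)}{\,n - t\,}}\right)
```

The guarantee thus weakens smoothly as $t$ grows rather than collapsing, which is why the added discussion would be standard.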