# STOC Reviews

## Overview

We believe that we can provide an effective rebuttal to the main concerns raised by reviewers A and B. Reviewer C did not raise substantive concerns. We include the rebuttal below.

## Reviewer A

### On the epistemic framework

#### Reviewer Comment

> The main drawback of the paper is that the model is quite restrictive, as the authors admit in Section 1.2. They assume there is a ground truth "correct" outcome, which I have a hard time imagining in most real elections.

#### Answer

The assumption of objectively correct and incorrect alternatives was introduced by Condorcet in the 18th century and nowadays underlies one of the central research directions in computational social choice (e.g., “When do noisy votes reveal the truth?” (EC-13) and “Making right decisions based on wrong opinions” (EC-17)). We cannot think of an application of voting to which liquid democracy is inapplicable, so the ground-truth assumption seems as relevant to liquid democracy as it is to voting in general. Furthermore, the paper of Kahng et al. (2021), arguably the best-known paper in the computational study of liquid democracy, relies on the same assumption, as do follow-up papers such as that of Caragiannis and Micha (2019). In order to shed new light on this influential line of work, our paper must start from the same basic (and well-established) assumption.

This is more than a central theoretical assumption in the field. Delegation of responsibilities is already in use in companies that, for instance, handle financial portfolio management: in these settings, sector-specific expertise is required to understand financial opportunities, so stakeholders can trust in-house experts to determine the best investment strategies (that is, a ground truth). More generally, liquid democracy can be used in institutions when the goal is to strengthen the outcome's legitimacy while identifying pockets of expertise that are not apparent to a centralized leadership.
Several companies are exploring the possibility of using liquid democracy, and we have been directly involved with some institutions that are considering doing so (e.g., CoDesignIt, YoungShareholders, Krause House, …). The size of these entities usually implies that the underlying network is both relatively large and well connected, so our model is particularly well suited to inform real-world deployments of liquid democracy in these instances.

### On random delegations

#### Reviewer Comment

> I also don't find the assumption of random delegation realistic on any reasonably societal scale, since voters will get information via some social network that itself might be quite preferentially attached (like Twitter).

#### Answer

We agree that delegations may further depend on the graph topology in some circumstances. However, this is not an argument against random delegations as such: it is common to model complex interactions with well-parametrized random models, which capture interesting trends. Further, exploring delegation patterns influenced by the graph structure is a natural next step. Our proofs, in a simpler setup, are not trivial and constitute a necessary first step toward these subsequent questions. As mentioned above, our setup is also motivated by real-world examples (not Twitter, though, agreed).

### On the difference between new and previous findings

#### Reviewer Comment

> The negative results essentially say that delegation mechanisms cannot outperform either direct democracy or dictatorships. The authors show that the above problem does not arise if we remove the social network.

#### Answer

Kahng et al. (2021) and Caragiannis and Micha (2019) do not conclude that “delegation mechanisms cannot outperform either direct democracy or dictatorships [with a] social network,” and we do not conclude that “the above problem does not arise if we remove the social network.” Kahng et al.
(2021) and Caragiannis and Micha (2019) conclude that *there exist a particular network structure and competence assignment* such that a stronger version of upward delegation (one of our models) is worse than direct democracy and dictatorship, respectively. Similarly, there exist a particular network structure and competence assignment such that liquid democracy outperforms both direct democracy and dictatorship. We are precisely interested in understanding how likely liquid democracy is to outperform direct democracy for a given social network. To start answering this question we study the case of complete graphs, but note that our model “can be seen as embedded into the delegation process, where the probability that $i$ delegates to $j$ takes into account the probability that $i$ is familiar with $j$ in the first place.” In other words, “we can extend our results to a model where a directed social network is first sampled, and then a $(q, \varphi)$-model is followed. The social network must be sampled such that the neighbors of each voter are chosen uniformly at random, although the number of such neighbors could follow any small-tailed distribution. Intuitively, delegation proportional to weighting the neighbors of $i$ (rather than the entire population) is equivalent to a possibly different weighting over the entire population.”

## Reviewer B

### On why the core lemma is about the interplay between concentration of power and competence, and why the results don't "confound different things"

#### Reviewer Comment

> If $\varphi(x,y)$ is proportional to $y$, and we have $p_1 = 1$, $p_i < 0.6$ for all others, then a constant fraction of voters will delegate to 1. The reason this can't happen in the model is not because of the delegation process, but since the competence levels are *sampled* from a fixed distribution independent of $\varphi$! Thus it is not possible (i.e., highly unlikely) that there is just one representative substantially better than all.
> (a) For any $y$, there will always be a constant fraction of voters with $p_i > y$, and (b) since delegation cannot distinguish between them, there cannot be power concentration. [...] If I am right, then the model and the results as presented are confounding several things. Separating them (in particular separating power concentration from better aggregated accuracy, i.e., conditions (1) and (2) of the core Lemma) can clarify those things, yield more general results, and allow follow-up studies to use these results in other models.

#### Answer

This is incorrect. If $p_1 = 1$, $p_i < 0.6$ for all other $i$, and $\varphi(x, y) \propto y$, then every voter that delegates will choose voter $1$ with probability $\frac{1}{1+\sum_{i = 2}^n p_i}$ and with the remaining probability will delegate to another voter. As long as the average competence $\frac{\sum_{i = 1}^n p_i}{n}$ is lower bounded by a constant $> 0$ (a very mild assumption), this delegation probability is $O(1/n)$. Therefore, with high probability, voter $1$ will receive no more than $O(\log n)$ direct delegations. Our analysis shows that this remains true even after including indirect delegations (although this fact is not obvious), and hence voter $1$ receives a sublinear number of votes.

However, even in extreme cases with linear maximum weight, do-no-harm can still be satisfied. In the reviewer's example, the voter that would have received a linear number of delegations is correct with probability $1$, so this would likely not lead to harm. But even this assumption is unnecessary. Say $1/4$ of all voters delegate to a single voter $i$ with competence $p_i < 1$. Here, if the remaining voters don't delegate and have average competence $> 2/3$, then, with high probability, even with delegations, the weighted majority will be correct.
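This back-of-the-envelope calculation is easy to check numerically. The sketch below is an illustration rather than the paper's full model: it assumes that *every* other voter delegates and that their competences are drawn uniformly from $[0.3, 0.6]$ (both choices are ours, made for concreteness), and it counts only direct delegations to voter $1$:

```python
import random

def simulate_direct_delegations(n, seed=0):
    """Count direct delegations received by voter 1 (index 0) when
    phi(x, y) is proportional to y.  Voter 1 has competence 1; every
    other voter's competence is drawn from [0.3, 0.6] (an illustrative
    assumption), and, pessimistically, every other voter delegates."""
    rng = random.Random(seed)
    p = [1.0] + [rng.uniform(0.3, 0.6) for _ in range(n - 1)]
    total = sum(p)
    received = 0
    for i in range(1, n):
        # voter i delegates to voter 1 with probability proportional to
        # p[0] among the competences of all other voters -- Theta(1/n) here
        if rng.random() < p[0] / (total - p[i]):
            received += 1
    return received
```

Since each of the $n - 1$ delegators picks voter $1$ with probability about $1/(0.45\,n)$, the expected count is roughly $1/0.45 \approx 2.2$ regardless of $n$, consistent with the $O(\log n)$ high-probability bound for direct delegations (indirect delegations, which our analysis also handles, are not simulated here).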
This highlights the fact that our core lemma is not a necessary condition; we could easily strengthen it to include instances like these, where a single voter receives a linear number of votes, or even other corner cases depending on the interplay between the weights and the competence increase. It does, however, capture succinct, interesting, and general patterns that are useful for our analysis.

<!-- As in the reviewer's example, fix $p_1$ and set $p_i = p^*$ for all others. Suppose the delegation mechanism is such that $w_1 = an$, $w_i = 1$ for $(1-a)n$ voters, and $w_i = 0$ for the rest. If $p_1 = 1$, do-no-harm holds as long as $\sum_i w_i p_i > n/2$, which is the case here as long as $p^* > \frac{1/2 - a}{1 - a}$. Now consider, for instance, $p_1 = 3/4$, $a = 1/3$, $p^* = 3/5$: then $\frac{1}{n}\sum_i w_i p_i = 13/20 > \frac{1}{n}\sum_i p_i > 1/2$, but the gain converges to $-1/4$, so there is harm (whenever voter $1$ errs, which happens with probability $1/4$, the remaining weight $2n/3$ at competence $3/5$ falls short of the $n/2$ threshold with high probability). We could state a core lemma for that specific case, where one voter has linear weight and the others have unit weight, with a corresponding condition (2) relating the maximal linear weight to the competence levels. And we could do so for every case you can imagine (what if the other voters above do not have unit weight, but some have $w_i = f_i(n)$?). -->

<!-- Crucially, and related to your point on concentration of power (see next answer), what matters is the *interplay* between the post-delegation competence assignment and the associated weight assignment. In our strong core lemma, we in fact relate the average post-delegation competence to the maximum weight. What matters is not concentration of power alone, let alone the distribution from which the $p_i$'s are sampled. -->

### On Concentration of Power

#### Reviewer Comment

> We know that in reality there is (at least sometimes/often) power concentration. Yet the message of the paper is that we should not worry about power concentration.
#### Answer

As we explain in the paper (and have mentioned in a previous rebuttal to this reviewer), the message of our paper is not that we should not worry about concentration of power; we even discuss instances where concentration of power has happened in reality. Rather, the point of our work is to understand when liquid democracy does no harm, a question that relates to (i) when concentration of power happens and (ii) when it impedes the discovery of the ground truth, which also depends on the voters' competence levels. Some of the experiments we report showed vast concentration of power. Other recent experiments on liquid democracy found limited concentration of power, in line with our theoretical predictions. Our model necessarily cannot capture all possible settings, but it does fit some practical ones. Hence, we are interested in identifying the cases in which the latter outcome occurs.

<!-- So, yes, we are interested in understanding the cases where concentration of power is balanced by an increase in expertise (as in the experiments mentioned above) and the cases where it turns into dictatorship (like the case we mention twice in our paper). -->

<!-- The Condorcet Jury Theorem identified regimes where collective intelligence arises and others where it does not. We believe the same is true for liquid democracy, and we endeavor to uncover the positive and negative regimes so that we know precisely when one should really be worried about concentration of power. -->

### Citations

#### Reviewer Comment

> A paper that I have already mentioned in a previous review is:
>
> Magdon-Ismail, Malik, and Lirong Xia. "A mathematical model for optimal decisions in a representative democracy." Advances in Neural Information Processing Systems 31 (2018).
>
> I read this paper again now and it is very much related, although the model is somewhat different. In that paper voters form "groups" that each elect a "representative" to vote.
> In practice this is equivalent to all voters in the group delegating to the representative. In particular, the "max-process" and "noisy-max process" resemble the models in the current paper where delegation is always or usually to better voters. [...]
>
> Many papers on bin size distributions; what is known:
> - https://daneshyari.com/article/preview/1151260.pdf
> - Chung, Fan, and Linyuan Lu. "Concentration inequalities and martingale inequalities: a survey." Internet Mathematics 3.1 (2006): 79-127, Section 9.
> - "Generalizations of Polya's urn problem." Fan Chung, Shirin Handjani, and Doug Jungreis.

#### Answer

We are aware of Magdon-Ismail and Xia (2018). They work with pre-defined groups, look for the maximal probability of correctness, and consider costly voting. Our assumptions, goals, and methods are quite different. However, we have added some more discussion of related work on representative democracy in the epistemic framework and have included this citation.

Next, in the STOC version, we did cite Chung et al. (2003): they study the distribution of the number of components with $k$ people at time $t$. However, these bounds are never strong enough to conclude that there are no bins of any certain size, let alone none of any sufficiently large size, which is what we need and prove. Chung and Lu (2006) is a survey citing Chung et al. (2003). Stark (2006) extends the result of Chung et al. (2003) to the case where the probability of attachment depends on $n$, which is not of interest here. As we discuss in the paper, the urn process we analyze is extremely well studied. It just so happens that none of the previous papers consider the property (an upper bound on the largest bin size) that we need.
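To illustrate the property in question, here is a minimal sketch of a generic preferential-attachment urn (the specific dynamics and the new-bin probability of $1/2$ are our illustrative assumptions, not the exact process analyzed in the paper), tracking the size of the largest bin:

```python
import random

def urn_max_bin(steps, new_bin_prob, seed=0):
    """Generic preferential-attachment urn (illustrative, not the paper's
    exact process): each arriving ball opens a new bin with probability
    new_bin_prob, otherwise it joins an existing bin chosen with
    probability proportional to the bin's current size."""
    rng = random.Random(seed)
    sizes = [1]   # sizes[b] = number of balls currently in bin b
    owners = [0]  # owners[i] = bin of the i-th ball (enables O(1)
                  # size-biased sampling: pick a uniform ball, use its bin)
    for _ in range(steps - 1):
        if rng.random() < new_bin_prob:
            sizes.append(1)
            owners.append(len(sizes) - 1)
        else:
            b = owners[rng.randrange(len(owners))]  # size-biased bin choice
            sizes[b] += 1
            owners.append(b)
    return max(sizes), len(owners)
```

In such processes the largest bin typically grows polynomially but sublinearly in the number of balls, so its fraction of the total vanishes; bounds of this form (on the maximum, rather than on the counts of bins of each size) are what our analysis requires.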