# "On the Economics of Anonymity"
###### tags: `Tag(HashCloak - Validator Privacy)`
Author(s): Alessandro Acquisti, Roger Dingledine, and Paul Syverson
Paper: <https://www.freehaven.net/doc/fc03/econymics.pdf>
### Table of Contents
[toc]
:::info
>Abstract: Decentralized anonymity infrastructures are still not in wide use today. While there are technical barriers to a secure robust design, our lack of understanding of the incentives to participate in such systems remains a major roadblock. Here we explore some reasons why anonymity systems are particularly hard to deploy, enumerate the incentives to participate either as senders or also as nodes, and build a general model to describe the effects of these incentives. We then describe and justify some simplifying assumptions to make the model manageable, and compare optimal strategies for participants based on a variety of scenarios.
:::
## 4 Applying The Model
Consider a set of $n_s$ agents interested in sending anonymous communications.
* Imagine that there is only one system that can be used to send anonymous messages, and one other system to send non-anonymous messages.
* Each agent has three options:
    * only send her own messages through the mix-net;
    * send her messages but also act as a node, forwarding messages from other users;
    * or not use the system at all, either by sending her messages without anonymity or by not sending them.
* Thus initially we do not consider the strategy of choosing to be a bad node, or additional honest strategies like creating and receiving dummy traffic.
### 4.1 Adversary
Although strategic agents cannot choose to be bad nodes in this simplified scenario, we still assume there is a percentage of bad nodes and that agents respond to this possibility. Specifically we assume a global passive adversary (GPA) that can observe all traffic on all links (between users and nodes, between nodes, and between nodes or users and recipients).
### 4.2 Honest Agents
**Myopic Agents:** Myopic agents do not consider the long-term consequences of their actions. They simply consider the status of the network and, depending on the payoffs of the one-period game, adopt a certain strategy. Suppose that a new agent with a privacy sensitivity $v_{{a}_i}$ is considering using a mix-net with (currently) $n_s$ users and $n_h$ honest nodes.
Then if
![](https://i.imgur.com/h7eaEJz.png)
agent $i$ will choose to become a node in the mix-net. If
![](https://i.imgur.com/5G7w0QY.png)
then agent $i$ will choose to be a user of the mix-net. Otherwise, $i$ will simply not use the mix-net.
Our goal is to highlight the economic rationale implicit in the above inequalities. In the first case, agent $i$ compares the benefit of acting as a node, namely the contribution to her own anonymity, against the costs of doing so.
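The decision rule above can be sketched in code. This is a minimal toy model, not the paper's actual inequalities (which appear only as images above): the `anonymity` proxy, the cost parameters, and the payoff forms are all illustrative assumptions.

```python
# Hypothetical sketch of the myopic decision rule: pick whichever of
# node / user / abstain yields the highest one-period payoff.
# `anonymity`, `cost_node`, and `cost_user` are stand-ins for the terms
# in the paper's inequalities, not its exact notation.

def anonymity(n_s: int, n_h: int) -> float:
    """Toy proxy for anonymity: grows with users and honest nodes."""
    return n_s * n_h / (n_s * n_h + 1)

def myopic_choice(v_i: float, n_s: int, n_h: int,
                  cost_node: float, cost_user: float) -> str:
    """Agent i, with privacy sensitivity v_i, joins a mix-net that
    currently has n_s users and n_h honest nodes."""
    # Becoming a node adds one user and one honest node to the network.
    payoff_node = v_i * anonymity(n_s + 1, n_h + 1) - cost_node
    # Becoming a plain user adds one user only.
    payoff_user = v_i * anonymity(n_s + 1, n_h) - cost_user
    payoff_out = 0.0  # send non-anonymously, or do not send at all
    best = max(payoff_node, payoff_user, payoff_out)
    if best == payoff_node:
        return "node"
    if best == payoff_user:
        return "user"
    return "abstain"
```

With these toy numbers, an agent in an empty network runs a node (only then does she get any anonymity at all), while an agent with zero privacy sensitivity stays out.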
**Strategic Agents: Simple case** Strategic agents take into consideration the fact that their actions will trigger responses from the other agents.
We start by considering only one-on-one interactions. First we present the case where each agent knows the other agent’s type, but we then discuss what happens when there is uncertainty about the other agent’s type.
* Suppose that each of agent $i$ and agent $j$ considers the other agent’s reaction function in her decision process. Then we can summarize the payoff matrix in the following way:
![](https://i.imgur.com/a4GJEWP.png)
As before, each agent has a trade-off between the cost of traffic and the benefit of traffic when being a node, and a trade-off between having more nodes and fewer nodes. In addition to the previous analysis, now the final outcome also depends on how much each agent knows about whether the other agent is honest, and how much she knows about the other agent’s sensitivity to privacy.
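The payoff matrix itself is only available as the image above, so the following sketch uses purely illustrative numbers: running a node costs 1, and each participant enjoys an anonymity benefit of 2 as long as at least one of the two runs a node. It shows how one would check pure-strategy equilibria of such a two-player game.

```python
# Hedged illustration of the one-on-one game: strategies are "N"
# (act as a node) and "S" (only send). Payoff values are assumptions,
# not the paper's actual matrix.

STRATS = ("N", "S")

def payoff(a_i: str, a_j: str) -> float:
    """Payoff to the agent playing a_i against an agent playing a_j."""
    benefit = 2.0 if "N" in (a_i, a_j) else 0.0  # someone runs a node
    cost = 1.0 if a_i == "N" else 0.0            # node-running cost
    return benefit - cost

def pure_nash() -> list[tuple[str, str]]:
    """Return all pure-strategy profiles where both agents are
    best-responding to each other."""
    eq = []
    for a in STRATS:
        for b in STRATS:
            best_a = all(payoff(a, b) >= payoff(x, b) for x in STRATS)
            best_b = all(payoff(b, a) >= payoff(x, a) for x in STRATS)
            if best_a and best_b:
                eq.append((a, b))
    return eq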
**Strategic Agents: Multi-player Case.** Each player now considers the strategic decisions of a vast number of other players.
* We define the payoff of each player as the average of his payoffs against the distribution of strategies played by the continuum of the other players.
* In other words, for each agent we have $u_i = \frac{1}{n_s - 1} \sum_{j \neq i} u_i (a_i, a_j)$, where the notation represents the comparison between one specific agent $i$ and all the others.
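This averaging can be sketched directly. The pairwise payoff function below is a hypothetical stand-in (agent $i$ gains cover when the other participates, and pays a cost when she herself runs a node); only the averaging structure reflects the text.

```python
# Hedged sketch: agent i's multi-player payoff as the average of her
# pairwise payoffs against the distribution of the other players'
# strategies. The pairwise payoff values are illustrative assumptions.

from collections import Counter

def pairwise_payoff(a_i: str, a_j: str) -> float:
    """Toy pairwise payoff: i gains cover traffic if j participates,
    and pays a cost if i herself acts as a node."""
    gain = 1.0 if a_j in ("node", "user") else 0.0
    cost = 0.5 if a_i == "node" else 0.0
    return gain - cost

def average_payoff(a_i: str, others: list[str]) -> float:
    """Average a_i's payoff over the empirical strategy distribution
    of all other agents."""
    counts = Counter(others)
    total = sum(pairwise_payoff(a_i, a_j) * k for a_j, k in counts.items())
    return total / len(others)
```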
In order to have a scenario where the system is self-sustaining and free, and the agents are of high and low types (sensitivity), the actions of the agents must be visible and the agents themselves must agree to react together to any deviation of a marginal player.
* In realistic scenarios, however, this will involve very high transaction/coordination costs, and will require an extreme (and possibly unlikely) level of rationality for the agents.
* This equilibrium will also tend to collapse when the benefits from being a node are not very high compared to the costs.
* Paradoxically, it also breaks down when an agent trusts another so much that she prefers to delegate away the task of being a node.
## 5 Alternate Incentive Mechanisms
As the self-organized system might collapse under some of the conditions examined above, we discuss now what economic incentives we can get from alternative mechanisms.
1. *Usage fee:* If participants pay to use the system, the “public good with free-riding” problem turns into a “clubs” scenario.
2. *“Special” agents:* Such agents have a payoff function that considers the social value of having an anonymous system or are otherwise paid or supported to provide such service.
3. *Public rankings and reputation:* A higher reputation not only attracts more cover traffic but is also a reward in itself.
## 6 A Few More Roadblocks
### 6.1 Authentication in a Volunteer Economy
It may in fact be plausible to build a strong anonymity infrastructure from a widespread group of independent nodes that each want good anonymity for their own purposes.
>Volunteers are a problem: users don’t know the node operators, and don’t know whether they can trust them.
Even when this is feasible, identifying individuals is a problem. Classic authentication considers whether it’s the right entity, but not whether the authenticated parties are distinct from one another. One person may create and control several distinct online identities.
### 6.2 Dishonest Nodes vs. Lazy Nodes
We have primarily focused on the strategic motivations of honest agents, but the motivations of dishonest agents are at least as important.
A flat-out dishonest agent participates only to compromise anonymity or reliability.
* In doing so, however, a dishonest agent must consider the costs of reaching and maintaining a position from which those attacks are effective. This will probably involve gaining reputation and acting as a node for an extended period of time, which is itself a cost if the goal is simply to break reliability.
* Such adversaries will be in an arms race with protocol developers to stay undetected despite their attacks.
A “lazy” node, on the other hand, wants to protect her own anonymity, but keeps her costs lower by not forwarding or accepting all of her incoming traffic.
* By doing so, this node decreases the reliability of the system. While this strategy might be sounder than that of the flat-out dishonest node, it again exposes the lazy node to the risk of being recognized as a disruptor of the system.
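The reliability damage from lazy nodes can be illustrated with a back-of-the-envelope calculation, assuming nodes are picked uniformly and drops are independent; both assumptions and all parameter values are ours, not the paper's.

```python
# Rough sketch: if a fraction of nodes is lazy and each lazy node drops
# a message with some probability, end-to-end delivery falls
# geometrically with path length.

def delivery_prob(path_len: int, frac_lazy: float, drop_rate: float) -> float:
    """Probability a message survives all hops, assuming hops are drawn
    uniformly from the node population and drops are independent."""
    per_hop = 1.0 - frac_lazy * drop_rate
    return per_hop ** path_len
```

For example, with half the nodes lazy and each dropping half their traffic, a two-hop path delivers only about 56% of messages, so even mild laziness compounds quickly on longer paths.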
### 6.3 Bootstrapping The System And Perceived Costs
We must discuss how a mix-net system with distributed trust can come to be. We face a paradox here: agents with high privacy sensitivity want lots of traffic in order to feel secure using the system. They need many participants with lower privacy sensitivities using the system first. The problem lies in the fact that there is no reason to believe the lower sensitivity types are more likely to be early adopters.
Difficulties in bootstrapping the system and the myopic behavior of some users might make the additional incentive mechanisms discussed in **Section 5** preferable to a market-only solution.
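The bootstrapping paradox above can be made concrete with a toy adoption dynamic: each agent joins only once current traffic exceeds a sensitivity-dependent threshold, so high-threshold (high-sensitivity) agents wait for low-threshold early adopters. The threshold model and update rule are our assumptions, not the paper's.

```python
# Toy illustration of bootstrapping: agents join only when the current
# number of users meets their personal threshold, so adoption cascades
# only if enough low-threshold agents exist to start it.

def simulate_adoption(thresholds: list[float], rounds: int = 20) -> int:
    """Each round, any agent whose threshold is at most the user count
    at the start of the round joins. Returns the final user count."""
    joined = [False] * len(thresholds)
    for _ in range(rounds):
        n_users = sum(joined)
        for i, t in enumerate(thresholds):
            if not joined[i] and t <= n_users:
                joined[i] = True
    return sum(joined)
```

A population with a zero-threshold early adopter cascades to full adoption, while a population where everyone waits for two existing users never gets off the ground.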
### 6.4 Customization And Preferential Service Are Risky Too
Leaving security decisions up to the user is traditionally a way to transfer cost or liability from the vendor to the customer; but in strong anonymity systems it may be unavoidable. For example, the sender might choose how many nodes to use, whether to use mostly nodes run by her friends, whether to send in the morning or evening, etc. After all, only she knows the value of her anonymity. But this choice also threatens anonymity — different usage patterns can help distinguish and track users.
Limiting choice of system-wide security parameters can protect users by keeping the noise fairly uniform, but introduces inefficiencies; users that don’t need as much protection may feel they’re wasting resources. Yet we risk anonymity if we let users optimize their behavior.
## 7 Future Work
There are a number of directions for future research:
* **Dummy traffic:** Dummy traffic increases costs, but it also increases anonymity.
* In this extension we should study bilateral or multilateral contracts between agents, contractually forcing each agent to send to another agent(s) a certain number of messages in each period.
* With these contracts, if the sending agent does not have enough real messages going through its node, it will have to generate them as dummy traffic in order not to pay a penalty.
* **Reliability:** As noted above, we should add reliability issues to the model.
* **Strategic dishonest nodes:** As we discussed, it is probably more economically sound for an agent to be a lazy node than an anonymity-attacking node.
* Assuming that strategic bad nodes can exist, we should study the incentives to act honestly or dishonestly and the effect on reliability and anonymity.
* **Unknown agent types:** We should extend the above scenarios further to consider a probability distribution for an agent’s guess about another agent’s privacy sensitivity.
* **Comparison between systems:** We should compare mix-net systems to other systems, as well as use the above framework to compare the adoption of systems with different characteristics.
* **Exit nodes:** We should extend the above analysis to consider specific costs such as the potential costs associated with acting as an exit node.
* **Reputation:** Reputation can have a powerful impact on the framework above, in that it changes the assumption that traffic will distribute uniformly across nodes. We should extend our analysis to study this more formally.
* **Information theoretic metric:** We should extend the analysis of information theoretic metrics in order to formalize the functional forms in the agent payoff function.
## 8 Conclusions
We have described the foundations for an economic approach to the study of strong anonymity infrastructures. We focused on the incentives for participants to act as senders and nodes. Our model does not solve the problem of building a more successful system — but it does provide some guidelines for how to think about solving that problem.
Much research remains for a more realistic model, but we can already draw some conclusions:
* Systems must attract cover traffic (many low-sensitivity users) before they can attract the high-sensitivity users.
* Weak security parameters (e.g. smaller batches) may produce *stronger* anonymity by bringing more users.
* But to attract this cover traffic, they may well have to address the fact that most users do not want (or do not realize they want) anonymity protection.
* High-sensitivity agents have incentive to run nodes, so they can be certain their first hop is honest.
* There can be an optimal level of free-riding: in some conditions these agents will opt to accept the cost of offering service to others in order to gain cover traffic.
* While there are economic reasons for distributed trust, the deployment of a completely decentralized system might involve coordination costs which make it unfeasible.
* A central coordination authority to redistribute payments may be more practical, but could provide a trust bottleneck for an adversary to exploit.