# PriFi: Low-Latency Anonymity for Organizational Networks

###### tags: `Tag(HashCloak - DC Net Readings)`

Author(s): Ludovic Barman, Italo Dacosta, Mahdi Zamani, Ennan Zhai, Apostolos Pyrgelis, Bryan Ford, Jean-Pierre Hubaux, and Joan Feigenbaum

Paper: [https://arxiv.org/pdf/1710.10237.pdf](https://arxiv.org/pdf/1710.10237.pdf)

### Table of Contents
[toc]

:::info
> Abstract: Organizational networks are vulnerable to traffic-analysis attacks that enable adversaries to infer sensitive information from the network traffic — even if encryption is used. Typical anonymous communication networks are tailored to the Internet and are poorly suited for organizational networks. We present PriFi, an anonymous communication protocol for LANs: it protects users against eavesdroppers and provides traffic-analysis resistance. PriFi builds on Dining Cryptographers networks but reduces the high communication latency of prior work via a new client/relay/server architecture, in which a client's packets remain on their usual network path without additional hops, and in which a set of remote servers assist the anonymization process without adding latency. PriFi also solves the challenge of equivocation attacks, which are not addressed by related works, by encrypting the traffic based on the communication history. Our evaluation shows that PriFi introduces a small latency overhead (≈ 100ms for 100 clients) and is compatible with delay-sensitive applications such as VoIP.
:::

![](https://i.imgur.com/rcS3J0b.png)
![](https://i.imgur.com/fIyRsC7.png)

## 3 System Overview

PriFi is similar to a low-latency proxy service (e.g., a VPN or SOCKS tunnel) working within a LAN, creating tunnels between clients and the PriFi relay (e.g., the LAN's router).
* Informally, these tunnels protect honest clients' traffic from eavesdropping attacks.
* The traffic is anonymized, preventing a third party from attributing a packet or flow to a specific device or end-user.
* Additionally, unlike traditional proxy services:
    1. The server does not need to be trusted, i.e., the security properties hold even if it is compromised.
    2. The communication provably resists traffic-analysis attacks.

### 3.1 System Model

Consider $n$ clients $C_1, ..., C_n$ that are part of an organizational network and are connected to a *relay* $R$.
* The relay is the gateway that connects the LAN to the Internet and typically is already part of the existing infrastructure ($e.g.$, a LAN or WLAN router, Figure 1).
* The relay can process regular network traffic in addition to running the PriFi software.
* Hence, PriFi can be deployed to a network with minimal changes.
* On the Internet, there is a small set of $m$ servers called guards $S_1, ..., S_m$, whose role is to assist the relay in the anonymization process.
* These guards could be maintained by independent third parties, similar to Tor's volunteer relays, or sold as a "privacy service" by companies.
* To maximize diversity and collective trustworthiness, the guards are distributed around the world, preferably across different jurisdictions.
* Therefore, the connections between the guards and the relay are assumed to be high latency.

### 3.2 Threat Model

Let $\mathcal{A}$ be a computationally-bounded global passive adversary who observes all network traffic.
* In addition to the passive adversary, since PriFi is a closed-membership system, we consider and address active attacks from insiders, but not active attacks from outsiders.
* Clients can be controlled by the adversary, but we require at least two honest clients at all times.
    * Otherwise, de-anonymization is trivial.
* The guards are in the anytrust model \[12, 67, 73\]:
    * At least one guard is honest (but a client does not need to know which one), and they are all highly available.

The relay is considered malicious, $i.e.$, it might actively try to de-anonymize honest users or perform arbitrary attacks, but it will not perform actions that only affect the availability of PriFi communications, such as delaying, corrupting or dropping messages.
* On one hand, a fully malicious relay would make little sense in our setting, where it is the gateway that connects the LAN to the Internet:
    * Due to its position in the network, it can degrade or deny service for any protocol anyway (e.g., drop all packets).
    * Moreover, the relay is part of the organization's infrastructure, and users would take administrative action if the network were not operating properly.
* On the other hand, an adversarial model where the relay is honest-but-curious would be too weak:
    * In practice, if the relay is compromised (e.g., it gets hacked), it could perform active attacks to de-anonymize clients.
* Therefore, we use the formulation "malicious-but-available" to reflect that the relay needs to faithfully forward messages and to provide service,
    * but if the relay maliciously attacks the protocol, only the availability suffers, not the security properties.

### 3.3 Goals

#### 3.3.1 Security Goals
* \[G1\] **Anonymity**: An adversary has negligible advantage in attributing an honest user's message to its author.
    * This includes traffic-analysis resistance ($e.g.$, the adversary can observe network-level traffic features).
* \[G2\] **Accountability**: Misbehaving insiders are traceable without affecting the anonymity of honest users.

#### 3.3.2 System Goals
* \[G3\] **Low latency**: The delay introduced by PriFi should be small enough to support network applications with high QoS requirements,
    * $e.g.$, VoIP and videoconferencing.
* \[G4\] **Scalability**: PriFi should support small to medium organizations,
    * $i.e.$, up to a few hundred users, a number typically observed in ICRC sites (Figure C.1).

#### 3.3.3 Non-Goals

PriFi does not target the following goals:
* Hiding *all* traffic features:
    * PriFi protects one honest user's traffic among all honest users' traffic, but does not hide global/aggregate communication volumes or time-series of packets.
    * Informally, an eavesdropper could learn that some honest user is browsing the Web, or using VoIP, but not which honest user.
    > This point is fairly orthogonal to the design of PriFi, and can be addressed by adding padding and/or dummy traffic.
* External sender/receiver anonymity:
    * PriFi's anonymity set consists of the set of users connected to a LAN.
    * Users outside the LAN are not anonymous.
    * If both sender and receiver are part of a PriFi LAN (not necessarily the same one), the protocol has sender and receiver anonymity.
* Intersection attacks (correlating users' presence on the PriFi network with messages or other users):
    * PriFi has no perfect solution to this problem.
    * Mitigations are discussed in Section 8.

### 3.4 PriFi Solution Overview

PriFi starts with a setup phase where clients authenticate themselves to the relay:
* Then, clients and guards derive shared secrets.
* Finally, clients are organized in a schedule (a secret permutation) that decides when they communicate.

**Upstream Traffic:** The clients and the guards run a DC-net protocol.
* Communication occurs in short time-slots.
* In each time-slot:
    * Each client and guard sends a ciphertext to the relay.
    * The slot owner can additionally embed some payload.
    * The relay waits for all ciphertexts, then computes the anonymized output.
    * This reveals one or more IP packet(s) without a source address;
    * the relay replaces it with its own IP address (as in a NAT) and forwards the packet to its destination.

Due to the construction of the DC-net, this protocol provides provable anonymity (a minimal sketch of one such round follows below).
* Ciphertexts are indistinguishable from each other to the adversary.
* During a slot, each client sends exactly the same number of bits.
* Finally, if the contribution from any honest client is missing, the output is undecipherable.
> Informally, this achieves our goal G1 for upstream traffic.
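To make the XOR bookkeeping concrete, here is a minimal, self-contained sketch of one upstream time-slot. It assumes, purely for illustration (this is not the PriFi codebase), that each pairwise secret $r_{ij}$ seeds a SHA-256-based PRG in counter mode. Because every pad PRG($r_{ij}$) appears in exactly one client ciphertext and one guard ciphertext, the relay's XOR of all $n + m$ ciphertexts leaves only the slot owner's message.

```go
// Sketch of one upstream DC-net time-slot (illustrative only, not the PriFi codebase).
// Each client i shares a secret r_ij with every guard j; both sides expand it with a
// PRG (here: SHA-256 in counter mode, an assumption for this sketch) and XOR the pads.
package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

const slotLen = 16 // ℓ, payload bytes per slot (toy value)

// prg expands a shared secret into an ℓ-byte pseudo-random pad for a given round.
func prg(secret []byte, round uint64, l int) []byte {
	out := make([]byte, 0, l)
	for ctr := uint64(0); len(out) < l; ctr++ {
		h := sha256.New()
		h.Write(secret)
		binary.Write(h, binary.BigEndian, round)
		binary.Write(h, binary.BigEndian, ctr)
		out = append(out, h.Sum(nil)...)
	}
	return out[:l]
}

func xorInto(dst, src []byte) {
	for i := range dst {
		dst[i] ^= src[i]
	}
}

func main() {
	const n, m = 3, 2 // clients, guards
	round := uint64(7)

	// Pairwise shared secrets r_ij (toy values; in PriFi these come from the Setup key exchange).
	secrets := make([][][]byte, n)
	for i := 0; i < n; i++ {
		secrets[i] = make([][]byte, m)
		for j := 0; j < m; j++ {
			secrets[i][j] = []byte(fmt.Sprintf("r_%d_%d", i, j))
		}
	}

	msg := []byte("anonymous packet") // slot owner's payload m_i (exactly ℓ bytes here)
	owner := 1                        // index of the slot owner
	relayOut := make([]byte, slotLen) // the relay XOR-combines everything here

	// Each client's ciphertext: XOR of its pads with all guards (+ the message for the owner).
	for i := 0; i < n; i++ {
		c := make([]byte, slotLen)
		for j := 0; j < m; j++ {
			xorInto(c, prg(secrets[i][j], round, slotLen))
		}
		if i == owner {
			xorInto(c, msg)
		}
		xorInto(relayOut, c)
	}
	// Each guard's ciphertext: XOR of its pads with all clients.
	for j := 0; j < m; j++ {
		s := make([]byte, slotLen)
		for i := 0; i < n; i++ {
			xorInto(s, prg(secrets[i][j], round, slotLen))
		}
		xorInto(relayOut, s)
	}

	// Every pad appears exactly twice, so the pads cancel; only the owner's message survives.
	fmt.Println(bytes.Equal(relayOut, msg)) // true
}
```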
**Precomputation of Ciphertexts:** The ciphertexts from the guards are independent of the anonymous payloads.
* Hence, a key optimization is that the guards' ciphertexts are batch-computed and sent in advance to the relay.
* The relay buffers and pre-combines the ciphertexts from multiple guards, storing a single stream of pseudo-random bits to be later combined with the clients' ciphertexts (see the sketch at the end of this subsection).
* This enables PriFi to have low latency despite the presence of high-latency links between the guards and the relay.

**Latency-Critical Path:** Another important advantage of combining the ciphertexts locally is that clients' packets remain on their usual network path.
* The added latency is due mostly to waiting on all clients.
* Similar systems that route clients' traffic over the Internet incur significantly higher latency.

**Downstream Traffic:** When receiving an answer to an anonymous message sent in some time-slot, the relay encrypts it under the (anonymous) slot owner's public key, then broadcasts the ciphertext to all clients.
* As each client receives exactly the same message, this achieves our goal G1 for downstream traffic.
* To broadcast the downstream message, rather than performing $n$ unicast sends:
    * The relay leverages the LAN and uses UDP broadcast, letting layer-2 network equipment (e.g., switches) replicate the message if needed.
    * In WLANs, the broadcast requires only one message,
    * achieving receiver anonymity at no cost (in the absence of link-layer retransmissions).
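A small sketch of the relay-side precomputation described above: guards push pad batches ahead of time over the slow WAN links, and the relay XOR-folds them into a single buffered stream, so that during a round only the clients' ciphertexts remain to be combined. The batch values below are toy placeholders, not anything from the paper.

```go
// Sketch (illustrative only) of the relay pre-combining guards' DC-net pads.
package main

import "fmt"

// combine XOR-folds a guard's precomputed pad batch into the relay's buffered stream.
func combine(buffer, guardPads []byte) {
	for i := range buffer {
		buffer[i] ^= guardPads[i]
	}
}

func main() {
	const batchLen = 8 // bytes of pre-combined pad kept for one future round (toy value)

	// Pad batches received in advance from each guard.
	// In PriFi these are PRG outputs seeded with the shared secrets; here they are fixed toy values.
	guardBatches := [][]byte{
		{0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88},
		{0xde, 0xad, 0xbe, 0xef, 0x01, 0x02, 0x03, 0x04},
		{0x10, 0x20, 0x30, 0x40, 0x50, 0x60, 0x70, 0x80},
	}

	// The relay stores only the pre-combined stream, not the individual guard batches.
	precombined := make([]byte, batchLen)
	for _, batch := range guardBatches {
		combine(precombined, batch)
	}

	// Later, during a round, the relay XORs the clients' ciphertexts onto this buffer;
	// no guard needs to be contacted on the latency-critical path.
	fmt.Printf("pre-combined guard stream: %x\n", precombined)
}
```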
## 4 Basic PriFi Protocol

### 4.1 Preliminaries

* Let $λ$ be a standard security parameter, and let $\mathbb{G}$ be a cyclic finite group of prime order where the Decisional Diffie-Hellman (DDH) assumption \[3\] holds
    * (e.g., an elliptic curve).
* Let (KeyGen, $S$, $V$) be a *signature scheme*, with KeyGen$(\mathbb{G}, 1^λ)$ an algorithm that generates the private-public key pair $(p, P)$ usable for signing.
    * We denote by $S_p(m)$ the signature of the message $m$ with the key $p$.
* Let KDF : $\mathbb{G} → \{0,1\}^λ$ be a key derivation function that converts a group element into a bit string that can be used as a symmetric key.
* Let $(E, D)$ be a symmetric nonce-based encryption scheme \[56\].
    * We denote by $E_k(m)$ the encryption of the message $m$ with the key $k$.
* Let $H : \{0,1\}^∗ → \{0,1\}^λ$ be a standard cryptographic hash function.
* Let PRG : $\{0,1\}^λ → \{0,1\}^∗$ be a standard pseudo-random generator.
* Let $F_1 : \{0,1\}^λ → \mathbb{G}$ be a public, invertible mapping function from binary strings to $\mathbb{G}$, and
* let $F_2 : \{0,1\}^∗ → \mathbb{G}$ be a hash function to $\mathbb{G}$ which maps to any point in $\mathbb{G}$ with uniform probability (e.g., Elligator Squared \[65\]).
* Finally, let $F_3 : \{0,1\}^∗ → \mathbb{N}$ be a public function that maps bitstrings to integers.

**Identities:** Each party has a long-term key pair (denoted with the hat symbol) generated with KeyGen$(\mathbb{G}, 1^λ)$:
* $(\hat{p}_i, \hat{P}_i)$ for client $C_i$,
    * with $i ∈ \{1, ..., n\}$;
* $(\hat{p}_j, \hat{P}_j)$ for guard $S_j$,
    * with $j ∈ \{1, ..., m\}$;
* $(\hat{p}_r, \hat{P}_r)$ for the relay.

Let $\underline{v}$ be the vector notation for $v$.
* For each epoch, the group definition $G$ consists of all long-term public keys:
    * $G = (\underline{\hat{P}_i}, \underline{\hat{P}_j}, \hat{P}_r)$,
    * $i ∈ \{1, ..., n\}$,
    * $j ∈ \{1, ..., m\}$,
    * and $G$ is known to all parties
    * (e.g., via a public-key infrastructure).

Finally, let $T = (\hat{P}_1, \hat{P}_2, ...)$ be a static roster of allowed clients known to the relay and the clients
* (e.g., via a configuration file).

### 4.2 Protocols

PriFi starts with the protocol *Setup* (Protocol 1), followed by several runs of the protocol *Anonymize* (Protocol 2).

#### 4.2.1 Setup

Each client authenticates to the relay using its long-term public key, and generates a fresh ephemeral key pair.
* Then, each client $C_i$ runs an authenticated Diffie-Hellman key exchange with each guard $S_j$, using the fresh key pair, to agree on a shared secret $r_{ij}$ (a minimal sketch of this step appears at the end of this subsection).
* This secret is used later to compute the DC-net's ciphertexts.
* Then, the guards shuffle the clients' ephemeral public keys $\underline{P_i}$ using a verifiable shuffle
    * ($e.g.$, Neff's verifiable shuffle \[45\]) to produce a permutation $π$.
* The public keys in $π$ correspond to the keys in $\underline{P_i}$, in a shuffled order, such that no one knows the full permutation.
* Only a client holding the private key $p_i$ corresponding to an input in $\underline{P_i}$ can recognize the corresponding pseudonym key in $π$.

*Setup* has the following properties (proved in A.1):

**Property 1:**
* A shared secret $r_{ij}$ between an honest client $C_i$ and an honest guard $S_j$ is known only to $C_i$ and $S_j$.

**Property 2:**
* Let $C_0, C_1$ be two honest clients who ran *Setup* without aborting, and $α(0), α(1)$ the positions of their respective shuffled keys in $π$.
* Then, the adversary $\mathcal{A}$ has negligible advantage in guessing $b ∈ \{0,1\}$ such that the mapping client → position $(b → α(0), (1−b) → α(1))$ is in $π$.

**Remark:** The setup (client/server secret sharing + verifiable shuffle of client pseudonyms) is similar to the related work \[12, 73\].

![](https://i.imgur.com/lCqMKlr.png)
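A minimal sketch of the pairwise key agreement in *Setup*, with X25519 standing in for the group $\mathbb{G}$ and SHA-256 standing in for the KDF (both are illustrative choices; the authentication of the exchange and the verifiable shuffle are omitted). It needs Go 1.20+ for `crypto/ecdh`.

```go
// Sketch of the Setup key agreement (illustrative, not the PriFi codebase): client C_i
// and guard S_j run an (authenticated, omitted here) Diffie-Hellman exchange on their
// ephemeral keys and derive the shared secret r_ij with a KDF.
package main

import (
	"bytes"
	"crypto/ecdh"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"log"
)

// kdf maps a group element (the raw DH output) to a λ-bit symmetric key.
func kdf(dh []byte) []byte {
	sum := sha256.Sum256(dh)
	return sum[:]
}

func main() {
	curve := ecdh.X25519()

	// Ephemeral key pairs (p_i, P_i) for the client and (p_j, P_j) for the guard.
	clientPriv, err := curve.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	guardPriv, err := curve.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}

	// Each side combines its private key with the other's public key.
	dhClient, err := clientPriv.ECDH(guardPriv.PublicKey())
	if err != nil {
		log.Fatal(err)
	}
	dhGuard, err := guardPriv.ECDH(clientPriv.PublicKey())
	if err != nil {
		log.Fatal(err)
	}

	// Both ends obtain the same r_ij, later used to seed the DC-net PRG.
	rij := kdf(dhClient)
	fmt.Println(bytes.Equal(rij, kdf(dhGuard))) // true
}
```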
#### 4.2.2 Anonymize

After *Setup*, all nodes continuously run *Anonymize*.
* In each time-slot, clients and guards participate in a DC-net protocol.
* All guards compute one $\ell$-bit pseudo-random message from the PRGs seeded with the shared secrets, and send it to the relay.
* All clients do likewise,
    * except for the client owning the time-slot, who additionally includes its upstream message(s) $m_i$ in the computation.
    * In practice, $m_i$ is one or more IP packet(s) without a source address, up to a total length $\ell$.
    * If the slot owner has nothing to transmit, it sets $m_i = 0^{\ell}$.
* Once the relay receives the $n + m$ ciphertexts from all clients and guards, it XORs them together to obtain $m_k$.
* If the protocol is executed correctly, $m_k$ is equal to $m_i$, as the values of PRG($r_{ij}$), $i ∈ \{1, ..., n\}, j ∈ \{1, ..., m\}$, cancel out.
* If $m_k$ is a full IP packet, the relay replaces the null source IP in the header with its own (just like a NAT) and forwards it to its destination.
* If it is a partial packet, the relay buffers it and completes it during the next schedule.
* Then, the relay broadcasts one downstream message $\underline{d}$ to all clients, each $d ∈ \underline{d}$ being encrypted with a public key $\tilde{P}_k ∈ π$ corresponding to an anonymous client.
    * The relay does not know for which client it encrypts.
* Additionally, $\underline{d}$ is of arbitrary length $\ell'$, possibly much larger than $\ell$, easily accommodating downstream-intensive scenarios.
* Finally, we emphasize that $\underline{d}$ can contain data for multiple users (from previous rounds).
* If the relay has nothing to transmit, it sends a single 0 bit to indicate the end of the round.

![](https://i.imgur.com/zvjt9M3.png)

*Anonymize* has the following property (proved in A.2):

**Property 3 \[Goal G1\]:**
* After a run of *Anonymize*:
    * Let $C_{i_1}, C_{i_2}$ be two honest clients,
    * $k_1 = α(i_1), k_2 = α(i_2)$ the time-slots in which they communicated,
    * and $m_{k_1}, m_{k_2}$ the anonymous upstream messages for those slots.
* Then, $\mathcal{A}$ has negligible advantage in guessing $b ∈ \{1,2\}$ such that $m_{k_b}$ is the message sent by $C_{i_1}$.

### 4.3 Practical Considerations

**End-to-End Confidentiality:** A malicious relay can see the upstream message plaintexts.
* This is also the case in a traditional LAN, and clients should use standard end-to-end encryption (e.g., TLS) on top of PriFi.

**Bandwidth Usage:** To reduce idle bandwidth usage, the relay periodically sends a "load request" in which clients can anonymously open or close their slots.
* The relay skips a closed slot, saving time and bandwidth.
* If all slots of a schedule are closed, the relay sleeps for a predetermined interval, further saving bandwidth at the cost of higher initial latency when resuming communications.
* Both the upstream and downstream sizes $\ell$ and $\ell'$ can be dynamically tuned without interrupting the communications.
    * This makes it possible to better match the sending/receiving rates of the clients and further reduce idle bandwidth usage.

### 4.4 Limitations of this Protocol

**Accountability:** No mechanism forces dishonest parties to follow the protocol correctly; malicious parties can anonymously disrupt the communications. This is a well-known DC-net issue.
> Addressed in Section 5.

**Downstream Anonymity:** This notion refers to the clients being indistinguishable when receiving downstream messages, which is trivially the case if the relay truthfully sends the same downstream data to all clients. In this strawman, no mechanism enforces this.
> Addressed in Section 6.

## 5 Disruption Protection

In the previous protocol (Protocol 2), a malicious active insider can modify or jam upstream communications by transmitting well-chosen bits instead of the ciphertext defined by the protocol.
* This is particularly problematic since the attacker is provably anonymous and untraceable.

PriFi uses a retroactive, hash-based "Blame" mechanism on top of a "classical" XOR-based DC-net, which:
1. keeps the typical operation (in the absence of jamming) as fast as possible and reduces the bandwidth lost due to the protection, and
2. excludes a disruptor with high probability (1/2 per flipped bit).
> Leveraging the LAN topology, our exclusion takes seconds, orders of magnitude faster than the related work \[73\] (see Figure C.3).

### 5.1 Protocol

We modify the previous *Anonymize* protocol (Protocol 2) to protect against disruption from malicious insiders.
* In short, we add a hash-based detection of corruption and a blame mechanism to exclude a disruptor (see the sketch at the end of this section).

**Summary:**
* The relay sends the hash of the upstream message on the downstream traffic.
* If the anonymous sender detects an incorrect hash, it requests a copy of its own disrupted message by setting a flag $b_{\text{echo\_last}}$ to 1.
* When receiving a disrupted copy $m'_k$ of a previously sent message $m_k$, the client searches for a bit position $l$ such that $(m_k)_l = 0$ and $(m'_k)_l = 1$.
* If such an $l$ exists, then the client requests to de-anonymize the $l^{th}$ bit of its own slot $k$, by sending NIZKPoK$_{k,l}(\tilde{p}_k : \tilde{p}_k = \log \tilde{P}_k)$ in its next upstream message,
    * a non-interactive proof of knowledge of the key $\tilde{p}_{(k \bmod n)}$ corresponding to slot $k$ in $π$ \[2\].
    * The proof is bound to the public values $l$ and $k$.
    * For simplicity, we write PoK$_{k,l}(\tilde{p}_k : \tilde{p}_k = \log \tilde{P}_k)$ hereafter,
    * ignoring (1) the mod $n$ and (2) the acronym for "Non-Interactive, Zero-Knowledge".
* If no such $l$ exists, the message was disrupted but the disruptor cannot be traced without simultaneously de-anonymizing a client.

The *Anonymize* and *Blame* protocols are described in Protocols 3 and 4, respectively, and have the following properties (proved in Appendix A.3):

![](https://i.imgur.com/6NdjjRA.png)
![](https://i.imgur.com/V8ToQs0.png)

**Properties 4+5 \[Goal G1\]:**
* The anonymity of any honest client is unaffected by the information made public in *Blame* (Protocol 4): $\underline{PRG(r_{ij})_l}$ in step 2, or $r_{ij}$ in step 5.

**Property 6 \[Goal G2\]:**
* Let $C_i$ be the owner of a slot $k$, and let $C_d$, $d \neq i$, be another client (or guard).
* If $C_d$ sends an arbitrary value $q$ instead of the value $c_i$ (or $s_j$) as specified in the protocol, then $C_d$ is identified as the disruptor and is excluded from subsequent communications.

**Property 7:**
* An honest entity is never identified as a disruptor.

**Limitations:**
* The detection relies on the jammed client's ability to transmit $b_{\text{echo\_last}}$ and the PoK, and the adversary can jam these values too.
* In practice, $\ell$ is fairly large (e.g., 5 kB), and the client can use redundancy coding over its message to make the task harder for the adversary.
* When this probabilistic solution is insufficient:
    * users can use a verifiable DC-net \[13\] to transmit without the risk of jamming.
> In practice, this verifiable DC-net would run in the background with very low bandwidth and high latency, just enough to transmit the proof of knowledge.

**Remarks:**
* In step 3 of *Blame* (Protocol 4), we see why the client starting the blame checks that $(m_k)_l = 0$:
    * otherwise, revealing $\underline{PRG(r_{ij})_l}$ over a non-anonymous channel would flag this client as the sender, as $\bigoplus_j PRG(r_{ij})_l \neq (m_k)_l$.
* In step 5 of *Blame* (Protocol 4), we remark that at least one mismatching pair exists, otherwise the slot would not have been disrupted.
* If multiple disruptors exist, *Blame* excludes one.
> Personal Note: This seems like an issue? Can multiple *Blame* rounds be conducted in quick enough succession (or in parallel) so that disruptors are found before the next group formation?
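A small sketch of the client-side check behind *Blame*, under the simplifying assumption that the client already holds both its sent message $m_k$ and the disrupted copy $m'_k$: it compares the relay's hash with its own and then looks for a bit position $l$ with $(m_k)_l = 0$ and $(m'_k)_l = 1$, the only kind of flip it can expose without de-anonymizing itself.

```go
// Sketch of the disruption check from §5.1 (illustrative only).
package main

import (
	"crypto/sha256"
	"fmt"
)

// bit returns the l-th bit of msg (0-indexed, most significant bit first).
func bit(msg []byte, l int) byte {
	return (msg[l/8] >> (7 - uint(l%8))) & 1
}

// findBlameBit returns a position l with (m_k)_l = 0 and (m'_k)_l = 1, or -1 if none exists.
func findBlameBit(sent, disrupted []byte) int {
	for l := 0; l < len(sent)*8; l++ {
		if bit(sent, l) == 0 && bit(disrupted, l) == 1 {
			return l
		}
	}
	return -1
}

func main() {
	sent := []byte{0b10100000}    // m_k as sent by the slot owner
	decoded := []byte{0b10110000} // m'_k as decoded by the relay (bit 3 flipped by a disruptor)

	// Step 1: the relay broadcasts H(m'_k); the owner compares it with the hash of what it sent.
	relayHash := sha256.Sum256(decoded)
	if relayHash != sha256.Sum256(sent) {
		// Step 2: the owner would set b_echo_last = 1 and receive m'_k back.
		// Step 3: search for a blameable bit.
		if l := findBlameBit(sent, decoded); l >= 0 {
			fmt.Println("request de-anonymization of bit", l, "of this slot") // here: bit 3
		} else {
			fmt.Println("disruption detected but untraceable without self-de-anonymization")
		}
	}
}
```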
## 6 Equivocation Protection

In both versions of *Anonymize* (Protocols 2 and 3), the relay needs to broadcast the downstream traffic $\underline{d}$ to all clients to ensure receiver anonymity.

However, no mechanism enforces this, and a malicious relay can notably perform equivocation attacks:
* $i.e.$, sending different messages to each client, hoping that their subsequent behaviors will reveal which client actually decrypted the message.

**Equivocation Example:**
* Clients $C_1$ and $C_2$ are both honest.
* In the first round, the relay decodes an anonymous DNS request.
* Instead of broadcasting the same DNS answer to $C_1$ and $C_2$, the relay sends two different answers containing IP$_1$ and IP$_2$, respectively.
* Later, the relay decodes an anonymous IP packet with IP$_2$ as destination.
* It can guess that $C_2$ made the request, as $C_1$ has never received IP$_2$.
> The attack is possible because a malicious party relays the information, which can happen in both Dissent \[73\] and Verdict \[13\].
> On the contrary, we note that due to their different design, mix-nets and onion-routing networks are typically not affected by this attack.

**PriFi Solution:** Intuitively, to thwart an equivocation attack, clients need to agree on what they received before transmitting their next message. In PriFi, this is achieved without adding extra latency and without synchronization between clients.
* Clients encrypt their messages before anonymizing them.
* The encryption key depends on the history of downstream messages and also on the shared secrets with the guards.
* The relay is unable to compute a plaintext if not all clients share the same history.

### 6.1 Protocol

We modify the previous *Anonymize* and *Blame* protocols, obtaining Protocols 5 and 6. These are the final variants used in our implementation.

**Anonymize:**
* Each client $C_i$ keeps a personal history $h_i$ of downstream communications.
* Upon receiving a downstream message $\underline{d}$, each client updates the history (a toy sketch of this bookkeeping appears at the end of this subsection).
* Each upstream message is then symmetrically encrypted with a fresh key $γ$.
* This value $γ$ is sent to the relay, blinded by a function of the downstream history $h_i$.
* If all clients agree on the value $h_i$, the relay can unblind $γ$ and decrypt the message.

**Blame:** The previous *Blame* protocol finds a disruptor when the values $c_i$ or $s_j$ are not computed correctly; we add a way to validate the new values $κ_i$ and $σ_j$.
* Since they are elements of $\mathbb{G}$, we apply a standard zero-knowledge proof \[2\].
* More precisely, we use an Or/And type of NIZKPoK which allows the clients to prove either:
    1. their ownership of the slot, or
    2. that $κ_i$ is computed correctly.

The *Anonymize* and *Blame* protocols have the following properties (proved in Appendix A.4):

**Properties 8+9+10+11 \[Goal G1\]:** The anonymity of any honest client is unaffected by the extra information revealed:
* $κ_i$ in step 3 of DCNet-Gen-Client,
* $σ_j$ in step 2 of DCNet-Gen-Guard,
* $\underline{Q_i}$, PoK in step 7 of *Blame*,
* or $r_{ij}$ in step 9 of *Blame*.

**Property 12 \[Goal G1\]:** If there exist two honest clients $i, j$ who received $\underline{d}_i \neq \underline{d}_j$ in round $k$, then neither the relay nor $\mathcal{A}$ can decrypt $m_{k'}$ for any subsequent round $k' > k$.

**Property 13 \[Goal G2\]:** If a client $C_i$ sends an arbitrary value $κ'_i$ instead of the value $κ_i$ as specified in the protocol, then $C_i$ is identified as the disruptor and is excluded from subsequent communications.

**Properties 14+15:** Honest parties are not blamed for equivocation attacks, nor for disruption attacks.
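A toy sketch of the history bookkeeping only: each client folds every downstream message into a running hash $h_i$, so an equivocating relay immediately desynchronizes the clients' histories and, with them, any key material derived from $h_i$. PriFi's actual blinding of $γ$ uses group elements and the shared secrets $r_{ij}$; that part is deliberately omitted here.

```go
// Toy sketch of the history bookkeeping in §6.1 (illustrative only).
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// updateHistory returns H(h || d), the new personal history after downstream message d.
func updateHistory(h, d []byte) []byte {
	sum := sha256.Sum256(append(append([]byte{}, h...), d...))
	return sum[:]
}

func main() {
	h1 := []byte("initial")
	h2 := []byte("initial")

	// Honest round: both clients receive the same broadcast, so the histories stay equal.
	d := []byte("round-1 downstream broadcast")
	h1 = updateHistory(h1, d)
	h2 = updateHistory(h2, d)
	fmt.Println("after honest round, histories equal:", bytes.Equal(h1, h2)) // true

	// Equivocation: the relay sends different payloads to C_1 and C_2.
	h1 = updateHistory(h1, []byte("answer containing IP_1"))
	h2 = updateHistory(h2, []byte("answer containing IP_2"))
	fmt.Println("after equivocation, histories equal:", bytes.Equal(h1, h2)) // false

	// In PriFi the fresh upstream key γ is blinded by a function of h_i, so once the
	// histories diverge the relay can no longer recover γ for later rounds (Property 12).
	// The client-side consequence sketched here: keys derived from h_1 and h_2 differ.
	k1 := sha256.Sum256(append([]byte("gamma-derivation|"), h1...))
	k2 := sha256.Sum256(append([]byte("gamma-derivation|"), h2...))
	fmt.Println("derived keys equal:", k1 == k2) // false
}
```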
### 6.2 Practical Considerations

![](https://i.imgur.com/JNXUBHj.png)
![](https://i.imgur.com/QPwWpEa.png)

**Packet losses:** We note that this protection is decoupled from link-layer retransmissions; should one client fail to receive a packet, it will ask the relay again after a timeout, delaying all clients for this specific round (which is unavoidable for traffic-analysis resistance) but not invalidating the whole epoch with a wrong $h_i$ value.
> Personal Note: Definitely an improvement over Dissent (I think Verdict as well). With slashing, $\mathcal{A}$ may use this feature to simply cause users, who need to continually delay the network to ask the relay for data again, to lose funds (be deemed malicious and not chosen for subsequent group formation rounds?)

**Remark:** Due to the "Or" format of the PoK, we note that a malicious client can:
1. jam their own slot,
2. then send arbitrary $Q_i$ values in step 7 of *Blame*,
* and still pass the PoK because of their knowledge of $\tilde{p}_k$.
> This has no benefit for the adversary, as mismatching $Q_i$'s are simply checked in the following steps of *Blame*, with no effect on honest parties (Property 10).

## 7 Evaluation

We implemented PriFi in Go \[53\]. We evaluate it on a LAN topology typical of a small organizational network.

**Methodology:** Our evaluation is five-fold:
1. We measure the end-to-end latency via a SOCKS tunnel without data, by having a client randomly ping the relay.
2. We compare PriFi with the related work.
3. We replay network traces that correspond to various workloads on PriFi and measure the added latency and the bandwidth usage.
4. We explore the limits of the system, then we introduce and evaluate two scenarios:
    1. having a local trusted guard (which will be relevant in the case of the ICRC), and
    2. having clients outside of the LAN.
5. Finally, we explore the effect of churn and user mobility on PriFi.

**Experimental setup:** We use Deterlab \[21\] as a testbed.
* The simulated topology consists of a 100 Mbps LAN with 10 ms latency between the relay and the clients.
* We run 3 guards, each of which has a 10 Mbps link with 100 ms latency to the relay and the clients.
* We use 9 machines:
    * 1 dedicated to the relay and 1 per guard;
    * the clients are simulated on the remaining 5 machines, distributed equally.
* All machines are 3 GHz Xeon Dual Core with 2 GB of RAM.
> We focus our evaluation on between 2 and 100 users, which is inspired by the ICRC operational sites (Figure C.1).

**Reproducibility:** All experiments presented in this paper are reproducible with a few simple commands after cloning the repository \[53\]. Additionally, all raw logs and scripts to recreate the plots are available in a separate repository \[54\].

### 7.1 End-to-End Latency without Data

Figure 3(a) shows the latency of the PriFi system, $i.e.$, the time needed for an anonymized packet to be sent by the client, decoded by the relay, and sent back to this same client.

![](https://i.imgur.com/lry2hLN.png)

* In this experiment, one random user is responsible for measuring these "pings", whereas the others only participate in the protocol without sending data
    * ($i.e.$, the number of active users is 1, anonymous among all users).
* A major component of the latency is the buffering of messages by the clients.
* With only one slot per schedule, clients must wait for this slot before transmitting data.
* This waiting time is depicted by the red curve in Figure 3(a).
* To reduce the time spent waiting on the slot, we alter the scheduling mechanism and let clients "close" their slot if they have nothing to send, thus enabling other clients to transmit more frequently.
    * This reservation mechanism improves the situation where many users are idle.
    * It introduces additional delay in some cases, as the client needs to wait for the next reservation to open its slot, and wait again for its slot.
* To further reduce latency, we pipeline rounds,
    * $i.e.$, we run DC-net rounds in parallel, instead of the "ping-pong" presented in *Anonymize*.
    * This makes better use of the available bandwidth to reduce latency, until the capacity of the links is reached.

### 7.2 Comparison with Related Work

**Benchmarking:** We select two related works for comparison:
1. The closest is Dissent in Numbers (abbreviated D#) \[73\]:
    * Like PriFi, it provides provable traffic-analysis resistance by using binary-string-based DC-nets.
    * It has similar assumptions ($M$ anytrust servers),
    * but with no particular emphasis on being low-latency.
2. Then, we compare with a more recent ACN, Riffle \[37\]:
    * which has a similar topology,
    * but which emphasizes minimizing the download bandwidth for clients.

The first major difference between PriFi and both Riffle and D# is in the functionality of the guards: both protocols require several server-to-server communications per round before outputting any anonymized data. We deploy Riffle and D# on our setup and compare their latency against PriFi (Figure 3(b)).

![](https://i.imgur.com/1vNgOgW.png)

**Higher-level comparison:** We also numerically compare PriFi against:
* Riposte \[11\]: a PIR-based protocol.
    * Riposte uses expensive cryptography to save bandwidth. As a result, Riposte can process up to 2 messages/sec with 2 servers (Fig. 7),
    * whereas our relay outputs hundreds of messages/sec (Figure 3(a)).
* And DiceMix \[57\]: a group-arithmetic-based DC-net.
    * DiceMix's latency is on the order of seconds (≈ 20 s for 100 participants; Fig. 4).
    * These DC-nets allow for better collision resistance and proofs of correct computation.
    * When we evaluated them in PriFi, we found that the generation of pseudo-randomness alone was too slow for low-latency communications (a fact also highlighted in Fig. 6).

### 7.3 Latency with Data

**CRAWDAD Traces:** We evaluate the performance of PriFi when replaying the dataset 'apptrafictraces' \[60\] from CRAWDAD \[14\]. We selected three sub-traces:
1. a 'Skype' trace, where one client performs a VoIP (non-video) call for 281 seconds,
2. a 'Hangouts' trace, where one client performs a video call for 720 seconds, and
3. an 'Others' trace, where Gmail, Facebook, WhatsApp, Dropbox and other non-audio, non-video services run for 15 minutes.

Using the same setup as before, 5% of the clients are randomly assigned packet traces from a pool and, after a random delay $r ∈ [0,30]$ seconds, they replay the packets through PriFi. The relay decodes the packets and records the time difference between the decoded packet and the original trace.

![](https://i.imgur.com/0Pa4tCw.png)
![](https://i.imgur.com/E4dt1zJ.png)

> In Figure 4(b), we see that the WAN bandwidth usage decreases with the number of clients. This is a shortcoming which indicates that PriFi spends more time waiting and less time transmitting, due to the increased time needed to collect ciphertexts from more clients.
This is not desirable, as it means that overall PriFi cannot fully utilize the available bandwidth to offer minimal latency; this could be mitigated by increasing the pipelining of rounds for slow clients so that all clients answer in a timely fashion.

We learn the following:
1. The mean added latency in the case of a Skype call (with 5% active users) is below 100 ms for up to 80 clients, and below 150 ms for 100 clients.
    * The International Telecommunication Union estimates that call quality starts degrading after a 150 ms one-way latency increase \[66\], and users start noticing degraded quality after a 250 ms one-way latency increase \[68\].
    * Hence, while the current implementation starts to struggle with 100 clients, it supports VoIP calls for 0–80 users.
2. The mean added latency in the case of the 'Others' dataset is always below 100 ms, which seems acceptable, as Web pages usually load in seconds.
3. The replay of the 'Hangouts' data exhibits similar behavior to the 'Skype' dataset.
    * We see that the latency increases reasonably up to 70 users, but then drastically increases, meaning that the current implementation cannot transmit the data fast enough, and that buffering occurs at the clients.

We emphasize that all traffic has been anonymized through the same PriFi network, regardless of QoS; in practice, large file transfers (e.g., backups) would probably either be excluded from a low-latency network, or anonymized through other means (e.g., a PriFi setup with higher latency, but higher throughput).

### 7.4 Scalability & Different Scenarios

**Scalability:** Figure 6(a) shows the performance for larger anonymity sets and with more guards; while the number of guards has almost no impact on performance (due to the buffering at the relay), our implementation would not be low-latency for more than a few hundred clients.

**Local Trusted Guard:** We simulate this new deployment with one guard in the LAN instead of 3 remote guards.
* The latency between the relay and the unique guard is 10 ms.
* We observe that the latency experienced by clients is roughly cut in half in this case, with the additional benefit that the only WAN bandwidth usage is the anonymized goodput
    * (Figure 6(b), purple dotted curve versus blue solid curve).

**VPN:** When a member of an organization accesses the network remotely (e.g., when traveling), they can benefit from PriFi's protection from outside the organizational LAN; the cost is in performance, since the relay waits for the slowest client to decode an anonymous message.
* We simulate this new deployment by having all clients outside the LAN.
    * This is modeled by setting the latency between clients and relay to 100 ms instead of 10 ms.
    * In this case, the baseline for latency is 200 ms.
* We observe that in this scenario, end-to-end latency varies from 280 ms to 498 ms (Figure 6(b), red solid curve versus blue solid curve).
* While this is too high to support VoIP and videoconferencing, it might be acceptable for web browsing.
* We note that all users are slowed down; although this has not been explored in this work, it could be mitigated by having two PriFi networks, one reserved for local users, the other accepting remote users.
    * Local users would participate in both networks, ensuring a sufficiently large anonymity set, and would communicate only using the fastest PriFi network.

**Loss rates:** We briefly explore the impact of various loss rates in Figure 7(a). While it is an imperfect representation, this could sketch the performance in a real WLAN.
* The results show that:
    * the current implementation requires "good" link quality (loss rates ≤ 10%, see \[62\] Figure 3a) to keep a low latency, which then degrades rather quickly with increasing loss rates.
* We note that current WLANs typically have good resilience to message drops; noise and collisions result in increased jitter rather than losses \[34\].
* Implementing PriFi directly on Network Interface Cards (NICs) could give better control and performance.
* Finally, we note that WLANs have a cheaper broadcast than LANs (which is not reflected in this experiment).

### 7.5 Client Churn

In DC-nets, churn invalidates the current communications and leads to data re-transmissions and global downtime where no one can communicate. Although re-transmissions are acceptable with PriFi's small and frequent rounds (e.g., a few hundred KB of payload every 10 ms), frequent churn could prevent delay-sensitive applications from running on top of PriFi. To our knowledge, our contribution here is the first analysis of the impact of churn on DC-nets in a realistic scenario where nodes are mobile (e.g., wireless devices).

**Dataset:** To characterize node mobility, we use a standard dataset \[50\] from CRAWDAD \[14\].
* It contains four hours of wireless traffic, recorded in a university cafeteria.
* The traces contain the Data Link layer (and show the devices' association and disassociation requests).
* The dataset contains 254 occurrences of churn over 240 minutes, of which 222 are associations (33 unique devices) and 32 are disassociations (12 unique devices).

In comparison with the ICRC scenario, this dataset is likely a pessimistic model, as node mobility in a cafeteria is likely higher than in offices.

**Dataset Analysis:** Each device (dis)connection induces a re-synchronization time of $D$ (for *Setup*),
* where $D$ is dominated by the number of guards $m$ and clients $n$ and the latency between them.
* A typical value for $D$ would be a few seconds.

We analyze two strategies to handle churn:
* The *naïve* approach stops communication for $D$ at every churn event.
* The second approach stops communication for $D$ only at abrupt disconnections:
    * devices connect using a graceful approach (*Setup* is done in the background, keeping the previous *Anonymize* protocol running until the new set of clients is ready).
    * This strategy can be enforced by the relay.

![](https://i.imgur.com/L579dxd.png)

In Table 1, we show:
1. the raw number of communication interruptions, which
    * directly comes from the node mobility in the dataset;
2. the network availability percentage,
    * computed as $1 - \frac{\text{downtime}}{\text{total time}}$ (a back-of-the-envelope sketch follows at the end of this subsection);
3. the maximum continuous downtime,
    * $i.e.$, the longest network interruption.

**Analysis:** Using PriFi in the cafeteria scenario represented by the dataset \[50\] would naturally decrease network availability, as churn induces global downtime.

**Anonymity Metrics:** We now analyze the size of the anonymity set with respect to time.
* We display the difference, in percentage, between the actual anonymity set size and the baseline tendency.
* We see that the size of the anonymity set does not vary by more than ±8%, and the mean number of users is 50.
* Hence, our estimate of the worst-case "anonymity loss" in this scenario is 4 users, which seems acceptable in an anonymity set of 50 users.
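A back-of-the-envelope sketch of the availability metric, using the churn counts quoted above and assuming $D = 5$ seconds per re-synchronization (the text only says "a few seconds", so this value is illustrative).

```go
// Availability = 1 - downtime/total time, for the two churn-handling strategies of §7.5.
package main

import "fmt"

func main() {
	const (
		totalTime      = 240 * 60.0 // seconds in the 4-hour trace
		D              = 5.0        // assumed Setup re-synchronization time, seconds (illustrative)
		churnEvents    = 254        // associations + disassociations in the trace
		disconnections = 32         // abrupt disassociations only
	)

	availability := func(interruptions float64) float64 {
		return 1 - interruptions*D/totalTime
	}

	// Naïve strategy: every churn event stops communication for D seconds.
	fmt.Printf("naive:    %.1f%%\n", 100*availability(churnEvents)) // ≈ 91.2%
	// Graceful connections: only abrupt disconnections cause downtime.
	fmt.Printf("graceful: %.1f%%\n", 100*availability(disconnections)) // ≈ 98.9%
}
```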
## 8 Discussion on Intersection Attacks

1. In the context of an organization, desktop computers should be connected to PriFi most of the time, providing a baseline anonymity set.
2. As with Tor, users should take care to "blend in" by having traffic patterns similar to other users.

## 9 Related Work

One straightforward way to protect against local eavesdroppers is to tunnel the traffic through a VPN that is outside of the adversary's control.
* However, this provides no guarantee when the VPN provider is malicious.
* Moreover, VPNs protect neither communication patterns nor, in the case of a local eavesdropper, the communication source.

Anonymous Communication Networks (ACNs) are the closest related work, but existing solutions translate poorly to the LAN setting:
* **Onion Routing:** Tor \[22\] does not provide traffic-analysis resistance against a global passive adversary \[46, 47, 49, 69, 70\].
    * Some older works focus on low latency and efficiency, at the cost of traffic-analysis resistance (LAP \[33\], Dovetail \[58\], Hornet \[8\]).
    * Taranet provides traffic-analysis resistance through traffic shaping \[9\].
* **Mix Networks:** Older solutions (e.g., Crowds \[55\], Mixminion \[15\], Tarzan \[26\]) are not traffic-analysis resistant.
    * While recent works typically address this issue, due to the Anonymity Trilemma \[17\] this incurs high costs (either in latency or bandwidth), and a common way to keep these costs under control is to build application-specific ACNs.

**PIR-based Protocols:** A category of ACNs relies on PIR \[10\] and ORAM \[4\] to implement efficient messaging systems (Riposte \[11\], Riffle \[37\]).
* They are typically not made for low latency, as the anonymity set is built over a time period.

**SDN-based:** Some solutions use Software-Defined Networks to anonymize packet headers; they are typically not traffic-analysis resistant against active attacks \[44, 63, 78\].

**DC-nets:** DC-nets \[7, 12, 73\] have provable traffic-analysis resistance and typically have high bandwidth and latency costs.
* PriFi fits this category and focuses on low latency in the context of organizational networks.
* Unlike the binary-string-based DC-nets used in PriFi, some related works rely on group arithmetic (Verdict \[13\], \[30\], DiceMix \[57\]), which enables computations to be proven correct. Unfortunately, their high computational cost makes them unsuited for low-latency applications.
> Personal Note: What would the specifications be to allow this to work on consumer devices in a low-latency setting?

**PriFi16:** Despite considering malicious insiders, the earlier PriFi16 design does not provide a solution to insider jamming. It acknowledges the problem of equivocation without providing a concrete solution. Its protocol is described without any security analysis or proof.

**Summary:** PriFi is a low-latency, traffic-agnostic solution working like a VPN, conceptually close to Tor \[22\], Hornet \[8\] and Taranet \[9\], but tailored for LANs/WLANs.

## 10 Conclusion

We have presented PriFi, an anonymous communication network that protects organizational networks from eavesdropping and tracking attacks. PriFi exploits the characteristics of (W)LANs to provide low-latency, traffic-agnostic communication.
* PriFi reduces the high communication latency of prior work via a new client/relay/server architecture.
* This new architecture removes costly server-to-server communications, and allows clients' traffic to be decrypted locally, remaining on its usual network path. This avoids the latency bottleneck typically seen in other systems.

PriFi also addresses two shortcomings of the related work:
1. Users are protected against equivocation attacks without added latency or costly gossiping.
2. Leveraging the LAN topology, disruption attacks are detected retroactively and orders of magnitude faster than in the related work.
