# Questions

### I would like to ask if you can estimate which of the two approaches would be faster based on their implementation, even though there is no evaluation.

Because Quotient follows a mixed-protocol approach, it incurs high conversion costs, although its 2PC setup keeps the communication costs lower. On balance, we assume that Falcon would be slightly faster in a direct comparison.

### Can the FALCON and QUOTIENT protocols be extended (by reduction or similar techniques) to support more than 3 or 2 (respectively) parties?

We are not aware of such a procedure.

### Is it possible to scale FALCON (resp. QUOTIENT) to work with multiple parties (>3 resp. 2)?

We are not aware of such a procedure.

### Which setup would scale better in a distributed learning setting while preserving privacy?

Because Quotient follows a mixed-protocol approach, it incurs high conversion costs, although its 2PC setup keeps the communication costs lower. On balance, we assume that Falcon would be slightly faster in a direct comparison.

### On slide 8 there are two DReLU functions. What is the difference?

Actually, they are the same function. The one on the left is the original definition and shows what the function computes. The one on the right is the modified DReLU function, which computes the same result but is adapted to reduce the computation time under the restrictions of multi-party computation.

### For FALCON you defined a malicious attacker that can deviate from the protocol. Does this security definition also include attackers that provide manipulated training data to create potentially wrong results for all other parties?

As long as the attacker follows the protocol flow, it is considered a semi-honest party. In our understanding, an attacker who feeds in false or arbitrary data but otherwise follows the protocol is therefore still semi-honest. The security goal of confidentiality should not depend on the data used. Sabotaging the model in this way generally falls under denial-of-service attacks and is usually not considered in more detail.

### Are there any known extensions or modifications to QUOTIENT such that security against malicious adversaries would be guaranteed? You also stated that FALCON aborts if working with a malicious adversary. How would the honest party recognize if an attacker acts maliciously (e.g. provides malicious training data)?

In Quotient, malicious attackers are not considered, and no optional adjustments are mentioned.

### Why is Falcon exactly a 3-party protocol and Quotient a 2-party protocol? Is there any special reason why this was chosen by the developers?

The 3-party structure allows honest-majority approaches, which are not possible with only two parties.

### Why is Falcon limited to three parties? Is there a way to utilize the protocol for more parties, e.g. by running it multiple times and combining the results?

We believe it is theoretically possible to combine multiple 2PC or 3PC groups; this is essentially the Federated Learning approach.

### It is a shame that there is no data available to compare the two on common scenarios. However, I would like to know if there is any way to extend the Quotient protocol to include more than two parties by chaining multiple OTs together? Especially given the motivation that multiple sources should be used to increase the training data, this would be a game changer.

We are not aware of such a procedure.

### You said that there is no data to compare the performance of the two approaches, but can you make a general statement about how well the approaches perform?
Unfortunately, an additional search for direct comparisons did not yield any results. Because Quotient follows a mixed-protocol approach, it incurs high conversion costs, although its 2PC setup keeps the communication costs lower. On balance, we assume that Falcon would be slightly faster in a direct comparison.

### How can the concept of private neural net training and the XAI (explainable AI) movement be successfully combined? (The problem turns even further into a black box, since the training data is entirely unaccountable.)

Since the model is not available in the clear to any single party, no individual party can explain the model on its own, but there should be multiple approaches using extensions of the multi-party protocols to address this problem. For example, the model could be reconstructed (decrypted) if an explanation is necessary and all parties agree. It would also be possible to generate additional information about the model during training and inference to support an explanation.

### Quotient doesn't seem to be more secure compared to Falcon even though it uses both Garbled Circuits and Secret Sharing. Doesn't this result in lower performance (speed)?

Yes, in general mixed-protocol approaches lead to high conversion costs, but at the same time they allow each operation to be carried out in the protocol in which it is most efficient. So there is a trade-off between computational complexity, communication complexity, and expensive conversions.

### On slide 10 you wrote that Falcon provides security-with-abort in an honest-majority security setting. Can you explain how this can be done in comparison to the dishonest majority?

Security-with-abort means that, given an honest majority, FALCON can detect the malicious activity and abort the computation as a countermeasure. Falcon does not provide any security guarantee under a dishonest majority.

### On slide 5 you showed the addition of a bias. Can you please explain why the bias is needed, and is it always 1?

The bias (w0) can take any value during training, but the input it is multiplied with stays constant at 1. This is just a mathematical trick to make the bias part of the weight vector; the more intuitive mathematical representation would be x1 * w1 + x2 * w2 + ... + xn * wn + b (see the short sketch further below).

### Considering the two frameworks: is there a possibility to combine and extend the two approaches so that the gaps are closed and the scalability is increased?

One of them is a 2-party protocol and the other one is a 3-party protocol. They use different techniques and approaches to overcome the same computation problems of DNNs under MPC, so they cannot be combined easily.

### Regarding the QUOTIENT scheme, why is the quantization MPC-aware? How should this "MPC-aware" be interpreted?

The idea is to quantize using mathematical operations that are efficient in the multi-party computation setting. For normalization, Quotient divides by the next power of two instead of the closest power of two (see the sketch below).
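As a rough illustration of why a power-of-two normalizer is considered "MPC-aware" (this is a simplified sketch under our own assumptions, not code from QUOTIENT), the helpers below contrast the next power of two with the closest power of two. Dividing by a power of two is attractive under MPC because, on fixed-point shares, it reduces to a cheap shift/truncation instead of a full secure division.

```python
import math

def next_power_of_two(v: float) -> int:
    # Smallest power of two that is >= v (assumes v > 0).
    return 1 << math.ceil(math.log2(v))

def closest_power_of_two(v: float) -> int:
    # Power of two nearest to v (assumes v > 0).
    lower = 1 << math.floor(math.log2(v))
    upper = lower << 1
    return upper if (upper - v) < (v - lower) else lower

scale = 10.0                          # hypothetical normalization constant
print(next_power_of_two(scale))       # 16
print(closest_power_of_two(scale))    # 8

# Dividing a fixed-point encoded value by a power of two is just a shift,
# which avoids an expensive secure-division circuit inside the MPC protocol.
encoded = 400
print(encoded >> 4)                   # division by 16 -> 25
```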
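Returning to the bias question (slide 5) above: here is a minimal numeric sketch of the bias-as-weight trick. The input and weight values are arbitrary illustration values; the point is only that appending a constant input of 1 and treating the bias as w0 yields the same pre-activation value as adding b explicitly.

```python
x = [0.5, -1.2, 3.0]     # inputs x1..xn (arbitrary example values)
w = [0.8, 0.1, -0.4]     # weights w1..wn
b = 0.7                  # bias

# Explicit form: x1*w1 + ... + xn*wn + b
explicit = sum(xi * wi for xi, wi in zip(x, w)) + b

# Bias-as-weight form: prepend a constant input 1 and treat b as weight w0
x_aug = [1.0] + x
w_aug = [b] + w
augmented = sum(xi * wi for xi, wi in zip(x_aug, w_aug))

assert abs(explicit - augmented) < 1e-12
print(explicit, augmented)   # both print the same pre-activation value
```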
### On slide 5 you talked about two activation functions, Sigmoid and ReLU. What is the difference between the two of them? You just explained Sigmoid; do neurons just use this function? And why is it needed anyway? Why do we need to limit the output of the neuron with an activation function?

Unfortunately, there was not enough time to go into details. In principle, the activation function is, as far as I know, inspired by the behaviour of biological neurons. The intuitive idea behind it is that a neuron should only become active above a certain value of its inputs. The sigmoid function has the nice property that its output cannot exceed 1. This prevents the output values from growing without bound, so the output of a single neuron cannot become too dominant. However, experience has shown that ReLU, which lacks this property, leads to almost equivalent results and is easier to compute, and is therefore usually preferred.

### Slide 5 shows two activation functions, "Sigmoid" and "ReLU". Slide 8 shows a function "DReLU" (according to the audio, an activation function). 1) Is the DReLU function similar to (or functionally the same as) the ReLU activation function (my guess based on the name)? 2) Does that mean the activation function is limited to DReLU/ReLU, or is it also possible to use a Sigmoid-like activation function?

The DReLU function is also an activation function, just like the two activation functions seen on slide 5. Rectified Linear Unit: ReLU(x) = max(x, 0); Dynamic Rectified Linear Unit: DReLU(x) = max(α*β*x, x), where α ∈ {0,1} and β is a dynamic variable updated from the last error. Both are activation functions. The activation function of a DNN is not limited to DReLU or ReLU; there are alternatives, and the Sigmoid function is one of them (see the sketch below).
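As a small illustration of the activation functions discussed in the last two answers, here is a minimal Python sketch. Sigmoid and ReLU follow their standard definitions; the DReLU variant follows the max(α*β*x, x) formula given above, and the concrete values of α and β are arbitrary illustration values, not taken from the FALCON paper.

```python
import math

def sigmoid(x: float) -> float:
    # Output is bounded between 0 and 1, so a single neuron cannot dominate.
    return 1.0 / (1.0 + math.exp(-x))

def relu(x: float) -> float:
    # Rectified Linear Unit: cheap to compute, unbounded above.
    return max(x, 0.0)

def drelu(x: float, alpha: int, beta: float) -> float:
    # DReLU as written in the answer above: max(alpha * beta * x, x),
    # with alpha in {0, 1} and beta updated dynamically during training.
    return max(alpha * beta * x, x)

for x in (-2.0, -0.5, 0.0, 1.5):
    print(x, sigmoid(x), relu(x), drelu(x, alpha=1, beta=0.1))
```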
### Could you elaborate whether there exist complementary approaches for PP-NNT which are better suited for specific scenarios?

There are many general characteristics, with probably many exceptions. Here is a short overview:

- HE-based MPL: frameworks have a low communication overhead.
- GC-based MPL: frameworks are often used in scenarios with low computational complexity; high communication overhead.
- SS-based MPL: there are multiple variants with different characteristics, which differ with respect to the number of parties, computational complexity, communication rounds, and security assumptions.
- Mixed-protocol-based MPL: trade-off between computational complexity and communication complexity; expensive conversions.

### Suppose there is a scenario that requires the highest security but involves only two parties. Is it possible to run FALCON with two parties by emulating a third party, for example? Or would it then be better to use QUOTIENT?

If the highest security is needed, malicious behaviour has to be expected. In such a case another question comes up: who will emulate the third party? If we assume that the first party also emulates the third party, then the honest majority only holds as long as the first party behaves correctly. Moreover, since the party that emulates the third party holds two of the secret sharings and can compute the remaining party's shares, privacy is no longer guaranteed. In my opinion, this approach would formally give an honest majority, but it is generally not recommended since it breaks the underlying assumptions.

### I would like to ask you: are Falcon and Quotient already used for privacy-preserving neural network training in practice? If not, why not?

Due to the high communication and computation costs, secure multi-party computation approaches are only used in very few scenarios. Often, protocols adapted to a specific use case are used; these are either newly developed or created by modifying existing ones. We are not aware of any practical scenario in which Falcon or Quotient is currently used.

### As one user in FALCON is not necessarily trusted, could the protocol be run as well with only the trusted remaining two?

As long as the untrusted party does not act maliciously, the protocol runs fine for all parties. But FALCON is a 3-party protocol and requires exactly three parties to run.

### Slide "Multi-Party Computation": we have the secret number 208. How did we come up with 275 + 753 + 204 with additive sharing? What is the role of the mod in this scheme?

This is how additive sharing works. To split a secret number A into 3 additive shares, one chooses 2 random numbers B and C in the finite ring and computes the third share as D = (A - (B + C)) mod the ring size. The mod operation defines the upper bound of the ring used by the cryptographic scheme. In our example, the secret number is 208 and the additive shares are 275, 753, and 204; the ring has size 2^10 = 1024, so 208 = (275 + 753 + 204) mod 1024 (see the sketch below).
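To make the additive-sharing arithmetic above concrete, here is a minimal Python sketch (our own illustration, not code from either framework) that splits a secret into three shares modulo 2^10 and reconstructs it:

```python
import random

MODULUS = 2 ** 10   # ring size from the slide example (1024)

def share(secret: int, n: int = 3) -> list[int]:
    # Additive sharing: pick n-1 random shares; the last share is chosen so
    # that all shares sum to the secret modulo the ring size.
    shares = [random.randrange(MODULUS) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % MODULUS

# The concrete shares from the slide:
print(reconstruct([275, 753, 204]))   # 208, because 1232 mod 1024 = 208

# Fresh random shares of 208 also reconstruct to 208:
s = share(208)
print(s, reconstruct(s))
```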