## UC-Security

We refer to the real/ideal paradigm of [[Can04](https://eprint.iacr.org/2003/239.pdf), [Can00](https://eprint.iacr.org/2000/067.pdf)] to show how to analyze the security of a protocol. Two concepts must be formulated before defining security:

1. The real-life model of protocol execution
2. The ideal process for evaluating an ideal functionality

**The real-life model of protocol execution**. The real-life model consists of a protocol π and a collection of interacting computing elements: an adversary A, an environment Z, and several parties. Each of these elements runs on its own local input and makes independent random choices; we call these elements machines. The environment represents everything external to the protocol execution: it gives inputs to the protocol and receives outputs from it. The adversary A can control a subset of the parties and the communication network. The machines running the protocol and the adversary interact by exchanging messages and potentially corrupting parties, following a specified set of rules. The environment Z is notified whenever a party is corrupted.

An execution of π with adversary A and environment Z, on initial input z, starts by running Z on input z. From this point on, the machines take turns executing, and the execution continues until the environment halts. The final output of the execution is the output of the environment: a single bit indicating whether the environment believes it interacted with the real-life model for protocol π or with the ideal process. We denote the output of the environment Z when interacting with the adversary A and the parties running protocol π, on security parameter k, input z, and random input r, by $EXEC_{π,A,Z}(k, z, r)$.

**The ideal process for evaluating an ideal functionality F**. The ideal process involves an ideal functionality F, an ideal-process adversary S, an environment Z, and a set of dummy parties. The ideal functionality captures the desired functionality of the task and acts as a trusted party that can securely communicate with all participants of the protocol. It processes the inputs of all participants locally and provides them with the desired outputs. The adversary S can either send messages to the ideal functionality F or corrupt a party; both the environment Z and the ideal functionality F are informed of a party’s corruption.

An execution of the ideal functionality F with adversary S and environment Z, on initial input z, starts by running Z on input z. From this point on, the machines take turns executing, and the execution continues until the environment halts. The final output is again a single bit determined by the environment, indicating whether it believes it interacted with the real-life model or with the ideal process for F. We denote the output of the environment Z when interacting in the ideal process with adversary S and ideal functionality F, on security parameter k, input z, and random input r, by $IDEAL_{F,S,Z}(k, z, r)$.

Informally, privacy holds in the ideal process because a party learns only its own input and output; it cannot learn anything beyond that, and in particular remains unaware of the inputs of the other parties.
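Before stating the definition, here is a minimal, hypothetical Python sketch of the two experiments. All of the functions below (`real_execution`, `ideal_execution`, `environment`, `distinguishing_advantage`) are toy stand-ins rather than real cryptography; they only illustrate how the environment's single output bit defines $EXEC_{π,A,Z}$ and $IDEAL_{F,S,Z}$, and how security amounts to that bit being distributed almost identically in the two worlds.

```python
import secrets

def real_execution(k: int) -> dict:
    """Stand-in for running protocol π with adversary A: returns the view
    that the environment Z observes (inputs given, outputs received)."""
    return {"output": secrets.randbits(k)}

def ideal_execution(k: int) -> dict:
    """Stand-in for the ideal process: dummy parties forward their inputs
    to the trusted functionality F, and the simulator S produces the rest
    of the view on A's behalf."""
    return {"output": secrets.randbits(k)}

def environment(view: dict) -> int:
    """Z applies some efficient test to its view and outputs one bit:
    1 = 'this looks like the real protocol', 0 = 'the ideal process'.
    Here the test is a deliberately useless toy: the output's parity."""
    return view["output"] & 1

def distinguishing_advantage(k: int, trials: int = 10_000) -> float:
    """Estimate |Pr[Z outputs 1 in EXEC] - Pr[Z outputs 1 in IDEAL]|.
    Security requires this to be negligible in k for every efficient Z."""
    real = sum(environment(real_execution(k)) for _ in range(trials))
    ideal = sum(environment(ideal_execution(k)) for _ in range(trials))
    return abs(real - ideal) / trials

if __name__ == "__main__":
    print(f"estimated advantage: {distinguishing_advantage(k=128):.4f}")
```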
**Security**. The security of a protocol is analyzed by comparing the capabilities of an adversary in the real protocol execution to its capabilities in an ideal process that is secure by construction. We say that a protocol π securely realizes an ideal functionality F if, for every real-life adversary A, there exists an ideal-process adversary S such that no environment Z, on any input, can tell with more than negligible probability whether it is interacting with A and the parties running π in the real-life model, or with S and F in the ideal process.

:::success
Definition 1. We say that π UC-realizes F if for every real adversary A there exists an ideal adversary S such that for every environment Z we have
$$IDEAL_{F,S,Z} \stackrel{c}{≈} EXEC_{π,A,Z}$$
:::

In the real-life model there is no trusted party, so we consider a protocol π secure if it securely realizes an ideal functionality F. Since adversarial attacks cannot succeed in the ideal process, this guarantees that attacks on protocol executions in the real-life model will also fail.

<!-- **The Adversarial Model for Adaptive Security**. The adversary has the ability to adaptively corrupt parties during the computation. Once a party is corrupted, it sends its internal state to the adversary and follows the adversary’s instructions thereafter. The environment Z (and the ideal functionality F in the ideal process) are notified of the party’s corruption. If t of n signers are corrupted, then the ideal functionality outputs the verification result under the control of the adversary.

**The Adversarial Model for Proactive Corruption model**. The adversary also has the capability to decorrupt parties. When a party is decorrupted, it resumes the execution of the original protocol and no longer sends its state to the adversary. However, it’s important to note that the adversary retains knowledge of the complete internal state of the decorrupted party at the moment of decorruption. This model considers the possibility that parties are corrupted for a certain period of time only. Thus, honest parties may become corrupted throughout the computation (like in the adaptive adversarial model), but corrupted parties may also become honest. The proactive model makes sense in cases where the threat is an external adversary who may breach networks and break into services and devices, and secure computations are ongoing. When breaches are discovered, the systems are cleaned and the adversary loses control of some of the machines, making the parties honest again. The security guarantee is that the adversary can only learn what it derived from the local state of the machines that it corrupted, while they were corrupted. -->

**Rewinding Technique**. Rewinding is a technique used in security analysis to replay a computation or protocol from a previous state, typically to explore different branches of execution and recover information that was not initially revealed. The environment, the adversary, the ideal functionality, and the (dummy) parties are machines [[Can00](https://eprint.iacr.org/2000/067.pdf)] that can be rewound.

<!-- We will use the rewinding technique in two places in the simulation. More precisely, the simulator interacts with the environment and rewinds the environment to a previous state; the environment will not notice the rewinding. The simulator can
- extract the adversary’s secrets, so that the simulator can simulate the real or ideal process.
- simulate the adaptive corruption.
-->
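To make this concrete, the sketch below rewinds a toy interactive machine by snapshotting and restoring its entire state. `ToyMachine` and the rest of the code are hypothetical stand-ins, not part of any real framework; the point is only that both branches start from an identical state, and the machine keeps no trace of the abandoned branch.

```python
import copy
import secrets

class ToyMachine:
    """A toy interactive machine: internal state plus a random tape.
    (Hypothetical; it stands in for any machine in the execution.)"""

    def __init__(self, seed: bytes):
        self.state = {"round": 0, "seed": seed}

    def step(self, message: str) -> str:
        """Process one incoming message and produce a reply."""
        self.state["round"] += 1
        return f"reply to {message!r} at round {self.state['round']}"

def rewind_demo() -> None:
    machine = ToyMachine(seed=secrets.token_bytes(16))

    snapshot = copy.deepcopy(machine.state)   # capture the full state
    print(machine.step("challenge-1"))        # explore one branch

    machine.state = copy.deepcopy(snapshot)   # rewind to the snapshot
    print(machine.step("challenge-2"))        # explore another branch
    # The machine restarts from the identical state and keeps no record
    # that the first branch ever happened.

if __name__ == "__main__":
    rewind_demo()
```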
We do not delve deeply into the reasons why rewinding is permitted in simulation; instead, we give one example of how machines can be rewound in practice. Snapshots of a virtual machine (VM) can be captured at any point, which makes it easy to restore, and thus rewind, the VM. By restoring a snapshot, the VM returns to exactly the state it was in before, with no awareness that the rewinding occurred.

## References

[[Can04]](https://eprint.iacr.org/2003/239.pdf) Universally Composable Signature, Certification, and Authentication

[[Can00]](https://eprint.iacr.org/2000/067.pdf) Universally Composable Security: A New Paradigm for Cryptographic Protocols