# The Decracy Consensus System

## Intro

Looking at the scaling solutions in the blockchain ecosystem today, most chains are looking to implement a layer-two system that increases the throughput of the chain so it can scale further. At Decracy we strongly believe that this capability should live at layer one, as the basis of a fair, globally scalable product used by millions of users around the globe. That is why the team decided on a DAG model for the chain structure and a custom Avalanche-like consensus system that, together with the additions the team has made, delivers a well-functioning layer one.

The Decracy consensus system is a new hybrid consensus system: an evolved member of the sampling consensus family, tailor-made to suit the needs of a global network. It is further enhanced with specific design decisions that make it drastically faster and well suited to a truly global, scalable system. Alongside the identification layer, it is the heart of the system, representing the ethos of Decracy to become a universally used value. The consensus system not only protects the network from malicious acts but also enables very fast confirmation of transactions in a secure way. Combined, these two properties give the network a very low complexity factor at the user-facing level, along with an extremely secure environment that users can trust and use seamlessly.

While sampling-like consensus mechanisms are already very fast and scale well with more participants, the Decracy model, with X-ID validators in charge of validating blocks, pushes throughput in transactions per second (TPS) even higher. Sampling-like models were created with a proof-of-stake system in mind, and most implementations still follow that design. The Decracy approach goes a step beyond and takes into consideration that stakers are never equal members unless they are X-ID validators. The weighting of the consensus voting is therefore [Amount]% in the hands of X-ID validators and the remaining [Amount]% in the hands of the stakers. This, together with the X-ID system and the reputation system itself, gives the network very high security and even higher throughput compared to vanilla Avalanche mechanisms.

Sampling consensus mechanisms rely on network sampling to reach decisions within the network. The logic follows a cascading effect among the consensus participants to achieve very fast decisions even on polarized networks. The execution flow starts when a new query is received and presented to a validator for verification. Upon arrival, the validator concurrently sends the query to its connected peers for verification; this is called sampling, and it propagates through the network in a cascading effect. By repeating this workflow for each participant a number of times, the network can decide on any query with high precision by tracking a confidence counter for that query. As soon as the confidence threshold is reached, the system presents the query as validated. This model scales with the number of participants and reflects a mechanism designed for a global-scale system.
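To make the execution flow concrete, here is a minimal sketch of the repeated-sampling loop with a confidence counter described above. It is an illustration only, not Decracy's implementation: the names (`Peer`, `SamplingParams`, `RunQuery`) and the sample size, quorum, and threshold values are hypothetical, since the actual protocol parameters are not specified in this document.

```go
package main

import (
	"fmt"
	"math/rand"
)

// Peer is the view a validator has of another participant: it can be asked
// for its current preference on a query (true = valid, false = invalid).
type Peer interface {
	Ask(queryID string) bool
}

// SamplingParams are illustrative tuning knobs; the real protocol parameters
// are not specified in this document.
type SamplingParams struct {
	SampleSize int // k: peers queried each round
	Quorum     int // alpha: matching replies needed for a round to count
	Threshold  int // beta: consecutive agreeing rounds needed to finalize
}

// RunQuery repeatedly samples random peers and tracks a confidence counter.
// Once the counter reaches the threshold, the query is presented as decided.
func RunQuery(queryID string, peers []Peer, p SamplingParams, rng *rand.Rand) bool {
	preference := true // the validator's own initial verification result
	confidence := 0

	for confidence < p.Threshold {
		// Sample k connected peers at random and collect their replies.
		yes := 0
		for i := 0; i < p.SampleSize; i++ {
			if peers[rng.Intn(len(peers))].Ask(queryID) {
				yes++
			}
		}

		// Did this round produce a supermajority for either answer?
		var roundResult bool
		switch {
		case yes >= p.Quorum:
			roundResult = true
		case p.SampleSize-yes >= p.Quorum:
			roundResult = false
		default:
			confidence = 0 // no clear supermajority: reset and sample again
			continue
		}

		if roundResult == preference {
			confidence++ // another round agreeing with our preference
		} else {
			preference = roundResult // the network leans the other way
			confidence = 1
		}
	}
	return preference
}

// honestPeer is a stub that always answers the same way, just to make the
// sketch runnable; real peers would run the same sampling procedure
// themselves, producing the cascading effect described above.
type honestPeer struct{ valid bool }

func (h honestPeer) Ask(string) bool { return h.valid }

func main() {
	rng := rand.New(rand.NewSource(1))
	peers := make([]Peer, 20)
	for i := range peers {
		peers[i] = honestPeer{valid: true}
	}
	p := SamplingParams{SampleSize: 5, Quorum: 4, Threshold: 3}
	fmt.Println("query validated:", RunQuery("tx-42", peers, p, rng))
}
```

The detail worth noting in the sketch is that a round without a clear supermajority resets the confidence counter; this is also the behaviour behind the "no liveness guarantee for conflicting transactions" property listed below.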
### Protocol Main Features

#### Efficient and scalable
The protocol is lightweight and can therefore scale while maintaining low latency.

#### Byzantine tolerance
It can tolerate a large percentage of Byzantine participants with no impact on safety. In particular, up to 50% of the nodes can be Byzantine, i.e., nodes that try to trick the network and keep it imbalanced. However, they will be unable to do this in a way that causes two nodes to decide on two different colors.

#### Egalitarian ecosystem
The sampling protocol gives rise to an egalitarian ecosystem, i.e., all nodes in the network are born equal. There are no miners and no special privileges granted to them.

#### No liveness guarantee for conflicting transactions
If an attacker tries to spend the same money twice in two different transactions (double-spending), the sampling protocol may not be able to decide between the two, causing this money to be lost. Classical consensus and Nakamoto protocols would have decided on one transaction or the other; the sampling protocol might not. This is a very interesting property of the protocol: it implicitly and naturally punishes bad actors without any additional complications to the protocol.

### Scaling

One common problem with a sampling consensus mechanism is the sheer amount of data it creates, especially considering the large number of transactions per second it can handle. A sampling consensus mechanism can consume terabytes of storage in a short period of time simply because it handles so much data. Systematic pruning of the data with checkpoints is the remedy for that problem: the chain itself is pruned after a given period, and all participants agree that the pruned chain is the only true state of the chain.

#### Epochs

Epochs are large spans of time in the chain. They are collections of event blocks and are used to measure participation and rewards in the system. Epochs are [Amount]-day periods during which the consensus mechanism actively stores all the data contained in that period.

#### Checkpoints

Checkpoints are activated at the end of each epoch and are pruning mechanisms that summarise the latest epoch. All participating validators agree on the chain as they see it, create a signature with all the private keys related to that epoch, and seal the epoch. This becomes a new starting point for the consensus mechanism and the network itself, as everything before it is considered valid. In any other system this alone would be considered problematic, but not in Decracy, where the main actors propagating the chain are the X-ID validators, which provide the highest level of security. Additionally, the pruning system does not discard the previous epoch outright but rather creates metadata for it and stores that as a smaller fingerprint. This can be described as the state of the system at each checkpoint.
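As a rough illustration of the epoch-sealing flow, the sketch below hashes an epoch's event blocks into a single fingerprint, collects one signature per participating validator, and then drops the full epoch data in favour of the checkpoint metadata. Everything here is hypothetical: the types (`EventBlock`, `Checkpoint`), the functions (`SealEpoch`, `Prune`), and the stand-in signers are illustrative only; the real X-ID signing scheme and checkpoint format are not described in this document.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// EventBlock stands in for whatever a DAG block carries during an epoch.
type EventBlock struct {
	ID      string
	Payload []byte
}

// Checkpoint is the metadata kept after pruning: a fingerprint of the epoch
// plus the validator signatures that sealed it.
type Checkpoint struct {
	Epoch      uint64
	StateRoot  [32]byte // digest over every block in the epoch, in order
	Signatures [][]byte // one entry per participating X-ID validator
}

// SealEpoch summarises an epoch into a checkpoint. Each signer represents a
// participating validator; real X-ID signatures are stubbed out here.
func SealEpoch(epoch uint64, blocks []EventBlock, signers []func([32]byte) []byte) Checkpoint {
	h := sha256.New()
	for _, b := range blocks {
		h.Write([]byte(b.ID))
		h.Write(b.Payload)
	}
	var root [32]byte
	copy(root[:], h.Sum(nil))

	cp := Checkpoint{Epoch: epoch, StateRoot: root}
	for _, sign := range signers {
		cp.Signatures = append(cp.Signatures, sign(root))
	}
	return cp
}

// Prune discards the full epoch data and keeps only the checkpoint metadata,
// which becomes the new starting point for the chain.
func Prune(blocks *[]EventBlock, cp Checkpoint) Checkpoint {
	*blocks = nil
	return cp
}

func main() {
	blocks := []EventBlock{
		{ID: "a", Payload: []byte("tx1")},
		{ID: "b", Payload: []byte("tx2")},
	}

	// Placeholder signers: a real validator would sign with its own X-ID key.
	signers := make([]func([32]byte) []byte, 3)
	for i := range signers {
		i := i
		signers[i] = func(d [32]byte) []byte {
			s := sha256.Sum256(append(d[:], byte(i)))
			return s[:]
		}
	}

	cp := SealEpoch(1, blocks, signers)
	cp = Prune(&blocks, cp)
	fmt.Println("epoch 1 sealed, state root:", hex.EncodeToString(cp.StateRoot[:]))
}
```

The intent of the sketch is only to show the shape of the mechanism: after `Prune`, a node keeps the state root and signatures, treats everything before the checkpoint as valid, and so the storage footprint stays bounded from one epoch to the next.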