# Discussion for: Tushar Chandra, Robert Griesemer, and Joshua Redstone, “Paxos Made Live: An Engineering Perspective” (invited talk, PODC 2007)
## Paper overview
At Google, engineers used the Paxos consensus algorithm to implement a fault-tolerant service that is used by several applications. The paper describes the gaps they found between the algorithm's theoretical description and real-world requirements, and shares the lessons they learned while bridging them.
The paper's key insight is that production systems have additional requirements that are not described in theoretical papers like "Paxos Made Simple", such as latency, throughput, adding/removing nodes, and handling disk corruption. While solutions to each individual problem have been studied, building a fault-tolerant system requires developers to understand results from the literature of several fields, implement them correctly in a single codebase, and test the result thoroughly.
The paper first introduces the system they built --- a fault-tolerant, replicated key-value store --- and its requirements. After reviewing Paxos, the authors explain the additional algorithms and mechanisms they implemented to ensure fault tolerance and performance, such as disk corruption detection, master leases, group membership, and snapshots. They also outline their software engineering practices, such as the use of a custom DSL for implementing the protocol, runtime consistency checks, and extensive testing methodologies. Finally, the paper presents performance benchmarks of the system.
## Discussion questions
### Q1
In section 4, the authors use slightly different terminology from "Paxos Made Simple" to overview the Paxos algorithm. How can we map these concepts to the terminology in "Paxos Made Simple"?
#### Discussion summary for Q1
The table below shows the mapping of the terminology of the two papers.
| Paxos Made Simple | Paxos Made Live | Note |
|:------------------|:----------------|:-----|
| Processes | Replicas | Each replica plays multiple roles. |
| Proposers | Coordinators | A coordinator is the equivalent of a proposer that has received "promise" messages from a majority of acceptors. |
| Prepare message | Propose message | |
| Accepted message | Acknowledge | |
| Ignore | Reject | "Paxos Made Live" uses the word "reject", which suggests an explicit rejection message, whereas in "Paxos Made Simple" the acceptor simply ignores the message. |
| Learner | Master coordinator | |
| Proposal number | Sequence number | |
Phase 1 in "Paxos Made Simple" corresponds to "Elect a replica to be the coordinator" in this paper (leader election). Phase 2 maps to the paper's remaining two steps: the coordinator broadcasts "accept" messages, and once it has received acknowledgements from a majority of replicas, the value is chosen and the coordinator broadcasts a "commit" message to notify the others.
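To make the phase mapping concrete, here is a minimal single-process sketch of one round of basic Paxos, written in "Paxos Made Simple" terminology with the "Paxos Made Live" names noted in comments. It is purely illustrative: all class and function names are our own, and networking, failures, and competing proposers are ignored.

```python
# Minimal sketch of one Paxos round (illustration only, not the paper's code).
class Acceptor:
    def __init__(self):
        self.promised_n = -1   # highest proposal/sequence number promised so far
        self.accepted = None   # (n, value) last accepted, if any

    def prepare(self, n):
        # Phase 1b: "promise" in both papers; ignoring ~ "reject" in Paxos Made Live.
        if n > self.promised_n:
            self.promised_n = n
            return ("promise", self.accepted)
        return ("ignore", None)

    def accept(self, n, value):
        # Phase 2b: "accepted" in Paxos Made Simple, "acknowledge" in Paxos Made Live.
        if n >= self.promised_n:
            self.promised_n = n
            self.accepted = (n, value)
            return "accepted"
        return "ignore"

def run_round(acceptors, n, value):
    """One proposer/coordinator round: Phase 1 (prepare), then Phase 2 (accept)."""
    majority = len(acceptors) // 2 + 1

    # Phase 1a: "prepare" request ("propose" message) -- try to become coordinator.
    promises = [a.prepare(n) for a in acceptors]
    granted = [prior for tag, prior in promises if tag == "promise"]
    if len(granted) < majority:
        return None   # a higher-numbered proposer/coordinator is active

    # If any acceptor already accepted a value, we must propose that value.
    already_accepted = [prior for prior in granted if prior is not None]
    if already_accepted:
        value = max(already_accepted)[1]

    # Phase 2a: broadcast the "accept" message with the chosen value.
    acks = [a.accept(n, value) for a in acceptors]
    if acks.count("accepted") >= majority:
        return value   # consensus reached; the paper then broadcasts "commit"
    return None

acceptors = [Acceptor() for _ in range(3)]
print(run_round(acceptors, n=1, value="v1"))   # -> v1
```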
### Q2
Section 6.2 of the paper describes a "database consistency check" and various
"inconsistency incidents" that have occurred. But using Paxos for state machine replication ought to provide strong consistency, so what's going on here?
#### Discussion summary for Q2
Using the Paxos algorithm alone is not sufficient to implement a strongly consistent database in practice. In the original Paxos papers, many implementation details were abstracted away or ignored, which is common when a protocol is first studied theoretically. When turning Paxos into code, however, there are implementation details and failure modes that one must consider. For example, the following can cause problems:
- Hardware failures
- Byzantine faults
- Disk failures or corruption
- Human errors (e.g., system operators may make mistakes)
So, the authors needed to add more logic and complexity to their implementation to handle these additional errors that were neglected in the original Paxos analysis.
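As a concrete illustration of why extra machinery is needed, below is a minimal sketch of a database consistency check in the spirit of section 6.2, assuming the replica state is a plain in-memory key-value map. Everything here (function names, the dict-based state, the majority-vote comparison) is our own simplification; the paper's mechanism is more involved (the master periodically initiates the check, and lagging replicas must be accounted for).

```python
# Hedged sketch of a periodic replica consistency check (illustration only).
import hashlib

def db_checksum(db):
    """Deterministic checksum over a key-value state, independent of insertion order."""
    h = hashlib.sha256()
    for key in sorted(db):
        h.update(repr((key, db[key])).encode())
    return h.hexdigest()

def divergent_replicas(replica_dbs):
    """Return names of replicas whose checksum disagrees with the most common one."""
    sums = {name: db_checksum(db) for name, db in replica_dbs.items()}
    values = list(sums.values())
    reference = max(set(values), key=values.count)   # majority checksum as reference
    return [name for name, s in sums.items() if s != reference]

# Hypothetical replica states: C has silently diverged (disk corruption, operator error, ...).
replicas = {
    "A": {"x": 1, "y": 2},
    "B": {"x": 1, "y": 2},
    "C": {"x": 1, "y": 3},
}
print(divergent_replicas(replicas))   # -> ['C']
```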
### Q3
*(Contributed by Sicong Huang)*
To specify the two state machines at the core of the algorithm, the authors designed a state machine specification language and built a compiler to translate such specifications into C++. I wonder whether the amount of complexity that comes with it is justified, and how they made sure the compiler is correct.
*(Lindsey adds:)* In your discussion about this, consider the pros and cons of designing a custom domain-specific language and compiler. Why do you think the Chubby developers chose to do this (as described in section 6.1 of the paper)? What are the benefits and drawbacks of their approach?
#### Discussion summary for Q3
- Pros
    - Decouples the implementation details from the algorithm itself (see the toy sketch after this list)
        - In particular, a DSL makes it easier to make fundamental changes to the algorithm later on
    - Expresses the algorithm in a cleaner syntax, improving readability
        - Better readability likely implies better maintainability
- Cons
    - Need to design a language and to build and test its compiler
    - Additional complexity to deal with when debugging (although the resulting segregation of protocol logic from plumbing can also be framed as a pro)
    - Learning curve: engineers working on the project must learn a new language and its tooling
        - This is especially costly when people join and leave the project frequently
    - Performance optimization may be more difficult
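To make these trade-offs concrete, here is a toy sketch of the DSL idea: the protocol's state machine is written as declarative data, and a tiny driver executes it, keeping the algorithm's structure separate from the surrounding plumbing. The paper's specification language and compiler are not public, so every state, event, and name below is hypothetical.

```python
# Toy illustration of specifying a protocol state machine declaratively
# (hypothetical states/events; not the paper's specification language).

# Coordinator-side sketch: (state, event) -> (action, next_state)
SPEC = {
    ("idle",      "start_round"):       ("send_propose", "electing"),
    ("electing",  "majority_promised"): ("send_accept",  "proposing"),
    ("electing",  "rejected"):          ("back_off",     "idle"),
    ("proposing", "majority_acked"):    ("send_commit",  "idle"),
    ("proposing", "rejected"):          ("back_off",     "idle"),
}

def run(spec, events, state="idle"):
    """Interpret the declarative spec; a real compiler would emit C++ instead."""
    for event in events:
        action, state = spec[(state, event)]
        print(f"{event:>18} -> {action} (now in state {state!r})")
    return state

run(SPEC, ["start_round", "majority_promised", "majority_acked"])
```

The same spec could be checked or visualized independently of the generated code, which is one way to read the paper's argument that the approach made the core algorithm easier to reason about and to change.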
### Q4
In section 6.3 of the paper, Chandra et al. write, "By their very nature, fault-tolerant systems try to mask problems. Thus they can mask bugs or configuration problems while insidiously lowering their own fault-tolerance." Give an example of how this might happen in Paxos or in another fault-tolerant system with which you are familiar (e.g., chain replication).
#### Discussion summary for Q4
Paxos can hide bugs or configuration errors precisely because it can tolerate faults. Imagine a Paxos-based fault-tolerant distributed system consisting of 3 nodes. We expect this system to tolerate a fault in one node, because 2 out of 3 replicas still form a majority and can reach consensus. Suppose that, due to a misconfiguration (e.g., a typo in a hostname) or a serious bug, one node was faulty to begin with. From a user's point of view the system is operating normally, yet it can no longer tolerate any additional fault and is therefore in a dangerous state: the system is "masking" the misconfiguration or bug. Developers need deliberate techniques, such as distributed logging or fault injection, to surface such problems and test the system properly.
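A small sketch of how this masking can look in practice, using entirely hypothetical node names and probes: a liveness-style check reports the service as healthy, while a quorum-margin check reveals that the group can no longer survive another failure.

```python
# Illustration of a masked fault (hypothetical nodes and probe; not from the paper).

def is_healthy(node):
    # Stand-in for a real probe (RPC ping, log-progress check, etc.).
    return node != "replica-3"   # replica-3 is silently broken, e.g., by a hostname typo

def quorum_margin(nodes):
    """How many additional failures the group can absorb while keeping a majority."""
    healthy = sum(is_healthy(n) for n in nodes)
    majority = len(nodes) // 2 + 1
    return healthy - majority

nodes = ["replica-1", "replica-2", "replica-3"]
margin = quorum_margin(nodes)
print("service answers requests:", margin >= 0)          # True -> looks fine to users
print("additional failures tolerated:", max(margin, 0))  # 0    -> actually fragile
```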
## Errata
Typos or other issues found in the paper:
[TODO for scribes]
## Other
Any interesting points or questions from the group discussion that didn't fit above:
Domain-specific languages for implementing consensus protocols:
- [DistAlgo](https://distalgo.cs.stonybrook.edu)
- [PSync: a partially synchronous language for fault-tolerant distributed algorithms](https://dl.acm.org/doi/10.1145/2837614.2837650)