Article Source
- Title: DISTRIBUTED SYSTEMS Concepts and Design Fifth Edition
- Authors: George Coulouris, Jean Dollimore, Tim Kindberg and Gordon Blair
Chubby
Chubby [Burrows 2006] is a crucial service at the heart of the Google infrastructure offering storage and coordination services for other infrastructure services, including GFS and Bigtable. Chubby is a multi-faceted service offering four distinct capabilities:
• It provides coarse-grained distributed locks to synchronize distributed activities in what is a large-scale, asynchronous environment.
• It provides a file system offering the reliable storage of small files (complementing the service offered by GFS).
• It can be used to support the election of a primary in a set of replicas (as needed for example by GFS, as discussed in Section 21.5.1 above [1]).
• It is used as a name service within Google.
At first sight, this might appear to contradict the overall design principle of simplicity (doing one thing and doing it well). As we unfold the design of Chubby, however, we will see that at its heart is one core service offering a solution to distributed consensus, and that the other facets emerge from this core service, which is optimized for the style of usage within Google.
We begin our study of Chubby by examining the interface it offers; then we look in detail at the architecture of a Chubby system and how this maps onto the physical architecture. We conclude the examination by looking in detail at the implementation of the consensus algorithm at the heart of Chubby, Paxos.
Chubby interface
Chubby provides an abstraction based on a file system, taking the view first promoted in Plan 9 [Pike et al. 1993] that every data object is a file. Files are organized into a hierarchical namespace using directory structures with names having the form of:
/ls/chubby_cell/directory_name/.../file_name
where /ls refers to the lock service, designating that this is part of the Chubby system, and chubby_cell is the name of a particular instance of a Chubby system (the term cell is used in Chubby to denote an instance of the system). This is followed by a series of directory names culminating in a file_name. A special name, /ls/local, is resolved to the most local cell relative to the calling application or service.
Chubby started off life as a lock service, and the intention was that everything would be a lock in the system. However, it quickly became apparent that it would be useful to associate (typically small) quantities of data with Chubby entities – we see an example of this below when we look at how Chubby is used in primary elections. Thus entities in Chubby share the functionality of locks and files; they can be used solely for locking, to store small quantities of data or to associate small quantities of data (effectively metadata) with locking operations.
A slightly simplified version of the API offered by Chubby is shown in Figure 21.10. Open and Close are standard operations, with Open taking a named file or directory and returning a Chubby handle to that entity. The client can specify various parameters associated with the Open, including declaring the intended usage (for example, for reading, writing or locking), and permissions checks are carried out at this stage using access control lists. Close simply relinquishes use of the handle. Delete is used to remove the file or directory (this operation fails if applied to a directory with children).
In the role of a file system, Chubby offers a small set of operations for whole-file reading and writing; these are single operations that return the complete data of the file and write the complete data of the file, respectively. This whole-file approach is adopted to discourage the creation of large files, as this is not the intended use of Chubby. The first operation, GetContentsAndStat, returns both the contents of the file and any metadata associated with the file (an associated operation, GetStat, returns just the metadata, and a ReadDir operation reads the names and metadata associated with the children of a directory). SetContents writes the contents of a file and SetACL provides a means to set access control list data. The reading and writing of whole files are atomic operations.
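To make the shape of this whole-file interface concrete, the following minimal sketch (in Python) shows how a client might read and update a file. The chubby_client module, its method names and its parameters are hypothetical illustrations of the style of the API, not Google's actual client library.

# Hypothetical sketch of the whole-file style of the Chubby interface;
# the module, method and parameter names below are assumptions.
import chubby_client

# Open names an entity in a cell and declares the intended usage up front;
# access control checks are performed at this point.
handle = chubby_client.open("/ls/example_cell/config/settings",
                            usage=("read", "write"))

# GetContentsAndStat returns the whole file plus its metadata in one call.
contents, metadata = handle.get_contents_and_stat()

# SetContents atomically replaces the entire contents of the file.
handle.set_contents(b"new configuration data")

# Close simply relinquishes use of the handle.
handle.close()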
In the role of a lock-management tool, the main operations provided are Acquire, TryAcquire and Release. Acquire and Release correspond to the operations of the same name introduced in Section 16.4 [1]; TryAcquire is a non-blocking variant of Acquire. Note that although locks are advisory in Chubby, an application or service must still go through the proper protocol of acquiring and releasing locks. The developers of Chubby did consider the alternative of mandatory locks, whereby locked data is inaccessible to all other users and this is enforced by the system, but the extra flexibility and resilience of advisory locks was preferred, leaving the responsibility of checking for conflicts to the programmer [Burrows 2006].
Should an application need to protect a file from concurrent access, it can use both the roles together, storing data in the file and acquiring locks before accessing this data.
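As a rough sketch of this pattern, using the same hypothetical client library as above, an application might bracket its whole-file operations with Acquire and Release; update() stands in for application-specific logic.

# Hypothetical sketch: guarding whole-file access with an advisory lock.
handle = chubby_client.open("/ls/example_cell/app/shared_state",
                            usage=("read", "write", "lock"))
handle.acquire()                       # blocks until the advisory lock is held
try:
    data, _ = handle.get_contents_and_stat()
    handle.set_contents(update(data))  # update() is application-specific
finally:
    handle.release()                   # advisory: cooperating clients must do this too
handle.close()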
Chubby can also be used to support a primary election in distributed systems – that is, the election of one replica as the primary in passive replication management (refer back to Sections 15.3 and 18.3.1 for discussions of election algorithms and passive replication, respectively). First, all candidate primaries attempt to acquire a lock associated with the election. Only one will succeed. This candidate becomes the primary, with all other candidates then being secondaries. The primary records its victory by writing its identity to the associated file, and other processes can then determine the identity of the primary by reading this data. As mentioned above, this is a key example of combining the roles of lock and file together for a useful purpose in a distributed system. This also shows how primary election can be implemented on top of a consensus service as an alternative to algorithms such as the ring-based approach or the bully algorithm introduced in Section 15.3.
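The election pattern just described might look roughly as follows with the hypothetical library used above; every candidate runs the same code, and my_identity, run_as_primary and run_as_secondary stand in for application-specific details.

# Hypothetical sketch of primary election using a Chubby lock plus file data.
handle = chubby_client.open("/ls/example_cell/service/primary_lock",
                            usage=("read", "write", "lock"))

if handle.try_acquire():               # non-blocking bid for the election lock
    # The winner records its victory by writing its identity to the file.
    handle.set_contents(my_identity.encode())
    run_as_primary()
else:
    # The losers read the file to discover the identity of the primary.
    primary_id, _ = handle.get_contents_and_stat()
    run_as_secondary(primary=primary_id.decode())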
Finally, Chubby supports a simple event mechanism enabling clients to register when opening a file to receive event messages concerning the file. More specifically, the client can subscribe to a range of events as an option in the Open call. The associated events are then delivered asynchronously via callbacks. Examples of events include that the file contents have been modified, a handle has become invalid, and so on.
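A subscription might be expressed roughly as below; the event names, the events parameter and the callback signature are assumptions chosen for illustration rather than the real interface.

# Hypothetical sketch of registering for Chubby events at Open time.
def on_event(event):
    # Events are delivered asynchronously via this callback.
    if event.type == "FILE_CONTENTS_MODIFIED":
        reload_configuration()         # application-specific reaction
    elif event.type == "HANDLE_INVALID":
        reopen_handle()                # application-specific recovery

handle = chubby_client.open("/ls/example_cell/config/settings",
                            usage=("read",),
                            events=("FILE_CONTENTS_MODIFIED", "HANDLE_INVALID"),
                            callback=on_event)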
Chubby is very much a cut-down file system programming interface compared to, for example, POSIX. Not only does Chubby require read and update operations to apply to whole files, but it also does not support operations to move files between directories, nor does it support symbolic or hard links. In addition, Chubby maintains only limited metadata (related to access control, versioning and a checksum to protect data integrity).
Chubby architecture
As mentioned above, a single instance of a Chubby system is known as a cell; each cell consists of a relatively small number of replicas (typically five) with one designated as the master. Client applications access this set of replicas via a Chubby library, which communicates with the remote servers using the RPC service described in Section 21.4.1. The replicas are placed at failure-independent sites to minimize the potential for correlated failures – for example, they will not be contained within the same rack. All replicas are typically contained within a given physical cluster, although this is not required for the correct operation of the protocol and experimental cells have been created that span Google data centres.
Each replica maintains a small database whose elements are entities in the Chubby namespace – that is, directories and files/locks. The consistency of the replicated database is achieved using an underlying consensus protocol (an implementation of Lamport’s Paxos algorithm [Lamport 1989, Lamport 1998]) that is based around maintaining operation logs (we look at the implementation of this protocol below). As logs can become very large over time, Chubby also supports the creation of snapshots – complete views of system state at a given point in time. Once a snapshot is taken, previous logs can be deleted, with the consistent state of the system at any point determined by the previous snapshot together with the application of the set of operations in the log. This overall structure is shown in Figure 21.11.
Figure 21.11
A Chubby session is a relationship between a client and a Chubby cell. This is maintained using KeepAlive handshakes between the two entities. To improve performance, the Chubby library implements client caching, storing file data, metadata and information on open handles. In contrast to GFS (with its large, sequential reads and appends), client caching is effective in Chubby with its small files that are likely to be accessed repeatedly. Because of this caching, the system must maintain consistency between a file and a cache as well as between the different replicas of the file. The required cache consistency in Chubby is achieved as follows. Whenever a mutation is to occur, the associated operation (for example, SetContents) is blocked until all associated caches are invalidated (for efficiency, the invalidation requests are piggybacked onto KeepAlive replies from the master with the replies sent immediately when an invalidation occurs). Cached data is also never updated directly.
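A minimal sketch of this invalidation step is given below; the session and master data structures and their method names are assumptions used to illustrate the flow, not Chubby's internal implementation.

# Hypothetical sketch of blocking a mutation until cached copies are invalidated.
def apply_mutation(master, file_name, new_contents):
    # Find the sessions of clients known to be caching this file.
    sessions = master.sessions_caching(file_name)
    for session in sessions:
        # The invalidation rides on the next KeepAlive reply to that client,
        # which is sent immediately rather than held back.
        session.piggyback_invalidation(file_name)
    # The mutation is blocked until every cache acknowledges the invalidation.
    master.wait_for_invalidation_acks(file_name, sessions)
    # Only then is the change applied; cached data is never updated in place.
    master.database.set_contents(file_name, new_contents)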
The end result is a very simple protocol for cache consistency that delivers deterministic semantics to Chubby clients. Contrast this with the client caching regime in NFS, for example, where mutations do not result in the immediate updating of cached copies, resulting in potentially different versions of files on different client nodes. It is also interesting to compare this with the cache consistency protocol in AFS, but we leave that as an exercise to the reader (see Exercise 21.7 [1]).
This determinism is important for many of the client applications and services that use Chubby – for example, Bigtable (as discussed in Section 21.5.3 [1]) uses Chubby to store access control lists and requires that updates to these lists are applied consistently, both across all replicas and with respect to cached copies. Note that it is this determinism that led to the use of Chubby as a name server within Google. We mentioned in Section 13.2.3 that the DNS allows naming data to become inconsistent. While this is tolerable in the Internet, the developers of the Google infrastructure preferred the more consistent view offered by Chubby, using Chubby files to maintain name-to-address mappings. Burrows [2006] discusses this use of Chubby as a name service in more detail.
Implementing Paxos
Paxos is a family of protocols providing distributed consensus (see Section 15.5 for a wider discussion of distributed consensus protocols). Consensus protocols operate over a set of replicas, with the goal of the servers managing those replicas reaching agreement on updating to a common value. This is achieved in an environment where:
- Replica servers may operate at an arbitrary speed and may fail (and subsequently recover).
- Replica servers have access to stable, persistent storage that survives crashes.
- Messages may be lost, reordered or duplicated. They are delivered without corruption but may take an arbitrarily long time to be delivered.
Paxos is therefore fundamentally a distributed consensus protocol for asynchronous systems (see Section 2.4.1 [1]) and indeed is the dominant offering in this space. The developers of Chubby stress that the above assumptions reflect the true nature of Internet-based systems such as Google and caution practitioners about consensus algorithms that make stronger assumptions (for example, algorithms for synchronous systems) [Burrows 2006].
Recall from Chapter 15 that it is impossible to guarantee consensus in asynchronous systems but that various techniques have been proposed to work around this result. Paxos works by ensuring safety (correctness) but not liveness – that is, Paxos is not guaranteed to terminate (we return to this issue below once we have looked at the details of the algorithm).
The algorithm was first introduced by Leslie Lamport in 1989 in a paper called 'The Part-Time Parliament' [Lamport 1989, Lamport 1998]. Inspired by his description of Byzantine Generals (as discussed in Section 15.5.1), he again presented the algorithm with reference to an analogy, this time referring to the behaviour of a mythical parliament on the Greek island of Paxos. Lamport writes amusingly about the reaction to this presentation on his web site [research.microsoft.com].
In the algorithm, any replica can submit a value with the goal of achieving consensus on a final value. In Chubby, agreement equates to all replicas having this value as the next entry in their update logs, thus achieving a consistent view of the logs across all sites. The algorithm is guaranteed to eventually achieve consensus if a majority of the replicas run for long enough with sufficient network stability. More formally, Kirsch and Amir [2008] present the following liveness properties for Paxos:
- Paxos-L1 (Progress): If there exists a stable majority set of servers, then if a server in the set initiates an update, some member of the set eventually executes the update.
- Paxos-L2 (Eventual Replication): If server s executes an update and there exists a set of servers containing s and r, and a time after which the set does not experience any communication or process failures, then r eventually executes the update.
The intuition here is that the algorithm cannot guarantee to make progress when the network behaves asynchronously but will eventually reach agreement when more synchronous (or stable) conditions are experienced.
The Paxos algorithm
The Paxos algorithm proceeds as follows:
- Step 1: The algorithm relies on an ability to elect a coordinator for a given consensus decision. Recognizing that coordinators can fail, a flexible election process is adopted that can result in multiple coordinators coexisting, old and new, with the goal of recognizing and rejecting messages from old coordinators. To identify the right coordinator, an ordering is given to coordinators through the attachment of a sequence number. Each replica maintains the highest sequence number seen so far and, if bidding to be a coordinator, will pick a higher unique number and broadcast this to all replicas in a propose message.
It is clearly important that the sequence number picked by a potential coordinator is indeed unique: two (or more) coordinators must not be able to pick the same value. Let us assume we have n replicas. A unique sequence number can be guaranteed if every replica is assigned a unique identifier, ir, between 0 and n–1, and then selects the smallest sequence number s that is larger than any sequence number seen so far such that s mod n = ir. For example, if there are 5 replicas and the replica with unique identifier 3 has last seen sequence number 20, then this replica will pick a sequence number of 23 for its next bid (a small sketch of this rule appears after the three steps below).
If a replica has not seen a higher bidder, it replies with a promise message indicating that it promises not to deal with other (that is, older) coordinators with lower sequence numbers; otherwise, it sends a negative acknowledgement indicating that it will not vote for this coordinator. Each promise message also contains the most recent value the sender has heard as a proposal for consensus; this value may be null if no other proposals have been observed. If a majority of promise messages are received, the bidding replica is elected as coordinator, with the majority of replicas supporting this coordinator known as the quorum.
- Step 2: The elected coordinator must select a value and subsequently send an accept message with this value to the associated quorum. If any of the promise messages contained a value, then the coordinator must pick the value associated with the highest sequence number among those it has received; otherwise, the coordinator is free to select its own value. Any member of the quorum that receives the accept message must accept the value and then acknowledge the acceptance. The coordinator waits, possibly indefinitely in the algorithm, for a majority of replicas to acknowledge the accept message.
- Step 3: If a majority of replicas do acknowledge, then a consensus has effectively been achieved. The coordinator then broadcasts a commit message to notify replicas of this agreement. If not, then the coordinator abandons the proposal and starts again.
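The sequence number rule from step 1 and the majority test used in steps 1 and 3 can be sketched as follows; the function names are illustrative only, and the example reproduces the numbers used in the text.

# Sketch of the unique sequence number rule and the quorum (majority) test.
def next_sequence_number(replica_id, num_replicas, highest_seen):
    # Smallest s greater than highest_seen with s mod num_replicas == replica_id;
    # since replica identifiers are distinct, no two replicas can pick the same s.
    s = highest_seen + 1
    while s % num_replicas != replica_id:
        s += 1
    return s

def is_quorum(acknowledgements, num_replicas):
    # A strict majority of replicas is required, both to elect a coordinator
    # (step 1) and to commit a proposed value (step 3).
    return len(acknowledgements) > num_replicas // 2

# Example from step 1: 5 replicas, replica 3, highest sequence number seen is 20.
assert next_sequence_number(3, 5, 20) == 23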
Note that the terminology above is that used by Google, for example in Chandra et al. [2007]. In the literature, descriptions of the protocol may use other terminology, for example based around the roles of proposers, acceptors, learners and so on.
Figure 21.12
In the absence of failure, consensus is therefore achieved with the message exchanges shown in Figure 21.12. The algorithm is also safe in the presence of failures – for example, the failure of a coordinator or of another replica, or problems with lost, reordered, duplicated or delayed messages, as discussed above. A proof of correctness is beyond the scope of this book but relies heavily on the ordering imposed by step 1 coupled with the fact that, because of the quorum mechanism, if two majorities have agreed on a proposed value there must be at least one replica in common that agreed to both (with five replicas, for example, any two majorities of three must share at least 3 + 3 – 5 = 1 member). The quorum mechanism also ensures correct behaviour if the network partitions, since at most one partition is able to construct a majority.
Returning to the issue of termination, it is possible for Paxos to fail to terminate if two proposers compete against each other and indefinitely outbid each other in terms of higher and higher sequence numbers. This is consistent with the impossibility result of Fischer et al. [1985] concerning absolute guarantees of consensus in asynchronous systems.
Additional implementation issues
In Chubby, it is not sufficient to reach agreement on a single value; there is a need to reach agreement on a sequence of values. In practice the algorithm must therefore be run repeatedly to agree the sequence of entries in the log. This is referred to as Multi-Paxos by Chandra et al. [2007]. In Multi-Paxos certain optimizations are possible, including the election of a coordinator for a (potentially long) period of time, thus avoiding repeated executions of step 1.
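As a rough sketch of the Multi-Paxos idea under these assumptions, each position in the log is decided by its own instance of the algorithm, with run_paxos_round() standing in for steps 2 and 3 executed by an already elected, stable coordinator.

# Hypothetical sketch: one Paxos instance per log position, reusing a stable
# coordinator so that step 1 is not repeated for every entry.
def append_to_log(coordinator, log, value):
    slot = len(log)                           # next unused log position
    agreed_value = coordinator.run_paxos_round(slot, value)
    log.append(agreed_value)                  # every replica applies the same entry
    return agreed_value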
The paper by Chandra et al. also discusses the engineering challenges of implementing Paxos in a real-world setting, and in particular in the complex distributed system setting offered by the Google infrastructure. In this entertaining and instructive paper, they discuss the challenges of moving from algorithmic description and formal proof to making the algorithm operate effectively as part of the Chubby system, including dealing with disk corruptions and other contextual events such as system upgrades. The paper emphasizes the importance of a stringent testing regime, especially for such key building blocks of fault-tolerant systems, resonating with the overall Google principle of extensive testing mentioned in Section 21.3.
References
[1] George Coulouris, Jean Dollimore, Tim Kindberg and Gordon Blair, DISTRIBUTED SYSTEMS Concepts and Design Fifth Edition, Pearson Education, Inc., 2012.