Presentation points for Chapter 13:
Transactions and Concurrency Control
To study the design of services whose long-lived objects are intended to be shared by multiple clients. To establish a model of a single-process server of long-lived objects. To set up the all-or-nothing and isolation properties of transactions within this model, so as to be able to study how they are maintained in the presence of concurrent clients and server failures. To study the three main approaches to concurrency control, all of which maintain the isolation property of transactions in the presence of their concurrent execution at a single server. Distributed transactions and the extension of the concurrency control methods to distributed services are left until Chapter 14.
Points to emphasize
In our model, a server provides operations on its objects which can be made long-lived by keeping copies in a recovery file. Mutual exclusion mechanisms can be used to make the operations atomic and the notify and wait operations can be used to synchronize the operations of different clients. The failure model for transactions assumes that servers may crash and that messages may be lost.
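The combination of mutual exclusion and wait/notify described above can be sketched in a few lines. This is a minimal illustration, not code from the text: the `Account` class and its operations are hypothetical, and Python's `threading.Condition` stands in for the monitor-style synchronisation discussed in the model.

```python
import threading

class Account:
    """Illustrative server object: a lock makes each operation atomic,
    and wait/notify synchronizes the operations of different clients."""
    def __init__(self, balance=0):
        self.balance = balance
        self.cond = threading.Condition()  # combines mutual exclusion with wait/notify

    def deposit(self, amount):
        with self.cond:                    # mutual exclusion: operation is atomic
            self.balance += amount
            self.cond.notify_all()         # wake any clients waiting for funds

    def withdraw(self, amount):
        with self.cond:
            while self.balance < amount:   # wait until another client deposits enough
                self.cond.wait()
            self.balance -= amount

acct = Account()
t = threading.Thread(target=acct.withdraw, args=(50,))
t.start()                                  # withdraw blocks until funds arrive
acct.deposit(100)
t.join()
print(acct.balance)  # 50
```

The point for students: this synchronisation makes individual operations atomic and lets clients coordinate, but it says nothing yet about grouping several operations into one all-or-nothing unit; that is what transactions add.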
A transaction consists of the execution of a sequence of client requests that access or update one or more of the objects. A transaction may commit or abort. Transactions have two crucial properties: the `all-or-nothing' property and the `isolation' property. The `all-or-nothing' property requires that the effects of transactions that commit are made permanent, and that when a transaction aborts, the result is the same as if it had never existed. Recovery is concerned with ensuring the `all-or-nothing' property, which has two aspects: failure-atomicity and durability. Failure-atomicity requires that the `all-or-nothing' property is maintained in the presence of server crashes and of concurrent transactions that may abort at any time; durability requires that the effects of committed transactions are saved in permanent storage.
Concurrency control deals with the isolation property by ensuring that the effects of concurrent transactions are isolated from one another. Serial execution of transactions maintains isolation, but doing one transaction at a time does not provide adequate performance, particularly for interactive applications. Serial equivalence is defined (on page 523) as an interleaved execution of transactions having the same effect as if the transactions had been performed one at a time. To achieve it, all pairs of conflicting operations of two transactions must be executed in the same order at all of the objects they both access. Strict executions ensure the isolation property in the presence of aborting transactions. Two-phase locking uses locks on objects to ensure serial equivalence of transactions. The operation conflict rules give us the lock compatibility tables; rules must be added for lock promotion. Strict two-phase locking prevents cascading aborts. Lock managers should use threads and synchronisation operations. Any locking scheme must address the issue of deadlock; deadlock detection is relatively simple at a single server.
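The definition of serial equivalence can be made concrete for students with a small checker: an interleaving is serially equivalent exactly when the precedence graph over conflicting operation pairs has no cycle. This is a sketch in my own notation (a history as a list of `(transaction, 'read'|'write', object)` triples), not code from the text.

```python
def serially_equivalent(history):
    """history: list of (txn, kind, obj) triples in execution order,
    where kind is 'read' or 'write'. Returns True iff the interleaving
    is serially equivalent, i.e. the conflict precedence graph is acyclic."""
    edges = {}
    for i, (t1, k1, o1) in enumerate(history):
        for (t2, k2, o2) in history[i + 1:]:
            # operations conflict if they touch the same object and
            # at least one of them is a write
            if t1 != t2 and o1 == o2 and 'write' in (k1, k2):
                edges.setdefault(t1, set()).add(t2)  # t1's operation precedes t2's

    # depth-first search for a cycle in the precedence graph
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {}
    def dfs(t):
        colour[t] = GREY
        for u in edges.get(t, ()):
            if colour.get(u, WHITE) == GREY:
                return False                         # back edge: cycle found
            if colour.get(u, WHITE) == WHITE and not dfs(u):
                return False
        colour[t] = BLACK
        return True
    return all(dfs(t) for t in {op[0] for op in history}
               if colour.get(t, WHITE) == WHITE)

# A lost-update interleaving: the conflicting pairs are ordered inconsistently
bad = [('T','read','x'), ('U','read','x'), ('T','write','x'), ('U','write','x')]
ok  = [('T','read','x'), ('T','write','x'), ('U','read','x'), ('U','write','x')]
print(serially_equivalent(bad), serially_equivalent(ok))  # False True
```

The `bad` history produces edges T→U and U→T (each transaction reads x before the other writes it), so no serial order can reproduce it; the `ok` history produces only T→U and is equivalent to running T then U.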
Optimistic concurrency control assumes that conflict is unlikely and allows concurrent transactions to perform their operations on tentative versions without checking for conflicts between them. Each transaction is validated before it is allowed to commit. If the validation fails, the transaction is aborted. Starvation is an associated problem. The validation rules are derived from operation conflict rules. These rules are simplified by allowing only one transaction at a time to validate and write its permanent objects. There are two alternatives: backward validation and forward validation.
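The shape of backward validation can be shown in a few lines. This is a simplified sketch under my own naming, omitting version management and the serialization of validation: the validating transaction's read set is compared with the write sets of transactions that committed while it was active.

```python
def backward_validate(read_set, overlapping_write_sets):
    """Backward validation (simplified sketch): transaction Tv passes only
    if its read set does not overlap the write set of any transaction that
    committed while Tv was in progress. Tv's own write set needs no check,
    because the already-committed transactions cannot have read it."""
    for ws in overlapping_write_sets:
        if read_set & ws:
            return False   # Tv read an object a committed transaction wrote: abort Tv
    return True

# Tv read {x, y}; a transaction that committed meanwhile wrote {y} -> Tv fails
print(backward_validate({'x', 'y'}, [{'y'}]))   # False
print(backward_validate({'x', 'y'}, [{'z'}]))   # True
```

Forward validation is the mirror image: the write set of the validating transaction is compared against the read sets of transactions that are still active, which gives the choice of aborting the validating transaction or one of the others.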
Timestamp ordering ensures serial equivalence by ordering transactions according to the times at which they start. Each request to perform an operation is validated for its conflicts with other operations on the same object. If the validation fails, the transaction is immediately aborted. Strict executions may be ensured by delaying Read operations. Deadlock cannot occur.
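The per-object checks of timestamp ordering can be sketched as follows. This is an illustrative simplification under hypothetical names: tentative versions, commitment, and the delaying of reads needed for strict executions are all omitted, leaving only the accept/abort rules.

```python
class TimestampedObject:
    """Basic timestamp-ordering checks for one object (simplified sketch).
    Transaction timestamps reflect the order in which transactions started."""
    def __init__(self):
        self.max_read_ts = 0   # highest timestamp of any transaction that read
        self.write_ts = 0      # timestamp of the transaction that last wrote

    def read(self, ts):
        # read rule: a transaction must not read a version written by a
        # transaction with a later timestamp
        if ts < self.write_ts:
            return False       # too late: abort immediately, no waiting
        self.max_read_ts = max(self.max_read_ts, ts)
        return True

    def write(self, ts):
        # write rule: a transaction must not overwrite an object already
        # read or written by a transaction with a later timestamp
        if ts < self.max_read_ts or ts < self.write_ts:
            return False       # too late: abort immediately
        self.write_ts = ts
        return True

obj = TimestampedObject()
assert obj.write(2)        # transaction with timestamp 2 writes: accepted
assert not obj.read(1)     # an older transaction arrives too late: aborted
assert obj.read(3)         # a younger transaction may read
```

Because a request that fails the check is aborted immediately rather than made to wait, no transaction ever holds resources while waiting for another, which is why deadlock cannot occur.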
The idea of serial equivalence is difficult. The section on conflicting operations (on page 524) should be emphasized.
Some of the ideas are quite subtle, in particular the interaction between two-phase locking and serial equivalence and the interaction between strict executions and recovery (pages 526-8). These ideas can be tested in Exercises 13.2-13.7.
Section 13.4 on locks may be taught without sections 13.6 and 13.7.
When discussing optimistic concurrency control, make sure students realize that parallel validation with a check on rule 3 (page 547) is an option which is used in distributed transactions in Chapter 14.
Motivate the idea that transactions are necessary, i.e. that a service interface cannot always provide atomic operations which correspond to all of a client's requirements for atomicity. In the Banking example, the provision of a `Transfer' operation might be useful, but would not remove the need for transactions in Figure 13.2. The ACID mnemonic (page 519) is a useful teaching aid.
It is important that students understand serial equivalence, strict executions and cascading aborts. Students could be asked to discuss solutions to Exercises 13.7, 13.11, 13.12 and 13.13.
The synchronisation between client operations in a server provides useful background for thinking about the implementation of locks. Exercises 13.3 and 13.4 address this issue.
The methods for concurrency control are based on those used in databases but they are valid whether the objects are permanent or not. To simplify the discussion of concurrency control, it is assumed that the high-level operations provided by a service can be translated into read and write operations on simple objects. The rules for the three approaches to concurrency control are derived from the operation conflict rules (Figure 13.9). It may help to discuss these ideas.
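When discussing how the three sets of rules derive from a common base, it may help to write the operation conflict rules down explicitly. The sketch below mirrors the rules the text summarizes in Figure 13.9; the function name is my own.

```python
def conflict(op1, op2):
    """Two operations on the same object conflict iff at least one of them
    is a write: only then does their relative order affect the outcome."""
    return 'write' in (op1, op2)

assert not conflict('read', 'read')    # reads commute: order is immaterial
assert conflict('read', 'write')       # the read's result depends on the order
assert conflict('write', 'read')
assert conflict('write', 'write')      # the final value depends on the order
```

The same table yields the lock compatibility rules (a read lock is shared, a write lock exclusive), the optimistic validation rules (compare read and write sets), and the timestamp-ordering checks.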