Race Conditions

  • When two or more processes execute concurrently (interleaving)
  • They share a modifiable resource
  • Execution outcome might depend on the order in which the shared resource is accessed and modified

Critical Sections (CS)

  • Code segments with race conditions are designated as critical sections
  • At any point in time, only one process can execute in the critical section
  • Properties of correct CS implementations
    • Mutual Exclusion: If a process is executing a CS, all other processes are prevented from entering the CS
    • Progress: If no process is in a CS, one waiting process should be granted access
    • Bounded Wait: After a process requests entry to a CS, there exists an upper bound on the number of times other processes can enter the CS before it
    • Independence: Processes not executing in a CS should never block other processes
  • Symptoms of incorrect synchronisation
    • Deadlock: all processes blocked
    • Livelock: usually related to deadlock-avoidance mechanisms; processes keep changing state to avoid deadlock, making no progress
    • Starvation: some processes are blocked forever

Assembly Level Implementation

  • Atomic instructions, e.g. TestAndSet Register, MemoryLocation
    • Loads the current content at MemoryLocation into Register
    • Stores 1 into MemoryLocation
    • Performed as a single atomic machine operation; nothing can execute between the two steps
  • This works, but employs busy waiting (keep checking until can enter), wasting CPU
  • Variants include: Compare & Exchange, Atomic Swap, Load Link / Store Conditional
void EnterCS(int* lock) { while (TestAndSet(lock) == 1); } // spin until the lock was 0
void ExitCS(int* lock)  { *lock = 0; }                     // release the lock

High Level Language Implementation

Peterson’s algorithm

// Process P0, mirror for P1
Want[0] = 1;
Turn = 1;
while (Want[1] && Turn == 1);
// CS
Want[0] = 0;
  • Disadvantages:
    • Busy waiting, wasting CPU
    • Low level: we want a higher-level programming construct
    • Needs modification for more than 2 processes
    • Not general: only does mutual exclusion, nothing else

High Level Abstraction

  • Semaphore: A generalised synchronisation mechanism
  • Only specifies behaviours, implementations can be different
  • Provides a way to block a number of processes (sleeping processes) AND a way to unblock (wake) one or more sleeping processes
  • How it works
    • A semaphore consists of an integer S and a list to keep track of waiting processes
    • S is initialised to the number of processes we want to allow at once
    • Wait(S): If S ≤ 0, block. Then, decrement S (aka P(), Down())
    • Signal(S): Increment S. Wake up one sleeping process (if any) (aka V(), Up())
    • Invariant: S_current = S_initial + #signal(S) − #wait(S) ≥ 0 (#wait() is the number of completed wait calls, #signal() the number of completed signal calls)
  • General semaphore: S ≥ 0, aka counting semaphore
  • Binary semaphore: S = 0 or 1
    • In all situations, a binary semaphore is sufficient: a general semaphore can be implemented using binary semaphores
  • Wait(S); /* CS */ Signal(S); — this usage of a semaphore is commonly known as a mutex (mutual exclusion)