however, registers and condition codes are not shared between threads (each thread has its own register context)
virtual address space: the heap and the stacks live in shared memory (each thread gets its own stack, but it is not protected from the other threads)
Ways to pass a thread argument (see the sketch after this list):
malloc/free: parent mallocs a struct, child frees it when done
pointer to stack: parent usually waits (joins) for the children so the data stays valid
cast of int: avoids allocation, but only one word-sized argument can be passed
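A minimal sketch of the three argument-passing styles using pthreads; the struct and worker names are made up for illustration:
```
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

typedef struct { int id; int niters; } targs_t;   /* hypothetical argument struct */

void *worker_malloc(void *vargp) {
    targs_t *args = (targs_t *)vargp;
    printf("malloc'd arg: id=%d\n", args->id);
    free(vargp);                        /* child frees what the parent malloc'd */
    return NULL;
}

void *worker_stack(void *vargp) {
    int id = *(int *)vargp;             /* points into the parent's stack frame */
    printf("stack arg: id=%d\n", id);
    return NULL;
}

void *worker_cast(void *vargp) {
    int id = (int)(intptr_t)vargp;      /* value smuggled inside the pointer itself */
    printf("cast arg: id=%d\n", id);
    return NULL;
}

int main(void) {
    pthread_t t1, t2, t3;

    targs_t *args = malloc(sizeof(targs_t));          /* 1. malloc/free */
    args->id = 1; args->niters = 10;
    pthread_create(&t1, NULL, worker_malloc, args);

    int stack_id = 2;                                 /* 2. pointer to stack */
    pthread_create(&t2, NULL, worker_stack, &stack_id);
    pthread_join(t2, NULL);             /* parent must wait before stack_id goes away */

    pthread_create(&t3, NULL, worker_cast, (void *)(intptr_t)3);  /* 3. cast of int */

    pthread_join(t1, NULL);
    pthread_join(t3, NULL);
    return 0;
}
```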
volatile: forces writes to go to memory (the value is not kept in a register) as soon as possible
sleep: calling sleep does not guarantee execution order between threads
mutex: boolean synchronization variable (locked/unlocked) enforcing mutually exclusive access to a critical section
a lock/unlock is about 1000 cycles, so the locking overhead can make the threaded version slower than the sequential one (a minimal locking sketch follows)
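A minimal sketch of mutual exclusion with a pthread mutex protecting a shared counter (counter, thread count, and iteration count are illustrative):
```
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define NITERS   100000

static long counter = 0;                            /* shared variable */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *count(void *arg) {
    for (int i = 0; i < NITERS; i++) {
        pthread_mutex_lock(&lock);                  /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);                /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, count, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    printf("counter = %ld\n", counter);             /* always NTHREADS * NITERS */
    return 0;
}
```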
Semaphore: non-negative global integer synchronization variable. Manipulated by P and V operations.
P(s): if s is nonzero, decrement s and return; otherwise suspend the thread until a V(s) restarts it
V(s): increment s; if any threads are blocked in a P operation, restart exactly one of them
#include <semaphore.h>
int sem_init(sem_t *s, int pshared, unsigned int val); /* s = val; pshared = 0 for threads */
int sem_wait(sem_t *s); /* P(s) */
int sem_post(sem_t *s); /* V(s) */
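The sbuf code further down uses capitalized CS:APP-style error-checking wrappers (Sem_init, P, V, Calloc, Free); a minimal sketch of what the semaphore wrappers amount to, assuming they simply exit on error:
```
#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>

void Sem_init(sem_t *s, int pshared, unsigned int val) {
    if (sem_init(s, pshared, val) < 0) {
        perror("sem_init"); exit(1);
    }
}

void P(sem_t *s) {                  /* wait / decrement */
    if (sem_wait(s) < 0) {
        perror("sem_wait"); exit(1);
    }
}

void V(sem_t *s) {                  /* post / increment */
    if (sem_post(s) < 0) {
        perror("sem_post"); exit(1);
    }
}
```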
Counting semaphore: keeps track of the state of a resource (how many units are available)
Binary semaphore: used to notify other threads of a condition
1-element buffer
Producers will contend with each other for the empty semaphore
Consumers will contend with each other for the full semaphore (see the sketch below)
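A minimal sketch of the 1-element buffer using two counting semaphores named empty and full, matching the note above (details are illustrative):
```
#include <semaphore.h>

static int buf;                 /* the single shared slot */
static sem_t empty;             /* counts empty slots: starts at 1 */
static sem_t full;              /* counts filled slots: starts at 0 */

void init(void) {
    sem_init(&empty, 0, 1);
    sem_init(&full, 0, 0);
}

void produce(int item) {
    sem_wait(&empty);           /* P(empty): wait until the slot is free */
    buf = item;
    sem_post(&full);            /* V(full): announce the item */
}

int consume(void) {
    sem_wait(&full);            /* P(full): wait until an item exists */
    int item = buf;
    sem_post(&empty);           /* V(empty): announce the free slot */
    return item;
}
```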
sbuf: Producer-Consumer on an n-element buffer
Circular buffer
rear: index of the most recently inserted element (equals front when the buffer is empty)
front: (index of the next element to remove − 1) mod n
/* Pseudocode; buf[], n, items, front, rear assumed declared elsewhere */
init(int v) { items = front = rear = 0; }
insert(int v) {
    if (items >= n) error();        /* buffer full */
    if (++rear >= n) rear = 0;      /* advance rear, wrapping mod n */
    buf[rear] = v;
    items++;
}
int remove() {
    if (items == 0) error();        /* buffer empty */
    if (++front >= n) front = 0;    /* advance front, wrapping mod n */
    int v = buf[front];
    items--;
    return v;
}
mutex: enforces mutually exclusive access to the buffer and counters
slots: counts the available slots in the buffer
items: counts the available items in the buffer
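A sketch of the sbuf_t structure that the functions below assume, modeled on CS:APP's sbuf but with a pthread mutex instead of a binary semaphore, to match the code as written:
```
#include <pthread.h>
#include <semaphore.h>

typedef struct {
    int *buf;                /* buffer array */
    int n;                   /* maximum number of slots */
    int front;               /* buf[(front+1)%n] is the first item */
    int rear;                /* buf[rear%n] is the last item */
    pthread_mutex_t mutex;   /* protects buf and the indices */
    sem_t slots;             /* counts available slots */
    sem_t items;             /* counts available items */
} sbuf_t;
```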
/* Create an empty, bounded, shared FIFO buffer with n slots */
void sbuf_init(sbuf_t *sp, int n) {
    sp->buf = Calloc(n, sizeof(int));
    sp->n = n;                             /* Buffer holds max of n items */
    sp->front = sp->rear = 0;              /* Empty buffer iff front == rear */
    pthread_mutex_init(&sp->mutex, NULL);  /* lock protecting the buffer */
    Sem_init(&sp->slots, 0, n);            /* Initially, buf has n empty slots */
    Sem_init(&sp->items, 0, 0);            /* Initially, buf has zero items */
}

/* Clean up buffer sp */
void sbuf_deinit(sbuf_t *sp) {
    Free(sp->buf);
}

/* Insert item onto the rear of shared buffer sp */
void sbuf_insert(sbuf_t *sp, int item) {
    P(&sp->slots);                         /* Wait for available slot */
    pthread_mutex_lock(&sp->mutex);        /* Lock the buffer */
    if (++sp->rear >= sp->n)               /* Increment index (mod n) */
        sp->rear = 0;
    sp->buf[sp->rear] = item;              /* Insert the item */
    pthread_mutex_unlock(&sp->mutex);      /* Unlock the buffer */
    V(&sp->items);                         /* Announce available item */
}

/* Remove and return the first item from buffer sp */
int sbuf_remove(sbuf_t *sp) {
    int item;
    P(&sp->items);                         /* Wait for available item */
    pthread_mutex_lock(&sp->mutex);        /* Lock the buffer */
    if (++sp->front >= sp->n)              /* Increment index (mod n) */
        sp->front = 0;
    item = sp->buf[sp->front];             /* Remove the item */
    pthread_mutex_unlock(&sp->mutex);      /* Unlock the buffer */
    V(&sp->slots);                         /* Announce available slot */
    return item;
}
Readers-writers: either one writer or any number of readers may access the shared object at a time
First readers-writers problem (favor readers): writers can starve if readers keep arriving
Second readers-writers problem (favor writers): readers can starve if writers keep arriving
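A minimal sketch of the favor-readers solution with two semaphores, in the usual CS:APP style; readcnt counts the active readers, and the first reader in / last reader out locks/unlocks w:
```
#include <semaphore.h>

static int readcnt = 0;        /* number of readers currently reading */
static sem_t mutex;            /* protects readcnt */
static sem_t w;                /* held by the writer or by the group of readers */

void rw_init(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&w, 0, 1);
}

void reader(void) {
    sem_wait(&mutex);
    readcnt++;
    if (readcnt == 1)          /* first reader in locks out writers */
        sem_wait(&w);
    sem_post(&mutex);

    /* ... read the shared object ... */

    sem_wait(&mutex);
    readcnt--;
    if (readcnt == 0)          /* last reader out lets writers back in */
        sem_post(&w);
    sem_post(&mutex);
}

void writer(void) {
    sem_wait(&w);
    /* ... write the shared object ... */
    sem_post(&w);
}
```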
FIFO queue: guarantees fairness (every request is handled in a timely manner)
Implementation: rwqueue.{h,c}
```
#include <semaphore.h>
#include <stdbool.h>

typedef struct TOK rw_token_t;   /* forward declaration so the queue can point to tokens */

/* Queue data structure */
typedef struct {
    sem_t mutex;          // Mutual exclusion
    int reading_count;    // Number of active readers
    int writing_count;    // Number of active writers
    // FIFO queue implemented as linked list with tail
    rw_token_t *head;
    rw_token_t *tail;
} rw_queue_t;

/* Represents an individual thread's position in the queue */
struct TOK {
    bool is_reader;
    sem_t enable;         // Enables access
    struct TOK *next;     // Allows chaining as linked list
};
```
Usage: a requesting thread appends a token to the queue and blocks on the token's enable semaphore until it is granted access; on release it enables the next waiter(s)
pthread wrappers:
Pthread_rwlock_rdlock(pthread_rwlock_t *rwlock)
Pthread_rwlock_wrlock(pthread_rwlock_t *rwlock)
Pthread_rwlock_unlock(pthread_rwlock_t *rwlock)
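A minimal usage sketch of the underlying pthread read-write lock API (the shared variable is made up for illustration):
```
#include <pthread.h>

static int shared_value = 0;
static pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;

int read_value(void) {
    pthread_rwlock_rdlock(&rwlock);   /* many readers may hold this at once */
    int v = shared_value;
    pthread_rwlock_unlock(&rwlock);
    return v;
}

void write_value(int v) {
    pthread_rwlock_wrlock(&rwlock);   /* writer gets exclusive access */
    shared_value = v;
    pthread_rwlock_unlock(&rwlock);
}
```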
Race: correctness depends on the ordering/interleaving of the threads
Deadlock: waiting for a condition that will never be true (can happen even within a single thread that locks an already-held, non-recursive lock); requires all four of the following conditions:
mutual exclusion
circular waiting
hold and wait
no pre-emption
Deadlock Example:
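A minimal sketch of the classic two-lock deadlock: two threads acquire the same two mutexes in opposite orders, satisfying all four conditions above (lock names are illustrative):
```
#include <pthread.h>

static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

void *thread1(void *arg) {
    pthread_mutex_lock(&a);     /* holds a ... */
    pthread_mutex_lock(&b);     /* ... and waits for b */
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
    return NULL;
}

void *thread2(void *arg) {
    pthread_mutex_lock(&b);     /* holds b ... */
    pthread_mutex_lock(&a);     /* ... and waits for a: circular wait -> deadlock */
    pthread_mutex_unlock(&a);
    pthread_mutex_unlock(&b);
    return NULL;
}
/* Fix: make every thread acquire the locks in the same global order (a before b). */
```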
Livelock: threads keep changing state but remain on a deadlock trajectory (no overall progress)
Starvation: one or more threads are temporarily unable to make progress while others proceed
thread-safe: always produces a correct result when called from multiple concurrent threads
Classes of thread-unsafe functions:
Class 1: Functions that do not protect shared variables
Class 2: Functions that keep state across multiple invocations (e.g., rand)
Class 3: Functions that return a pointer to a static variable
Class 4: Functions that call thread-unsafe functions
reentrant: the function accesses no shared variables when called by multiple threads (a subset of the thread-safe functions)
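A sketch of turning a Class 2 function into a reentrant one by passing the state in explicitly instead of keeping it in a static variable; the constants follow the familiar rand example:
```
/* Class 2 (thread-unsafe): hidden static state shared by all callers */
static unsigned int next_seed = 1;

int rand_unsafe(void) {
    next_seed = next_seed * 1103515245 + 12345;
    return (unsigned int)(next_seed / 65536) % 32768;
}

/* Reentrant version: the caller owns the state, so no variables are shared */
int rand_r_sketch(unsigned int *nextp) {
    *nextp = *nextp * 1103515245 + 12345;
    return (unsigned int)(*nextp / 65536) % 32768;
}
```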
All functions in the Standard C Library (at the back of your K&R text) are thread-safe, except for a handful that keep internal state or return pointers to static data (e.g., rand, ctime, localtime, gethostbyname).
Signals: concurrency within a single flow of execution
can occur at any point in program execution, unless the signal is blocked
the signal handler runs within the same thread (it must run to completion and then return to the regular program flow)
Threads: multiple flows of execution that can run in parallel
cheaper than processes
easy to share data between threads
easy to introduce subtle synchronization errors
Many library functions use lock-and-copy for thread safety; if a signal arrives while the library function holds its lock and the signal handler calls a function that needs the same lock, the program deadlocks.
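A sketch of that hazard, assuming SIGINT and printf for illustration; printf acquires an internal stdio lock, so whether this actually deadlocks depends on timing and the libc:
```
#include <signal.h>
#include <stdio.h>

void handler(int sig) {
    /* UNSAFE: printf is not async-signal-safe; if the interrupted code
       already holds the stdio lock, this call waits on it forever. */
    printf("caught signal %d\n", sig);
}

int main(void) {
    signal(SIGINT, handler);
    while (1)
        printf("working...\n");   /* handler may fire while this holds the lock */
    return 0;
}
```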