have overhead
each process has its own logical control flow, stack, local variables, and registers
can run concurrently and take advantage of multiple cores
child processes (process)
in a different virtual memory space (a separate copy of code, data, and kernel context)
very hard to share memory between processes
identified by process ID (PID)
more expensive than threads
multi-threaded program (thread)
in the same virtual memory space (same code, data, and kernel context)
easy to share memory between threads
identified by thread ID (TID)
less expensive than processes
thread-safety and race conditions need to be considered due to data sharing
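To make the memory-sharing difference concrete, here is a minimal C sketch (not from the original notes; `counter` and `thread_fn` are illustrative names): a forked child increments its own copy of a global, while a pthread increments the copy it shares with `main`.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int counter = 0;            /* lives in the data segment */

static void *thread_fn(void *arg) {
    counter++;                     /* same virtual memory: visible to main */
    return NULL;
}

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {                /* child process */
        counter++;                 /* modifies the child's own copy only */
        exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after fork:   counter = %d\n", counter);  /* still 0 */

    pthread_t tid;
    pthread_create(&tid, NULL, thread_fn, NULL);
    pthread_join(tid, NULL);
    printf("after thread: counter = %d\n", counter);  /* now 1 */
    return 0;
}
```

Compiled with `-pthread`, the first line prints 0 (the child's increment stayed in its own copy) and the second prints 1 (the thread's increment is visible to `main`).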
Answer: There are two critical sections.
Justify: Because `count1` and `count2` are shared, and their operations are not atomic (each increment can be broken down into load and store instructions), it is possible for two load instructions to interleave to produce the following pattern:
Thread 1: load count1 == 0
Thread 2: load count1 == 0
Thread 1: store count1 = count1 + 1 = 1
Thread 2: store count1 = count1 + 1 = 1
Therefore, where `count1 == 2` is expected, we may observe that `count1 == 1` holds instead. (`count2` follows the same logic.)
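The pattern above can be reproduced with a small sketch, assuming two threads that repeatedly increment the shared counters (the `worker` routine and the iteration count are illustrative assumptions):

```c
#include <pthread.h>
#include <stdio.h>

static volatile int count1 = 0, count2 = 0;   /* shared, unprotected */

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        count1++;   /* load, add, store: another thread can interleave here */
        count2++;
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* 2000000 is expected for each, but lost updates usually make it smaller. */
    printf("count1 = %d, count2 = %d\n", count1, count2);
    return 0;
}
```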
Synchronize: use a mutex or semaphores to synchronize the code by surrounding `count1++` and `count2++` with the chosen lock and unlock functions, as in the sketch below.
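A minimal sketch of the mutex option, assuming the same two-thread setup as above; `pthread_mutex_lock`/`pthread_mutex_unlock` play the role of the chosen lock and unlock functions:

```c
#include <pthread.h>
#include <stdio.h>

static int count1 = 0, count2 = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);     /* enter critical section */
        count1++;
        count2++;
        pthread_mutex_unlock(&lock);   /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("count1 = %d, count2 = %d\n", count1, count2);  /* 2000000 each */
    return 0;
}
```

A binary semaphore works the same way here, with `sem_wait`/`sem_post` (initialized to 1) taking the place of the lock and unlock calls.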
This is a livelock. Observe that the pot is actually being passed around, so apparent progress is made toward serving both Abi K and Rashmi (therefore it is not a deadlock). However, no actual progress is made (therefore it is not starvation), since neither of the two people ever gets their requested stew.
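For contrast with the mutex sketch above, here is an illustrative livelock pattern in C (not the stew scenario itself; the spoon locks and the `dine` routine are invented for illustration): each thread politely releases what it holds and retries, so the resources keep moving around, yet with symmetric timing neither thread ever finishes.

```c
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t spoon1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t spoon2 = PTHREAD_MUTEX_INITIALIZER;

static void dine(pthread_mutex_t *mine, pthread_mutex_t *theirs) {
    for (;;) {
        pthread_mutex_lock(mine);
        if (pthread_mutex_trylock(theirs) == 0) {
            /* got both: eat, then release and finish */
            pthread_mutex_unlock(theirs);
            pthread_mutex_unlock(mine);
            return;
        }
        pthread_mutex_unlock(mine);   /* be polite: give it back and retry */
        usleep(1000);                 /* if both back off in lockstep, no one eats */
    }
}

static void *diner1(void *arg) { dine(&spoon1, &spoon2); return NULL; }
static void *diner2(void *arg) { dine(&spoon2, &spoon1); return NULL; }

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, diner1, NULL);
    pthread_create(&t2, NULL, diner2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

In practice scheduling jitter may eventually let one thread through; the standard cure is to break the symmetry, for example with a randomized back-off.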