
Concurrency is the act of multiple processes (or threads) executing at the same time. When multiple physical CPUs are available, the processes may execute in parallel. On a single CPU, concurrency may be achieved by time-sharing.
When concurrent processes access a common data area, the data must be protected from simultaneous change by two or more processes. Otherwise the updated area may be left in an inconsistent state.
Race condition
The situation where several processes access and manipulate shared data concurrently, so that the final value of the shared data depends on the order of execution. To prevent race conditions, concurrent processes must be synchronized: ensure that only one process at a time is manipulating the shared variable. Shared data access and manipulation should be performed atomically. An atomic operation is one that executes without interruption.
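To make the race concrete, here is a small Java sketch (the class name `RaceDemo` and the iteration counts are illustrative): two threads increment both a plain shared counter and an atomic one. The plain counter's read-modify-write is not atomic, so it routinely loses updates; the atomic counter does not.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
    static int unsafeCount = 0;                                   // plain shared variable
    static final AtomicInteger safeCount = new AtomicInteger(0);  // atomic shared variable

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount++;               // read-modify-write: NOT atomic
                safeCount.incrementAndGet(); // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        // safeCount is always 200000; unsafeCount is usually less (lost updates)
        System.out.println("unsafe = " + unsafeCount + ", safe = " + safeCount.get());
    }
}
```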
Critical section
A critical section is a segment of code that cannot be entered by a process while another process is executing a corresponding segment of the code.
**— When do we call a code segment a critical section?** A critical section is needed when multiple threads or processes require exclusive access to shared resources to prevent race conditions. We treat code as a critical section mainly when it accesses or modifies a shared resource (such as a variable, data structure, or file).
**— Where do critical section code segments appear?** Critical sections appear in multithreaded applications, operating systems, and database systems, wherever shared resources are accessed by multiple threads or processes, in order to ensure data consistency and prevent race conditions.
A solution to the Critical Section (CS) problem
Any solution to the critical section (CS) problem must satisfy the following requirements:
- Guarantee mutual exclusion: Only one process may be executing within the CS.
- Prevent lockout: A process not attempting to enter the CS must not prevent other processes from entering the CS.
- Prevent starvation: A process (or a group of processes) must not be able to repeatedly enter the CS while other processes are waiting to enter.
- Prevent deadlock: Multiple processes trying to enter the CS at the same time must not block each other indefinitely.
Process cooperation
In addition to the CS problem, where processes compete for access to a shared resource, many applications require the cooperation of processes to solve a common goal. A typical scenario is when one process produces data needed by another process. The producer process must be able to inform the waiting process whenever a new data item is available. In turn, the producer must wait for an acknowledgement from the consumer, indicating that the consumer is ready to accept the next data item.
Semaphores
The software solution to the CS problem has several major shortcomings:
- The solution works only for two processes. When three or more processes need to share a CS, a different solution must be developed.
- The solution is inefficient. While one process is in the CS, the other must wait by repeatedly testing and setting the synchronization variables. The waiting process wastes CPU and memory resources without accomplishing any useful work.
- The solution addresses only competition among processes. To address problems of process cooperation, entirely different solutions must be devised.
Basic principles of semaphores
Semaphores are general-purpose primitives that allow solving a variety of synchronization problems in a systematic manner.
A semaphore **S** is a non-negative integer variable that can be accessed using only two special operations, P and V.
- P(S): if S > 0, decrement S by 1, otherwise wait until S > 0
- V(S): increment S by 1
Classic Implementation (using busy-waiting)
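The busy-waiting version can be sketched in Java as follows (the class name `SpinSemaphore` is illustrative; a real semaphore lives in the kernel or a runtime library). P spins until it can atomically decrement a positive value; V simply increments.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Busy-waiting semaphore: P spins (wasting CPU) until S > 0.
class SpinSemaphore {
    private final AtomicInteger value;

    SpinSemaphore(int initial) { value = new AtomicInteger(initial); }

    void P() {
        while (true) {
            int v = value.get();
            // Atomically decrement only if the value we observed is positive.
            if (v > 0 && value.compareAndSet(v, v - 1)) return;
            Thread.onSpinWait(); // busy-wait hint (Java 9+)
        }
    }

    void V() {
        value.incrementAndGet();
    }
}
```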
Semaphores without Busy-Waiting
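To avoid busy-waiting, P can instead put the caller to sleep until a V makes the value positive again. A minimal Java sketch using the built-in monitor methods `wait`/`notify` (the class name `BlockingSemaphore` is illustrative):

```java
// Semaphore without busy-waiting: a waiting process sleeps instead of spinning.
class BlockingSemaphore {
    private int value;

    BlockingSemaphore(int initial) { value = initial; }

    synchronized void P() throws InterruptedException {
        while (value == 0) {
            wait();    // sleep until a V() wakes us up
        }
        value--;
    }

    synchronized void V() {
        value++;
        notify();      // wake one waiting process, if any
    }
}
```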
Types of Semaphores
- Binary Semaphore - Its value can be only 0 or 1. On some systems, binary semaphores are known as mutex locks, as they are locks that provide mutual exclusion.
- Counting Semaphore - Its value can range over an unrestricted domain. It is used to control access to a resource that has multiple instances.
Solving the Critical Section Problem using Semaphores
Semaphores can solve the Critical Section (CS) problem efficiently.
A binary semaphore, initialized to 1, ensures mutual exclusion among processes or threads. Only one process can enter the CS at a time.
- wait(mutex) blocks if another process is already in the CS.
- signal(mutex) allows other waiting processes to enter the CS.
This method guarantees mutual exclusion, progress, and bounded waiting.
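The wait/signal pattern maps directly onto Java's `java.util.concurrent.Semaphore`, where `acquire` plays the role of wait and `release` the role of signal (the class name `CsDemo` and the loop counts are illustrative):

```java
import java.util.concurrent.Semaphore;

public class CsDemo {
    static final Semaphore mutex = new Semaphore(1); // binary semaphore, initialized to 1
    static int sharedCounter = 0;

    static void increment() throws InterruptedException {
        mutex.acquire();       // wait(mutex): blocks if another thread is in the CS
        try {
            sharedCounter++;   // critical section
        } finally {
            mutex.release();   // signal(mutex): lets a waiting thread enter
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                try {
                    increment();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(sharedCounter); // mutual exclusion: no lost updates
    }
}
```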
Limitations of Semaphores
While semaphores are powerful, they have drawbacks:
- Code can become complex and error-prone.
- Deadlocks and priority inversion can occur.
- Hard to manage as the system grows.
This leads to the need for higher-level synchronization constructs like Monitors.
Monitors: A High-Level Synchronization Construct
A Monitor is a high-level synchronization mechanism that combines shared data, functions, and synchronization rules into a single unit.
In a monitor:
- Only one process can execute any function inside the monitor at a time (automatic mutual exclusion).
- Processes can wait inside the monitor if some condition is not true and get signaled when the condition becomes true.
Monitors internally use condition variables with two main operations:
- wait(): Process voluntarily leaves the monitor and waits.
- signal(): Wakes up one waiting process.
Thus, monitors make concurrency problems easier to solve: mutual exclusion is automatic, so explicit lock acquire/release calls are not scattered throughout the code.
A simple monitor for a bounded buffer:
- insert() waits if the buffer is full.
- remove() waits if the buffer is empty.
- Mutual exclusion is automatically enforced.
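In Java, `synchronized` methods provide the monitor's automatic mutual exclusion, and `wait()`/`notifyAll()` play the role of condition variables. A sketch of the bounded buffer (the class name `BoundedBuffer` is illustrative):

```java
// Bounded buffer in monitor style: synchronized methods give automatic
// mutual exclusion; wait()/notifyAll() act as the condition variables.
public class BoundedBuffer {
    private final int[] buf;
    private int count = 0, in = 0, out = 0;

    public BoundedBuffer(int capacity) { buf = new int[capacity]; }

    public synchronized void insert(int item) throws InterruptedException {
        while (count == buf.length) wait();  // wait while the buffer is full
        buf[in] = item;
        in = (in + 1) % buf.length;
        count++;
        notifyAll();                         // signal waiting consumers
    }

    public synchronized int remove() throws InterruptedException {
        while (count == 0) wait();           // wait while the buffer is empty
        int item = buf[out];
        out = (out + 1) % buf.length;
        count--;
        notifyAll();                         // signal waiting producers
        return item;
    }
}
```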
Hardware Instructions (Test-and-Set, Compare-and-Swap)
Sometimes, solving the Critical Section problem purely in software (with semaphores or monitors) is too slow or too complex at the operating system level.
So, modern CPUs provide special low-level instructions that achieve mutual exclusion quickly and safely, without extra software layers.
These special instructions work atomically - meaning they cannot be interrupted once they start.
The two most important ones are:
1. Test-and-Set (TS)
Test-and-Set is a basic hardware instruction used for building locks.
- It reads a value from a memory location (such as a lock variable),
- then sets it to a new value (usually true or 1).
- Both steps happen together, atomically (without any other process interfering).
- If the lock was false, the process acquires the lock.
- If the lock was already true, the process must wait or try again.
How to use Test-and-Set for locking:
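In Java, `AtomicBoolean.getAndSet` behaves like a test-and-set instruction: it atomically reads the old value and writes `true` in one step. A minimal spinlock sketch (the class name `TasLock` is illustrative):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Spinlock built on test-and-set: getAndSet(true) atomically reads the
// old lock value and sets the lock, in a single uninterruptible step.
public class TasLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Spin until getAndSet returns false, i.e. we observed the lock
        // free and claimed it atomically.
        while (locked.getAndSet(true)) {
            Thread.onSpinWait(); // busy-wait hint (Java 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```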
2. Compare-and-Swap (CAS)
Compare-and-Swap is a more powerful atomic instruction.
- It compares the actual value in a memory location with an expected value.
- If they match, it swaps in a new value.
- If they do not match, it leaves the memory unchanged (the swap fails).
How CAS helps in synchronization:
- Used in lock-free and wait-free programming.
- Very important for building high-performance concurrent systems (like atomic variables in Java: AtomicInteger uses CAS).
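As a concrete sketch, here is a lock-free counter built on `AtomicInteger.compareAndSet`, using the standard CAS retry loop (the class name `CasCounter` is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Lock-free counter: no locks, just a CAS retry loop.
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        while (true) {
            int expected = value.get();
            int updated  = expected + 1;
            // Succeeds only if no other thread changed value since we read it.
            if (value.compareAndSet(expected, updated)) {
                return updated;
            }
            // CAS failed: another thread won the race; re-read and retry.
        }
    }

    public int get() { return value.get(); }
}
```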
Conclusion
Concurrency introduces challenges like race conditions and deadlocks. We explored key solutions - semaphores, monitors, and hardware primitives - that ensure safe and efficient process synchronization.
Choosing the right synchronization technique is essential for building reliable, concurrent systems.