shubhangisingh453

Understanding Deadlocks and Threads in Operating Systems: Causes, Prevention, and Solutions



Operating systems (OS) are complex software systems that manage hardware resources and provide an interface for applications to run on a computer. One of the central challenges in building an operating system is managing the processes and threads that execute on it. Deadlocks and threads are two concepts that directly affect system performance and stability: a deadlock occurs when two or more processes or threads are blocked indefinitely, each waiting for the other to release a resource.


Threads are the smallest unit of execution that the OS can schedule, and they provide a way for applications to do work concurrently. In this blog, we will explore deadlocks and threads in detail, including their causes, prevention, and solutions. We will also examine techniques for preventing deadlocks and improving thread management in operating systems.


📙 Bakery Algorithm


The Bakery Algorithm is a mutual exclusion algorithm that allows multiple processes to access a shared resource one at a time. It was developed by Leslie Lamport in 1974 and is used in various operating systems and parallel processing systems. The Bakery Algorithm is a simple and efficient way to ensure that only one process executes in a critical section at a time. In this tutorial, we will discuss the Bakery Algorithm, its implementation, and its working.


Algorithm


The Bakery Algorithm is a simple mutual exclusion algorithm that uses a ticket-based mechanism. Each process that wishes to access the critical section is assigned a unique ticket number, and the process with the lowest ticket number is granted access to the critical section. If two or more processes have the same ticket number, then the process with the lowest process ID is given priority.


The Bakery Algorithm consists of two parts: the entry section and the exit section. The entry section is the part of the code that requests access to the critical section, and the exit section is the part of the code that releases access to the critical section.


Implementation


To implement the Bakery Algorithm, we need to use two variables: a ticket variable and a boolean variable for each process. The ticket variable is used to assign a unique ticket number to each process, and the boolean variable is used to indicate whether a process is currently waiting to enter the critical section.

Here is the pseudocode for the Bakery Algorithm:


int ticket[N];
bool choosing[N];

// The initialization function
void initialize() {
    for (int i = 0; i < N; i++) {
        ticket[i] = 0;
        choosing[i] = false;
    }
}

// The entry section function
void lock(int i) {
    choosing[i] = true;
    int max_ticket = 0;
    for (int j = 0; j < N; j++) {
        if (ticket[j] > max_ticket) {
            max_ticket = ticket[j];
        }
    }
    ticket[i] = max_ticket + 1;
    choosing[i] = false;
    for (int j = 0; j < N; j++) {
        // Wait while process j is still choosing its ticket.
        while (choosing[j]);
        // Wait while j holds a smaller (ticket, process ID) pair.
        while ((ticket[j] != 0) &&
               ((ticket[j] < ticket[i]) ||
                ((ticket[j] == ticket[i]) && (j < i))));
    }
}

// The exit section function
void unlock(int i) {
    ticket[i] = 0;
}

In the above pseudocode, N represents the number of processes that wish to access the critical section. The lock() function is called by a process when it wants to enter the critical section, and the unlock() function is called by the process when it wants to release access to the critical section.


Working


The Bakery Algorithm works in the following way:

  1. Each process sets its choosing variable to true to indicate that it wants to access the critical section.

  2. The process then reads the ticket numbers of all processes, takes the maximum, and adds one to it to obtain its own unique ticket number.

  3. The process sets its choosing variable to false to indicate that it is done choosing its ticket number.

  4. The process then waits until every process with a lower ticket number, or the same ticket number but a lower process ID, has finished with the critical section.

  5. When the process has finished executing in the critical section, it sets its ticket number to 0 to indicate that it is no longer accessing the critical section.
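The steps above can be sketched in Java. This is an illustrative sketch, not production code: the class name `BakeryLock` and the counter demo are our own additions, and `AtomicIntegerArray` is used for the `ticket` and `choosing` arrays because its volatile reads and writes supply the visibility guarantees the algorithm assumes.

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

public class BakeryLock {
    private final int n;
    private final AtomicIntegerArray ticket;
    private final AtomicIntegerArray choosing; // 0 = false, 1 = true

    public BakeryLock(int n) {
        this.n = n;
        this.ticket = new AtomicIntegerArray(n);
        this.choosing = new AtomicIntegerArray(n);
    }

    public void lock(int i) {
        choosing.set(i, 1);
        // Take a ticket one larger than any ticket currently held.
        int max = 0;
        for (int j = 0; j < n; j++) {
            max = Math.max(max, ticket.get(j));
        }
        ticket.set(i, max + 1);
        choosing.set(i, 0);
        for (int j = 0; j < n; j++) {
            // Wait while process j is still choosing its ticket.
            while (choosing.get(j) == 1) { /* spin */ }
            // Wait while j holds a smaller (ticket, process ID) pair.
            while (ticket.get(j) != 0
                    && (ticket.get(j) < ticket.get(i)
                        || (ticket.get(j) == ticket.get(i) && j < i))) {
                /* spin */
            }
        }
    }

    public void unlock(int i) {
        ticket.set(i, 0); // 0 means "not requesting the critical section"
    }

    static int counter = 0;

    // Demo: 3 threads each increment a shared counter 1000 times
    // under the bakery lock, so the final value must be 3000.
    public static void main(String[] args) throws InterruptedException {
        int n = 3;
        BakeryLock lock = new BakeryLock(n);
        Thread[] threads = new Thread[n];
        for (int id = 0; id < n; id++) {
            final int i = id;
            threads[i] = new Thread(() -> {
                for (int k = 0; k < 1000; k++) {
                    lock.lock(i);
                    counter++; // critical section
                    lock.unlock(i);
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter);
    }
}
```

The spin loops waste CPU while waiting, which is why real systems prefer OS-provided locks; the bakery algorithm's value is that it needs no special atomic instructions, only reads and writes.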

Example


Let's say we have three processes, P1, P2, and P3, and they all wish to access a critical section. Here is how the Bakery Algorithm would work:

  1. Process P1 sets its choosing variable to true, reads the current maximum ticket number (0, since no process holds a ticket), and takes ticket number 1.

  2. Process P1 sets its choosing variable to false. No other process holds a ticket, so P1 enters the critical section.

  3. Process P2 sets its choosing variable to true and takes ticket number 2 (one more than P1's ticket of 1).

  4. Process P2 sets its choosing variable to false and waits until P1 has finished executing in the critical section.

  5. Process P3 sets its choosing variable to true and takes ticket number 3 (one more than P2's ticket of 2).

  6. Process P3 sets its choosing variable to false and waits until P1 and P2 have finished executing in the critical section.

  7. Process P1 finishes executing in the critical section and resets its ticket number to 0.

  8. Process P2 now holds the lowest nonzero ticket number and enters the critical section.

  9. Process P2 finishes executing in the critical section and resets its ticket number to 0.

  10. Process P3 now holds the lowest nonzero ticket number and enters the critical section.

  11. Process P3 finishes executing in the critical section and resets its ticket number to 0.

📙 Thread Pool


A thread pool is a technique used in computer programming to manage a group of threads that are available to perform a set of tasks. It allows for the creation of a fixed number of threads that can be reused to execute multiple tasks, improving the efficiency of the application. In this tutorial, we will discuss the thread pool, its implementation, and its working.


The Thread Pool


A thread pool is a collection of pre-initialized threads that are available to perform a set of tasks. Instead of creating and destroying threads every time a new task needs to be performed, a thread pool can reuse threads, reducing the overhead associated with thread creation and destruction.


A thread pool consists of a pool of threads, a task queue, and a mechanism for controlling the number of threads in the pool. When a task is submitted to the thread pool, it is added to the task queue. A thread from the pool picks up the task from the queue and executes it. When the task is completed, the thread returns to the pool and becomes available to perform another task.


Implementation in Java


To implement a thread pool, we need to create a pool of threads and a task queue. We also need a mechanism for controlling the number of threads in the pool. The following code snippet shows an example of how to create a thread pool in Java:


import java.util.concurrent.*;

public class ThreadPoolExample {
    public static void main(String[] args) {
        // Create a thread pool with 10 threads
        ExecutorService executor = Executors.newFixedThreadPool(10);

        // Submit tasks to the thread pool
        for (int i = 0; i < 20; i++) {
            executor.submit(new Task());
        }

        // Shutdown the thread pool
        executor.shutdown();
    }
}

class Task implements Runnable {
    public void run() {
        // Perform the task here
    }
}

In the above code, we create a thread pool with 10 threads using the Executors.newFixedThreadPool() method. We then submit 20 tasks to the thread pool using the executor.submit() method. Finally, we shut down the thread pool using the executor.shutdown() method.


Working


When a task is submitted to the thread pool, it is added to the task queue. A thread from the pool picks up the task from the queue and executes it. If all the threads in the pool are busy, the task is placed in the queue until a thread becomes available.

When a thread completes a task, it returns to the pool and becomes available to perform another task. If a thread is idle for a specified amount of time, it may be terminated to reduce resource usage. The number of threads in the pool can be adjusted dynamically to match the workload.
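The idle-timeout and dynamic-sizing behaviour described above can be configured explicitly with `ThreadPoolExecutor`, the class behind the `Executors` factory methods. The pool sizes and timeout below are illustrative choices, not recommendations:

```java
import java.util.concurrent.*;

public class TunedPoolExample {
    public static void main(String[] args) throws InterruptedException {
        // A core of 2 threads; extra threads (up to 4 total) are created
        // only when the task queue is full, and idle non-core threads are
        // terminated after 30 seconds to reduce resource usage.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2,                              // corePoolSize
                4,                              // maximumPoolSize
                30, TimeUnit.SECONDS,           // keep-alive for idle threads
                new LinkedBlockingQueue<>(10)); // bounded task queue

        // Submit 8 tasks; with 2 core threads, most of them queue up.
        for (int i = 0; i < 8; i++) {
            final int id = i;
            executor.submit(() -> System.out.println("task " + id
                    + " on " + Thread.currentThread().getName()));
        }

        // Stop accepting tasks and wait for the queued ones to drain.
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

Note the interaction between the queue and the maximum pool size: a `ThreadPoolExecutor` grows beyond its core size only when the queue is full, so an unbounded queue effectively pins the pool at its core size.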


Example


Let's say we have a web server that needs to handle multiple requests concurrently. Instead of creating a new thread for each request, we can use a thread pool to handle the requests more efficiently.

We create a thread pool with a fixed number of threads, let's say 10. When a request comes in, we submit it to the thread pool, and one of the available threads picks up the request and handles it. If all the threads are busy, the request is placed in the queue until a thread becomes available.

When a thread completes a request, it returns to the pool and becomes available to handle another request. If the workload increases, we can increase the number of threads in the pool dynamically to handle the increased load. When the workload decreases, we can reduce the number of threads to conserve resources.
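The web-server scenario can be sketched with `Callable` tasks standing in for requests; the `handle()` method here is a hypothetical placeholder for real request-handling logic:

```java
import java.util.*;
import java.util.concurrent.*;

public class RequestPoolExample {
    // Hypothetical handler: a real server would parse the request
    // and build an HTTP response here.
    static String handle(int requestId) {
        return "response for request " + requestId;
    }

    public static void main(String[] args) throws Exception {
        // A fixed pool of 10 threads serves all incoming requests.
        ExecutorService pool = Executors.newFixedThreadPool(10);
        List<Future<String>> responses = new ArrayList<>();

        // Submit 25 "requests"; with only 10 threads, the rest wait
        // in the task queue until a thread becomes available.
        for (int i = 0; i < 25; i++) {
            final int id = i;
            responses.add(pool.submit(() -> handle(id)));
        }

        // Future.get() blocks until the corresponding task has finished.
        for (Future<String> f : responses) {
            System.out.println(f.get());
        }
        pool.shutdown();
    }
}
```

Returning a `Future` per request is what lets the caller decouple submission from completion: the server thread that accepted the connection can move on while the pool does the work.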


Advantages and Disadvantages


The thread pool has several advantages over creating and destroying threads for each task:

  1. Efficiency: Thread creation and destruction are expensive operations. By reusing threads, the thread pool reduces the overhead associated with these operations.

  2. Scalability: The thread pool can dynamically adjust the number of threads in the pool to match the workload, improving the scalability of the application.

  3. Resource management: The thread pool can manage the resources used by the threads, such as memory and CPU usage.

However, the thread pool also has some disadvantages:

  1. Complexity: Implementing a thread pool can be complex, especially when dealing with shared resources and synchronization issues.

  2. Overhead: The thread pool has some overhead associated with maintaining the pool of threads and the task queue.

In conclusion, the thread pool is a powerful technique for managing a group of threads that are available to perform a set of tasks. It can improve the efficiency and scalability of the application by reusing threads and managing resources. However, implementing a thread pool can be complex, and there is some overhead associated with maintaining the pool of threads and the task queue.


Thanks for reading, and happy coding!


