
Understanding System Calls and Threads in Operating System: A Comprehensive Guide



📘 System Calls and How They Work


In an operating system, a system call is a mechanism used by a process to request a service from the operating system kernel. System calls provide a way for user-level processes to interact with the low-level functions provided by the operating system, such as file I/O, network communication, and process management. In this tutorial, we will discuss system calls in detail, including their types, implementation, and common examples.


📘 Types of System Calls


There are five main categories of system calls; a short example that exercises several of them follows the list:

  1. Process Control: These system calls are used to manage processes, such as creating or terminating a process, changing process priority, and waiting for a process to finish.

  2. File Management: These system calls are used to manage files and directories, such as opening, reading, writing, and closing files, and creating or deleting directories.

  3. Device Management: These system calls are used to manage input/output (I/O) devices, such as printers, keyboards, and disk drives. Examples include initializing a device, reading or writing data to a device, and controlling device operations.

  4. Information Maintenance: These system calls are used to gather or manipulate system information, such as system time, process status, or system configuration.

  5. Communication: These system calls are used for inter-process communication (IPC) or network communication, such as sending or receiving messages or establishing network connections.
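To make the categories concrete, here is a minimal C sketch (POSIX assumed; the directory name demo_dir is an arbitrary choice) that issues one representative call from several of the categories above:

    #include <stdio.h>
    #include <unistd.h>     /* getpid, pipe, close */
    #include <sys/stat.h>   /* mkdir */
    #include <time.h>       /* time */

    int main(void)
    {
        /* Process control / information maintenance: ask the kernel for our pid. */
        printf("pid: %d\n", (int)getpid());

        /* File management: create a directory (error silently ignored if it exists). */
        mkdir("demo_dir", 0755);

        /* Information maintenance: current time, as reported by the kernel. */
        printf("time: %ld\n", (long)time(NULL));

        /* Communication: a pipe is the simplest IPC channel. */
        int fds[2];
        if (pipe(fds) == 0) {
            close(fds[0]);
            close(fds[1]);
        }

        /* Device management calls such as ioctl() are omitted here,
           since they need a real device to operate on. */
        return 0;
    }
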

📘 Implementation of System Calls


System calls are typically implemented as a software interrupt or trap instruction. When a process invokes a system call, the processor transitions from user mode to kernel mode, and the call is handled by the operating system kernel. The kernel performs the requested operation and returns control to the calling process.
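
On Linux, for example, the C library exposes a generic syscall() wrapper that issues this trap directly. The sketch below (assuming Linux with glibc) obtains the process ID both through the ordinary getpid() wrapper and through syscall(); both paths end up executing the same kernel code:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>        /* getpid, syscall */
    #include <sys/syscall.h>   /* SYS_getpid */

    int main(void)
    {
        /* The ordinary library wrapper: internally it performs the trap. */
        pid_t via_wrapper = getpid();

        /* The generic, Linux-specific syscall() interface: same trap, invoked by number. */
        long via_trap = syscall(SYS_getpid);

        printf("getpid(): %d, syscall(SYS_getpid): %ld\n",
               (int)via_wrapper, via_trap);
        return 0;
    }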


Examples of System Calls:

  1. fork(): This system call creates a new process by duplicating the calling process. The new process, known as the child process, is an exact copy of the calling process, known as the parent process.

  2. open(): This system call opens a file for reading or writing, creating it first if necessary. The file is identified by its pathname, and the call returns a file descriptor that subsequent calls such as read() and write() use to refer to the file.

  3. read(): This system call reads data from a file into a buffer.

  4. write(): This system call writes data from a buffer to a file.

  5. exec(): This system call replaces the current process image with a new process image. It is commonly used, together with fork(), to run a new program; the combined sketch after this list shows the pattern.
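
A short sketch combining several of these calls (POSIX assumed; /bin/ls and the log file name demo.log are arbitrary choices): the parent fork()s a child, the child exec()s a new program, and the parent open()s a file, write()s a log line, and waits for the child.

    #include <stdio.h>
    #include <unistd.h>     /* fork, execl, write, close */
    #include <fcntl.h>      /* open */
    #include <sys/wait.h>   /* waitpid */

    int main(void)
    {
        pid_t pid = fork();                          /* process control */
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            /* Child: replace this process image with /bin/ls. */
            execl("/bin/ls", "ls", "-l", (char *)NULL);
            perror("execl");                         /* reached only if exec fails */
            _exit(127);
        }

        /* Parent: log the child's pid, then wait for it to finish. */
        int fd = open("demo.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd >= 0) {
            char line[64];
            int len = snprintf(line, sizeof line, "spawned child %d\n", (int)pid);
            write(fd, line, (size_t)len);
            close(fd);
        }
        waitpid(pid, NULL, 0);
        return 0;
    }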

📘 Working of System Calls


System calls are the mechanism by which processes request services from the operating system kernel. In this section, we walk through what happens, step by step, when a process makes a system call.

  1. User-Level Code: The process starts in user mode, a restricted mode of operation that limits its access to system resources. Code running in user mode cannot touch kernel data structures or hardware directly.

  2. System Call Request: When a process needs to perform a privileged operation or access a system resource, it requests the operation through a system call. A system call request is initiated by executing a specific instruction, which triggers a software interrupt or trap.

  3. Interrupt/Trap Handler: When a system call request is made, the processor switches from user mode to kernel mode, and the interrupt or trap handler is invoked. The handler is a small piece of code that runs in kernel mode and is responsible for executing the requested operation.

  4. Checking Parameters: The handler first checks the system call parameters to ensure they are valid and authorized. For example, if the system call requests access to a file, the handler checks if the file exists and if the process has permission to access it.

  5. Execution of System Call: Once the parameters are validated, the handler executes the requested system call. The implementation can be complex, involving multiple kernel functions and data structures. For example, for an open() request the kernel resolves the pathname, creates an entry in the process's open-file table, allocates a file descriptor, and returns that descriptor to the process.

  6. Return Value: When the system call completes, the handler returns the result to the process. The result is typically an integer that indicates success or failure (with the error code available through errno) or a count such as the number of bytes read or written; the sketch after this list shows how a program inspects it.

  7. Return to User Mode: Finally, the processor switches back to user mode, and the process continues to execute user-level code. The return value is typically stored in a register or memory location that the process can access to determine the outcome of the system call.
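
The following sketch makes steps 6 and 7 concrete (the file name data.txt is arbitrary): it checks the integer the kernel returns from open() and read(), and consults errno when a call fails.

    #include <stdio.h>
    #include <string.h>     /* strerror */
    #include <errno.h>
    #include <fcntl.h>      /* open */
    #include <unistd.h>     /* read, close */

    int main(void)
    {
        int fd = open("data.txt", O_RDONLY);
        if (fd == -1) {
            /* The kernel signalled failure; errno says why. */
            fprintf(stderr, "open failed: %s\n", strerror(errno));
            return 1;
        }

        char buf[128];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n == -1) {
            fprintf(stderr, "read failed: %s\n", strerror(errno));
        } else {
            /* A non-negative return value is the number of bytes read. */
            printf("read %zd bytes\n", n);
        }
        close(fd);
        return 0;
    }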

In short, when a process needs to access a protected resource, it issues a system call, which triggers a trap into the kernel; the handler carries out the requested operation in kernel mode, the result is returned to the process, and the processor switches back to user mode.


In summary, system calls are a critical component of operating systems, providing a way for processes to interact with the kernel and access low-level resources. Understanding system calls and their implementation is essential for operating system developers and users alike.


📘 Threads in OS


Threads are the smallest unit of execution within a process. A thread is a lightweight process that can run in parallel with other threads within the same process, sharing the same memory space and system resources.


📘 Definition


In computing, a thread is a sequence of instructions within a program that can be executed independently of other code. Each thread has its own stack and instruction pointer, but shares the same memory space and system resources as other threads within the same process.


Example: Consider a web server application that handles multiple requests simultaneously. Instead of creating a new process for each request, the application can use threads to handle each request independently, improving the overall performance and efficiency of the system.
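
A minimal POSIX-threads sketch of this pattern (each "request" here is just an integer id; a real server would hand each thread a client socket):

    #include <stdio.h>
    #include <pthread.h>

    /* Each thread handles one "request" independently. */
    static void *handle_request(void *arg)
    {
        int id = *(int *)arg;
        printf("handling request %d on its own thread\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t workers[4];
        int ids[4];

        for (int i = 0; i < 4; i++) {
            ids[i] = i;
            /* All worker threads share this process's memory and resources. */
            pthread_create(&workers[i], NULL, handle_request, &ids[i]);
        }
        for (int i = 0; i < 4; i++)
            pthread_join(workers[i], NULL);
        return 0;
    }

Compile with the -pthread flag so the threads library is linked in.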


📘 Types of Threads

  1. User-Level Threads: User-level threads are managed entirely by a user-level library, without support from the operating system; the kernel sees only a single thread of execution per process, no matter how many user-level threads are created. They are lightweight and fast to create, but they have notable drawbacks: the kernel cannot schedule them across multiple processors, and if one user-level thread makes a blocking system call, the entire process blocks.

  2. Kernel-Level Threads: Kernel-level threads are managed by the operating system kernel and have full support for blocking system calls and multiprocessor systems. Each thread is represented by its own thread control block (TCB), and the kernel scheduler allocates processor time to each thread individually.

📘 Components of a Thread

  1. Thread Identifier: Each thread is identified by a unique thread identifier, which the operating system uses to manage the thread; the sketch after this list prints one.

  2. Program Counter: The program counter is a register that contains the memory address of the next instruction to be executed.

  3. Stack Pointer: The stack pointer is a register that contains the address of the top of the thread's stack.

  4. Thread State: The thread state indicates the current state of the thread, such as running, waiting, or blocked.

  5. Thread Priority: The thread priority determines the order in which threads are scheduled for execution.
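
Some of these components are visible through the POSIX threads API. The sketch below (pthreads assumed) prints the calling thread's identifier and its scheduling priority; the program counter and stack pointer are per-thread CPU registers maintained by the kernel and runtime, so they are not exposed directly.

    #include <stdio.h>
    #include <pthread.h>
    #include <sched.h>      /* struct sched_param */

    int main(void)
    {
        /* Thread identifier of the calling (main) thread. */
        pthread_t self = pthread_self();

        /* Scheduling policy and priority associated with the thread. */
        int policy;
        struct sched_param param;
        if (pthread_getschedparam(self, &policy, &param) == 0)
            printf("priority: %d (policy %d)\n", param.sched_priority, policy);

        /* pthread_t is formally opaque; the cast below is printable on Linux,
           where it happens to be an unsigned long. */
        printf("thread id: %lu\n", (unsigned long)self);
        return 0;
    }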

📘 Why Multithreading?

  1. Improved Performance: Multithreading allows a program to perform multiple tasks simultaneously, improving the overall performance and efficiency of the system.

  2. Responsiveness: Multithreading allows a program to remain responsive to user input even when performing long-running tasks in the background.

  3. Resource Sharing: Threads of the same process share memory, open files, and other resources, so data can be passed between them without copying, improving resource utilization.

  4. Modularity: Multithreading allows a program to be broken down into smaller, more manageable parts, making it easier to develop, test, and maintain.

Threads are a critical component of modern operating systems and are used extensively in multi-tasking environments. Threads provide a way to improve the performance, responsiveness, and resource utilization of a system by allowing multiple tasks to run in parallel within the same process. Understanding the different types and components of threads is essential for building efficient and scalable multi-threaded applications.


📘 Difference between Process and Thread

Parameter | Process | Thread
Definition | An instance of a program being executed by the operating system | A lightweight sub-process that runs within the context of a process
Resource allocation | Each process has its own memory space, system resources, and scheduling context | Threads share the memory space and system resources of their parent process
Overhead | Creating and terminating a process is relatively slow because system resources must be allocated and released | Creating and terminating a thread is faster because it reuses the resources of the parent process
Communication | Processes communicate through IPC mechanisms such as pipes, message queues, and sockets | Threads communicate through the memory they share within the process
Context switching | Switching between processes is slower because memory mappings and other per-process state must be saved and restored | Switching between threads of the same process is faster because mainly the CPU registers and stack pointer change
Parallelism | Processes can run in parallel on multiple CPUs or cores | Kernel-level threads of the same process can also run in parallel on multiple CPUs or cores
Concurrency | Processes are scheduled concurrently by the operating system, each in its own address space | Threads provide concurrency within a single process and must coordinate access to shared memory
Failure isolation | A crashing process does not affect other processes or the operating system | A crashing thread can bring down the entire process, including its other threads
Scalability | Creating and managing many processes carries significant overhead | Threads are lightweight, so many of them can be created and managed cheaply

This table provides a summary of the key differences between processes and threads, highlighting their respective strengths and weaknesses.
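
The resource-allocation and communication rows can be observed directly in code. In the sketch below (POSIX assumed; compile with -pthread), a child created with fork() increments its own private copy of a variable, while a thread created with pthread_create() increments the very same variable the main thread sees:

    #include <stdio.h>
    #include <unistd.h>     /* fork */
    #include <pthread.h>
    #include <sys/wait.h>   /* waitpid */

    static int counter = 0;

    static void *thread_body(void *arg)
    {
        (void)arg;
        counter += 1;          /* same memory as main(): the change is visible there */
        return NULL;
    }

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            counter += 1;      /* the child's private copy; the parent never sees this */
            _exit(0);
        }
        waitpid(pid, NULL, 0);
        printf("after fork  : counter = %d\n", counter);   /* prints 0 */

        pthread_t t;
        pthread_create(&t, NULL, thread_body, NULL);
        pthread_join(t, NULL);
        printf("after thread: counter = %d\n", counter);   /* prints 1 */
        return 0;
    }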


Thanks for reading, and happy coding!

