A model that maps many user-level threads to a single kernel-level thread, with thread management done in user space.
When a thread makes a blocking system call, the entire process will be blocked.
The heart and core of an operating system that interacts with hardware to execute processes.
Start, Ready, Running, Waiting, Terminated (or Exit).
They determine the order and duration of process execution.
A process that does not affect or impact any other process and does not share data with them.
Because it is fair, allowing every process an equal share of CPU.
It takes less time to create, share common data, and terminate a thread compared to a process.
It adds them to the end of the ready queue.
Schedules tasks based on priority.
Starvation, if shorter processes keep coming.
They allow data and information sharing, leading to faster execution.
The process is being created.
Based on the scheduling algorithm and their priority.
Processes that can affect or get affected by other processes under execution.
The OS scheduler assigns a processor to the process, and it executes instructions.
Java threads, POSIX threads, etc.
By using shared resources like data, memory, variables, and files.
FCFS is a non-preemptive scheduling algorithm: tasks are executed on a first-come, first-served basis, and it is easy to implement but not very efficient.
They prevent processes from monopolizing system resources while waiting for external events.
A list of the process’s I/O devices.
Waiting for messages or data from other processes.
It indicates the outcome of its execution, whether completed successfully or encountered an error.
Multiple CPUs and input/output devices.
To ensure that opened or temporary files are properly closed and removed, preventing data corruption.
The process of deciding which process will own the CPU while another process is suspended.
A mechanism to exchange data and information across multiple processes.
When a consumer process does not receive a message it needs in order to execute a task.
They allow data to be sent between processes on the same computer or across different computers on the same network.
The scheduler may decide to spend more CPU time on processes with a large number of threads.
To allow one process to write to a file while another reads from it, so the two processes can affect each other.
Because context switch time is shorter than that of kernel-level threads.
The Starvation Problem, where a process waits too long to be scheduled.
A data structure managed by the OS that stores all information required to track a process.
Multiple processes can read and write data to a message queue, with messages stored until retrieved by their recipient.
The waiting process with the smallest execution time to execute next.
To ensure that different processes can communicate and run smoothly without interrupting each other.
OS/2, Windows NT, and Windows 2000.
The whole process is blocked.
Hardware status, RAM, CPU, and other attributes.
To maintain a good mix of I/O-bound and CPU-bound processes in the ready queue.
To improve resource utilization and avoid CPU idleness.
Information from the page table, memory limitations, and segment table.
Maximize CPU utilization, ensure fair allocation, maximize throughput, minimize turnaround time, minimize waiting time, and minimize response time.
A thread is often referred to as a lightweight process.
Process management, file management, memory management, and I/O management.
Kernel level thread and User-level thread.
A socket is the endpoint for sending or receiving data in a network.
Serial mode and parallel mode.
The process waits for removal from main memory after execution or termination.
Shared memory is memory that can be accessed simultaneously by multiple processes for communication.
FCFS is easy to implement and follows a first-come, first-serve method.
It is the preemptive version of the First Come First Serve CPU Scheduling algorithm.
To ensure messages are sent and received in the correct order.
Process priority, execution history, and the scheduling algorithm employed by the OS.
To ensure efficient utilization of system resources, multitasking, and a responsive computing environment.
To determine which work will be dispatched for execution.
The OS releases the resources allocated to the process, preventing resource leaks.
Switched-out processes.
To inform it of the termination and the exit status of the child process.
It breaks complex tasks into modules, improving efficiency and speed.
The process is waiting for a processor to be assigned to it.
Various CPU registers.
To ensure that multiple processes are coordinated and do not interfere with each other.
Stack, heap, text, and data.
It requires very little overhead since decisions are made only when a process completes or a new process is added.
By splitting a process into many threads, increasing the number of jobs done in unit time.
To count the number of times a shared resource is being used and prevent overuse.
The producer process sends a message to the kernel, which then sends it to the consumer process.
The creation of the process.
To ensure that only one process handles a request at a time.
Processes are allocated small time slices of CPU time, and when a time slice expires, the currently running process is preempted.
The kernel can schedule another thread for execution.
Throughput is the total number of processes completed per unit of time, representing the total work done by the CPU.
Turnaround time is the total time from a process's arrival in the ready queue until its completion.
A process is a running program that serves as the foundation for all computation.
Swapping.
They can be more easily implemented than kernel threads.
Process A sends a message to the kernel, which then sends it to Process B.
Cooperation by Sharing and Cooperation by Message Passing.
FCFS is the simplest scheduling algorithm where the process that requests the CPU first is allocated the CPU first, implemented using a FIFO queue.
Signals are system messages sent from one process to another, typically used to notify events or trigger actions rather than to transfer data.
It occurs when a process has to wait for a message from a previous process.
A section of code that accesses shared resources and must be protected to provide data integrity and avoid data inconsistency.
The contents present in the processor’s registers and the current activity reflected by the value of the program counter.
The entire process is blocked.
Global and static variables.
It points to the address of the process’s next instruction.
To select the best mix of IO and CPU bound processes from the pool of jobs.
The process is waiting to be assigned to any processor.
'Ready' or 'waiting' state, indicating it is prepared for execution but hasn't started running yet.
A process has completed execution.
A single sequential flow of execution of tasks of a process, also known as a thread of execution or thread of control.
Subsequent jobs will have to wait in the ready queue for a long period, leading to starvation.
The currently running process is preempted, and control is returned to the operating system.
The state must be changed from running to waiting.
The PCB is marked as 'terminated' and removed from the list of active processes.
A set of instructions that perform a certain purpose when executed by a computer.
It has the potential for process starvation, as long processes may be held off indefinitely.
An element of a computer program that performs a certain task.
Temporary data like method or function parameters, return address, and local variables.
By directly sharing a logical space or through files or messages.
The time difference between completion time and arrival time of a process.
To uniquely identify each process in the operating system.
It ensures that the CPU is utilized to its maximum, ideally 100% of the time.
It selects processes from the pool and maintains them in the primary memory's ready queue.
Multiple-thread communication is simpler because threads share the same address space.
Saving the state of the currently running process into its process control block (PCB) before executing the selected process.
Process Scheduling.
Determining the order in which processes are scheduled based on their priority levels.
A model where multiple user-level threads are multiplexed with multiple kernel-level threads, allowing parallel execution on multiprocessor machines.
More than one thread can exist inside a process.
Preemptive, as processes are given limited time on the CPU.
By using two pipes to create a two-way data channel.
To ensure that whenever the CPU becomes idle, the OS selects one of the processes available in the ready queue.
The kernel-level thread is fully aware of all threads.
A communication method to exchange data and information.
Preemptive Scheduling and Non-Preemptive Scheduling.
When it needs to wait for a resource, such as user input or a file.
It is removed from the CPU's execution queue and placed in a waiting queue.
Registers, PC, stack, and mini thread control blocks stored in the user-level process's address space.
To select a processor process based on a scheduling method and remove a processor process.
Process A writes information to the shared region, and Process B reads it.
More than one thread can be scheduled on multiple processors.
The kernel recognizes and manages all threads.
A program is a piece of code, while a process is the representation of that code currently running.
Processes with higher priorities are given preference in execution, allowing the OS to preempt lower-priority processes.
That multiple processes can operate without interfering with each other.
It is simple, easy to use, and starvation-free as all processes get balanced CPU allocation.
It has the minimum average waiting time among all operating system scheduling algorithms.
A pipe is a unidirectional data channel used for communication between processes.
Using the concept of aging.
A scheduling algorithm can minimize the waiting time of a process, although it cannot change the execution time required by the process.
A process is an 'active' entity, while a program is often regarded as a 'passive' entity.
It processes jobs faster than the Shortest Job First (SJF) algorithm.
A process is a dynamic instance of a computer program.
Blocked or waiting.
It takes the place of the lower-priority process, which is then suspended.
To wait for a certain condition to be met before proceeding.
It is the memory that is dynamically allocated to a process during its execution.
It allows multiple processes to share the CPU using temporal multiplexing.
A request from a user or a system component.
To ensure that only one process can write to the data at a time.
Ensuring that all processes get a fair share of CPU time without starvation.
It chooses one job from the ready queue and sends it to the CPU for processing.
Initializations required for the process, such as initializing variables and setting default values.
Because it ensures that higher-priority tasks meet strict timing requirements by preempting lower-priority tasks.
It improves computation speed by allowing parallel execution.
A model where each user-level thread corresponds to a single kernel-level thread, allowing more concurrency.
It allows another thread to run when a thread makes a blocking system call.
Deadlock.
It is the preemptive version of Shortest Job First, allocating the processor to the job closest to completion.
Sharing data, coordinating activities, managing resources, and achieving modularity.
A situation where a process is temporarily suspended until a specific event or condition occurs.
Files serve as data records that can be accessed by multiple processes as needed.
A mechanism that ensures only one process can access a shared resource at a time.
In serial mode, processes execute one after the other; in parallel mode, multiple processes execute simultaneously.
A collection of computer programs, libraries, and related data.
CPU use for process execution, time constraints, and execution ID.
Synchronization points that all processes must reach before they can proceed.
Waiting Time = Turnaround Time - Burst Time.
Context switch time is longer in kernel threads.
The OS loads the saved state of the selected process from its PCB and restores the program counter and CPU registers.
The actual running of a process on the CPU.
Creating a user thread requires the corresponding kernel thread.
It reduces the degree of multiprogramming.
Operating systems that do not support threads at the kernel level.
The implementation of kernel threads is more difficult than user threads.
Information Sharing, Computation Speed, Convenience, and Modularity.
It may lead to data damage or unintended sharing of sensitive information.
NEW, READY, RUNNING, WAITING, TERMINATED.
Processes can respond as soon as a thread completes its execution.
Many to many, many to one, and one to one relationships.
It multiplexes any number of user threads onto an equal or smaller number of kernel threads.
Response time is the time taken from the submission of a process until its first response is produced, important for interactive systems.
It manages user-level threads as if they are single-threaded processes.
Process priority and additional scheduling information.
Deadlock can occur if a consumer process waits for a message that is not received.
SRTF performs context switches more frequently, consuming valuable CPU time.
Program counter, register set, and stack space.
Each process is cyclically assigned a fixed time slot.
I/O operations, where a process waits for data from an I/O device.
FCFS suffers from the Convoy effect, has a higher average waiting time compared to other algorithms, and is not very efficient.
To implement the virtual machine so that each process appears to be running on its own computer.
The program code and data associated with the process are loaded into the allocated memory space.
Windows and Solaris.
Execution of the instructions.
The CPU executes the instructions of the selected process, utilizing system resources.
Process Creation.
Hardware or software interrupts, such as those generated by a hardware device or a system call request.
It allows different processes to access the same file concurrently, making execution more efficient.
They lack coordination between the thread and the kernel.
Process state, process privileges, process ID, pointer to parent process, and program counter.
Synchronization primitives like semaphores or mutexes.
Context switching between threads takes less time than context switching between processes, which incurs more overhead.
Memory space, a unique process identifier (PID), a process control block (PCB), and other essential data structures.
The execution of other kernel threads can continue.
The OS sets up the initial execution environment for the process.
To select the next process to execute on the CPU, essential for efficient multitasking and resource allocation.
A higher-priority process becoming available or the expiration of a time slice.
Process Termination.
When a process has completed its task, is no longer needed, or when an error or exception occurs.
Independent processes and cooperating processes.
To receive messages from the producer process and send them to the consumer process.
The time required by a process for CPU execution.
The process is waiting for some event to occur.
#include <stdio.h>

int main() {
    printf("Hi, Subhadip!\n");
    return 0;
}
The act of temporarily interrupting the execution of a currently running process to allocate the CPU to another process.
The process of storing and restoring the state of a CPU so that multiple processes can share a single CPU resource.
The orderly and controlled cessation of a running process's execution, including cleanup and resource reclamation.
A mechanism that allows processes to communicate and synchronize their actions.
To keep track of the current state of each process (e.g., running, waiting, terminated).
It prevents any process from being unfairly blocked from accessing CPU time, allowing even low-priority processes to execute.
Tracking resource usage and performance metrics of processes.