A process is an 'active' entity, while a program is often regarded as a 'passive' entity.
Process Creation.
They can be more easily implemented than kernel threads.
It improves computation speed by allowing parallel execution.
The OS releases the resources allocated to the process, including memory and file handles.
The currently running process is preempted, and control is returned to the operating system.
With kernel-level threads, the kernel is fully aware of all threads.
A request from a user or a system component.
Process A writes information to the shared region, and Process B reads from it.
Throughput is the total number of processes completed per unit of time, representing the total work done by the CPU.
The implementation of kernel threads is more difficult than user threads.
The coordination of processes to ensure that they operate in a correct and predictable manner.
The order and duration of process execution.
An endpoint for sending or receiving data in a network.
An element of a computer program that performs a certain task.
Multiple CPUs and input/output devices.
Registers, PC, stack, and mini thread control blocks stored in the user-level process's address space.
To allow one process to write to a file while another reads from it, affecting each other.
The PCB is marked as 'terminated' and removed from the list of active processes.
The kernel recognizes and manages all threads through a thread control block.
To determine which work will be dispatched for execution.
By using two pipes to create a two-way data channel.
Subsequent jobs have to wait in the ready queue for a long period, leading to starvation.
It indicates the outcome of its execution: whether it completed successfully or encountered an error.
Deadlock.
Switched-out processes.
It is the memory that is dynamically allocated to a process during its execution.
It has a minimum average waiting time among all operating system scheduling algorithms.
To receive messages from the producer process and send them to the consumer process.
The OS sets up the initial execution environment for the process.
Start, Ready, Running, Waiting, Terminated (or Exit).
A communication method to exchange data and information.
A unidirectional data channel used for communication between processes.
Kernel level threads and user-level threads.
It maps many user-level threads to a single kernel-level thread, with thread management done in user space.
Their state is changed from running to waiting.
The process is removed from the system and its resources are released.
The kernel-level thread manages user-level threads as if they are single-threaded processes.
It allows breaking complex tasks into modules for more efficient and faster execution.
They allow multiple processes to read and write messages without being connected.
It allows processes to execute faster and more efficiently by accessing the same files concurrently.
The heart and core of an operating system that interacts with hardware to execute processes.
The process is waiting for a processor to be assigned to it.
A set of instructions that, when executed by a computer, perform a certain purpose.
Creating a new thread takes less time than creating a new process, threads can share common data, context switching is faster, and terminating a thread takes less time than terminating a process.
The waiting process with the smallest execution time to execute next.
The process of deciding which process will own the CPU while another process is suspended.
The process has completed execution or has been terminated and is waiting for removal from memory.
To allow different processes to communicate and run smoothly without interrupting each other.
A higher-priority process becoming available or the expiration of a time slice.
Program counter, register set, and stack space.
Each thread of the same process has a separate program counter and a stack of activation records and control blocks.
Synchronization points that all processes must reach before they can proceed.
Context switch time is longer in kernel threads.
The Starvation Problem, where a process waits too long to be scheduled.
A process can return a response as soon as one of its threads completes, keeping the application responsive.
Chooses one job from the ready queue and sends it to the CPU for processing.
The time required by a process for CPU execution.
It consumes valuable CPU time for processing, diminishing its advantage of fast processing.
A process is a running program that serves as the foundation for all computation.
To ensure efficient utilization of system resources, multitasking, and a responsive computing environment.
The process is being created.
Managing the order and allocation of CPU time to processes.
By using shared resources like data, memory, variables, and files.
Operating systems that do not support threads at the kernel level.
Process priority and additional scheduling information.
The scheduler may decide to spend more CPU time on processes with a large number of threads.
Information from the page table, memory limitations, and segment table.
The producer process sends a message to the kernel, which then sends it to the consumer process.
Blocked or waiting state.
A data structure managed by the operating system that contains all information required to track a process.
CPU use for process execution, time constraints, and execution ID.
A list of the process’s I/O devices.
To improve resource utilization and avoid CPU idleness.
Serial mode and parallel mode.
The context-switching time between threads is shorter than between processes, which incur more overhead.
Waiting for messages or data from other processes through mechanisms like message queues or pipes.
FCFS is easy to implement and follows a first-come, first-served method.
The time at which a process completes its execution.
The OS scheduler assigns a processor to the process, and it executes instructions.
Various CPU registers.
The act of temporarily interrupting the execution of a currently running process to allocate the CPU to another process.
Memory that can be simultaneously accessed by multiple processes for communication.
Responsible for selecting a process for the processor based on a scheduling method and for removing a running process from the processor.
It reduces the degree of multiprogramming.
To wait for a certain condition to be met before proceeding.
Schedules tasks based on priority, executing the most important processes first.
To track resource usage and performance metrics of processes.
To select the next process to execute on the CPU.
Cyclically assigning each process a fixed time slot.
The process is waiting for some event to occur.
The OS loads the saved state of the selected process from its PCB.
Hardware status, RAM, CPU, and other attributes.
Process management, file management, memory management, and I/O management.
A process refers to a dynamic instance of a computer program.
To allow processes to communicate and synchronize their actions.
To inform it of the termination and the exit status of the child process.
A scheduling algorithm can minimize the waiting time of a process, although it cannot change the execution time required by the process.
Java threads and POSIX threads.
Processes that can affect or get affected by other processes under execution.
Maximize CPU utilization, ensure fair allocation, maximize throughput, minimize turnaround time, minimize waiting time, and minimize response time.
Processes are executed one after the other; the next process cannot start until the previous one terminates.
Process priority, execution history, and the scheduling algorithm employed by the OS.
Preemptive, as processes are given limited time on the CPU.
FCFS suffers from the Convoy effect, has a higher average waiting time compared to other algorithms, and is not very efficient.
A program is a piece of code, while a process is the running representation of that code.
Stack, heap, text, and data.
The process of saving and restoring the state of a CPU so that multiple processes can share a single CPU resource.
A section that provides data integrity and avoids data inconsistency.
Because context switch time is shorter than that of kernel-level threads.
A situation where a process is temporarily suspended until a specific event or condition occurs.
Signals are system notifications delivered to a process to report events such as interrupts, exceptions, or termination requests.
When a thread makes a blocking system call, the entire process will be blocked.
SJF reduces the average waiting time, making it better than FCFS.
Response time is the time taken from the submission of a process until its first response is produced, important for interactive systems.
Initializations such as initializing variables, setting default values, and preparing the process for execution.
Long-Term Scheduler, Short-Term Scheduler, and Medium-Term Scheduler.
The creation of the process.
OS/2, Windows NT, and Windows 2000.
By directly sharing a logical space or through files or messages.
The extent to which the CPU is used effectively, ideally aiming for 100% utilization.
Many to many, many to one, and one to one relationships.
The time at which the process arrives in the ready queue.
A process that does not affect or impact any other process and does not share data with them.
A single sequential flow of execution of tasks of a process, also known as a thread of execution or thread of control.
Cooperation by Sharing and Cooperation by Message Passing.
It allocates CPU time to processes based on the scheduling algorithm and their priority.
A thread that the operating system does not recognize, easily implemented by the user, but if it blocks, the whole process is blocked.
Contents present in the processor’s registers and the current activity reflected by the value of the program counter.
Turnaround time is the total time from a process's arrival in the ready queue to its completion.
They lack coordination between the thread and the kernel.
It establishes a one-to-one relationship between user-level threads and kernel-level threads, allowing more concurrency.
The entire process is blocked.
They prevent processes from monopolizing system resources while waiting for external events.
The assignment of priority levels to processes to determine their scheduling order.
More than one thread can be scheduled on multiple processors.
A condition where a process must wait until a message is received by a previous process.
Waiting Time = Turnaround Time - Burst Time.
To ensure that only one process can write to the data at a time.
#include <stdio.h>

int main(void) {
    printf("Hi, Subhadip!\n");
    return 0;
}
A thread is often referred to as a lightweight process and can share common data without needing Inter-Process Communication.
It allows multiple user-level threads to multiplex with multiple kernel-level threads, enabling parallel execution on multiprocessor machines.
The kernel can schedule another thread for execution.
When a consumer process waits for a message it needs in order to execute a task, but the message never arrives.
To ensure that opened or temporary files are properly closed and removed, preventing data corruption.
Swapping.
To ensure that whenever the CPU becomes idle, the OS selects one of the processes available in the ready queue.
By splitting a process into many threads, increasing the number of jobs done in unit time.
Processes are allocated small time slices (quantum) of CPU time, and when a time slice expires, the currently running process is preempted.
It preempts the currently executing lower priority process.
Selects processes from the pool and maintains them in the primary memory’s ready queue.
The process is waiting to be assigned to any processor.
Because it ensures that higher-priority tasks meet strict timing requirements by preempting lower-priority tasks.
SRTF makes processing faster by selecting the process with the smallest amount of time remaining until completion.
It requires very little overhead since decisions are made only when a process completes or a new process is added.
Temporary data like method or function parameters, return address, and local variables.
When it needs to wait for a resource, such as user input or a file.
Files serve as data records that multiple processes can access as needed.
To ensure that multiple processes are coordinated and do not interfere with each other.
The program code and data associated with the process are loaded into the allocated memory space.
It is removed from the CPU's execution queue and placed in a waiting queue associated with the resource it is waiting for.
Process state, process privileges, process ID, pointer to parent process, and program counter.
Information Sharing, Computation Speed, Convenience, and Modularity.
To maintain a good mix of I/O-bound and CPU-bound processes in the ready queue.
Preemptive Scheduling and Non-Preemptive Scheduling.
Creating a user thread requires creating a corresponding kernel thread.
It is simple, easy to use, and starvation-free as all processes get balanced CPU allocation.
Because it provides fair CPU allocation to all processes.
Multiple-thread communication is simpler because threads share the same address space.
It ensures that only one process handles a request at a time.
It multiplexes any number of user threads onto an equal or smaller number of kernel threads.
A mechanism to exchange data and information across multiple processes.
Memory space, a unique process identifier (PID), a process control block (PCB), and other essential data structures.
Process A sends a message to the kernel, which then sends it to Process B.
A collection of computer programs, libraries, and related data.
Starvation, if shorter processes keep coming.
Global and static variables.
Processes with higher priorities are given preference in execution, allowing the OS to preempt lower-priority processes.
To uniquely identify each process in the operating system.
It allows different processes to access the same file concurrently, making execution more efficient.
A 'ready' or 'waiting' state.
Synchronization primitives like semaphores or mutexes.
To select the best mix of IO and CPU bound processes from the pool of jobs.
Execution of the instructions.
A mechanism that ensures only one process can access a shared resource at a time.
Tracking the current state of each process (e.g., running, waiting, terminated).
NEW, READY, RUNNING, WAITING, TERMINATED.
Many tabs in a browser can be viewed as threads.
FCFS is non-preemptive, executes tasks on a first-come, first-served basis, and is easy to implement but not very efficient.
It can lead to data damage due to improper handling of shared data.
Sensitive user data may be unintentionally shared with other processes.
To implement the virtual machine so that each process appears to be running on its own computer to the user.
Messages are stored in the queue until their recipient retrieves them.
I/O operations, where a process may be blocked until requested data is ready.
Preemption triggered by hardware or software interrupts, such as a hardware device interrupt or a system call request.
The execution of other kernel threads can continue.
It is the preemptive version of FCFS, focusing on time-sharing.
Independent processes and cooperating processes.
It is the preemptive version of Shortest Job First, allocating the CPU to the job closest to completion.
It has the potential for process starvation, as long processes may be held off indefinitely.
It allows another thread to run when a thread makes a blocking system call and supports parallel execution on multiprocessors.
It points to the address of the process's next instruction.
FCFS is the simplest scheduling algorithm where the process that requests the CPU first is allocated the CPU first, implemented using a FIFO queue.
The time difference between completion time and arrival time of a process.
The orderly and controlled cessation of a running process's execution.
They are added to the end of the ready queue.
The CPU executes the instructions of the selected process, utilizing system resources.
Deadlock can occur if a consumer process waits for a message that is not received.
To count the available instances of a shared resource and prevent overuse.
It prevents any process from being unfairly blocked from accessing CPU time, allowing even low-priority processes to execute.
Windows and Solaris.
Saving the state of the currently running process into the process control block (PCB) before executing a new process.
When a process has completed its task, is no longer needed, or when an error occurs.
It ensures messages are sent and received in the correct order.