MindMap Gallery 3. Processor scheduling and deadlock
408 Postgraduate Entrance Examination Collection (Operating System 3), including jobs and job scheduling, goals of processor scheduling algorithms, real-time scheduling (HRT and SRT tasks), deadlock overview, etc.
Edited at 2024-03-07 14:42:52
Chapter 3: Processor Scheduling and Deadlock
Goals of Processor Scheduling Algorithms
Common Goals of Processor Scheduling Algorithms
Resource utilization: CPU utilization = CPU effective working time / (CPU effective working time + CPU idle waiting time)
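The utilization formula can be checked with a few lines of Python (the timing numbers are made up for illustration):

```python
def cpu_utilization(effective_time: float, idle_time: float) -> float:
    """Utilization = effective working time / (effective + idle waiting time)."""
    return effective_time / (effective_time + idle_time)

# e.g. 90 ms of useful work and 10 ms of idle waiting -> 90% utilization
print(cpu_utilization(90, 10))  # 0.9
```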
Fairness
Balance
Policy enforcement
Goals of batch processing systems
Short average turnaround time
High system throughput
High processor utilization
Goals of time-sharing systems
Fast response time
Balance
Goals of real-time systems
Deadline guarantees
Predictability
Processor Scheduling Hierarchy
Advanced Scheduling (Job Scheduling)
Time-sharing systems do not require job scheduling: users interact with the system directly, and their commands enter memory immediately rather than waiting in a backlog queue
Batch processing systems require job scheduling
Intermediate scheduling (related to suspension)
Low-level scheduling (process scheduling)
Process scheduling is the most basic scheduling, and any operating system has process scheduling.
Three basic mechanisms of low-level scheduling
Queuing mechanism
Dispatcher
Context-switching mechanism
Process scheduling method
Non-preemptive mode
Preemptive mode
Priority principle
Short-process-first principle
Time-slice principle
Tasks of process scheduling
Save the processor context of the current process
Select a process according to some algorithm
Assign the processor to the selected process
Process scheduling algorithm
priority scheduling algorithm
Types of priority scheduling algorithms
Non-preemptive priority scheduling algorithm
Wait for the currently running process to finish, then select the ready process with the highest priority to execute.
This scheduling algorithm is mainly used in batch processing systems; it can also be used in some real-time systems that do not have strict real-time requirements.
Preemptive priority scheduling algorithm
Do not wait for the current process to finish; the higher-priority process seizes the processor immediately
It is often used in real-time systems with stricter requirements, as well as batch processing and time-sharing systems with higher performance requirements.
Type of priority
static priority
Priority is determined when a process is created and remains unchanged throughout the lifetime of the process. Generally, priority is represented by an integer within a certain range, for example, an integer from 0 to 7 or 0 to 255, which is also called a priority number.
(Analogy: the fixed boot-device priority configured in the BIOS.)
dynamic priority
The priority assigned when a process is created can be changed as the process advances or as its waiting time increases, in order to obtain better scheduling performance.
round robin scheduling algorithm
Basic principle: In round robin (RR), the system arranges all ready processes into a ready queue according to the FCFS policy and generates a clock interrupt at a fixed interval (e.g., every 30 ms) to activate the process scheduler, which completes one scheduling decision and allocates the CPU to the process at the head of the queue.
Process switching timing
The process completes before its time slice is used up.
The time slice expires before the process completes.
Determination of time slice size
If too small: favorable to short interactive requests, but frequent switching increases system overhead.
If too large: RR degenerates into the FCFS algorithm.
General choice: q slightly larger than the time required for a typical interaction, so that most processes complete within one time slice
In general, the average turnaround time is longer than under SJF, but response times are better
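The round-robin behavior described above can be sketched as a small simulation. The burst times and quantum below are hypothetical, and all processes are assumed to arrive at time 0:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate RR scheduling; return each process's completion time."""
    remaining = list(burst_times)
    finish = [0] * len(burst_times)
    ready = deque(range(len(burst_times)))  # FCFS-ordered ready queue
    clock = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)      # time slice expired, requeue at the tail
        else:
            finish[i] = clock    # process completed within its slice
    return finish

# Three processes with bursts 3, 5, 2 and quantum 2:
print(round_robin([3, 5, 2], 2))  # [7, 10, 6]
```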
Multi-queue scheduling algorithm
Multi-level feedback queue scheduling algorithm
Scheduling mechanism
Set up multiple ready queues
Each queue uses the FCFS algorithm
Schedule queues in priority order; within the nth queue, processes run in round-robin fashion with that queue's time slice
Scheduling Algorithm Performance
Terminal users are satisfied because their requests are small and finish quickly.
Turnaround time is also small for users with short batch jobs
Users with long batch jobs still get executed eventually
Scheduling algorithm based on fairness principle
Guaranteed Scheduling Algorithm
fair share scheduling algorithm
Jobs and Job Scheduling
Job
A job contains not only the program and its data but also a job description; the system controls the program's execution according to that description. A batch processing system moves work from external storage into memory in units of jobs.
Job control block JCB
A JCB is set up for each job; it holds all the information needed to manage and schedule the job and is the mark of the job's existence.
Job step
Each job goes through several relatively independent but interrelated sequential steps to obtain its result; each such step is a job step.
Three stages of job running
Admission (collection) phase
running phase
completion stage
Three statuses of job running
Backlog state
Running state
Completed state
The main tasks of job scheduling
How many jobs to admit
Which jobs to admit
First-come first-served (FCFS) scheduling algorithm
It favors long jobs over short jobs.
It favors CPU-bound jobs over I/O-bound jobs.
Short job first (SJF) scheduling algorithm
advantage
Compared with FCFS, the average turnaround time and average weighted turnaround time are improved, and the waiting time of jobs is shortened;
Improve system throughput;
shortcoming
The running time of each job must be known in advance
Very disadvantageous for long jobs: their turnaround time increases significantly
Human-machine interaction cannot be achieved under SJF
The algorithm ignores job urgency entirely, so urgent jobs cannot be guaranteed timely processing
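The turnaround-time advantage of SJF over FCFS can be illustrated with a short sketch (burst times are hypothetical; all jobs arrive at time 0):

```python
def avg_turnaround(burst_times):
    """Average turnaround time for non-preemptive back-to-back execution
    in the given order (all jobs arrive at time 0)."""
    clock, total = 0, 0
    for b in burst_times:
        clock += b        # job finishes at the current clock
        total += clock    # turnaround = finish time - arrival (0)
    return total / len(burst_times)

jobs = [6, 3, 8, 2]                    # hypothetical burst times
print(avg_turnaround(jobs))            # FCFS (arrival order): 12.75
print(avg_turnaround(sorted(jobs)))    # SJF (shortest first):  9.25
```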
Priority scheduling algorithm (PSA)
Highest Response Ratio Next (HRRN)
principle
Each time a job is to be selected, first compute the response ratio Rp of every job in the backlog queue, then choose the job with the largest value and put it into operation.
Priority Rp = (waiting time + required service time) / required service time = response time / required service time = 1 + waiting time / required service time
Features
If the waiting time of the jobs is the same, the shorter the time required for service, the higher the priority. Therefore, it is similar to the SJF algorithm and is conducive to short jobs.
When the required service time is the same, the priority of the job is determined by its waiting time, so the algorithm is similar to the FCFS algorithm.
For long jobs, the priority rises as waiting time increases; once a job has waited long enough, it too obtains the processor, so long jobs are not starved.
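A minimal sketch of HRRN selection, assuming hypothetical waiting and service times:

```python
def response_ratio(waiting, service):
    """Rp = 1 + waiting time / required service time."""
    return 1 + waiting / service

# Backlog jobs as name: (waiting time, required service time) - hypothetical.
jobs = {"A": (12, 6), "B": (4, 2), "C": (9, 3)}

# HRRN picks the job with the largest response ratio.
best = max(jobs, key=lambda j: response_ratio(*jobs[j]))
print(best)  # C  (ratios: A = 3.0, B = 3.0, C = 4.0)
```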
Real-time scheduling (HRT and SRT tasks)
Basic conditions for realizing real-time scheduling
provide necessary information
Ready time
Start deadline and finish deadline
processing time
Resource requirements
priority
System processing capability is strong
Single processor: ∑(Ci/Pi) ≤ 1, where Ci is the processing time of task i and Pi is its period
N processors: ∑(Ci/Pi) ≤ N
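The schedulability test above translates directly into code (the task parameters are hypothetical):

```python
def schedulable(tasks, processors=1):
    """Test the condition sum(Ci / Pi) <= N for periodic real-time tasks,
    where each task is (Ci, Pi): processing time and period."""
    return sum(c / p for c, p in tasks) <= processors

# Two periodic tasks: (C=20, P=100) and (C=50, P=200) -> load 0.45, OK.
print(schedulable([(20, 100), (50, 200)]))   # True
# Load 1.3 exceeds one processor's capacity.
print(schedulable([(80, 100), (50, 100)]))   # False
```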
Adopt preemptive scheduling mechanism
Has a quick switching mechanism
Quick response to interruptions
Fast task dispatching capability
Classification of real-time scheduling algorithms
Non-preemptive scheduling algorithm
Non-preemptive round-robin scheduling algorithm
Non-preemptive priority scheduling algorithm
Preemptive scheduling algorithm
Preemptive priority scheduling algorithm based on clock interrupt
Immediate preemption priority scheduling algorithm
Earliest Deadline First (EDF) algorithm
Tasks are prioritized according to their deadlines (start deadline or completion deadline)
The earlier the deadline, the higher the priority
Non-preemptive scheduling is used for non-periodic real-time tasks
Preemptive scheduling is used for periodic real-time tasks
LLF (Least Laxity First) algorithm
Similar to EDF
The algorithm determines the priority of tasks according to their urgency (or slack). The higher the urgency of a task, the higher the priority given to the task so that it can be executed first.
slackness example
For example, a task must be completed by t = 200 ms and requires 100 ms of running time, so the scheduler must start it no later than t = 100 ms; the task's laxity (slack) is 200 - 100 = 100 ms.
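The laxity computation, and LLF's choice of the least-lax task, can be sketched as follows (task names and numbers are hypothetical):

```python
def laxity(deadline, run_time, now=0):
    """Slack = deadline - remaining run time - current time."""
    return deadline - run_time - now

# The example above: must finish by 200 ms, needs 100 ms -> slack 100 ms.
print(laxity(200, 100))  # 100

# LLF runs the ready task with the least laxity first.
tasks = [("T1", 200, 100),   # (name, deadline, remaining run time)
         ("T2", 150, 40)]    # laxity 110
urgent = min(tasks, key=lambda t: laxity(t[1], t[2]))
print(urgent[0])  # T1
```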
Priority inversion problem
The formation of priority inversion
High-priority processes are delayed or blocked by low-priority processes.
Solution to Priority Inversion
Simple approach: once process P3 has entered its critical section, the processor it occupies may not be preempted.
Practical approach: based on dynamic priority inheritance
Deadlock overview
resource issues
Reusable resources
computer peripherals
consumable resources
data, messages
Preemptible resources
Does not cause deadlock
CPU, memory
non-preemptible resources
CD-ROM drive, printer
deadlock in computer system
Deadlock caused by competition for non-preemptible resources
Deadlock caused by competition for consumable resources
Improper process advancement sequence causes deadlock
The definition, necessary conditions and handling methods of deadlock
Definition: a group of processes is deadlocked if each process in the group is waiting for an event that can only be caused by another process in the group.
Necessary conditions for deadlock to occur
Mutual exclusion condition
Hold-and-wait (request and hold) condition
No-preemption condition
Circular wait condition
If there is only one instance of each resource type, circular wait is a necessary and sufficient condition for deadlock.
How to deal with deadlock
Prevent deadlock
A static method: before processes run, impose restrictions that break one of the four necessary conditions for deadlock.
Strategies to prevent deadlocks
Breaking the "hold and wait" condition
The first protocol
Before starting to run, a process must request, all at once, every resource it will need during its entire run.
Advantages: simple, easy, safe
shortcoming
Resources are seriously wasted and resource utilization drops sharply.
Processes may frequently starve.
The second protocol
A process may start running after obtaining only the resources needed for its initial phase. As it runs, it gradually releases the allocated resources it has finished using and then requests the new resources it needs.
Breaking the "no preemption" condition
When a process already holding non-preemptible resources makes a new request that cannot be satisfied immediately, it must release all the resources it holds and request them again later when needed.
Breaking the "circular wait" condition
Linearly order all resource types in the system and assign each a different serial number
For example, give the input device serial number 1, the printer serial number 2, the disk drive serial number 3, and so on. All resource requests must then be made strictly in increasing order of serial number.
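The ordered-request rule can be sketched with Python locks; the resource names and serial numbers below are illustrative:

```python
import threading

# Each resource type gets a serial number; locks are always taken in
# ascending serial order, which breaks the circular-wait condition.
RESOURCES = {
    "input_device": (1, threading.Lock()),
    "printer":      (2, threading.Lock()),
    "disk_drive":   (3, threading.Lock()),
}

def acquire_in_order(names):
    """Acquire the requested resources strictly by ascending serial number."""
    ordered = sorted(names, key=lambda n: RESOURCES[n][0])
    for n in ordered:
        RESOURCES[n][1].acquire()
    return ordered  # the order in which the locks were actually taken

held = acquire_in_order(["printer", "input_device"])
print(held)  # ['input_device', 'printer']
for n in held:
    RESOURCES[n][1].release()
```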
avoid deadlock
A dynamic method: no restrictions are imposed in advance to break the necessary conditions for deadlock. Instead, during process execution, each resource request is checked so that the system never enters an unsafe state, thereby avoiding deadlock. The representative method is the banker's algorithm.
Strategies to avoid deadlock
System security status
safe state
At a given moment, for n concurrently executing processes, if the system can allocate resources to the processes in some order such as <P1, P2, ..., Pn>, up to each process's maximum demand, so that every process can run to completion, then the system is in a safe state at that moment and the sequence is called a safe sequence.
Example of safe state
Transition from safe state to unsafe state
Avoid deadlock using banker's algorithm
Meaning: when a new process enters the system, it must declare the maximum number of units of each resource type it may need during its run; this number may not exceed the system's total. When a process requests a set of resources, the system first checks whether enough resources are available; if so, it further checks whether granting them would leave the system in an unsafe state. If the state remains safe, the resources are allocated; otherwise the process waits.
Data Structures in Banker's Algorithm
Available resource vector Available[m]: m is the number of resource types; Available[j] = k means k instances of resource type j are currently available.
Maximum demand matrix Max[n,m]: n is the number of processes; Max[i,j] = k means process i's maximum demand for resource type j is k.
Allocation matrix Allocation[n,m]: the number of each resource type currently allocated to each process; Allocation[i,j] = k means process i currently holds k instances of resource type j.
Need matrix Need[n,m]: the resources each process still needs; Need[i,j] = k means process i still needs k instances of resource type j. Need[i,j] = Max[i,j] - Allocation[i,j]
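Using these data structures, the safety algorithm can be sketched as follows. The resource snapshot is a hypothetical textbook-style example, not taken from this document:

```python
def is_safe(available, allocation, need):
    """Safety algorithm: return a safe sequence, or None if the state is unsafe."""
    work = list(available)
    n = len(allocation)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(len(work))):
                # Process i can finish; it then releases everything it holds.
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None  # no runnable process left -> unsafe state
    return sequence

# Hypothetical snapshot: 5 processes, 3 resource types.
available = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[max_demand[i][j] - allocation[i][j] for j in range(3)]
        for i in range(5)]
print(is_safe(available, allocation, need))  # [1, 3, 4, 0, 2]
```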
banker's algorithm
security algorithm
Example of Banker’s Algorithm
Solve problems
matrix
list
Detect deadlock
Deadlock detection and release
Deadlock detection
Resource Allocation Map
Simplified steps
Select a non-blocked, non-isolated process node p
Remove p together with all of its request edges and allocation edges, releasing its resources
Repeat steps 1 and 2 until no further simplification is possible
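The reduction procedure corresponds to the standard deadlock-detection algorithm, which can be sketched as follows (the two-process example is hypothetical):

```python
def detect_deadlock(allocation, request, available):
    """Graph-reduction-style detection: repeatedly 'finish' any process whose
    outstanding request can be met, releasing its resources; whatever
    cannot be finished is the deadlocked set."""
    work = list(available)
    n = len(allocation)
    # Processes holding nothing cannot block anyone; treat them as finished.
    finished = [all(a == 0 for a in allocation[i]) for i in range(n)]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not finished[i] and all(request[i][j] <= work[j]
                                       for j in range(len(work))):
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finished[i] = True
                changed = True
    return [i for i in range(n) if not finished[i]]  # deadlocked processes

# Two processes each holding one unit the other requests -> both deadlocked.
print(detect_deadlock([[1, 0], [0, 1]], [[0, 1], [1, 0]], [0, 0]))  # [0, 1]
```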
Deadlock Theory
Deadlock theorem: the system is in a deadlock state if and only if a series of simplifications cannot turn all process nodes into isolated nodes, i.e., the resource allocation graph is not completely reducible.
Detection time
Detect deadlocks while processes are waiting (the disadvantage is high system overhead)
Regular detection
Detect deadlocks when system resource utilization drops
Data structures in deadlock detection
Deadlock release
Preempt resources
Terminate (or cancel) a process
How to terminate a process
Terminate all deadlocked processes
Terminate processes one by one
Minimal cost
Process priority
How long the process has executed and how much time remains
How many resources the process has used and how many more it still needs
Whether the process is interactive or batch
The least costly deadlock relief algorithm
Use an effective suspend/resume mechanism to suspend some of the deadlocked processes, thereby breaking the deadlock.
Release deadlock