Process Scheduling
Revision as of 10:07, 3 August 2019
In a multi-tasking operating system not all the processes (tasks) are really running simultaneously; at most the maximum number of running processes equals the number of processor ‘cores’ available (‘traditionally’ one, but nowadays liable to be a few more than that). Any illusion of many simultaneous processes is maintained by switching attention amongst them rapidly (by human standards).
The scheduler is the part of an operating system which decides which process(es) to run at any given time. The algorithm(s) used by a scheduler to pick the ‘next’ process are many and varied and can depend on the particular circumstances. Only a few examples are mentioned here and, if you want more, you should consult the literature to a depth you feel is appropriate.
The approach to scheduling usually varies according to the purpose of the computer system – sometimes referred to as a “processing mode”.
Interactive processing
This will be a familiar mode where the user constantly interacts with the computer which, nevertheless, occasionally has some time-consuming tasks: think of editing and then compiling a big program. Scheduling is arranged to give reasonable responsiveness to various jobs as the user changes focus – plus there may be some background processing which is not as apparent.
In interactive systems a common approach is “round robin” scheduling. This is an attempt to give all processes equal ‘turns’ by taking the next process from the front of a FIFO queue whilst appending a de-scheduled process to the back. This is most obviously applicable with time-slicing (or cooperative ‘yielding’), when a running process is de-scheduled without being blocked. The scheduler needs to try to be ‘fair’ to avoid starving any process of a share of processor time, although the details might be influenced by the process’s priority.
Example: the browser you’re using right now.
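The round-robin mechanism above can be sketched in a few lines of Python. This is a toy model, not how any real kernel is written; the process names, remaining-work figures, and quantum are invented for illustration:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Grant each job a time slice of up to `quantum` units, cycling
    until all finish. `jobs` maps a process name to its remaining work;
    returns the order in which time slices were granted."""
    queue = deque(jobs.items())            # FIFO ready queue
    schedule = []
    while queue:
        name, remaining = queue.popleft()  # take next process from the front
        schedule.append(name)
        remaining -= quantum               # consume one time slice
        if remaining > 0:                  # de-scheduled but not finished:
            queue.append((name, remaining))  # rejoin at the back
    return schedule

# round_robin({"A": 3, "B": 1, "C": 2}, quantum=1)
# grants slices in the order A, B, C, A, C, A
```

Note how B and C drop out of the rotation as they complete, so the remaining time is shared among the still-runnable processes.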
Real-time processing
Real-time computing requires a guaranteed response within a certain maximum latency. Contrary to some opinions, this does not make it fast: in fact it is essential that the processor regularly becomes idle, because if it does not, it has not completed everything it needed to do in time.
Real-time schedulers typically rely heavily on priority, with some processes being identified as more urgent than others (i.e. low latency is required); these will typically be run in preference to lower priority (but still time-constrained) jobs.
Real-time tasks are sometimes (somewhat arbitrarily) divided between “hard” and “soft” constraints:
- A hard real-time task is one where failure to meet a constraint leads to a system failure.
- Example: car’s braking system.
- A soft real-time task could be summarised as one where undue delay leads to annoyance.
- Example: bank’s cash dispenser.
Real-time example: an aircraft control system may need to react to control inputs within (say) 10 ms whereas updating the pilot’s radar display may require more work but is okay as long as it’s done once per second.
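The priority idea can be illustrated with a toy ready queue, here using a heap so the most urgent task always comes out first. The task names and priority values are invented for illustration (lower number = more urgent); real real-time schedulers are considerably more involved:

```python
import heapq

def run_by_priority(tasks):
    """Repeatedly dispatch the highest-priority ready task.
    `tasks` maps a task name to its priority (lower = more urgent);
    returns the order in which tasks were dispatched."""
    heap = [(priority, name) for name, priority in tasks.items()]
    heapq.heapify(heap)          # O(n) build of the priority queue
    order = []
    while heap:
        _, name = heapq.heappop(heap)  # most urgent task first
        order.append(name)
    return order

# run_by_priority({"radar_display": 5, "control_input": 1})
# dispatches control_input before radar_display
```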
Batch processing
Batch scheduling picks a particular job and completes that before proceeding to the next. It is therefore not responsive to interactions but is efficient at processing. It is often appropriate for big data processing jobs.
A common approach in batch jobs is ‘shortest job first’. A simple analysis shows that this finishes more jobs sooner. For example, if job A takes 1 hour and job B takes 2 hours, running A then B gives completion times of 1 and 1+2 = 3 hours (total latency 4 hours, average 2 hours), whereas B then A gives completion times of 2 and 2+1 = 3 hours (total latency 5 hours, average 2.5 hours). This is worth remembering when scheduling your own, personal tasks!
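The arithmetic above can be checked with a short helper (a sketch; durations are in hours, and ‘latency’ here means each job’s completion time):

```python
def average_latency(durations):
    """Average completion time for jobs run back-to-back in the given order."""
    finish_times = []
    elapsed = 0
    for d in durations:
        elapsed += d                  # this job finishes once all earlier
        finish_times.append(elapsed)  # jobs, plus itself, have run
    return sum(finish_times) / len(finish_times)

# A (1 h) then B (2 h): finish times 1 and 3, average 2.0 hours
# B then A:             finish times 2 and 3, average 2.5 hours
```

Sorting the jobs by ascending duration before running them is exactly the ‘shortest job first’ policy, and it minimises this average.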
Batch processing is probably less familiar to you than some other processing modes, but is useful in (for example) supercomputer/mainframe applications where a process has a lot of data distributed across many processing elements. (The usual example is “processing the payroll” although ‘ordinary’ computers can probably manage that fairly quickly, these days.)
Also refer to: Operating System Concepts, 10th Edition, Chapter 3.2, pages 110–116.