Scheduler
Latest revision as of 10:03, 5 August 2019
Including timeslicing!
There are several principles combined and illustrated in this simple process scheduler:
- Process states: running, ready and blocked.
- Linked lists.
- Timeslicing and scheduling.
- Processes sleeping.
- I/O synchronisation.
Interactive Demonstration
The demonstration will run up to five ‘concurrent’ processes on a single processor. The processes are shown as circles, except the currently running one (if any) which is a star. A process can be in one of the following states:
- Terminated/non-existent.
- Running.
- Ready to run.
- Blocked – asleep … until some future time.
- Blocked – waiting for I/O.
- There are two I/O devices available here, arbitrarily called “Input” and “Output”.
Processes are coloured and linked into lists (where appropriate) to show their state.
Hint for interpretation: at first, follow a single feature, such as the behaviour of a particular process or one of the linked list queues.
There are some pre-prepared demonstration scenarios to look at (we recommend you do) plus you can set up your own (time and inclination permitting).
[Interactive scheduling widget: ‘reset’ control and definition boxes for Process 1–Process 5.]
Demo. 1
Processes simply time-slice at regular intervals; nothing blocks although processes do complete. This illustrates the classic ‘round robin’ scheduling.
Points to note:
- the cyclic order of picking the current running process.
- the ‘Ready’ queue is a linked list, with the preempted process added to the tail and the new process taken from the head on each timeslice.
- as fewer processes remain, the apparent speed of the remaining processes – the time they are running vs. the time they spend ready/waiting – increases.
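The round-robin mechanism described above can be sketched with a double-ended queue standing in for the demo’s linked list (the process names here are illustrative, not part of the demo):

```python
from collections import deque

def round_robin(processes, slices):
    """Simulate round-robin time-slicing: on each slice the running
    process is preempted, appended to the tail of the ready queue,
    and the next process is taken from the head."""
    ready = deque(processes)        # the 'Ready' queue
    order = []                      # which process ran in each slice
    for _ in range(slices):
        running = ready.popleft()   # new process taken from the head
        order.append(running)
        ready.append(running)       # preempted: added to the tail
    return order

# Three processes, six slices: each runs every third slice.
print(round_robin(["P1", "P2", "P3"], 6))
# → ['P1', 'P2', 'P3', 'P1', 'P2', 'P3']
```

Note the cyclic order falls out of the queue discipline alone; no per-process bookkeeping is needed.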
Demo. 2
Like the first demonstration but two processes (#2, #4) deliberately ‘sleep’ for periods; this causes a change in the ordering of the round-robin scheduling. Processes are ‘woken’ by the clock and become ready.
Additional points to note:
- there can be several processes ‘asleep’ at the same time which (here, at least) are stored on a linked list; this list is ordered (to aid waking processes at the correct time) with the next process(es) to wake up at its head.
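That ordered wake-up structure can be sketched with a heap keyed on wake time (a minimal sketch: the demo itself uses an ordered linked list, and the process names and times here are made up):

```python
import heapq

class SleepQueue:
    """Sleeping processes kept ordered by wake-up time, soonest first."""
    def __init__(self):
        self._heap = []

    def sleep(self, pid, wake_time):
        heapq.heappush(self._heap, (wake_time, pid))

    def wake_ready(self, now):
        """Pop every process whose wake-up time has arrived."""
        woken = []
        while self._heap and self._heap[0][0] <= now:
            woken.append(heapq.heappop(self._heap)[1])
        return woken

q = SleepQueue()
q.sleep("P2", wake_time=5)
q.sleep("P4", wake_time=3)
print(q.wake_ready(now=4))   # → ['P4']  (P2 is still asleep)
print(q.wake_ready(now=6))   # → ['P2']
```

Keeping the soonest waker at the head means the clock interrupt only ever needs to inspect the front of the structure.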
Demo. 3
Two processes (#2 and #4 again) are interacting with I/O devices. They will block (indefinitely) when they attempt an input or output operation. The Environment (i.e. you) can interact and reschedule them by clicking the relevant box. You may discover that The Environment can reach the synchronisation point before the appropriate process attempts the rendezvous; that’s okay too. There is some colour-coding on I/O states: red means blocked, yellow means unblocked (thus ready) and green is the active I/O transfer.
Note the labels “Input” and “Output” are arbitrary: we could have called them “Printer” and “Network” for example. Only one process should ever interact with a particular I/O stream.
Demo. 4
A whole bunch of processes behaving in various ways, using the techniques described above. Probably harder to follow any particular action!
You can modify these or write your own experiments in the boxes below and step through the behaviour of the scheduler. NOTE: each of the I/O devices {input, output} should be used by only one process (the same process may use both) as these are not queues.
The scheduler is closely tied to process states and context switching. It will typically be called whenever a process changes state. Its job is to select which process(es) should be run at a given time.
Most processes will spend some time busily computing and other times waiting. Waiting may be:
- for input from a file or device (such as a keyboard) or another process or …
- for output to a file or device or …
- for the ‘right time’ to run (e.g. ‘sleep()’)
- due to some ‘system’ issue, such as paging
- for permission to proceed in ‘sensitive’ code
- for a processor to become available
In many of these cases the process is blocked, needing some external event (including time to elapse) before it could even run. If the process is ready then the scheduler may choose to run it when a processor becomes available.
A processor will become available:
- if the process it is running terminates or blocks.
- with cooperative scheduling, when the running process ‘yields’.
- in a time-sliced system, when it has been running ‘long enough’ for the OS to ‘give someone else a turn’.
- in a pre-emptive system, if a process with a higher priority becomes ready.
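These rules can be sketched as a tiny dispatcher that promotes a ready process whenever the processor is free (the state names and process table here are assumptions for illustration, not this page’s implementation):

```python
from enum import Enum

class State(Enum):
    READY = "ready"
    RUNNING = "running"
    BLOCKED = "blocked"
    TERMINATED = "terminated"

def schedule(processes):
    """If no process is running, dispatch the first ready process.

    `processes` maps process id -> State; returns the dispatched
    process id, or None if the processor is busy or nothing is ready.
    """
    if any(s is State.RUNNING for s in processes.values()):
        return None                      # processor not available
    for pid, state in processes.items():
        if state is State.READY:
            processes[pid] = State.RUNNING
            return pid
    return None                          # nothing ready: idle

procs = {"P1": State.BLOCKED, "P2": State.READY, "P3": State.READY}
print(schedule(procs))   # → P2 (first ready process is dispatched)
print(schedule(procs))   # → None (processor is now busy)
```

A blocked process never appears in the candidate scan; it must first be moved to ready by the external event it is waiting for.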
Why are we scheduling?
A computer may have multiple processes to run but the choice of how to run them may be influenced by how they are being used.
Interactive systems
These will be the most familiar applications. The computer has a number of things to do and the user wants to see them all running ‘simultaneously’. As there are often more processes (ready to run) than processors, the processes are time-shared. This relies on pre-empting running processes with a time-slice interrupt.
Whilst giving the illusion of keeping many processes executing simultaneously, this is not the most efficient form of scheduling because it context switches more often than is strictly necessary. However it does reduce the latency of any particular process.
Real-time systems
A real-time system – e.g. one which flies an aeroplane – needs a guaranteed response time. This means that it must always be able to complete all pending jobs within a certain time. In turn, this implies that a real-time system should regularly find that all its jobs are up-to-date and it has nothing to run for a while.
Because of this, it may be decided not to time-slice processes; any running process should block ‘soon’ so there is no point in wasting time context switching before that happens. Processes may still be prioritised to reduce the response latency of particular tasks.
Batch systems
A batch system is used to run big, processor-intensive jobs. Latency is not the most important issue; processing efficiency is of primary importance. The classic example is the ‘payroll’ calculation but many supercomputer jobs – such as weather forecasting – may be run this way. Pre-emption is not an issue – install processes and run until blocked – although a choice of what to run ‘next’ may still be significant.
The scheduler may also be influenced by the priority of a task and take account of its scheduling history.
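A priority-influenced choice can be sketched as follows (a minimal sketch; the numeric priorities and task names are illustrative assumptions, not part of this page’s demo, and the ‘lowest number = highest priority’ convention is just one common choice):

```python
def pick_next(ready):
    """Choose the ready task with the highest priority.

    `ready` is a list of (priority, name) pairs, where a lower
    number means a higher priority. Returns None if nothing is ready.
    """
    return min(ready)[1] if ready else None

ready = [(2, "editor"), (0, "interrupt-handler"), (5, "batch-job")]
print(pick_next(ready))   # → interrupt-handler
print(pick_next([]))      # → None (processor idles)
```

A real scheduler would also weigh scheduling history here, e.g. ageing long-waiting tasks so low-priority work is not starved indefinitely.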
Also refer to: Operating System Concepts, 10th Edition, Chapter 3.2, pages 110–115.