Scheduler

From COMP15212 Wiki
Revision as of 07:38, 27 June 2019 by pc>Yuron
Depends on Process Scheduling

Including timeslicing!

There are several principles combined and illustrated in this simple process scheduler:

Interactive Demonstration

The demonstration will run up to five ‘concurrent’ processes on a single processor. The processes are shown as circles, except the currently running one (if any) which is a star. A process can be in one of the following states:

  • Terminated/non-existent.
  • Running.
  • Ready to run.
  • Blocked – asleep … until some future time.
  • Blocked – waiting for I/O.
    • There are two I/O devices available here, arbitrarily called “Input” and “Output”.

Processes are coloured and linked into lists (where appropriate) to show their state.

Hint for interpretation: at first, follow a single feature, such as the behaviour of a particular process or one of the linked list queues.

There are some pre-prepared demonstration scenarios to look at (we recommend you do), and you can also set up your own (time and inclination permitting).



Demo. 1

Processes simply time-slice at regular intervals; nothing blocks although processes do complete. This illustrates the classic ‘round robin’ scheduling.

Points to note:

  • the cyclic order of picking the current running process.
  • the ‘Ready’ queue is a linked list, with the preempted process added to the tail and the new process taken from the head on each timeslice.
  • as fewer processes remain, the apparent speed of the remaining processes – the time they are running vs. the time they spend ready/waiting – increases.
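The behaviour described above can be sketched in a few lines of Python (this is an illustration, not the wiki demo's implementation; process names and burst times are made up). The ready queue acts exactly as the demo's linked list: the next process to run is taken from the head and a preempted process rejoins at the tail.

```python
from collections import deque

def round_robin(processes, quantum=1):
    """Round-robin scheduling sketch.

    Each entry is (name, remaining_time).  The ready queue behaves like
    the demo's linked list: the next process to run is taken from the
    head; a preempted process rejoins at the tail.
    """
    ready = deque(processes)
    order = []                               # which process ran each timeslice
    while ready:
        name, remaining = ready.popleft()    # take from the head
        order.append(name)
        remaining -= quantum                 # run for one timeslice
        if remaining > 0:
            ready.append((name, remaining))  # preempted: rejoin at the tail
        # else: the process completes and leaves the system
    return order
```

For example, `round_robin([("P1", 2), ("P2", 1), ("P3", 2)])` yields the running order P1, P2, P3, P1, P3 – once P2 completes, the remaining processes get a turn more often, which is the 'apparent speed' effect noted above.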

Demo. 2

Like the first demonstration but two processes (#2, #4) deliberately ‘sleep’ for periods; this causes a change in the ordering of the round-robin scheduling. Processes are ‘woken’ by the clock and become ready.

Additional points to note:

  • there can be several processes ‘asleep’ at the same time which (here, at least) are stored on a linked list; this list is ordered (to aid waking processes at the correct time) with the next process(es) to wake up at its head.
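A time-ordered sleep list can be kept with a priority queue, which preserves the same head-of-list property: the next process(es) to wake sit at the front. A minimal Python sketch (the class and method names here are illustrative, not the demo's):

```python
import heapq

class SleepQueue:
    """Sketch of the ordered 'asleep' list: processes are kept sorted
    by wake-up time, so the next to wake is always at the head."""

    def __init__(self):
        self._heap = []                       # (wake_time, process) pairs

    def sleep_until(self, wake_time, process):
        heapq.heappush(self._heap, (wake_time, process))

    def wake_ready(self, now):
        """Called on each clock tick: return every process whose
        wake-up time has arrived, so it can be made ready."""
        woken = []
        while self._heap and self._heap[0][0] <= now:
            woken.append(heapq.heappop(self._heap)[1])
        return woken
```

Keeping the list ordered means the clock handler only ever needs to examine the head, rather than searching the whole list on every tick.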

Demo. 3

Two processes (#2 and #4 again) are interacting with I/O devices. They will block (indefinitely) when they attempt an input or output operation. The Environment (i.e. you) can interact and reschedule them by clicking the relevant box. You may discover that The Environment can reach the synchronisation point before the appropriate process attempts the rendezvous; that’s okay too. There is some colour-coding on I/O states: red means blocked, yellow means unblocked (thus ready) and green is the active I/O transfer.

Note the labels “Input” and “Output” are arbitrary: we could have called them “Printer” and “Network” for example. Only one process should ever interact with a particular I/O stream.

Demo. 4

A whole bunch of processes behaving in various ways, using the techniques described above. Probably harder to follow any particular action!

You can modify these or write your own experiments in the boxes below and step through the behaviour of the scheduler. NOTE: each of the I/O devices {input, output} should be used by only one process (which may be the same process for both) as these are not queues.


The scheduler is closely tied to process states and context switching. It will typically be called whenever a process changes state. Its job is to select which process(es) should be run at a given time.

Most processes will spend some time busily computing and other times waiting. Waiting may be:

  • for input from a file or device (such as a keyboard) or another process or …
  • for output to a file or device or …
  • for the ‘right time’ to run (e.g. ‘sleep()’)
  • due to some ‘system’ issue, such as paging
  • for permission to proceed in ‘sensitive’ code
  • for a processor to become available


In many of these cases the process is blocked, needing some external event (including time elapsing) before it could even run. If the process is ready then the scheduler may choose to run it when a processor becomes available.
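The distinction between blocked and ready can be captured as a small state machine, mirroring the states listed for the demonstration above (a sketch; the state and event names are invented for illustration):

```python
from enum import Enum, auto

class State(Enum):
    TERMINATED = auto()
    RUNNING = auto()
    READY = auto()
    BLOCKED_SLEEP = auto()   # asleep until some future time
    BLOCKED_IO = auto()      # waiting for I/O

def on_event(state, event):
    """An external event moves a blocked process back to ready;
    it does not run it -- that is the scheduler's decision."""
    if state is State.BLOCKED_SLEEP and event == "timer":
        return State.READY
    if state is State.BLOCKED_IO and event == "io_complete":
        return State.READY
    return state

def can_be_scheduled(state):
    """Only a ready process is a candidate for the scheduler."""
    return state is State.READY
```

Note that the event only unblocks the process: becoming ready and being chosen to run are separate steps, which is why the scheduler is invoked on state changes.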

A processor will become available:

  • if the process it is running terminates or blocks.
  • with cooperative scheduling, when the running process ‘yields’.
  • in a time-sliced system, when it has been running ‘long enough’ for the OS to ‘give someone else a turn’.
  • in a pre-emptive system, if a process with a higher priority becomes ‘ready’.
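The cases above amount to a single reschedule decision, which might be sketched like this (field names and the dictionary representation are illustrative assumptions, not any particular OS's API):

```python
def should_preempt(running, candidate, slice_expired, preemptive=True):
    """Decide whether the scheduler should pick a new process to run.

    running:       the current process (a dict), or None if the CPU is free
    candidate:     the highest-priority ready process, or None
    slice_expired: True if the timeslice interrupt has fired
    """
    if running is None or running["state"] != "running":
        return True          # terminated, blocked or yielded: CPU is free
    if slice_expired:
        return True          # time-sliced system: give someone else a turn
    if preemptive and candidate and candidate["priority"] > running["priority"]:
        return True          # a higher-priority process has become ready
    return False
```

In a purely cooperative system the first test is the only one that ever fires; time-slicing and priority pre-emption each add one more reason to take the CPU away.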

Why are we scheduling?

A computer may have multiple processes to run but the choice of how to run them may be influenced by how they are being used.

Interactive systems

These will be the most familiar applications. The computer has a number of things to do and the user wants to see them all running ‘simultaneously’. As there are often more processes (ready to run) than processors, the processes are time-shared. This relies on pre-empting running processes with a time-slice interrupt.

Whilst giving the illusion of keeping many processes executing simultaneously, this is not the most efficient form of scheduling because it context switches more often than is strictly necessary. However it does reduce the latency of any particular process.

Real-time systems

A real-time system – e.g. one which flies an aeroplane – needs a guaranteed response time. This means that it must always be able to complete all pending jobs within a certain time. In turn, this implies that a real time system should regularly find that all its jobs are up-to-date and it has nothing to run for a while.

Because of this, it may be decided not to time-slice processes; any running process should block ‘soon’ so there is no point in wasting time context switching before that happens. Processes may still be prioritised to reduce the response latency of particular tasks.

Batch systems

A batch system is used to run big, processor-intensive jobs. Latency is not the most important issue; processing efficiency is of primary importance. The classic example is the ‘payroll’ calculation but many supercomputer jobs – such as weather forecasting – may be run this way. Pre-emption is not an issue – install processes and run until blocked – although a choice of what to run ‘next’ may still be significant.


The scheduler may also be influenced by the priority of a task and take account of its scheduling history.
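One common way of combining priority with scheduling history is ‘aging’: a process’s effective priority rises the longer it waits, so low-priority processes are not starved. A minimal sketch (this particular policy and its field names are assumptions for illustration, not the demo’s algorithm):

```python
def pick_next(ready):
    """Choose the process with the highest effective priority, where
    time spent waiting ('scheduling history') boosts a process so
    that none starves indefinitely."""
    def effective(p):
        return p["priority"] + p["waiting_ticks"] // 10   # simple aging
    return max(ready, key=effective)
```

With this policy a priority-1 process that has waited 30 ticks (effective 4) beats a freshly-ready priority-3 process, illustrating how history can override static priority.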


