Queues Extra

From COMP15212 Wiki
Revision as of 10:23, 3 August 2019 by pc>Yuron
Depends on Queues, Process States

A deeper look at the interaction of processes with queues.

Definitions:

  • Producer – the process putting items into the queue
  • Consumer – the process taking items out of the queue

Start with an empty queue; both producer and consumer are (asynchronously) running processes but neither has yet used this queue. You can work through the scenario if you wish. (And we encourage you to do so!)

Clicking any of the ‘symbols’ {A-E} will try to insert that item into the queue; clicking “Output” will try to take the next item out of the queue. These are your system calls.

The “Consumer” area simply shows a history of items which have been read.

The producer tries to put an item into the queue (“pipe”). It succeeds and carries on running. Maybe it puts more items in, too.

Now imagine the consumer tries to remove some items. At first it succeeds and continues to run. If it keeps trying the queue will become empty. Notice that items in the “Pipe” – which represents an area of RAM – don’t move.

If the consumer tries again it is unable to proceed. At this point the consumer process becomes blocked and the scheduler is called. This is referred to as “sleep” or “wait” and it will persist as long as the queue is empty.

The producer, we hope, continues to run and eventually puts another item into the queue. The ‘put’ process checks to see if there is a process waiting (sleeping) and, if there is, it sends a “signal” (or “wakeup”) which unblocks the blocked process. In this case the wakeup will alter the consumer’s state to ready; the consumer will run again sometime in the (near) future. This will be very quick in the demo!

In the demonstration, the blocked state is indicated in red.

Conversely, if the producer manages to fill the buffer and then tries to enter another item, it has to sleep (it blocks) until the consumer wakes it by removing an item.

In an ideal world, the queue stays part-full so no one blocks – but the world is not always perfect.
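The sleep/wakeup protocol described above — the consumer sleeping on an empty queue, the producer sleeping on a full one, and each waking the other — can be sketched with a condition variable. This is a minimal illustration, not the demo's implementation; the class and method names are ours:

```python
import threading

class BoundedQueue:
    """A bounded FIFO where put/get block, mirroring sleep and wakeup."""

    def __init__(self, capacity):
        self.items = []
        self.capacity = capacity
        self.cond = threading.Condition()

    def put(self, item):
        with self.cond:
            while len(self.items) == self.capacity:
                self.cond.wait()       # producer blocks ("sleeps") while the queue is full
            self.items.append(item)
            self.cond.notify_all()     # "wakeup": unblock any consumer sleeping on empty

    def get(self):
        with self.cond:
            while not self.items:
                self.cond.wait()       # consumer blocks while the queue is empty
            item = self.items.pop(0)
            self.cond.notify_all()     # "wakeup": unblock any producer sleeping on full
            return item
```

Note the `while` (rather than `if`) around each `wait()`: a woken process re-checks the condition before proceeding, since another process may have run in between.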


I/O queues

A common use of queues is to buffer I/O streams. In these cases one of the ‘processes’ may be an Interrupt Service Routine (ISR) rather than a free-running process. The objective is to use the buffers to ‘smooth out’ the rate of flow to keep the I/O devices working at maximum efficiency. The principles are still the same although there is a difference in the detail of sleep and wakeup.

Receiver

Here the producer is an input device. When data arrives it causes an interrupt and the ISR adds it to the queue before returning to… whatever it was that was interrupted.

There is some embarrassment if the buffer gets full because there is nothing that can be done – you can’t stop the user typing, for example – so this should be prevented by higher-level protocols.

Transmitter

Here the consumer is an output device. It does not know when to send, so the rule is: if there is something waiting in the queue (queue not empty) and the device is not busy, send the next item.

The interrupt is therefore caused by “transmitter available” (rather than the receiver’s data available).

This is okay until the queue is empty, at which point there is nothing to send and the transmitter must effectively sleep until there is something in the queue again. Here this is done by selectively disabling that particular interrupt and returning; it will not now trouble us for a while.

When the producer (a normal process in this case) puts new data into the buffer its wakeup enables the transmitter interrupt again. As soon as it can (typically after the system call from the producer returns, as system calls usually disable all interrupts for their duration) the transmitter ISR will get the output stream busy again.


Hazards

Because two potentially independent processes may be writing to and reading from the same buffer in memory, there is a danger that they interfere with each other. For example, if a count of items is maintained, with the writer incrementing a variable and the reader decrementing it, then the increment and decrement operations must be atomic.

Thought experiment: if not, what could happen?

Some form of mutual exclusion may be needed.
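The danger comes from an increment really being three steps: load, modify, store. The following simulation interleaves the two halves of a writer's increment around a reader's decrement to show the lost update; it is a hand-ordered illustration of the hazard, not real concurrent code:

```python
count = 5

# The writer begins an increment: it loads the current value...
writer_temp = count           # writer reads 5

# ...then the scheduler switches to the reader, which completes a decrement:
reader_temp = count           # reader also reads 5
count = reader_temp - 1       # count is now 4

# The writer resumes and stores its (now stale) result:
count = writer_temp + 1       # count is now 6 -- the reader's decrement is lost

# One increment plus one decrement should have left count at 5.
```

With a true atomic increment and decrement (or a mutex around the counter), no interleaving of the two operations could lose an update like this.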

A mutex-free FIFO implementation is possible with careful ordering of the operations. If the writer only ever modifies the tail pointer and the reader only ever modifies the head pointer it can be made safe.

Load write pointer
Store data at write pointer
Increment write pointer
Store write pointer (so reader can see it)

Exercise: What could go wrong if the store operations were reversed?

Answer

If the code were written like this:

Load write pointer
Store write pointer (so reader can see it)
Store data at write pointer
Increment write pointer

The reader could try to read the data any time after the second statement, including before it has been written to memory.

If the timing were unfortunate then undefined data would be read. In practice this would cause an intermittent fault in the software – very hard to find and debug.

What is the equivalent (correct!) code sequence for the reader?

Answer
Load read pointer
Load data at read pointer
Increment read pointer
Store read pointer (so writer can see it)

The danger here is even more subtle: the problem occurs because, once the read pointer is written back, the location (in a cyclic buffer) can be reused and thus overwritten with a new value. The correct value must have been loaded before this is allowed.

Try to work these out before revealing the answers!

For a further hint, see memory barrier.

(When dealing with unsynchronised processes, sometimes the order in which operations are performed can make the difference between correct code and code with subtle, non-deterministic fault cases.)
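The two sequences can be combined into a sketch of the mutex-free single-producer/single-consumer FIFO: the writer only ever advances the tail, the reader only ever advances the head, and each publishes its pointer only after its data access. (On real hardware with caches and reordering you would also need memory barriers; this shows the ordering discipline only, and all names are ours.)

```python
SIZE = 8                       # cyclic buffer; capacity is SIZE - 1 items

buffer = [None] * SIZE
head = 0                       # read pointer: modified only by the reader
tail = 0                       # write pointer: modified only by the writer

def fifo_put(item):
    """Writer: returns False rather than blocking when the buffer is full."""
    global tail
    next_tail = (tail + 1) % SIZE
    if next_tail == head:      # full: one slot is kept empty to tell full from empty
        return False
    buffer[tail] = item        # store data at the write pointer...
    tail = next_tail           # ...then publish the write pointer, so the reader can see it
    return True

def fifo_get():
    """Reader: returns None rather than blocking when the buffer is empty."""
    global head
    if head == tail:           # empty
        return None
    item = buffer[head]        # load data at the read pointer...
    head = (head + 1) % SIZE   # ...then publish the read pointer, so the writer can reuse the slot
    return item
```

Swapping either pair of lines reintroduces the faults described above: publishing `tail` before storing the data lets the reader see unwritten data, and publishing `head` before loading lets the writer overwrite the slot first.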


