Memory Management Unit (MMU)
Latest revision as of 09:36, 20 May 2024
It would be a good strategy to be familiar with the simplified principles of memory mapping and memory protection before getting too deeply into this article, as the MMU is where they meet.
Caveat: different MMUs will be implemented in ways which differ in detail. This page is intended to illustrate the general principles in a particular way.
MMU Definition
A Memory Management Unit (MMU) is the hardware system which both performs virtual memory mapping and checks the current privilege to keep user processes separated from the operating system — and each other. In addition it helps to prevent caching of ‘volatile’ memory regions (such as areas containing I/O peripherals).
MMU inputs
- a virtual memory address
- an operation: read/write, maybe a transfer size
- the processor’s privilege information
MMU outputs
- a physical memory address
- cachability (etc.) information
or
- a rejection (memory fault) indicating:
- no physical memory (currently) mapped to the requested page
- illegal operation (e.g. writing to a ‘read only’ area)
- privilege violation (e.g. user tries to get at O.S. space)
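The inputs and outputs listed above can be sketched as a single function. This is a rough illustration only: the tiny page table, field layout and fault strings below are invented for the sketch and do not correspond to any real MMU interface.

```python
# Sketch of the MMU as a black box: virtual address + operation +
# privilege in; physical address or a fault out. All names and the
# example mappings are assumptions for illustration.

PAGE_SIZE = 4096  # 4 KiB pages, as in the example later in this article

# page table: virtual page number -> (physical page number, writeable, user_ok)
page_table = {
    0x00001: (0x0AB, True,  True),   # a user read/write page
    0x00002: (0x0AC, False, True),   # a user read-only page
    0x7FFFF: (0x1FF, True,  False),  # an O.S.-only page
}

def mmu_translate(vaddr, op, user_mode):
    """Return a physical address, or a string describing the rejection."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    entry = page_table.get(vpn)
    if entry is None:
        return "fault: no physical memory mapped to this page"
    ppn, writeable, user_ok = entry
    if user_mode and not user_ok:
        return "fault: privilege violation"
    if op == "write" and not writeable:
        return "fault: illegal operation (read-only page)"
    return ppn * PAGE_SIZE + offset
```

Note that the three fault cases correspond directly to the three rejection reasons in the output list above.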
Example
A typical MMU in a virtual memory system will use a paging system. The page tables specify the translation from the virtual to the physical page addresses; only one set of page tables will be present at any time (green, in the figure below) although other pages may still be present until the physical memory is ‘overflowing’, after which they may need to be “paged out”.
All the current process’ pages must be in the page tables but they need not all be physically present: they may have been ‘backed off’ onto disk. In this case the MMU notes the fact and the O.S. will have to fetch them on demand.
As was observed in the memory mapping article, the page table entries have some ‘spare’ space. Part of this indicates things like “this page is writeable” and the MMU checks each access request against this. Only if there is a valid mapping and the operation is legitimate will the MMU let the processor continue, otherwise it will indicate a memory fault.
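The demand-fetch behaviour described above (a page that is mapped in the tables but currently ‘backed off’ onto disk) can be sketched roughly as follows; the dictionaries and field names are assumptions for illustration, not taken from any real operating system.

```python
# Illustrative sketch of demand paging: a page table entry can be
# valid yet not 'present' in physical memory. On access, the MMU
# notes the fact (a page fault) and the O.S. fetches the page.

memory = {}            # physical frames currently holding page contents
disk = {3: "data"}     # page 3 has been 'backed off' onto disk

page_table = {3: {"present": False, "frame": None, "writeable": True}}

def access(vpn):
    """Return the physical frame for a page, faulting it in on demand."""
    entry = page_table[vpn]
    if not entry["present"]:
        frame = len(memory)              # naively pick the next free frame
        memory[frame] = disk.pop(vpn)    # O.S. brings the page in from disk
        entry["present"], entry["frame"] = True, frame
    return entry["frame"]
```

The first access to page 3 triggers the fault path; subsequent accesses find the page already present.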
Architecture
The figure below shows a typical MMU ‘in situ’. This translates virtual to physical addresses, usually fairly quickly using a look-up (the TLB). This also returns some extra information – copied from the page tables – such as access permissions. If the virtual address is found in the (virtually addressed) level 1 cache then the address translation is discarded as it is not needed; the permission check is still performed though because (for example) the particular access could be a user application hitting some cached operating system (privileged) data.
The look-up takes some time, so it is usually done in parallel (“lookaside”) with the first level cache, which is why the level 1 cache is keyed with virtual addresses – something which is of importance during context switching.
Example page table structure
The figure below shows a simple page table for a 32-bit machine using 4 KiB pages (212 bytes), leaving 20 bits to select the page (220 = 1048576 pages).
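The 20/12-bit split described above can be checked with a few lines of arithmetic (the address value is arbitrary, chosen only for illustration):

```python
# Splitting a 32-bit virtual address into page number and offset,
# for 4 KiB (2**12 byte) pages as in the figure.

PAGE_BITS = 12                 # 4 KiB = 2**12 bytes per page
vaddr = 0xDEADBEEF             # an arbitrary 32-bit virtual address

page = vaddr >> PAGE_BITS                 # top 20 bits select the page
offset = vaddr & ((1 << PAGE_BITS) - 1)   # bottom 12 bits index within it

assert page == 0xDEADB and offset == 0xEEF
assert (page << PAGE_BITS) | offset == vaddr   # recombining loses nothing
assert 2 ** 20 == 1048576                      # number of selectable pages
```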
Implementation
Because page tables are quite large – and there must be a set for each process – they themselves need to be stored in memory. This means that they are accessible for the O.S. software to maintain them – they must be in O.S. space for security, of course – but it makes the memory access process very slow (and energy inefficient, too) because (in principle) there are one (or more) O.S. look-ups before every user data transfer takes place. If this really had to happen the computer would be horrendously inefficient!
In the ‘definition’, above, the MMU function was defined as a ‘black box’; a common set of page translations can be cached locally to avoid the extra accesses, most of the time. This is the function of the TLB.
In practice the TLB will satisfy most memory requests without needing to check the ‘official’ page tables. Occasionally, the TLB misses: this then causes the MMU to stall the processor whilst it looks up the reference and (usually) updates the TLB contents. This process is known as “table walking”; it is usually a hardware job so you don’t have to worry about the details in this course unit.
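A rough sketch of this arrangement is below: a TLB is consulted first, and only a miss forces a walk of the ‘official’ page tables. The structures here are illustrative simplifications of what is really dedicated hardware with a limited number of entries and a replacement policy.

```python
# Hedged sketch of a TLB in front of the page tables. The mapping
# (virtual page -> virtual page + 0x100) is arbitrary example data.

page_tables = {vpn: vpn + 0x100 for vpn in range(8)}  # 'official' tables in memory
tlb = {}                                              # locally cached translations
stats = {"hit": 0, "miss": 0}

def translate(vpn):
    """Translate a virtual page number, filling the TLB on a miss."""
    if vpn in tlb:
        stats["hit"] += 1          # fast path: most requests end here
    else:
        stats["miss"] += 1         # TLB miss: stall and 'walk' the tables
        tlb[vpn] = page_tables[vpn]  # ...then (usually) update the TLB
    return tlb[vpn]
```

Because of locality, a repeated access pattern mostly hits the TLB: translating pages 1, 2, 1, 1, 2 in turn gives two misses (the first touch of each page) and three hits.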
Also refer to: Operating System Concepts, 10th Edition: Chapter 9.1.3, pages 353-355